lustre-devel-lustre.org archive mirror
* [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes
@ 2019-01-31 17:19 James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 01/26] lnet: use kernel types for lnet core kernel code James Simmons
                   ` (26 more replies)
  0 siblings, 27 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

To prepare lustre to leave staging, the code needs to be
cleaned up to make it easier for others to review. This
patch series makes no functional changes; it does the
following:

1) Replace all the UAPI types in the kernel-only code. Once
   these patches are applied, only the UAPI headers will use
   UAPI types.

2) The lustre code has a large amount of excess white space.
   Remove the extra white space from variable declarations,
   and remove it from data structures while aligning the
   fields to make them easier to read. A few patches also
   remove excess blank lines (see the illustrative example
   below).
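
For illustration only, a hypothetical structure (not taken from this
series) before and after the kind of type conversion and field
realignment described above:

	/* before */
	struct foo_hdr {
		__u32			fh_magic;	/* wire magic */
		__u64	  fh_cookie;			/* opaque cookie */
	};

	/* after */
	struct foo_hdr {
		u32	fh_magic;	/* wire magic */
		u64	fh_cookie;	/* opaque cookie */
	};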

James Simmons (26):
  lnet: use kernel types for lnet core kernel code
  lnet: use kernel types for lnet klnd kernel code
  lnet: use kernel types for lnet selftest kernel code
  ptlrpc: use kernel types for kernel code
  lustre: use kernel types for lustre internal headers
  ldlm: use kernel types for kernel code
  obdclass: use kernel types for kernel code
  lustre: convert remaining code to kernel types
  lustre: cleanup white spaces in fid and fld layer
  ldlm: cleanup white spaces
  llite: cleanup white spaces
  lmv: cleanup white spaces
  lov: cleanup white spaces
  mdc: cleanup white spaces
  mgc: cleanup white spaces
  obdclass: cleanup white spaces
  obdecho: cleanup white spaces
  osc: cleanup white spaces
  ptlrpc: cleanup white spaces
  lustre: first batch to cleanup white spaces in internal headers
  lustre: second batch to cleanup white spaces in internal headers
  lustre: last batch to cleanup white spaces in internal headers
  libcfs: cleanup white spaces
  lnet: cleanup white spaces
  socklnd: cleanup white spaces
  o2iblnd: cleanup white spaces

 .../lustre/include/linux/libcfs/libcfs_debug.h     |  74 +-
 .../lustre/include/linux/libcfs/libcfs_fail.h      |   8 +-
 .../lustre/include/linux/libcfs/libcfs_private.h   |  56 +-
 .../lustre/include/linux/libcfs/libcfs_string.h    |  10 +-
 drivers/staging/lustre/include/linux/lnet/api.h    |  40 +-
 .../staging/lustre/include/linux/lnet/lib-lnet.h   |  69 +-
 .../staging/lustre/include/linux/lnet/lib-types.h  | 234 +++----
 .../staging/lustre/include/linux/lnet/socklnd.h    |  20 +-
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c    | 107 +--
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    | 642 +++++++++---------
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c | 121 ++--
 .../lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c  |  22 +-
 .../staging/lustre/lnet/klnds/socklnd/socklnd.c    | 125 ++--
 .../staging/lustre/lnet/klnds/socklnd/socklnd.h    | 566 ++++++++--------
 .../staging/lustre/lnet/klnds/socklnd/socklnd_cb.c |  56 +-
 .../lustre/lnet/klnds/socklnd/socklnd_lib.c        |  20 +-
 .../lustre/lnet/klnds/socklnd/socklnd_modparams.c  |  54 +-
 .../lustre/lnet/klnds/socklnd/socklnd_proto.c      |  99 ++-
 drivers/staging/lustre/lnet/libcfs/debug.c         |  22 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c    |   8 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_string.c |   4 +-
 .../lustre/lnet/libcfs/linux-crypto-adler.c        |   2 +-
 drivers/staging/lustre/lnet/libcfs/linux-crypto.c  |   1 -
 drivers/staging/lustre/lnet/libcfs/module.c        | 128 ++--
 drivers/staging/lustre/lnet/libcfs/tracefile.c     |   5 +-
 drivers/staging/lustre/lnet/libcfs/tracefile.h     |  12 +-
 drivers/staging/lustre/lnet/lnet/acceptor.c        |  30 +-
 drivers/staging/lustre/lnet/lnet/api-ni.c          | 115 ++--
 drivers/staging/lustre/lnet/lnet/config.c          |  86 ++-
 drivers/staging/lustre/lnet/lnet/lib-eq.c          |   2 +-
 drivers/staging/lustre/lnet/lnet/lib-me.c          |   4 +-
 drivers/staging/lustre/lnet/lnet/lib-move.c        | 112 ++--
 drivers/staging/lustre/lnet/lnet/lib-msg.c         |  72 +-
 drivers/staging/lustre/lnet/lnet/lib-ptl.c         |  36 +-
 drivers/staging/lustre/lnet/lnet/lib-socket.c      |  21 +-
 drivers/staging/lustre/lnet/lnet/module.c          |   3 +-
 drivers/staging/lustre/lnet/lnet/net_fault.c       |   1 -
 drivers/staging/lustre/lnet/lnet/nidstrings.c      | 126 ++--
 drivers/staging/lustre/lnet/lnet/peer.c            |  57 +-
 drivers/staging/lustre/lnet/lnet/router.c          |  42 +-
 drivers/staging/lustre/lnet/lnet/router_proc.c     |  65 +-
 drivers/staging/lustre/lnet/selftest/brw_test.c    |  20 +-
 drivers/staging/lustre/lnet/selftest/console.h     |   2 +-
 drivers/staging/lustre/lnet/selftest/rpc.c         |  22 +-
 drivers/staging/lustre/lnet/selftest/rpc.h         | 132 ++--
 drivers/staging/lustre/lustre/fid/fid_lib.c        |   2 +-
 drivers/staging/lustre/lustre/fid/fid_request.c    |  12 +-
 drivers/staging/lustre/lustre/fid/lproc_fid.c      |   4 +-
 drivers/staging/lustre/lustre/fld/fld_cache.c      |  18 +-
 drivers/staging/lustre/lustre/fld/fld_internal.h   |  36 +-
 drivers/staging/lustre/lustre/fld/fld_request.c    |  14 +-
 drivers/staging/lustre/lustre/include/cl_object.h  | 340 +++++-----
 .../staging/lustre/lustre/include/lprocfs_status.h |  95 +--
 drivers/staging/lustre/lustre/include/lu_object.h  | 354 +++++-----
 .../staging/lustre/lustre/include/lustre_debug.h   |   4 +-
 .../staging/lustre/lustre/include/lustre_disk.h    |  48 +-
 drivers/staging/lustre/lustre/include/lustre_dlm.h | 298 ++++-----
 .../lustre/lustre/include/lustre_dlm_flags.h       | 326 ++++-----
 .../staging/lustre/lustre/include/lustre_export.h  |  82 +--
 drivers/staging/lustre/lustre/include/lustre_fid.h |  44 +-
 drivers/staging/lustre/lustre/include/lustre_fld.h |  18 +-
 drivers/staging/lustre/lustre/include/lustre_ha.h  |   2 +-
 .../staging/lustre/lustre/include/lustre_handles.h |   6 +-
 .../staging/lustre/lustre/include/lustre_import.h  | 225 +++----
 .../staging/lustre/lustre/include/lustre_intent.h  |  24 +-
 drivers/staging/lustre/lustre/include/lustre_lib.h |   2 -
 drivers/staging/lustre/lustre/include/lustre_lmv.h |  26 +-
 drivers/staging/lustre/lustre/include/lustre_log.h |  44 +-
 drivers/staging/lustre/lustre/include/lustre_mdc.h |   4 +-
 drivers/staging/lustre/lustre/include/lustre_mds.h |   4 +-
 drivers/staging/lustre/lustre/include/lustre_net.h | 502 +++++++-------
 .../lustre/lustre/include/lustre_nrs_fifo.h        |   6 +-
 .../lustre/lustre/include/lustre_req_layout.h      |  10 +-
 drivers/staging/lustre/lustre/include/lustre_sec.h | 338 +++++-----
 .../staging/lustre/lustre/include/lustre_swab.h    |   6 +-
 drivers/staging/lustre/lustre/include/obd.h        | 539 +++++++--------
 drivers/staging/lustre/lustre/include/obd_cksum.h  |   4 +-
 drivers/staging/lustre/lustre/include/obd_class.h  | 110 +--
 .../staging/lustre/lustre/include/obd_support.h    | 744 ++++++++++-----------
 drivers/staging/lustre/lustre/include/seq_range.h  |   2 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_extent.c   |   8 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c    |   6 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_internal.h |  12 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lib.c      |  22 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     |  50 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c    |  28 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     |  27 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_request.c  |  60 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c |  90 +--
 drivers/staging/lustre/lustre/llite/dir.c          | 132 ++--
 drivers/staging/lustre/lustre/llite/file.c         | 220 +++---
 drivers/staging/lustre/lustre/llite/glimpse.c      |  18 +-
 drivers/staging/lustre/lustre/llite/lcommon_cl.c   |  24 +-
 drivers/staging/lustre/lustre/llite/lcommon_misc.c |   4 +-
 .../staging/lustre/lustre/llite/llite_internal.h   |  74 +-
 drivers/staging/lustre/lustre/llite/llite_lib.c    |  64 +-
 drivers/staging/lustre/lustre/llite/llite_mmap.c   |  54 +-
 drivers/staging/lustre/lustre/llite/llite_nfs.c    |  54 +-
 drivers/staging/lustre/lustre/llite/lproc_llite.c  |  82 +--
 drivers/staging/lustre/lustre/llite/namei.c        |  65 +-
 drivers/staging/lustre/lustre/llite/range_lock.c   |   4 +-
 drivers/staging/lustre/lustre/llite/range_lock.h   |  10 +-
 drivers/staging/lustre/lustre/llite/rw.c           |  12 +-
 drivers/staging/lustre/lustre/llite/rw26.c         |  57 +-
 drivers/staging/lustre/lustre/llite/statahead.c    | 149 +++--
 drivers/staging/lustre/lustre/llite/super25.c      |  16 +-
 drivers/staging/lustre/lustre/llite/vvp_dev.c      | 104 +--
 drivers/staging/lustre/lustre/llite/vvp_internal.h |  58 +-
 drivers/staging/lustre/lustre/llite/vvp_io.c       | 181 +++--
 drivers/staging/lustre/lustre/llite/vvp_object.c   |  30 +-
 drivers/staging/lustre/lustre/llite/vvp_page.c     |  88 +--
 drivers/staging/lustre/lustre/llite/xattr.c        |  48 +-
 drivers/staging/lustre/lustre/llite/xattr_cache.c  |   6 +-
 drivers/staging/lustre/lustre/lmv/lmv_intent.c     |  46 +-
 drivers/staging/lustre/lustre/lmv/lmv_internal.h   |   2 +-
 drivers/staging/lustre/lustre/lmv/lmv_obd.c        | 382 +++++------
 drivers/staging/lustre/lustre/lmv/lproc_lmv.c      |  24 +-
 .../staging/lustre/lustre/lov/lov_cl_internal.h    |  50 +-
 drivers/staging/lustre/lustre/lov/lov_dev.c        |  72 +-
 drivers/staging/lustre/lustre/lov/lov_internal.h   |  76 +--
 drivers/staging/lustre/lustre/lov/lov_io.c         | 136 ++--
 drivers/staging/lustre/lustre/lov/lov_lock.c       |  34 +-
 drivers/staging/lustre/lustre/lov/lov_merge.c      |   8 +-
 drivers/staging/lustre/lustre/lov/lov_obd.c        |  63 +-
 drivers/staging/lustre/lustre/lov/lov_object.c     | 154 ++---
 drivers/staging/lustre/lustre/lov/lov_offset.c     |   6 +-
 drivers/staging/lustre/lustre/lov/lov_pack.c       |   6 +-
 drivers/staging/lustre/lustre/lov/lov_page.c       |  18 +-
 drivers/staging/lustre/lustre/lov/lov_pool.c       |  24 +-
 drivers/staging/lustre/lustre/lov/lov_request.c    |  16 +-
 drivers/staging/lustre/lustre/lov/lovsub_dev.c     |  32 +-
 drivers/staging/lustre/lustre/lov/lovsub_lock.c    |   4 +-
 drivers/staging/lustre/lustre/lov/lovsub_object.c  |  29 +-
 drivers/staging/lustre/lustre/lov/lovsub_page.c    |   2 +-
 drivers/staging/lustre/lustre/lov/lproc_lov.c      |  18 +-
 drivers/staging/lustre/lustre/mdc/mdc_internal.h   |  24 +-
 drivers/staging/lustre/lustre/mdc/mdc_lib.c        | 146 ++--
 drivers/staging/lustre/lustre/mdc/mdc_locks.c      |  96 +--
 drivers/staging/lustre/lustre/mdc/mdc_reint.c      |   8 +-
 drivers/staging/lustre/lustre/mdc/mdc_request.c    | 266 ++++----
 drivers/staging/lustre/lustre/mgc/mgc_request.c    | 101 +--
 .../staging/lustre/lustre/obdclass/cl_internal.h   |   2 +-
 drivers/staging/lustre/lustre/obdclass/cl_io.c     |  30 +-
 drivers/staging/lustre/lustre/obdclass/cl_lock.c   |   4 +-
 drivers/staging/lustre/lustre/obdclass/cl_object.c |  42 +-
 drivers/staging/lustre/lustre/obdclass/cl_page.c   |  62 +-
 drivers/staging/lustre/lustre/obdclass/class_obd.c |  20 +-
 drivers/staging/lustre/lustre/obdclass/debug.c     |  16 +-
 drivers/staging/lustre/lustre/obdclass/genops.c    |  26 +-
 .../staging/lustre/lustre/obdclass/kernelcomm.c    |   8 +-
 drivers/staging/lustre/lustre/obdclass/linkea.c    |   6 +-
 drivers/staging/lustre/lustre/obdclass/llog.c      |  40 +-
 drivers/staging/lustre/lustre/obdclass/llog_cat.c  |   6 +-
 .../staging/lustre/lustre/obdclass/llog_internal.h |  24 +-
 drivers/staging/lustre/lustre/obdclass/llog_swab.c |  18 +-
 .../lustre/lustre/obdclass/lprocfs_counters.c      |  18 +-
 .../lustre/lustre/obdclass/lprocfs_status.c        | 184 ++---
 drivers/staging/lustre/lustre/obdclass/lu_object.c | 120 ++--
 .../lustre/lustre/obdclass/lustre_handles.c        |   6 +-
 .../staging/lustre/lustre/obdclass/lustre_peer.c   |  12 +-
 .../staging/lustre/lustre/obdclass/obd_config.c    |  39 +-
 drivers/staging/lustre/lustre/obdclass/obd_mount.c |  23 +-
 .../staging/lustre/lustre/obdecho/echo_client.c    | 284 ++++----
 .../staging/lustre/lustre/obdecho/echo_internal.h  |   4 +-
 drivers/staging/lustre/lustre/osc/lproc_osc.c      |   8 +-
 drivers/staging/lustre/lustre/osc/osc_cache.c      |  71 +-
 .../staging/lustre/lustre/osc/osc_cl_internal.h    |   6 +-
 drivers/staging/lustre/lustre/osc/osc_dev.c        |  42 +-
 drivers/staging/lustre/lustre/osc/osc_internal.h   |   6 +-
 drivers/staging/lustre/lustre/osc/osc_io.c         |  42 +-
 drivers/staging/lustre/lustre/osc/osc_lock.c       |  42 +-
 drivers/staging/lustre/lustre/osc/osc_object.c     |  30 +-
 drivers/staging/lustre/lustre/osc/osc_page.c       |   8 +-
 drivers/staging/lustre/lustre/osc/osc_request.c    |  48 +-
 drivers/staging/lustre/lustre/ptlrpc/client.c      |  77 +--
 drivers/staging/lustre/lustre/ptlrpc/events.c      |  10 +-
 drivers/staging/lustre/lustre/ptlrpc/import.c      |  14 +-
 drivers/staging/lustre/lustre/ptlrpc/layout.c      |  21 +-
 drivers/staging/lustre/lustre/ptlrpc/llog_client.c |   2 +-
 .../staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c    | 304 ++++-----
 drivers/staging/lustre/lustre/ptlrpc/niobuf.c      |  11 +-
 drivers/staging/lustre/lustre/ptlrpc/nrs.c         |   1 -
 .../staging/lustre/lustre/ptlrpc/pack_generic.c    | 112 ++--
 .../staging/lustre/lustre/ptlrpc/ptlrpc_internal.h |  18 +-
 drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |  22 +-
 drivers/staging/lustre/lustre/ptlrpc/recover.c     |   3 +-
 drivers/staging/lustre/lustre/ptlrpc/sec.c         |  22 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c    |  80 +--
 drivers/staging/lustre/lustre/ptlrpc/sec_config.c  |  22 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_null.c    |  40 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_plain.c   |  94 +--
 drivers/staging/lustre/lustre/ptlrpc/service.c     |  36 +-
 192 files changed, 6805 insertions(+), 6827 deletions(-)

-- 
1.8.3.1


* [lustre-devel] [PATCH 01/26] lnet: use kernel types for lnet core kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 02/26] lnet: use kernel types for lnet klnd " James Simmons
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The LNet stack was originally both a userland and a kernel
implementation. The source still contains many UAPI types of
the form __u32, but since this is kernel-only code, change
the types to the kernel internal types.

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/include/linux/lnet/api.h    | 16 ++--
 .../staging/lustre/include/linux/lnet/lib-lnet.h   | 66 ++++++++--------
 .../staging/lustre/include/linux/lnet/lib-types.h  | 50 ++++++------
 .../staging/lustre/include/linux/lnet/socklnd.h    | 20 ++---
 drivers/staging/lustre/lnet/lnet/acceptor.c        | 14 ++--
 drivers/staging/lustre/lnet/lnet/api-ni.c          | 26 +++----
 drivers/staging/lustre/lnet/lnet/config.c          | 48 ++++++------
 drivers/staging/lustre/lnet/lnet/lib-me.c          |  4 +-
 drivers/staging/lustre/lnet/lnet/lib-move.c        | 28 +++----
 drivers/staging/lustre/lnet/lnet/lib-ptl.c         | 14 ++--
 drivers/staging/lustre/lnet/lnet/lib-socket.c      | 10 +--
 drivers/staging/lustre/lnet/lnet/nidstrings.c      | 88 +++++++++++-----------
 drivers/staging/lustre/lnet/lnet/peer.c            | 20 ++---
 drivers/staging/lustre/lnet/lnet/router.c          | 16 ++--
 drivers/staging/lustre/lnet/lnet/router_proc.c     |  4 +-
 15 files changed, 212 insertions(+), 212 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/lnet/api.h b/drivers/staging/lustre/include/linux/lnet/api.h
index 70c9919..7c30475 100644
--- a/drivers/staging/lustre/include/linux/lnet/api.h
+++ b/drivers/staging/lustre/include/linux/lnet/api.h
@@ -76,7 +76,7 @@
  * @{
  */
 int LNetGetId(unsigned int index, struct lnet_process_id *id);
-int LNetDist(lnet_nid_t nid, lnet_nid_t *srcnid, __u32 *order);
+int LNetDist(lnet_nid_t nid, lnet_nid_t *srcnid, u32 *order);
 lnet_nid_t LNetPrimaryNID(lnet_nid_t nid);
 
 /** @} lnet_addr */
@@ -96,16 +96,16 @@
  */
 int LNetMEAttach(unsigned int      portal,
 		 struct lnet_process_id match_id_in,
-		 __u64		   match_bits_in,
-		 __u64		   ignore_bits_in,
+		 u64		   match_bits_in,
+		 u64		   ignore_bits_in,
 		 enum lnet_unlink unlink_in,
 		 enum lnet_ins_pos pos_in,
 		 struct lnet_handle_me *handle_out);
 
 int LNetMEInsert(struct lnet_handle_me current_in,
 		 struct lnet_process_id match_id_in,
-		 __u64		   match_bits_in,
-		 __u64		   ignore_bits_in,
+		 u64		   match_bits_in,
+		 u64		   ignore_bits_in,
 		 enum lnet_unlink unlink_in,
 		 enum lnet_ins_pos position_in,
 		 struct lnet_handle_me *handle_out);
@@ -186,15 +186,15 @@ int LNetPut(lnet_nid_t	      self,
 	    enum lnet_ack_req ack_req_in,
 	    struct lnet_process_id target_in,
 	    unsigned int      portal_in,
-	    __u64	      match_bits_in,
+	    u64 match_bits_in,
 	    unsigned int      offset_in,
-	    __u64	      hdr_data_in);
+	    u64	hdr_data_in);
 
 int LNetGet(lnet_nid_t	      self,
 	    struct lnet_handle_md md_in,
 	    struct lnet_process_id target_in,
 	    unsigned int      portal_in,
-	    __u64	      match_bits_in,
+	    u64	match_bits_in,
 	    unsigned int      offset_in);
 /** @} lnet_data */
 
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
index fcfd844..5c3f5e3 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
@@ -127,7 +127,7 @@ static inline int lnet_md_unlinkable(struct lnet_libmd *md)
 #define lnet_cpt_current()	cfs_cpt_current(the_lnet.ln_cpt_table, 1)
 
 static inline int
-lnet_cpt_of_cookie(__u64 cookie)
+lnet_cpt_of_cookie(u64 cookie)
 {
 	unsigned int cpt = (cookie >> LNET_COOKIE_TYPE_BITS) & LNET_CPT_MASK;
 
@@ -217,7 +217,7 @@ static inline int lnet_md_unlinkable(struct lnet_libmd *md)
 }
 
 struct lnet_libhandle *lnet_res_lh_lookup(struct lnet_res_container *rec,
-					  __u64 cookie);
+					  u64 cookie);
 void lnet_res_lh_initialize(struct lnet_res_container *rec,
 			    struct lnet_libhandle *lh);
 static inline void
@@ -404,13 +404,13 @@ void lnet_res_lh_initialize(struct lnet_res_container *rec,
 void lnet_net_free(struct lnet_net *net);
 
 struct lnet_net *
-lnet_net_alloc(__u32 net_type, struct list_head *netlist);
+lnet_net_alloc(u32 net_type, struct list_head *netlist);
 
 struct lnet_ni *
 lnet_ni_alloc(struct lnet_net *net, struct cfs_expr_list *el,
 	      char *iface);
 struct lnet_ni *
-lnet_ni_alloc_w_cpt_array(struct lnet_net *net, __u32 *cpts, __u32 ncpts,
+lnet_ni_alloc_w_cpt_array(struct lnet_net *net, u32 *cpts, u32 ncpts,
 			  char *iface);
 
 static inline int
@@ -420,7 +420,7 @@ struct lnet_ni *
 }
 
 static inline struct list_head *
-lnet_net2rnethash(__u32 net)
+lnet_net2rnethash(u32 net)
 {
 	return &the_lnet.ln_remote_nets_hash[(LNET_NETNUM(net) +
 		LNET_NETTYP(net)) &
@@ -435,8 +435,8 @@ struct lnet_ni *
 int lnet_cpt_of_nid(lnet_nid_t nid, struct lnet_ni *ni);
 struct lnet_ni *lnet_nid2ni_locked(lnet_nid_t nid, int cpt);
 struct lnet_ni *lnet_nid2ni_addref(lnet_nid_t nid);
-struct lnet_ni *lnet_net2ni_locked(__u32 net, int cpt);
-struct lnet_ni *lnet_net2ni_addref(__u32 net);
+struct lnet_ni *lnet_net2ni_locked(u32 net, int cpt);
+struct lnet_ni *lnet_net2ni_addref(u32 net);
 bool lnet_is_ni_healthy_locked(struct lnet_ni *ni);
 struct lnet_net *lnet_get_net_locked(u32 net_id);
 
@@ -451,18 +451,18 @@ int lnet_notify(struct lnet_ni *ni, lnet_nid_t peer, int alive,
 		time64_t when);
 void lnet_notify_locked(struct lnet_peer_ni *lp, int notifylnd, int alive,
 			time64_t when);
-int lnet_add_route(__u32 net, __u32 hops, lnet_nid_t gateway_nid,
+int lnet_add_route(u32 net, u32 hops, lnet_nid_t gateway_nid,
 		   unsigned int priority);
 int lnet_check_routes(void);
-int lnet_del_route(__u32 net, lnet_nid_t gw_nid);
+int lnet_del_route(u32 net, lnet_nid_t gw_nid);
 void lnet_destroy_routes(void);
-int lnet_get_route(int idx, __u32 *net, __u32 *hops,
-		   lnet_nid_t *gateway, __u32 *alive, __u32 *priority);
+int lnet_get_route(int idx, u32 *net, u32 *hops,
+		   lnet_nid_t *gateway, u32 *alive, u32 *priority);
 int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg);
 struct lnet_ni *lnet_get_next_ni_locked(struct lnet_net *mynet,
 					struct lnet_ni *prev);
 struct lnet_ni *lnet_get_ni_idx_locked(int idx);
-int lnet_get_peer_list(__u32 *countp, __u32 *sizep,
+int lnet_get_peer_list(u32 *countp, u32 *sizep,
 		       struct lnet_process_id __user *ids);
 
 void lnet_router_debugfs_init(void);
@@ -473,16 +473,16 @@ int lnet_get_peer_list(__u32 *countp, __u32 *sizep,
 int lnet_rtrpools_enable(void);
 void lnet_rtrpools_disable(void);
 void lnet_rtrpools_free(int keep_pools);
-struct lnet_remotenet *lnet_find_rnet_locked(__u32 net);
+struct lnet_remotenet *lnet_find_rnet_locked(u32 net);
 int lnet_dyn_add_net(struct lnet_ioctl_config_data *conf);
-int lnet_dyn_del_net(__u32 net);
+int lnet_dyn_del_net(u32 net);
 int lnet_dyn_add_ni(struct lnet_ioctl_config_ni *conf);
 int lnet_dyn_del_ni(struct lnet_ioctl_config_ni *conf);
 int lnet_clear_lazy_portal(struct lnet_ni *ni, int portal, char *reason);
-struct lnet_net *lnet_get_net_locked(__u32 net_id);
+struct lnet_net *lnet_get_net_locked(u32 net_id);
 
 int lnet_islocalnid(lnet_nid_t nid);
-int lnet_islocalnet(__u32 net);
+int lnet_islocalnet(u32 net);
 
 void lnet_msg_attach_md(struct lnet_msg *msg, struct lnet_libmd *md,
 			unsigned int offset, unsigned int mlen);
@@ -536,10 +536,10 @@ void lnet_prep_send(struct lnet_msg *msg, int type,
 
 /* match-table functions */
 struct list_head *lnet_mt_match_head(struct lnet_match_table *mtable,
-				     struct lnet_process_id id, __u64 mbits);
+				     struct lnet_process_id id, u64 mbits);
 struct lnet_match_table *lnet_mt_of_attach(unsigned int index,
 					   struct lnet_process_id id,
-					   __u64 mbits, __u64 ignore_bits,
+					   u64 mbits, u64 ignore_bits,
 					   enum lnet_ins_pos pos);
 int lnet_mt_match_md(struct lnet_match_table *mtable,
 		     struct lnet_match_info *info, struct lnet_msg *msg);
@@ -575,7 +575,7 @@ void lnet_set_reply_msg_len(struct lnet_ni *ni, struct lnet_msg *msg,
 void lnet_finalize(struct lnet_msg *msg, int rc);
 
 void lnet_drop_message(struct lnet_ni *ni, int cpt, void *private,
-		       unsigned int nob, __u32 msg_type);
+		       unsigned int nob, u32 msg_type);
 void lnet_drop_delayed_msg_list(struct list_head *head, char *reason);
 void lnet_recv_delayed_msg_list(struct list_head *head);
 
@@ -637,9 +637,9 @@ void lnet_copy_kiov2iter(struct iov_iter *to,
 void lnet_unregister_lnd(struct lnet_lnd *lnd);
 
 int lnet_connect(struct socket **sockp, lnet_nid_t peer_nid,
-		 __u32 local_ip, __u32 peer_ip, int peer_port);
+		 u32 local_ip, u32 peer_ip, int peer_port);
 void lnet_connect_console_error(int rc, lnet_nid_t peer_nid,
-				__u32 peer_ip, int port);
+				u32 peer_ip, int port);
 int lnet_count_acceptor_nets(void);
 int lnet_acceptor_timeout(void);
 int lnet_acceptor_port(void);
@@ -652,15 +652,15 @@ void lnet_connect_console_error(int rc, lnet_nid_t peer_nid,
 
 int lnet_sock_setbuf(struct socket *socket, int txbufsize, int rxbufsize);
 int lnet_sock_getbuf(struct socket *socket, int *txbufsize, int *rxbufsize);
-int lnet_sock_getaddr(struct socket *socket, bool remote, __u32 *ip, int *port);
+int lnet_sock_getaddr(struct socket *socket, bool remote, u32 *ip, int *port);
 int lnet_sock_write(struct socket *sock, void *buffer, int nob, int timeout);
 int lnet_sock_read(struct socket *sock, void *buffer, int nob, int timeout);
 
-int lnet_sock_listen(struct socket **sockp, __u32 ip, int port, int backlog);
+int lnet_sock_listen(struct socket **sockp, u32 ip, int port, int backlog);
 int lnet_sock_accept(struct socket **newsockp, struct socket *sock);
 int lnet_sock_connect(struct socket **sockp, int *fatal,
-		      __u32 local_ip, int local_port,
-		      __u32 peer_ip, int peer_port);
+		      u32 local_ip, int local_port,
+		      u32 peer_ip, int peer_port);
 void libcfs_sock_release(struct socket *sock);
 
 int lnet_peers_start_down(void);
@@ -668,7 +668,7 @@ int lnet_sock_connect(struct socket **sockp, int *fatal,
 
 int lnet_router_checker_start(void);
 void lnet_router_checker_stop(void);
-void lnet_router_ni_update_locked(struct lnet_peer_ni *gw, __u32 net);
+void lnet_router_ni_update_locked(struct lnet_peer_ni *gw, u32 net);
 void lnet_swap_pinginfo(struct lnet_ping_buffer *pbuf);
 
 int lnet_ping_info_validate(struct lnet_ping_info *pinfo);
@@ -703,7 +703,7 @@ static inline int lnet_push_target_resize_needed(void)
 int lnet_parse_routes(char *route_str, int *im_a_router);
 int lnet_parse_networks(struct list_head *nilist, char *networks,
 			bool use_tcp_bonding);
-bool lnet_net_unique(__u32 net_id, struct list_head *nilist,
+bool lnet_net_unique(u32 net_id, struct list_head *nilist,
 		     struct lnet_net **net);
 bool lnet_ni_unique_net(struct list_head *nilist, char *iface);
 void lnet_incr_dlc_seq(void);
@@ -734,12 +734,12 @@ struct lnet_peer_net *lnet_peer_get_net_locked(struct lnet_peer *peer,
 int lnet_add_peer_ni(lnet_nid_t key_nid, lnet_nid_t nid, bool mr);
 int lnet_del_peer_ni(lnet_nid_t key_nid, lnet_nid_t nid);
 int lnet_get_peer_info(struct lnet_ioctl_peer_cfg *cfg, void __user *bulk);
-int lnet_get_peer_ni_info(__u32 peer_index, __u64 *nid,
+int lnet_get_peer_ni_info(u32 peer_index, u64 *nid,
 			  char alivness[LNET_MAX_STR_LEN],
-			  __u32 *cpt_iter, __u32 *refcount,
-			  __u32 *ni_peer_tx_credits, __u32 *peer_tx_credits,
-			  __u32 *peer_rtr_credits, __u32 *peer_min_rtr_credtis,
-			  __u32 *peer_tx_qnob);
+			  u32 *cpt_iter, u32 *refcount,
+			  u32 *ni_peer_tx_credits, u32 *peer_tx_credits,
+			  u32 *peer_rtr_credits, u32 *peer_min_rtr_credtis,
+			  u32 *peer_tx_qnob);
 
 static inline bool
 lnet_is_peer_ni_healthy_locked(struct lnet_peer_ni *lpni)
@@ -827,7 +827,7 @@ void lnet_incr_stats(struct lnet_element_stats *stats,
 		     enum lnet_msg_type msg_type,
 		     enum lnet_stats_type stats_type);
 
-__u32 lnet_sum_stats(struct lnet_element_stats *stats,
+u32 lnet_sum_stats(struct lnet_element_stats *stats,
 		     enum lnet_stats_type stats_type);
 
 void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-types.h b/drivers/staging/lustre/include/linux/lnet/lib-types.h
index 3a54e06..0646f07 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-types.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-types.h
@@ -64,7 +64,7 @@ struct lnet_msg {
 	lnet_nid_t		msg_initiator;
 	/* where is it from, it's only for building event */
 	lnet_nid_t		msg_from;
-	__u32			msg_type;
+	u32			msg_type;
 
 	/*
 	 * hold parameters in case message is with held due
@@ -123,7 +123,7 @@ struct lnet_msg {
 
 struct lnet_libhandle {
 	struct list_head	lh_hash_chain;
-	__u64			lh_cookie;
+	u64			lh_cookie;
 };
 
 #define lh_entry(ptr, type, member) \
@@ -146,8 +146,8 @@ struct lnet_me {
 	struct lnet_process_id	 me_match_id;
 	unsigned int		 me_portal;
 	unsigned int		 me_pos;	/* hash offset in mt_hash */
-	__u64			 me_match_bits;
-	__u64			 me_ignore_bits;
+	u64			 me_match_bits;
+	u64			 me_ignore_bits;
 	enum lnet_unlink	 me_unlink;
 	struct lnet_libmd	*me_md;
 };
@@ -199,7 +199,7 @@ struct lnet_lnd {
 	int			lnd_refcount;	/* # active instances */
 
 	/* fields initialised by the LND */
-	__u32			lnd_type;
+	u32			lnd_type;
 
 	int  (*lnd_startup)(struct lnet_ni *ni);
 	void (*lnd_shutdown)(struct lnet_ni *ni);
@@ -306,13 +306,13 @@ struct lnet_net {
 	 * (net_type << 16) | net_num.
 	 * net_type can be one of the enumerated types defined in
 	 * lnet/include/lnet/nidstr.h */
-	__u32			net_id;
+	u32			net_id;
 
 	/* total number of CPTs in the array */
-	__u32			net_ncpts;
+	u32			net_ncpts;
 
 	/* cumulative CPTs of all NIs in this net */
-	__u32			*net_cpts;
+	u32			*net_cpts;
 
 	/* network tunables */
 	struct lnet_ioctl_config_lnd_cmn_tunables net_tunables;
@@ -343,7 +343,7 @@ struct lnet_ni {
 	int			ni_ncpts;
 
 	/* bond NI on some CPTs */
-	__u32			*ni_cpts;
+	u32			*ni_cpts;
 
 	/* interface's NID */
 	lnet_nid_t		ni_nid;
@@ -497,7 +497,7 @@ struct lnet_peer_ni {
 	/* sequence number used to round robin over peer nis within a net */
 	u32			lpni_seq;
 	/* sequence number used to round robin over gateways */
-	__u32			lpni_gw_seq;
+	u32			lpni_gw_seq;
 	/* health flag */
 	bool			lpni_healthy;
 	/* returned RC ping features. Protected with lpni_lock */
@@ -559,13 +559,13 @@ struct lnet_peer {
 	int			lp_data_nnis;
 
 	/* NI config sequence number of peer */
-	__u32			lp_peer_seqno;
+	u32			lp_peer_seqno;
 
 	/* Local NI config sequence number acked by peer */
-	__u32			lp_node_seqno;
+	u32			lp_node_seqno;
 
 	/* Local NI config sequence number sent to peer */
-	__u32			lp_node_seqno_sent;
+	u32			lp_node_seqno_sent;
 
 	/* Ping error encountered during discovery. */
 	int			lp_ping_error;
@@ -645,7 +645,7 @@ struct lnet_peer_net {
 	struct lnet_peer	*lpn_peer;
 
 	/* Net ID */
-	__u32			lpn_net_id;
+	u32			lpn_net_id;
 
 	/* reference count */
 	atomic_t		lpn_refcount;
@@ -706,13 +706,13 @@ struct lnet_route {
 	/* router node */
 	struct lnet_peer_ni	*lr_gateway;
 	/* remote network number */
-	__u32			lr_net;
+	u32			lr_net;
 	/* sequence for round-robin */
 	int			lr_seq;
 	/* number of down NIs */
 	unsigned int		lr_downis;
 	/* how far I am */
-	__u32			lr_hops;
+	u32			lr_hops;
 	/* route priority */
 	unsigned int		lr_priority;
 };
@@ -727,7 +727,7 @@ struct lnet_remotenet {
 	/* routes to me */
 	struct list_head	lrn_routes;
 	/* my net number */
-	__u32			lrn_net;
+	u32			lrn_net;
 };
 
 /** lnet message has credit and can be submitted to lnd for send/receive */
@@ -788,7 +788,7 @@ enum lnet_match_flags {
 
 /* parameter for matching operations (GET, PUT) */
 struct lnet_match_info {
-	__u64			mi_mbits;
+	u64			mi_mbits;
 	struct lnet_process_id	mi_id;
 	unsigned int		mi_cpt;
 	unsigned int		mi_opc;
@@ -807,8 +807,8 @@ struct lnet_match_info {
  */
 #define LNET_MT_HASH_IGNORE		LNET_MT_HASH_SIZE
 /*
- * __u64 has 2^6 bits, so need 2^(LNET_MT_HASH_BITS - LNET_MT_BITS_U64) which
- * is 4 __u64s as bit-map, and add an extra __u64 (only use one bit) for the
+ * u64 has 2^6 bits, so need 2^(LNET_MT_HASH_BITS - LNET_MT_BITS_U64) which
+ * is 4 u64s as bit-map, and add an extra u64 (only use one bit) for the
  * ME-list with ignore-bits, which is mtable::mt_hash[LNET_MT_HASH_IGNORE]
  */
 #define LNET_MT_BITS_U64		6	/* 2^6 bits */
@@ -826,7 +826,7 @@ struct lnet_match_table {
 	 */
 	unsigned int		 mt_enabled;
 	/* bitmap to flag whether MEs on mt_hash are exhausted or not */
-	__u64			 mt_exhausted[LNET_MT_EXHAUSTED_BMAP];
+	u64			 mt_exhausted[LNET_MT_EXHAUSTED_BMAP];
 	struct list_head	*mt_mhash;	/* matching hash */
 };
 
@@ -866,7 +866,7 @@ struct lnet_portal {
 /* resource container (ME, MD, EQ) */
 struct lnet_res_container {
 	unsigned int		 rec_type;	/* container type */
-	__u64			 rec_lh_cookie;	/* cookie generator */
+	u64			 rec_lh_cookie;	/* cookie generator */
 	struct list_head	 rec_active;	/* active resource list */
 	struct list_head	*rec_lh_hash;	/* handle hash */
 };
@@ -949,11 +949,11 @@ struct lnet {
 	/* remote networks with routes to them */
 	struct list_head		 *ln_remote_nets_hash;
 	/* validity stamp */
-	__u64				  ln_remote_nets_version;
+	u64				  ln_remote_nets_version;
 	/* list of all known routers */
 	struct list_head		  ln_routers;
 	/* validity stamp */
-	__u64				  ln_routers_version;
+	u64				  ln_routers_version;
 	/* percpt router buffer pools */
 	struct lnet_rtrbufpool		**ln_rtrpools;
 
@@ -1019,7 +1019,7 @@ struct lnet {
 	int				  ln_routing;	/* am I a router? */
 	lnet_pid_t			  ln_pid;	/* requested pid */
 	/* uniquely identifies this ni in this epoch */
-	__u64				  ln_interface_cookie;
+	u64				  ln_interface_cookie;
 	/* registered LNDs */
 	struct list_head		  ln_lnds;
 
diff --git a/drivers/staging/lustre/include/linux/lnet/socklnd.h b/drivers/staging/lustre/include/linux/lnet/socklnd.h
index 9f69257..20fa221d 100644
--- a/drivers/staging/lustre/include/linux/lnet/socklnd.h
+++ b/drivers/staging/lustre/include/linux/lnet/socklnd.h
@@ -39,17 +39,17 @@
 #include <uapi/linux/lnet/socklnd.h>
 
 struct ksock_hello_msg {
-	__u32		kshm_magic;	/* magic number of socklnd message */
-	__u32		kshm_version;	/* version of socklnd message */
+	u32		kshm_magic;	/* magic number of socklnd message */
+	u32		kshm_version;	/* version of socklnd message */
 	lnet_nid_t      kshm_src_nid;	/* sender's nid */
 	lnet_nid_t	kshm_dst_nid;	/* destination nid */
 	lnet_pid_t	kshm_src_pid;	/* sender's pid */
 	lnet_pid_t	kshm_dst_pid;	/* destination pid */
-	__u64		kshm_src_incarnation; /* sender's incarnation */
-	__u64		kshm_dst_incarnation; /* destination's incarnation */
-	__u32		kshm_ctype;	/* connection type */
-	__u32		kshm_nips;	/* # IP addrs */
-	__u32		kshm_ips[0];	/* IP addrs */
+	u64		kshm_src_incarnation; /* sender's incarnation */
+	u64		kshm_dst_incarnation; /* destination's incarnation */
+	u32		kshm_ctype;	/* connection type */
+	u32		kshm_nips;	/* # IP addrs */
+	u32		kshm_ips[0];	/* IP addrs */
 } __packed;
 
 struct ksock_lnet_msg {
@@ -64,9 +64,9 @@ struct ksock_lnet_msg {
 } __packed;
 
 struct ksock_msg {
-	__u32	ksm_type;		/* type of socklnd message */
-	__u32	ksm_csum;		/* checksum if != 0 */
-	__u64	ksm_zc_cookies[2];	/* Zero-Copy request/ACK cookie */
+	u32	ksm_type;		/* type of socklnd message */
+	u32	ksm_csum;		/* checksum if != 0 */
+	u64	ksm_zc_cookies[2];	/* Zero-Copy request/ACK cookie */
 	union {
 		struct ksock_lnet_msg lnetmsg; /* lnet message, it's empty if
 						* it's NOOP
diff --git a/drivers/staging/lustre/lnet/lnet/acceptor.c b/drivers/staging/lustre/lnet/lnet/acceptor.c
index 150fcd6..aa28a9f 100644
--- a/drivers/staging/lustre/lnet/lnet/acceptor.c
+++ b/drivers/staging/lustre/lnet/lnet/acceptor.c
@@ -56,7 +56,7 @@
 EXPORT_SYMBOL(lnet_acceptor_port);
 
 static inline int
-lnet_accept_magic(__u32 magic, __u32 constant)
+lnet_accept_magic(u32 magic, u32 constant)
 {
 	return (magic == constant ||
 		magic == __swab32(constant));
@@ -96,7 +96,7 @@
 
 void
 lnet_connect_console_error(int rc, lnet_nid_t peer_nid,
-			   __u32 peer_ip, int peer_port)
+			   u32 peer_ip, int peer_port)
 {
 	switch (rc) {
 	/* "normal" errors */
@@ -142,7 +142,7 @@
 
 int
 lnet_connect(struct socket **sockp, lnet_nid_t peer_nid,
-	     __u32 local_ip, __u32 peer_ip, int peer_port)
+	     u32 local_ip, u32 peer_ip, int peer_port)
 {
 	struct lnet_acceptor_connreq cr;
 	struct socket *sock;
@@ -205,10 +205,10 @@
 EXPORT_SYMBOL(lnet_connect);
 
 static int
-lnet_accept(struct socket *sock, __u32 magic)
+lnet_accept(struct socket *sock, u32 magic)
 {
 	struct lnet_acceptor_connreq cr;
-	__u32 peer_ip;
+	u32 peer_ip;
 	int peer_port;
 	int rc;
 	int flip;
@@ -328,8 +328,8 @@
 {
 	struct socket *newsock;
 	int rc;
-	__u32 magic;
-	__u32 peer_ip;
+	u32 magic;
+	u32 peer_ip;
 	int peer_port;
 	int secure = (int)((long)arg);
 
diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c
index 6d52824..be77e10 100644
--- a/drivers/staging/lustre/lnet/lnet/api-ni.c
+++ b/drivers/staging/lustre/lnet/lnet/api-ni.c
@@ -104,7 +104,7 @@ struct lnet the_lnet = {
 static int lnet_ping(struct lnet_process_id id, signed long timeout,
 		     struct lnet_process_id __user *ids, int n_ids);
 
-static int lnet_discover(struct lnet_process_id id, __u32 force,
+static int lnet_discover(struct lnet_process_id id, u32 force,
 			 struct lnet_process_id __user *ids, int n_ids);
 
 static int
@@ -420,7 +420,7 @@ static void lnet_assert_wire_constants(void)
 }
 
 static struct lnet_lnd *
-lnet_find_lnd_by_type(__u32 type)
+lnet_find_lnd_by_type(u32 type)
 {
 	struct lnet_lnd *lnd;
 
@@ -632,7 +632,7 @@ static void lnet_assert_wire_constants(void)
 }
 
 struct lnet_libhandle *
-lnet_res_lh_lookup(struct lnet_res_container *rec, __u64 cookie)
+lnet_res_lh_lookup(struct lnet_res_container *rec, u64 cookie)
 {
 	/* ALWAYS called with lnet_res_lock held */
 	struct list_head *head;
@@ -803,7 +803,7 @@ struct lnet_libhandle *
 }
 
 struct lnet_ni  *
-lnet_net2ni_locked(__u32 net_id, int cpt)
+lnet_net2ni_locked(u32 net_id, int cpt)
 {
 	struct lnet_ni *ni;
 	struct lnet_net *net;
@@ -822,7 +822,7 @@ struct lnet_ni  *
 }
 
 struct lnet_ni *
-lnet_net2ni_addref(__u32 net)
+lnet_net2ni_addref(u32 net)
 {
 	struct lnet_ni *ni;
 
@@ -837,7 +837,7 @@ struct lnet_ni *
 EXPORT_SYMBOL(lnet_net2ni_addref);
 
 struct lnet_net *
-lnet_get_net_locked(__u32 net_id)
+lnet_get_net_locked(u32 net_id)
 {
 	struct lnet_net *net;
 
@@ -852,7 +852,7 @@ struct lnet_net *
 unsigned int
 lnet_nid_cpt_hash(lnet_nid_t nid, unsigned int number)
 {
-	__u64 key = nid;
+	u64 key = nid;
 	unsigned int val;
 
 	LASSERT(number >= 1 && number <= LNET_CPT_NUMBER);
@@ -920,7 +920,7 @@ struct lnet_net *
 EXPORT_SYMBOL(lnet_cpt_of_nid);
 
 int
-lnet_islocalnet(__u32 net_id)
+lnet_islocalnet(u32 net_id)
 {
 	struct lnet_net *net;
 	int cpt;
@@ -2234,7 +2234,7 @@ void lnet_lib_exit(void)
 lnet_fill_ni_info(struct lnet_ni *ni, struct lnet_ioctl_config_ni *cfg_ni,
 		  struct lnet_ioctl_config_lnd_tunables *tun,
 		  struct lnet_ioctl_element_stats *stats,
-		  __u32 tun_size)
+		  u32 tun_size)
 {
 	size_t min_size = 0;
 	int i;
@@ -2471,7 +2471,7 @@ struct lnet_ni *
 lnet_get_ni_config(struct lnet_ioctl_config_ni *cfg_ni,
 		   struct lnet_ioctl_config_lnd_tunables *tun,
 		   struct lnet_ioctl_element_stats *stats,
-		   __u32 tun_size)
+		   u32 tun_size)
 {
 	struct lnet_ni *ni;
 	int cpt;
@@ -2848,7 +2848,7 @@ int lnet_dyn_del_ni(struct lnet_ioctl_config_ni *conf)
 }
 
 int
-lnet_dyn_del_net(__u32 net_id)
+lnet_dyn_del_net(u32 net_id)
 {
 	struct lnet_net *net;
 	struct lnet_ping_buffer *pbuf;
@@ -2980,7 +2980,7 @@ u32 lnet_get_dlc_seq_locked(void)
 		struct lnet_ioctl_config_ni *cfg_ni;
 		struct lnet_ioctl_config_lnd_tunables *tun = NULL;
 		struct lnet_ioctl_element_stats *stats;
-		__u32 tun_size;
+		u32 tun_size;
 
 		cfg_ni = arg;
 
@@ -3522,7 +3522,7 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 }
 
 static int
-lnet_discover(struct lnet_process_id id, __u32 force,
+lnet_discover(struct lnet_process_id id, u32 force,
 	      struct lnet_process_id __user *ids,
 	      int n_ids)
 {
diff --git a/drivers/staging/lustre/lnet/lnet/config.c b/drivers/staging/lustre/lnet/lnet/config.c
index 087d9a8..16c42bf 100644
--- a/drivers/staging/lustre/lnet/lnet/config.c
+++ b/drivers/staging/lustre/lnet/lnet/config.c
@@ -82,7 +82,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 }
 
 bool
-lnet_net_unique(__u32 net_id, struct list_head *netlist,
+lnet_net_unique(u32 net_id, struct list_head *netlist,
 		struct lnet_net **net)
 {
 	struct lnet_net *net_l;
@@ -136,7 +136,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 }
 
 static bool
-in_array(__u32 *array, __u32 size, __u32 value)
+in_array(u32 *array, u32 size, u32 value)
 {
 	int i;
 
@@ -149,9 +149,9 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 }
 
 static int
-lnet_net_append_cpts(__u32 *cpts, __u32 ncpts, struct lnet_net *net)
+lnet_net_append_cpts(u32 *cpts, u32 ncpts, struct lnet_net *net)
 {
-	__u32 *added_cpts = NULL;
+	u32 *added_cpts = NULL;
 	int i, j = 0, rc = 0;
 
 	/*
@@ -193,8 +193,8 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 
 	/* append the new cpts if any to the list of cpts in the net */
 	if (j > 0) {
-		__u32 *array = NULL, *loc;
-		__u32 total_entries = j + net->net_ncpts;
+		u32 *array = NULL, *loc;
+		u32 total_entries = j + net->net_ncpts;
 
 		array = kmalloc_array(total_entries, sizeof(*net->net_cpts),
 				      GFP_KERNEL);
@@ -220,7 +220,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 }
 
 static void
-lnet_net_remove_cpts(__u32 *cpts, __u32 ncpts, struct lnet_net *net)
+lnet_net_remove_cpts(u32 *cpts, u32 ncpts, struct lnet_net *net)
 {
 	struct lnet_ni *ni;
 	int rc;
@@ -344,7 +344,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 }
 
 struct lnet_net *
-lnet_net_alloc(__u32 net_id, struct list_head *net_list)
+lnet_net_alloc(u32 net_id, struct list_head *net_list)
 {
 	struct lnet_net *net;
 
@@ -525,7 +525,7 @@ struct lnet_ni *
 }
 
 struct lnet_ni *
-lnet_ni_alloc_w_cpt_array(struct lnet_net *net, __u32 *cpts, __u32 ncpts,
+lnet_ni_alloc_w_cpt_array(struct lnet_net *net, u32 *cpts, u32 ncpts,
 			  char *iface)
 {
 	struct lnet_ni *ni;
@@ -573,7 +573,7 @@ struct lnet_ni *
 	char *str;
 	struct lnet_net *net;
 	struct lnet_ni *ni = NULL;
-	__u32 net_id;
+	u32 net_id;
 	int nnets = 0;
 
 	if (!networks) {
@@ -1098,7 +1098,7 @@ struct lnet_ni *
 	struct list_head gateways;
 	struct list_head *tmp1;
 	struct list_head *tmp2;
-	__u32 net;
+	u32 net;
 	lnet_nid_t nid;
 	struct lnet_text_buf *ltb;
 	struct lnet_text_buf *ltb1, *ltb2;
@@ -1107,7 +1107,7 @@ struct lnet_ni *
 	char *token = str;
 	int ntokens = 0;
 	int myrc = -1;
-	__u32 hops;
+	u32 hops;
 	int got_hops = 0;
 	unsigned int priority = 0;
 
@@ -1276,7 +1276,7 @@ struct lnet_ni *
 }
 
 static int
-lnet_match_network_token(char *token, int len, __u32 *ipaddrs, int nip)
+lnet_match_network_token(char *token, int len, u32 *ipaddrs, int nip)
 {
 	LIST_HEAD(list);
 	int rc;
@@ -1295,7 +1295,7 @@ struct lnet_ni *
 }
 
 static int
-lnet_match_network_tokens(char *net_entry, __u32 *ipaddrs, int nip)
+lnet_match_network_tokens(char *net_entry, u32 *ipaddrs, int nip)
 {
 	static char tokens[LNET_SINGLE_TEXTBUF_NOB];
 
@@ -1352,11 +1352,11 @@ struct lnet_ni *
 	return 1;
 }
 
-static __u32
+static u32
 lnet_netspec2net(char *netspec)
 {
 	char *bracket = strchr(netspec, '(');
-	__u32 net;
+	u32 net;
 
 	if (bracket)
 		*bracket = 0;
@@ -1379,7 +1379,7 @@ struct lnet_ni *
 	struct lnet_text_buf *tb2;
 	char *sep;
 	char *bracket;
-	__u32 net;
+	u32 net;
 
 	LASSERT(!list_empty(nets));
 	LASSERT(nets->next == nets->prev);     /* single entry */
@@ -1447,7 +1447,7 @@ struct lnet_ni *
 }
 
 static int
-lnet_match_networks(char **networksp, char *ip2nets, __u32 *ipaddrs, int nip)
+lnet_match_networks(char **networksp, char *ip2nets, u32 *ipaddrs, int nip)
 {
 	static char networks[LNET_SINGLE_TEXTBUF_NOB];
 	static char source[LNET_SINGLE_TEXTBUF_NOB];
@@ -1459,8 +1459,8 @@ struct lnet_ni *
 	struct list_head *t2;
 	struct lnet_text_buf *tb;
 	struct lnet_text_buf *tb2;
-	__u32 net1;
-	__u32 net2;
+	u32 net1;
+	u32 net2;
 	int len;
 	int count;
 	int dup;
@@ -1563,10 +1563,10 @@ struct lnet_ni *
 }
 
 static int
-lnet_ipaddr_enumerate(__u32 **ipaddrsp)
+lnet_ipaddr_enumerate(u32 **ipaddrsp)
 {
 	struct net_device *dev;
-	__u32 *ipaddrs;
+	u32 *ipaddrs;
 	int nalloc = 64;
 	int nip;
 
@@ -1594,7 +1594,7 @@ struct lnet_ni *
 		}
 
 		if (nip >= nalloc) {
-			__u32 *ipaddrs2;
+			u32 *ipaddrs2;
 			nalloc += nalloc;
 			ipaddrs2 = krealloc(ipaddrs, nalloc * sizeof(*ipaddrs2),
 					    GFP_KERNEL);
@@ -1622,7 +1622,7 @@ struct lnet_ni *
 int
 lnet_parse_ip2nets(char **networksp, char *ip2nets)
 {
-	__u32 *ipaddrs = NULL;
+	u32 *ipaddrs = NULL;
 	int nip = lnet_ipaddr_enumerate(&ipaddrs);
 	int rc;
 
diff --git a/drivers/staging/lustre/lnet/lnet/lib-me.c b/drivers/staging/lustre/lnet/lnet/lib-me.c
index 672e37b..4a5ffb1 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-me.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-me.c
@@ -72,7 +72,7 @@
 int
 LNetMEAttach(unsigned int portal,
 	     struct lnet_process_id match_id,
-	     __u64 match_bits, __u64 ignore_bits,
+	     u64 match_bits, u64 ignore_bits,
 	     enum lnet_unlink unlink, enum lnet_ins_pos pos,
 	     struct lnet_handle_me *handle)
 {
@@ -143,7 +143,7 @@
 int
 LNetMEInsert(struct lnet_handle_me current_meh,
 	     struct lnet_process_id match_id,
-	     __u64 match_bits, __u64 ignore_bits,
+	     u64 match_bits, u64 ignore_bits,
 	     enum lnet_unlink unlink, enum lnet_ins_pos pos,
 	     struct lnet_handle_me *handle)
 {
diff --git a/drivers/staging/lustre/lnet/lnet/lib-move.c b/drivers/staging/lustre/lnet/lnet/lib-move.c
index eaa1dfa..639f67ed 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-move.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-move.c
@@ -94,7 +94,7 @@ void lnet_incr_stats(struct lnet_element_stats *stats,
 	}
 }
 
-__u32 lnet_sum_stats(struct lnet_element_stats *stats,
+u32 lnet_sum_stats(struct lnet_element_stats *stats,
 		     enum lnet_stats_type stats_type)
 {
 	struct lnet_comm_count *counts = get_stats_counts(stats, stats_type);
@@ -1845,7 +1845,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	 */
 	cpt2 = lnet_cpt_of_nid_locked(best_lpni->lpni_nid, best_ni);
 	if (cpt != cpt2) {
-		__u32 seq = lnet_get_dlc_seq_locked();
+		u32 seq = lnet_get_dlc_seq_locked();
 		lnet_net_unlock(cpt);
 		cpt = cpt2;
 		lnet_net_lock(cpt);
@@ -1962,7 +1962,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 
 void
 lnet_drop_message(struct lnet_ni *ni, int cpt, void *private, unsigned int nob,
-		  __u32 msg_type)
+		  u32 msg_type)
 {
 	lnet_net_lock(cpt);
 	lnet_incr_stats(&ni->ni_stats, msg_type, LNET_STATS_TYPE_DROP);
@@ -2383,8 +2383,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	lnet_nid_t dest_nid;
 	lnet_nid_t src_nid;
 	struct lnet_peer_ni *lpni;
-	__u32 payload_length;
-	__u32 type;
+	u32 payload_length;
+	u32 type;
 
 	LASSERT(!in_interrupt());
 
@@ -2418,7 +2418,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	case LNET_MSG_PUT:
 	case LNET_MSG_REPLY:
 		if (payload_length >
-		   (__u32)(for_me ? LNET_MAX_PAYLOAD : LNET_MTU)) {
+		   (u32)(for_me ? LNET_MAX_PAYLOAD : LNET_MTU)) {
 			CERROR("%s, src %s: bad %s payload %d (%d max expected)\n",
 			       libcfs_nid2str(from_nid),
 			       libcfs_nid2str(src_nid),
@@ -2741,8 +2741,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 int
 LNetPut(lnet_nid_t self, struct lnet_handle_md mdh, enum lnet_ack_req ack,
 	struct lnet_process_id target, unsigned int portal,
-	__u64 match_bits, unsigned int offset,
-	__u64 hdr_data)
+	u64 match_bits, unsigned int offset,
+	u64 hdr_data)
 {
 	struct lnet_msg *msg;
 	struct lnet_libmd *md;
@@ -2948,7 +2948,7 @@ struct lnet_msg *
 int
 LNetGet(lnet_nid_t self, struct lnet_handle_md mdh,
 	struct lnet_process_id target, unsigned int portal,
-	__u64 match_bits, unsigned int offset)
+	u64 match_bits, unsigned int offset)
 {
 	struct lnet_msg *msg;
 	struct lnet_libmd *md;
@@ -3037,14 +3037,14 @@ struct lnet_msg *
  * \retval -EHOSTUNREACH If \a dstnid is not reachable.
  */
 int
-LNetDist(lnet_nid_t dstnid, lnet_nid_t *srcnidp, __u32 *orderp)
+LNetDist(lnet_nid_t dstnid, lnet_nid_t *srcnidp, u32 *orderp)
 {
 	struct lnet_ni *ni = NULL;
 	struct lnet_remotenet *rnet;
-	__u32 dstnet = LNET_NIDNET(dstnid);
+	u32 dstnet = LNET_NIDNET(dstnid);
 	int hops;
 	int cpt;
-	__u32 order = 2;
+	u32 order = 2;
 	struct list_head *rn_list;
 
 	/*
@@ -3098,8 +3098,8 @@ struct lnet_msg *
 		if (rnet->lrn_net == dstnet) {
 			struct lnet_route *route;
 			struct lnet_route *shortest = NULL;
-			__u32 shortest_hops = LNET_UNDEFINED_HOPS;
-			__u32 route_hops;
+			u32 shortest_hops = LNET_UNDEFINED_HOPS;
+			u32 route_hops;
 
 			LASSERT(!list_empty(&rnet->lrn_routes));
 
diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
index 6fa5bbf..fa391ee 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
@@ -41,7 +41,7 @@
 
 static int
 lnet_ptl_match_type(unsigned int index, struct lnet_process_id match_id,
-		    __u64 mbits, __u64 ignore_bits)
+		    u64 mbits, u64 ignore_bits)
 {
 	struct lnet_portal *ptl = the_lnet.ln_portals[index];
 	int unique;
@@ -213,7 +213,7 @@
 }
 
 static struct lnet_match_table *
-lnet_match2mt(struct lnet_portal *ptl, struct lnet_process_id id, __u64 mbits)
+lnet_match2mt(struct lnet_portal *ptl, struct lnet_process_id id, u64 mbits)
 {
 	if (LNET_CPT_NUMBER == 1)
 		return ptl->ptl_mtables[0]; /* the only one */
@@ -225,7 +225,7 @@
 
 struct lnet_match_table *
 lnet_mt_of_attach(unsigned int index, struct lnet_process_id id,
-		  __u64 mbits, __u64 ignore_bits, enum lnet_ins_pos pos)
+		  u64 mbits, u64 ignore_bits, enum lnet_ins_pos pos)
 {
 	struct lnet_portal *ptl;
 	struct lnet_match_table	*mtable;
@@ -316,7 +316,7 @@ struct lnet_match_table *
 static int
 lnet_mt_test_exhausted(struct lnet_match_table *mtable, int pos)
 {
-	__u64 *bmap;
+	u64 *bmap;
 	int i;
 
 	if (!lnet_ptl_is_wildcard(the_lnet.ln_portals[mtable->mt_portal]))
@@ -324,7 +324,7 @@ struct lnet_match_table *
 
 	if (pos < 0) { /* check all bits */
 		for (i = 0; i < LNET_MT_EXHAUSTED_BMAP; i++) {
-			if (mtable->mt_exhausted[i] != (__u64)(-1))
+			if (mtable->mt_exhausted[i] != (u64)(-1))
 				return 0;
 		}
 		return 1;
@@ -341,7 +341,7 @@ struct lnet_match_table *
 static void
 lnet_mt_set_exhausted(struct lnet_match_table *mtable, int pos, int exhausted)
 {
-	__u64 *bmap;
+	u64 *bmap;
 
 	LASSERT(lnet_ptl_is_wildcard(the_lnet.ln_portals[mtable->mt_portal]));
 	LASSERT(pos <= LNET_MT_HASH_IGNORE);
@@ -358,7 +358,7 @@ struct lnet_match_table *
 
 struct list_head *
 lnet_mt_match_head(struct lnet_match_table *mtable,
-		   struct lnet_process_id id, __u64 mbits)
+		   struct lnet_process_id id, u64 mbits)
 {
 	struct lnet_portal *ptl = the_lnet.ln_portals[mtable->mt_portal];
 	unsigned long hash = mbits;
diff --git a/drivers/staging/lustre/lnet/lnet/lib-socket.c b/drivers/staging/lustre/lnet/lnet/lib-socket.c
index 62a742e..cff3d1e 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-socket.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-socket.c
@@ -146,7 +146,7 @@
 EXPORT_SYMBOL(lnet_sock_read);
 
 static int
-lnet_sock_create(struct socket **sockp, int *fatal, __u32 local_ip,
+lnet_sock_create(struct socket **sockp, int *fatal, u32 local_ip,
 		 int local_port)
 {
 	struct sockaddr_in locaddr;
@@ -233,7 +233,7 @@
 EXPORT_SYMBOL(lnet_sock_setbuf);
 
 int
-lnet_sock_getaddr(struct socket *sock, bool remote, __u32 *ip, int *port)
+lnet_sock_getaddr(struct socket *sock, bool remote, u32 *ip, int *port)
 {
 	struct sockaddr_in sin;
 	int rc;
@@ -272,7 +272,7 @@
 EXPORT_SYMBOL(lnet_sock_getbuf);
 
 int
-lnet_sock_listen(struct socket **sockp, __u32 local_ip, int local_port,
+lnet_sock_listen(struct socket **sockp, u32 local_ip, int local_port,
 		 int backlog)
 {
 	int fatal;
@@ -337,8 +337,8 @@
 }
 
 int
-lnet_sock_connect(struct socket **sockp, int *fatal, __u32 local_ip,
-		  int local_port, __u32 peer_ip, int peer_port)
+lnet_sock_connect(struct socket **sockp, int *fatal, u32 local_ip,
+		  int local_port, u32 peer_ip, int peer_port)
 {
 	struct sockaddr_in srvaddr;
 	int rc;
diff --git a/drivers/staging/lustre/lnet/lnet/nidstrings.c b/drivers/staging/lustre/lnet/lnet/nidstrings.c
index 43d957f..0f2b75e 100644
--- a/drivers/staging/lustre/lnet/lnet/nidstrings.c
+++ b/drivers/staging/lustre/lnet/lnet/nidstrings.c
@@ -465,12 +465,12 @@ int cfs_print_nidlist(char *buffer, int count, struct list_head *nidlist)
  * \param	min_nid
  * \param	max_nid
  */
-static void cfs_ip_ar_min_max(struct addrrange *ar, __u32 *min_nid,
-			      __u32 *max_nid)
+static void cfs_ip_ar_min_max(struct addrrange *ar, u32 *min_nid,
+			      u32 *max_nid)
 {
 	struct cfs_expr_list *el;
 	struct cfs_range_expr *re;
-	__u32 tmp_ip_addr = 0;
+	u32 tmp_ip_addr = 0;
 	unsigned int min_ip[4] = {0};
 	unsigned int max_ip[4] = {0};
 	int re_count = 0;
@@ -504,8 +504,8 @@ static void cfs_ip_ar_min_max(struct addrrange *ar, __u32 *min_nid,
  * \param	min_nid
  * \param	max_nid
  */
-static void cfs_num_ar_min_max(struct addrrange *ar, __u32 *min_nid,
-			       __u32 *max_nid)
+static void cfs_num_ar_min_max(struct addrrange *ar, u32 *min_nid,
+			       u32 *max_nid)
 {
 	struct cfs_expr_list *el;
 	struct cfs_range_expr *re;
@@ -581,9 +581,9 @@ static bool cfs_num_is_contiguous(struct list_head *nidlist)
 	struct cfs_expr_list *el;
 	struct cfs_range_expr *re;
 	int last_hi = 0;
-	__u32 last_end_nid = 0;
-	__u32 current_start_nid = 0;
-	__u32 current_end_nid = 0;
+	u32 last_end_nid = 0;
+	u32 current_start_nid = 0;
+	u32 current_end_nid = 0;
 
 	list_for_each_entry(nr, nidlist, nr_link) {
 		list_for_each_entry(ar, &nr->nr_addrranges, ar_link) {
@@ -629,9 +629,9 @@ static bool cfs_ip_is_contiguous(struct list_head *nidlist)
 	int expr_count;
 	int last_hi = 255;
 	int last_diff = 0;
-	__u32 last_end_nid = 0;
-	__u32 current_start_nid = 0;
-	__u32 current_end_nid = 0;
+	u32 last_end_nid = 0;
+	u32 current_start_nid = 0;
+	u32 current_end_nid = 0;
 
 	list_for_each_entry(nr, nidlist, nr_link) {
 		list_for_each_entry(ar, &nr->nr_addrranges, ar_link) {
@@ -678,8 +678,8 @@ void cfs_nidrange_find_min_max(struct list_head *nidlist, char *min_nid,
 	struct nidrange *nr;
 	struct netstrfns *nf = NULL;
 	int netnum = -1;
-	__u32 min_addr;
-	__u32 max_addr;
+	u32 min_addr;
+	u32 max_addr;
 	char *lndname = NULL;
 	char min_addr_str[IPSTRING_LENGTH];
 	char max_addr_str[IPSTRING_LENGTH];
@@ -709,8 +709,8 @@ void cfs_nidrange_find_min_max(struct list_head *nidlist, char *min_nid,
  * \param	*min_nid
  * \param	*max_nid
  */
-static void cfs_num_min_max(struct list_head *nidlist, __u32 *min_nid,
-			    __u32 *max_nid)
+static void cfs_num_min_max(struct list_head *nidlist, u32 *min_nid,
+			    u32 *max_nid)
 {
 	struct nidrange	*nr;
 	struct addrrange *ar;
@@ -741,15 +741,15 @@ static void cfs_num_min_max(struct list_head *nidlist, __u32 *min_nid,
  * \param	*min_nid
  * \param	*max_nid
  */
-static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
-			   __u32 *max_nid)
+static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
+			   u32 *max_nid)
 {
 	struct nidrange *nr;
 	struct addrrange *ar;
-	__u32 tmp_min_ip_addr = 0;
-	__u32 tmp_max_ip_addr = 0;
-	__u32 min_ip_addr = 0;
-	__u32 max_ip_addr = 0;
+	u32 tmp_min_ip_addr = 0;
+	u32 tmp_max_ip_addr = 0;
+	u32 min_ip_addr = 0;
+	u32 max_ip_addr = 0;
 
 	list_for_each_entry(nr, nidlist, nr_link) {
 		list_for_each_entry(ar, &nr->nr_addrranges, ar_link) {
@@ -769,14 +769,14 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 }
 
 static int
-libcfs_lo_str2addr(const char *str, int nob, __u32 *addr)
+libcfs_lo_str2addr(const char *str, int nob, u32 *addr)
 {
 	*addr = 0;
 	return 1;
 }
 
 static void
-libcfs_ip_addr2str(__u32 addr, char *str, size_t size)
+libcfs_ip_addr2str(u32 addr, char *str, size_t size)
 {
 	snprintf(str, size, "%u.%u.%u.%u",
 		 (addr >> 24) & 0xff, (addr >> 16) & 0xff,
@@ -792,7 +792,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
  * fine too :)
  */
 static int
-libcfs_ip_str2addr(const char *str, int nob, __u32 *addr)
+libcfs_ip_str2addr(const char *str, int nob, u32 *addr)
 {
 	unsigned int	a;
 	unsigned int	b;
@@ -873,7 +873,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
  * \retval 0 otherwise
  */
 int
-cfs_ip_addr_match(__u32 addr, struct list_head *list)
+cfs_ip_addr_match(u32 addr, struct list_head *list)
 {
 	struct cfs_expr_list *el;
 	int i = 0;
@@ -889,13 +889,13 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 }
 
 static void
-libcfs_decnum_addr2str(__u32 addr, char *str, size_t size)
+libcfs_decnum_addr2str(u32 addr, char *str, size_t size)
 {
 	snprintf(str, size, "%u", addr);
 }
 
 static int
-libcfs_num_str2addr(const char *str, int nob, __u32 *addr)
+libcfs_num_str2addr(const char *str, int nob, u32 *addr)
 {
 	int     n;
 
@@ -955,7 +955,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
  * \retval 0 otherwise
  */
 static int
-libcfs_num_match(__u32 addr, struct list_head *numaddr)
+libcfs_num_match(u32 addr, struct list_head *numaddr)
 {
 	struct cfs_expr_list *el;
 
@@ -1021,7 +1021,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 static const size_t libcfs_nnetstrfns = ARRAY_SIZE(libcfs_netstrfns);
 
 static struct netstrfns *
-libcfs_lnd2netstrfns(__u32 lnd)
+libcfs_lnd2netstrfns(u32 lnd)
 {
 	int i;
 
@@ -1059,14 +1059,14 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 }
 
 int
-libcfs_isknown_lnd(__u32 lnd)
+libcfs_isknown_lnd(u32 lnd)
 {
 	return !!libcfs_lnd2netstrfns(lnd);
 }
 EXPORT_SYMBOL(libcfs_isknown_lnd);
 
 char *
-libcfs_lnd2modname(__u32 lnd)
+libcfs_lnd2modname(u32 lnd)
 {
 	struct netstrfns *nf = libcfs_lnd2netstrfns(lnd);
 
@@ -1087,7 +1087,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 EXPORT_SYMBOL(libcfs_str2lnd);
 
 char *
-libcfs_lnd2str_r(__u32 lnd, char *buf, size_t buf_size)
+libcfs_lnd2str_r(u32 lnd, char *buf, size_t buf_size)
 {
 	struct netstrfns *nf;
 
@@ -1102,10 +1102,10 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 EXPORT_SYMBOL(libcfs_lnd2str_r);
 
 char *
-libcfs_net2str_r(__u32 net, char *buf, size_t buf_size)
+libcfs_net2str_r(u32 net, char *buf, size_t buf_size)
 {
-	__u32 nnum = LNET_NETNUM(net);
-	__u32 lnd = LNET_NETTYP(net);
+	u32 nnum = LNET_NETNUM(net);
+	u32 lnd = LNET_NETTYP(net);
 	struct netstrfns *nf;
 
 	nf = libcfs_lnd2netstrfns(lnd);
@@ -1123,10 +1123,10 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 char *
 libcfs_nid2str_r(lnet_nid_t nid, char *buf, size_t buf_size)
 {
-	__u32 addr = LNET_NIDADDR(nid);
-	__u32 net = LNET_NIDNET(nid);
-	__u32 nnum = LNET_NETNUM(net);
-	__u32 lnd = LNET_NETTYP(net);
+	u32 addr = LNET_NIDADDR(nid);
+	u32 net = LNET_NIDNET(nid);
+	u32 nnum = LNET_NETNUM(net);
+	u32 lnd = LNET_NETTYP(net);
 	struct netstrfns *nf;
 
 	if (nid == LNET_NID_ANY) {
@@ -1156,7 +1156,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 EXPORT_SYMBOL(libcfs_nid2str_r);
 
 static struct netstrfns *
-libcfs_str2net_internal(const char *str, __u32 *net)
+libcfs_str2net_internal(const char *str, u32 *net)
 {
 	struct netstrfns *nf = NULL;
 	int nob;
@@ -1191,10 +1191,10 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 	return nf;
 }
 
-__u32
+u32
 libcfs_str2net(const char *str)
 {
-	__u32  net;
+	u32  net;
 
 	if (libcfs_str2net_internal(str, &net))
 		return net;
@@ -1208,8 +1208,8 @@ static void cfs_ip_min_max(struct list_head *nidlist, __u32 *min_nid,
 {
 	const char *sep = strchr(str, '@');
 	struct netstrfns *nf;
-	__u32 net;
-	__u32 addr;
+	u32 net;
+	u32 addr;
 
 	if (sep) {
 		nf = libcfs_str2net_internal(sep + 1, &net);
diff --git a/drivers/staging/lustre/lnet/lnet/peer.c b/drivers/staging/lustre/lnet/lnet/peer.c
index db36b5c..d807dd4 100644
--- a/drivers/staging/lustre/lnet/lnet/peer.c
+++ b/drivers/staging/lustre/lnet/lnet/peer.c
@@ -697,17 +697,17 @@ struct lnet_peer_ni *
 
 /* Call with the ln_api_mutex held */
 int
-lnet_get_peer_list(__u32 *countp, __u32 *sizep,
+lnet_get_peer_list(u32 *countp, u32 *sizep,
 		   struct lnet_process_id __user *ids)
 {
 	struct lnet_process_id id;
 	struct lnet_peer_table *ptable;
 	struct lnet_peer *lp;
-	__u32 count = 0;
-	__u32 size = 0;
+	u32 count = 0;
+	u32 size = 0;
 	int lncpt;
 	int cpt;
-	__u32 i;
+	u32 i;
 	int rc;
 
 	rc = -ESHUTDOWN;
@@ -3234,12 +3234,12 @@ void lnet_peer_discovery_stop(void)
 /* Gathering information for userspace. */
 
 int
-lnet_get_peer_ni_info(__u32 peer_index, __u64 *nid,
+lnet_get_peer_ni_info(u32 peer_index, u64 *nid,
 		      char aliveness[LNET_MAX_STR_LEN],
-		      __u32 *cpt_iter, __u32 *refcount,
-		      __u32 *ni_peer_tx_credits, __u32 *peer_tx_credits,
-		      __u32 *peer_rtr_credits, __u32 *peer_min_rtr_credits,
-		      __u32 *peer_tx_qnob)
+		      u32 *cpt_iter, u32 *refcount,
+		      u32 *ni_peer_tx_credits, u32 *peer_tx_credits,
+		      u32 *peer_rtr_credits, u32 *peer_min_rtr_credits,
+		      u32 *peer_tx_qnob)
 {
 	struct lnet_peer_table *peer_table;
 	struct lnet_peer_ni *lp;
@@ -3305,7 +3305,7 @@ int lnet_get_peer_info(struct lnet_ioctl_peer_cfg *cfg, void __user *bulk)
 	struct lnet_peer_ni *lpni;
 	struct lnet_peer *lp;
 	lnet_nid_t nid;
-	__u32 size;
+	u32 size;
 	int rc;
 
 	lp = lnet_find_peer(cfg->prcfg_prim_nid);
diff --git a/drivers/staging/lustre/lnet/lnet/router.c b/drivers/staging/lustre/lnet/lnet/router.c
index 86cce27..22c88ec 100644
--- a/drivers/staging/lustre/lnet/lnet/router.c
+++ b/drivers/staging/lustre/lnet/lnet/router.c
@@ -247,7 +247,7 @@
 }
 
 struct lnet_remotenet *
-lnet_find_rnet_locked(__u32 net)
+lnet_find_rnet_locked(u32 net)
 {
 	struct lnet_remotenet *rnet;
 	struct list_head *rn_list;
@@ -273,7 +273,7 @@ static void lnet_shuffle_seed(void)
 	/* Nodes with small feet have little entropy
 	 * the NID for this node gives the most entropy in the low bits */
 	while ((ni = lnet_get_next_ni_locked(NULL, ni))) {
-		__u32 lnd_type, seed;
+		u32 lnd_type, seed;
 		lnd_type = LNET_NETTYP(LNET_NIDNET(ni->ni_nid));
 		if (lnd_type != LOLND) {
 			seed = (LNET_NIDADDR(ni->ni_nid) | lnd_type);
@@ -313,7 +313,7 @@ static void lnet_shuffle_seed(void)
 }
 
 int
-lnet_add_route(__u32 net, __u32 hops, lnet_nid_t gateway,
+lnet_add_route(u32 net, u32 hops, lnet_nid_t gateway,
 	       unsigned int priority)
 {
 	struct lnet_remotenet *rnet;
@@ -479,7 +479,7 @@ static void lnet_shuffle_seed(void)
 }
 
 int
-lnet_del_route(__u32 net, lnet_nid_t gw_nid)
+lnet_del_route(u32 net, lnet_nid_t gw_nid)
 {
 	struct lnet_peer_ni *gateway;
 	struct lnet_remotenet *rnet;
@@ -585,8 +585,8 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 }
 
 int
-lnet_get_route(int idx, __u32 *net, __u32 *hops,
-	       lnet_nid_t *gateway, __u32 *alive, __u32 *priority)
+lnet_get_route(int idx, u32 *net, u32 *hops,
+	       lnet_nid_t *gateway, u32 *alive, u32 *priority)
 {
 	struct lnet_remotenet *rnet;
 	struct lnet_route *route;
@@ -836,7 +836,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 }
 
 void
-lnet_router_ni_update_locked(struct lnet_peer_ni *gw, __u32 net)
+lnet_router_ni_update_locked(struct lnet_peer_ni *gw, u32 net)
 {
 	struct lnet_route *rte;
 
@@ -1274,7 +1274,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 	struct lnet_peer_ni *rtr;
 
 	while (the_lnet.ln_rc_state == LNET_RC_STATE_RUNNING) {
-		__u64 version;
+		u64 version;
 		int cpt;
 		int cpt2;
 
diff --git a/drivers/staging/lustre/lnet/lnet/router_proc.c b/drivers/staging/lustre/lnet/lnet/router_proc.c
index 236b5a1..e8cc70f 100644
--- a/drivers/staging/lustre/lnet/lnet/router_proc.c
+++ b/drivers/staging/lustre/lnet/lnet/router_proc.c
@@ -211,8 +211,8 @@ static int proc_lnet_routes(struct ctl_table *table, int write,
 		}
 
 		if (route) {
-			__u32 net = rnet->lrn_net;
-			__u32 hops = route->lr_hops;
+			u32 net = rnet->lrn_net;
+			u32 hops = route->lr_hops;
 			unsigned int priority = route->lr_priority;
 			lnet_nid_t nid = route->lr_gateway->lpni_nid;
 			int alive = lnet_is_route_alive(route);
-- 
1.8.3.1


* [lustre-devel] [PATCH 02/26] lnet: use kernel types for lnet klnd kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 01/26] lnet: use kernel types for lnet core kernel code James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 03/26] lnet: use kernel types for lnet selftest " James Simmons
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The LNet klnd drivers were originally both a userland and a kernel
implementation. The source still contains many types of the form
__u32, but since this is mostly kernel code, change the types to
the kernel internal types.
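
For illustration only (this sketch is not part of the patch and uses
hypothetical struct names): the convention being applied is that
UAPI-visible structures keep the __uXX fixed-width types, while
kernel-only code uses the uXX typedefs from <linux/types.h>. Both
name the same underlying types, so the substitution changes no
structure layout or wire format.

#include <linux/types.h>

/* UAPI-style declaration: visible to userspace, so it keeps __uXX */
struct example_wire_hdr {
	__u32	magic;		/* protocol magic */
	__u16	version;	/* protocol version */
	__u16	flags;		/* feature flags */
	__u64	nid;		/* node identifier */
};

/* Kernel-only equivalent: identical layout, kernel uXX typedefs */
struct example_kernel_hdr {
	u32	magic;
	u16	version;
	u16	flags;
	u64	nid;
};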

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c    |  30 ++---
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    | 126 ++++++++++-----------
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c |  22 ++--
 .../staging/lustre/lnet/klnds/socklnd/socklnd.c    |  54 ++++-----
 .../staging/lustre/lnet/klnds/socklnd/socklnd.h    |  44 +++----
 .../staging/lustre/lnet/klnds/socklnd/socklnd_cb.c |   8 +-
 .../lustre/lnet/klnds/socklnd/socklnd_lib.c        |   4 +-
 .../lustre/lnet/klnds/socklnd/socklnd_proto.c      |  24 ++--
 8 files changed, 156 insertions(+), 156 deletions(-)

diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
index bd7ff7d..1a6bc45 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
@@ -44,10 +44,10 @@
 
 struct kib_data kiblnd_data;
 
-static __u32 kiblnd_cksum(void *ptr, int nob)
+static u32 kiblnd_cksum(void *ptr, int nob)
 {
 	char *c = ptr;
-	__u32 sum = 0;
+	u32 sum = 0;
 
 	while (nob-- > 0)
 		sum = ((sum << 1) | (sum >> 31)) + *c++;
@@ -175,7 +175,7 @@ static int kiblnd_unpack_rd(struct kib_msg *msg, int flip)
 }
 
 void kiblnd_pack_msg(struct lnet_ni *ni, struct kib_msg *msg, int version,
-		     int credits, lnet_nid_t dstnid, __u64 dststamp)
+		     int credits, lnet_nid_t dstnid, u64 dststamp)
 {
 	struct kib_net *net = ni->ni_data;
 
@@ -203,8 +203,8 @@ void kiblnd_pack_msg(struct lnet_ni *ni, struct kib_msg *msg, int version,
 int kiblnd_unpack_msg(struct kib_msg *msg, int nob)
 {
 	const int hdr_size = offsetof(struct kib_msg, ibm_u);
-	__u32 msg_cksum;
-	__u16 version;
+	u32 msg_cksum;
+	u16 version;
 	int msg_nob;
 	int flip;
 
@@ -994,7 +994,7 @@ int kiblnd_close_peer_conns_locked(struct kib_peer_ni *peer_ni, int why)
 }
 
 int kiblnd_close_stale_conns_locked(struct kib_peer_ni *peer_ni,
-				    int version, __u64 incarnation)
+				    int version, u64 incarnation)
 {
 	struct kib_conn *conn;
 	struct list_head *ctmp;
@@ -1240,7 +1240,7 @@ void kiblnd_map_rx_descs(struct kib_conn *conn)
 
 		CDEBUG(D_NET, "rx %d: %p %#llx(%#llx)\n",
 		       i, rx->rx_msg, rx->rx_msgaddr,
-		       (__u64)(page_to_phys(pg) + pg_off));
+		       (u64)(page_to_phys(pg) + pg_off));
 
 		pg_off += IBLND_MSG_SIZE;
 		LASSERT(pg_off <= PAGE_SIZE);
@@ -1610,7 +1610,7 @@ static int kiblnd_fmr_pool_is_idle(struct kib_fmr_pool *fpo, time64_t now)
 static int
 kiblnd_map_tx_pages(struct kib_tx *tx, struct kib_rdma_desc *rd)
 {
-	__u64 *pages = tx->tx_pages;
+	u64 *pages = tx->tx_pages;
 	struct kib_hca_dev *hdev;
 	int npages;
 	int size;
@@ -1685,15 +1685,15 @@ void kiblnd_fmr_pool_unmap(struct kib_fmr *fmr, int status)
 }
 
 int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
-			struct kib_rdma_desc *rd, __u32 nob, __u64 iov,
+			struct kib_rdma_desc *rd, u32 nob, u64 iov,
 			struct kib_fmr *fmr)
 {
-	__u64 *pages = tx->tx_pages;
+	u64 *pages = tx->tx_pages;
 	bool is_rx = (rd != tx->tx_rd);
 	bool tx_pages_mapped = false;
 	struct kib_fmr_pool *fpo;
 	int npages = 0;
-	__u64 version;
+	u64 version;
 	int rc;
 
  again:
@@ -1740,7 +1740,7 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
 				mr = frd->frd_mr;
 
 				if (!frd->frd_valid) {
-					__u32 key = is_rx ? mr->rkey : mr->lkey;
+					u32 key = is_rx ? mr->rkey : mr->lkey;
 					struct ib_send_wr *inv_wr;
 
 					inv_wr = &frd->frd_inv_wr;
@@ -2204,7 +2204,7 @@ static void kiblnd_net_fini_pools(struct kib_net *net)
 }
 
 static int kiblnd_net_init_pools(struct kib_net *net, struct lnet_ni *ni,
-				 __u32 *cpts, int ncpts)
+				 u32 *cpts, int ncpts)
 {
 	struct lnet_ioctl_config_o2iblnd_tunables *tunables;
 	int cpt;
@@ -2303,7 +2303,7 @@ static int kiblnd_hdev_get_attr(struct kib_hca_dev *hdev)
 	 */
 	hdev->ibh_page_shift = PAGE_SHIFT;
 	hdev->ibh_page_size  = 1 << PAGE_SHIFT;
-	hdev->ibh_page_mask  = ~((__u64)hdev->ibh_page_size - 1);
+	hdev->ibh_page_mask  = ~((u64)hdev->ibh_page_size - 1);
 
 	if (hdev->ibh_ibdev->ops.alloc_fmr &&
 	    hdev->ibh_ibdev->ops.dealloc_fmr &&
@@ -2878,7 +2878,7 @@ static int kiblnd_start_schedulers(struct kib_sched_info *sched)
 	return rc;
 }
 
-static int kiblnd_dev_start_threads(struct kib_dev *dev, int newdev, __u32 *cpts,
+static int kiblnd_dev_start_threads(struct kib_dev *dev, int newdev, u32 *cpts,
 				    int ncpts)
 {
 	int cpt;
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
index 2ddd83b..423bae7 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
@@ -157,7 +157,7 @@ enum kib_dev_caps {
 struct kib_dev {
 	struct list_head   ibd_list;            /* chain on kib_devs */
 	struct list_head   ibd_fail_list;       /* chain on kib_failed_devs */
-	__u32              ibd_ifip;            /* IPoIB interface IP */
+	u32              ibd_ifip;            /* IPoIB interface IP */
 
 	/* IPoIB interface name */
 	char               ibd_ifname[KIB_IFNAME_SIZE];
@@ -177,9 +177,9 @@ struct kib_hca_dev {
 	struct ib_device   *ibh_ibdev;          /* IB device */
 	int                ibh_page_shift;      /* page shift of current HCA */
 	int                ibh_page_size;       /* page size of current HCA */
-	__u64              ibh_page_mask;       /* page mask of current HCA */
+	u64              ibh_page_mask;       /* page mask of current HCA */
 	int                ibh_mr_shift;        /* bits shift of max MR size */
-	__u64              ibh_mr_size;         /* size of MR */
+	u64              ibh_mr_size;         /* size of MR */
 	struct ib_pd       *ibh_pd;             /* PD */
 	struct kib_dev	   *ibh_dev;		/* owner */
 	atomic_t           ibh_ref;             /* refcount */
@@ -238,7 +238,7 @@ struct kib_pool {
 
 struct kib_tx_poolset {
 	struct kib_poolset	tps_poolset;		/* pool-set */
-	__u64                 tps_next_tx_cookie; /* cookie of TX */
+	u64                 tps_next_tx_cookie; /* cookie of TX */
 };
 
 struct kib_tx_pool {
@@ -253,7 +253,7 @@ struct kib_fmr_poolset {
 	struct kib_net        *fps_net;            /* IB network */
 	struct list_head      fps_pool_list;       /* FMR pool list */
 	struct list_head      fps_failed_pool_list;/* FMR pool list */
-	__u64                 fps_version;         /* validity stamp */
+	u64                 fps_version;         /* validity stamp */
 	int                   fps_cpt;             /* CPT id */
 	int                   fps_pool_size;
 	int                   fps_flush_trigger;
@@ -299,7 +299,7 @@ struct kib_fmr {
 
 struct kib_net {
 	struct list_head      ibn_list;       /* chain on struct kib_dev::ibd_nets */
-	__u64                 ibn_incarnation;/* my epoch */
+	u64                 ibn_incarnation;/* my epoch */
 	int                   ibn_init;       /* initialisation state */
 	int                   ibn_shutdown;   /* shutting down? */
 
@@ -365,9 +365,9 @@ struct kib_data {
  */
 
 struct kib_connparams {
-	__u16        ibcp_queue_depth;
-	__u16        ibcp_max_frags;
-	__u32        ibcp_max_msg_size;
+	u16        ibcp_queue_depth;
+	u16        ibcp_max_frags;
+	u32        ibcp_max_msg_size;
 } __packed;
 
 struct kib_immediate_msg {
@@ -376,51 +376,51 @@ struct kib_immediate_msg {
 } __packed;
 
 struct kib_rdma_frag {
-	__u32        rf_nob;          /* # bytes this frag */
-	__u64        rf_addr;         /* CAVEAT EMPTOR: misaligned!! */
+	u32        rf_nob;          /* # bytes this frag */
+	u64        rf_addr;         /* CAVEAT EMPTOR: misaligned!! */
 } __packed;
 
 struct kib_rdma_desc {
-	__u32           rd_key;       /* local/remote key */
-	__u32           rd_nfrags;    /* # fragments */
+	u32           rd_key;       /* local/remote key */
+	u32           rd_nfrags;    /* # fragments */
 	struct kib_rdma_frag	rd_frags[0];	/* buffer frags */
 } __packed;
 
 struct kib_putreq_msg {
 	struct lnet_hdr	ibprm_hdr;    /* portals header */
-	__u64           ibprm_cookie; /* opaque completion cookie */
+	u64           ibprm_cookie; /* opaque completion cookie */
 } __packed;
 
 struct kib_putack_msg {
-	__u64           ibpam_src_cookie; /* reflected completion cookie */
-	__u64           ibpam_dst_cookie; /* opaque completion cookie */
+	u64           ibpam_src_cookie; /* reflected completion cookie */
+	u64           ibpam_dst_cookie; /* opaque completion cookie */
 	struct kib_rdma_desc ibpam_rd;         /* sender's sink buffer */
 } __packed;
 
 struct kib_get_msg {
 	struct lnet_hdr ibgm_hdr;     /* portals header */
-	__u64           ibgm_cookie;  /* opaque completion cookie */
+	u64           ibgm_cookie;  /* opaque completion cookie */
 	struct kib_rdma_desc ibgm_rd;      /* rdma descriptor */
 } __packed;
 
 struct kib_completion_msg {
-	__u64           ibcm_cookie;  /* opaque completion cookie */
-	__s32           ibcm_status;  /* < 0 failure: >= 0 length */
+	u64           ibcm_cookie;  /* opaque completion cookie */
+	s32           ibcm_status;  /* < 0 failure: >= 0 length */
 } __packed;
 
 struct kib_msg {
 	/* First 2 fields fixed FOR ALL TIME */
-	__u32           ibm_magic;    /* I'm an ibnal message */
-	__u16           ibm_version;  /* this is my version number */
-
-	__u8            ibm_type;     /* msg type */
-	__u8            ibm_credits;  /* returned credits */
-	__u32           ibm_nob;      /* # bytes in whole message */
-	__u32           ibm_cksum;    /* checksum (0 == no checksum) */
-	__u64           ibm_srcnid;   /* sender's NID */
-	__u64           ibm_srcstamp; /* sender's incarnation */
-	__u64           ibm_dstnid;   /* destination's NID */
-	__u64           ibm_dststamp; /* destination's incarnation */
+	u32           ibm_magic;    /* I'm an ibnal message */
+	u16           ibm_version;  /* this is my version number */
+
+	u8            ibm_type;     /* msg type */
+	u8            ibm_credits;  /* returned credits */
+	u32           ibm_nob;      /* # bytes in whole message */
+	u32           ibm_cksum;    /* checksum (0 == no checksum) */
+	u64           ibm_srcnid;   /* sender's NID */
+	u64           ibm_srcstamp; /* sender's incarnation */
+	u64           ibm_dstnid;   /* destination's NID */
+	u64           ibm_dststamp; /* destination's incarnation */
 
 	union {
 		struct kib_connparams		connparams;
@@ -450,11 +450,11 @@ struct kib_msg {
 #define IBLND_MSG_GET_DONE  0xd7	/* completion (src->sink: all OK) */
 
 struct kib_rej {
-	__u32            ibr_magic;       /* sender's magic */
-	__u16            ibr_version;     /* sender's version */
-	__u8             ibr_why;         /* reject reason */
-	__u8             ibr_padding;     /* padding */
-	__u64            ibr_incarnation; /* incarnation of peer_ni */
+	u32            ibr_magic;       /* sender's magic */
+	u16            ibr_version;     /* sender's version */
+	u8             ibr_why;         /* reject reason */
+	u8             ibr_padding;     /* padding */
+	u64            ibr_incarnation; /* incarnation of peer_ni */
 	struct kib_connparams ibr_cp;          /* connection parameters */
 } __packed;
 
@@ -478,7 +478,7 @@ struct kib_rx {					/* receive message */
 	int                    rx_nob; /* # bytes received (-1 while posted) */
 	enum ib_wc_status      rx_status;     /* completion status */
 	struct kib_msg		*rx_msg;	/* message buffer (host vaddr) */
-	__u64                  rx_msgaddr;    /* message buffer (I/O addr) */
+	u64                  rx_msgaddr;    /* message buffer (I/O addr) */
 	DEFINE_DMA_UNMAP_ADDR(rx_msgunmap);  /* for dma_unmap_single() */
 	struct ib_recv_wr      rx_wrq;        /* receive work item... */
 	struct ib_sge          rx_sge;        /* ...and its memory */
@@ -498,10 +498,10 @@ struct kib_tx {					/* transmit message */
 	short                 tx_waiting;     /* waiting for peer_ni */
 	int                   tx_status;      /* LNET completion status */
 	ktime_t			tx_deadline;	/* completion deadline */
-	__u64                 tx_cookie;      /* completion cookie */
+	u64                 tx_cookie;      /* completion cookie */
 	struct lnet_msg		*tx_lntmsg[2];	/* lnet msgs to finalize on completion */
 	struct kib_msg	      *tx_msg;        /* message buffer (host vaddr) */
-	__u64                 tx_msgaddr;     /* message buffer (I/O addr) */
+	u64                 tx_msgaddr;     /* message buffer (I/O addr) */
 	DEFINE_DMA_UNMAP_ADDR(tx_msgunmap);  /* for dma_unmap_single() */
 	/** sge for tx_msgaddr */
 	struct ib_sge		tx_msgsge;
@@ -513,7 +513,7 @@ struct kib_tx {					/* transmit message */
 	struct kib_rdma_desc  *tx_rd;         /* rdma descriptor */
 	int                   tx_nfrags;      /* # entries in... */
 	struct scatterlist    *tx_frags;      /* dma_map_sg descriptor */
-	__u64                 *tx_pages;      /* rdma phys page addrs */
+	u64                 *tx_pages;      /* rdma phys page addrs */
 	/* gaps in fragments */
 	bool			tx_gaps;
 	struct kib_fmr		tx_fmr;		/* FMR */
@@ -530,10 +530,10 @@ struct kib_conn {
 	struct kib_hca_dev         *ibc_hdev;       /* HCA bound on */
 	struct list_head ibc_list;            /* stash on peer_ni's conn list */
 	struct list_head      ibc_sched_list;  /* schedule for attention */
-	__u16                 ibc_version;     /* version of connection */
+	u16                 ibc_version;     /* version of connection */
 	/* reconnect later */
-	__u16			ibc_reconnect:1;
-	__u64                 ibc_incarnation; /* which instance of the peer_ni */
+	u16			ibc_reconnect:1;
+	u64                 ibc_incarnation; /* which instance of the peer_ni */
 	atomic_t              ibc_refcount;    /* # users */
 	int                   ibc_state;       /* what's happening */
 	int                   ibc_nsends_posted; /* # uncompleted sends */
@@ -543,9 +543,9 @@ struct kib_conn {
 	int                   ibc_reserved_credits; /* # ACK/DONE msg credits */
 	int                   ibc_comms_error; /* set on comms error */
 	/* connections queue depth */
-	__u16		      ibc_queue_depth;
+	u16		      ibc_queue_depth;
 	/* connections max frags */
-	__u16		      ibc_max_frags;
+	u16		      ibc_max_frags;
 	unsigned int          ibc_nrx:16;      /* receive buffers owned */
 	unsigned int          ibc_scheduled:1; /* scheduled for attention */
 	unsigned int          ibc_ready:1;     /* CQ callback fired */
@@ -586,13 +586,13 @@ struct kib_peer_ni {
 	struct kib_conn	*ibp_next_conn;  /* next connection to send on for
 					  * round robin */
 	struct list_head ibp_tx_queue;    /* msgs waiting for a conn */
-	__u64            ibp_incarnation; /* incarnation of peer_ni */
+	u64            ibp_incarnation; /* incarnation of peer_ni */
 	/* when (in seconds) I was last alive */
 	time64_t		ibp_last_alive;
 	/* # users */
 	atomic_t		ibp_refcount;
 	/* version of peer_ni */
-	__u16			ibp_version;
+	u16			ibp_version;
 	/* current passive connection attempts */
 	unsigned short		ibp_accepting;
 	/* current active connection attempts */
@@ -606,9 +606,9 @@ struct kib_peer_ni {
 	/* errno on closing this peer_ni */
 	int              ibp_error;
 	/* max map_on_demand */
-	__u16		 ibp_max_frags;
+	u16		 ibp_max_frags;
 	/* max_peer_credits */
-	__u16		 ibp_queue_depth;
+	u16		 ibp_queue_depth;
 };
 
 extern struct kib_data kiblnd_data;
@@ -819,24 +819,24 @@ struct kib_peer_ni {
 #define IBLND_WID_MR	4
 #define IBLND_WID_MASK	7UL
 
-static inline __u64
+static inline u64
 kiblnd_ptr2wreqid(void *ptr, int type)
 {
 	unsigned long lptr = (unsigned long)ptr;
 
 	LASSERT(!(lptr & IBLND_WID_MASK));
 	LASSERT(!(type & ~IBLND_WID_MASK));
-	return (__u64)(lptr | type);
+	return (u64)(lptr | type);
 }
 
 static inline void *
-kiblnd_wreqid2ptr(__u64 wreqid)
+kiblnd_wreqid2ptr(u64 wreqid)
 {
 	return (void *)(((unsigned long)wreqid) & ~IBLND_WID_MASK);
 }
 
 static inline int
-kiblnd_wreqid2type(__u64 wreqid)
+kiblnd_wreqid2type(u64 wreqid)
 {
 	return wreqid & IBLND_WID_MASK;
 }
@@ -867,26 +867,26 @@ struct kib_peer_ni {
 	return size;
 }
 
-static inline __u64
+static inline u64
 kiblnd_rd_frag_addr(struct kib_rdma_desc *rd, int index)
 {
 	return rd->rd_frags[index].rf_addr;
 }
 
-static inline __u32
+static inline u32
 kiblnd_rd_frag_size(struct kib_rdma_desc *rd, int index)
 {
 	return rd->rd_frags[index].rf_nob;
 }
 
-static inline __u32
+static inline u32
 kiblnd_rd_frag_key(struct kib_rdma_desc *rd, int index)
 {
 	return rd->rd_key;
 }
 
 static inline int
-kiblnd_rd_consume_frag(struct kib_rdma_desc *rd, int index, __u32 nob)
+kiblnd_rd_consume_frag(struct kib_rdma_desc *rd, int index, u32 nob)
 {
 	if (nob < rd->rd_frags[index].rf_nob) {
 		rd->rd_frags[index].rf_addr += nob;
@@ -909,13 +909,13 @@ struct kib_peer_ni {
 	       offsetof(struct kib_putack_msg, ibpam_rd.rd_frags[n]);
 }
 
-static inline __u64
+static inline u64
 kiblnd_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
 	return ib_dma_mapping_error(dev, dma_addr);
 }
 
-static inline __u64 kiblnd_dma_map_single(struct ib_device *dev,
+static inline u64 kiblnd_dma_map_single(struct ib_device *dev,
 					  void *msg, size_t size,
 					  enum dma_data_direction direction)
 {
@@ -923,7 +923,7 @@ static inline __u64 kiblnd_dma_map_single(struct ib_device *dev,
 }
 
 static inline void kiblnd_dma_unmap_single(struct ib_device *dev,
-					   __u64 addr, size_t size,
+					   u64 addr, size_t size,
 					  enum dma_data_direction direction)
 {
 	ib_dma_unmap_single(dev, addr, size, direction);
@@ -946,7 +946,7 @@ static inline void kiblnd_dma_unmap_sg(struct ib_device *dev,
 	ib_dma_unmap_sg(dev, sg, nents, direction);
 }
 
-static inline __u64 kiblnd_sg_dma_address(struct ib_device *dev,
+static inline u64 kiblnd_sg_dma_address(struct ib_device *dev,
 					  struct scatterlist *sg)
 {
 	return ib_sg_dma_address(dev, sg);
@@ -971,7 +971,7 @@ static inline unsigned int kiblnd_sg_dma_len(struct ib_device *dev,
 struct list_head *kiblnd_pool_alloc_node(struct kib_poolset *ps);
 
 int  kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
-			 struct kib_rdma_desc *rd, __u32 nob, __u64 iov,
+			 struct kib_rdma_desc *rd, u32 nob, u64 iov,
 			 struct kib_fmr *fmr);
 void kiblnd_fmr_pool_unmap(struct kib_fmr *fmr, int status);
 
@@ -998,7 +998,7 @@ int kiblnd_create_peer(struct lnet_ni *ni, struct kib_peer_ni **peerp,
 void kiblnd_unlink_peer_locked(struct kib_peer_ni *peer_ni);
 struct kib_peer_ni *kiblnd_find_peer_locked(struct lnet_ni *ni, lnet_nid_t nid);
 int  kiblnd_close_stale_conns_locked(struct kib_peer_ni *peer_ni,
-				     int version, __u64 incarnation);
+				     int version, u64 incarnation);
 int  kiblnd_close_peer_conns_locked(struct kib_peer_ni *peer_ni, int why);
 
 struct kib_conn *kiblnd_create_conn(struct kib_peer_ni *peer_ni,
@@ -1016,7 +1016,7 @@ struct kib_conn *kiblnd_create_conn(struct kib_peer_ni *peer_ni,
 void kiblnd_cq_completion(struct ib_cq *cq, void *arg);
 
 void kiblnd_pack_msg(struct lnet_ni *ni, struct kib_msg *msg, int version,
-		     int credits, lnet_nid_t dstnid, __u64 dststamp);
+		     int credits, lnet_nid_t dstnid, u64 dststamp);
 int  kiblnd_unpack_msg(struct kib_msg *msg, int nob);
 int  kiblnd_post_rx(struct kib_rx *rx, int credit);
 
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
index 57fe037..48f2814 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
@@ -47,7 +47,7 @@ static void kiblnd_init_tx_msg(struct lnet_ni *ni, struct kib_tx *tx,
 			       int type, int body_nob);
 static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 			    int resid, struct kib_rdma_desc *dstrd,
-			    __u64 dstcookie);
+			    u64 dstcookie);
 static void kiblnd_queue_tx_locked(struct kib_tx *tx, struct kib_conn *conn);
 static void kiblnd_queue_tx(struct kib_tx *tx, struct kib_conn *conn);
 static void kiblnd_unmap_tx(struct kib_tx *tx);
@@ -223,7 +223,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 }
 
 static struct kib_tx *
-kiblnd_find_waiting_tx_locked(struct kib_conn *conn, int txtype, __u64 cookie)
+kiblnd_find_waiting_tx_locked(struct kib_conn *conn, int txtype, u64 cookie)
 {
 	struct kib_tx *tx;
 
@@ -246,7 +246,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 }
 
 static void
-kiblnd_handle_completion(struct kib_conn *conn, int txtype, int status, __u64 cookie)
+kiblnd_handle_completion(struct kib_conn *conn, int txtype, int status, u64 cookie)
 {
 	struct kib_tx *tx;
 	struct lnet_ni *ni = conn->ibc_peer->ibp_ni;
@@ -284,7 +284,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 }
 
 static void
-kiblnd_send_completion(struct kib_conn *conn, int type, int status, __u64 cookie)
+kiblnd_send_completion(struct kib_conn *conn, int type, int status, u64 cookie)
 {
 	struct lnet_ni *ni = conn->ibc_peer->ibp_ni;
 	struct kib_tx *tx = kiblnd_get_idle_tx(ni, conn->ibc_peer->ibp_nid);
@@ -536,7 +536,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 }
 
 static int
-kiblnd_fmr_map_tx(struct kib_net *net, struct kib_tx *tx, struct kib_rdma_desc *rd, __u32 nob)
+kiblnd_fmr_map_tx(struct kib_net *net, struct kib_tx *tx, struct kib_rdma_desc *rd, u32 nob)
 {
 	struct kib_hca_dev *hdev;
 	struct kib_fmr_poolset *fps;
@@ -637,7 +637,7 @@ static int kiblnd_map_tx(struct lnet_ni *ni, struct kib_tx *tx,
 {
 	struct kib_net *net = ni->ni_data;
 	struct kib_hca_dev *hdev = net->ibn_dev->ibd_hdev;
-	__u32 nob;
+	u32 nob;
 	int i;
 
 	/*
@@ -1086,7 +1086,7 @@ static int kiblnd_map_tx(struct lnet_ni *ni, struct kib_tx *tx,
 
 static int
 kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
-		 int resid, struct kib_rdma_desc *dstrd, __u64 dstcookie)
+		 int resid, struct kib_rdma_desc *dstrd, u64 dstcookie)
 {
 	struct kib_msg *ibmsg = tx->tx_msg;
 	struct kib_rdma_desc *srcrd = tx->tx_rd;
@@ -2296,7 +2296,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	peer_addr = (struct sockaddr_in *)&cmid->route.addr.dst_addr;
 	if (*kiblnd_tunables.kib_require_priv_port &&
 	    ntohs(peer_addr->sin_port) >= PROT_SOCK) {
-		__u32 ip = ntohl(peer_addr->sin_addr.s_addr);
+		u32 ip = ntohl(peer_addr->sin_addr.s_addr);
 
 		CERROR("peer_ni's port (%pI4h:%hu) is not privileged\n",
 		       &ip, ntohs(peer_addr->sin_port));
@@ -2589,7 +2589,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 
 static void
 kiblnd_check_reconnect(struct kib_conn *conn, int version,
-		       __u64 incarnation, int why, struct kib_connparams *cp)
+		       u64 incarnation, int why, struct kib_connparams *cp)
 {
 	rwlock_t *glock = &kiblnd_data.kib_global_lock;
 	struct kib_peer_ni *peer_ni = conn->ibc_peer;
@@ -2734,7 +2734,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 			struct kib_rej *rej = priv;
 			struct kib_connparams *cp = NULL;
 			int flip = 0;
-			__u64 incarnation = -1;
+			u64 incarnation = -1;
 
 			/* NB. default incarnation is -1 because:
 			 * a) V1 will ignore dst incarnation in connreq.
@@ -2947,7 +2947,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	struct kib_msg *msg;
 	struct rdma_conn_param cp;
 	int version;
-	__u64 incarnation;
+	u64 incarnation;
 	unsigned long flags;
 	int rc;
 
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
index ff8d732..f048f0a 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
@@ -46,7 +46,7 @@
 struct ksock_nal_data ksocknal_data;
 
 static struct ksock_interface *
-ksocknal_ip2iface(struct lnet_ni *ni, __u32 ip)
+ksocknal_ip2iface(struct lnet_ni *ni, u32 ip)
 {
 	struct ksock_net *net = ni->ni_data;
 	int i;
@@ -64,7 +64,7 @@
 }
 
 static struct ksock_route *
-ksocknal_create_route(__u32 ipaddr, int port)
+ksocknal_create_route(u32 ipaddr, int port)
 {
 	struct ksock_route *route;
 
@@ -217,7 +217,7 @@ struct ksock_peer *
 ksocknal_unlink_peer_locked(struct ksock_peer *peer_ni)
 {
 	int i;
-	__u32 ip;
+	u32 ip;
 	struct ksock_interface *iface;
 
 	for (i = 0; i < peer_ni->ksnp_n_passive_ips; i++) {
@@ -247,7 +247,7 @@ struct ksock_peer *
 
 static int
 ksocknal_get_peer_info(struct lnet_ni *ni, int index,
-		       struct lnet_process_id *id, __u32 *myip, __u32 *peer_ip,
+		       struct lnet_process_id *id, u32 *myip, u32 *peer_ip,
 		       int *port, int *conn_count, int *share_count)
 {
 	struct ksock_peer *peer_ni;
@@ -440,7 +440,7 @@ struct ksock_peer *
 }
 
 int
-ksocknal_add_peer(struct lnet_ni *ni, struct lnet_process_id id, __u32 ipaddr,
+ksocknal_add_peer(struct lnet_ni *ni, struct lnet_process_id id, u32 ipaddr,
 		  int port)
 {
 	struct ksock_peer *peer_ni;
@@ -497,7 +497,7 @@ struct ksock_peer *
 }
 
 static void
-ksocknal_del_peer_locked(struct ksock_peer *peer_ni, __u32 ip)
+ksocknal_del_peer_locked(struct ksock_peer *peer_ni, u32 ip)
 {
 	struct ksock_conn *conn;
 	struct ksock_route *route;
@@ -553,7 +553,7 @@ struct ksock_peer *
 }
 
 static int
-ksocknal_del_peer(struct lnet_ni *ni, struct lnet_process_id id, __u32 ip)
+ksocknal_del_peer(struct lnet_ni *ni, struct lnet_process_id id, u32 ip)
 {
 	LIST_HEAD(zombies);
 	struct ksock_peer *pnxt;
@@ -680,7 +680,7 @@ struct ksock_peer *
 }
 
 static int
-ksocknal_local_ipvec(struct lnet_ni *ni, __u32 *ipaddrs)
+ksocknal_local_ipvec(struct lnet_ni *ni, u32 *ipaddrs)
 {
 	struct ksock_net *net = ni->ni_data;
 	int i;
@@ -710,7 +710,7 @@ struct ksock_peer *
 }
 
 static int
-ksocknal_match_peerip(struct ksock_interface *iface, __u32 *ips, int nips)
+ksocknal_match_peerip(struct ksock_interface *iface, u32 *ips, int nips)
 {
 	int best_netmatch = 0;
 	int best_xor      = 0;
@@ -742,7 +742,7 @@ struct ksock_peer *
 }
 
 static int
-ksocknal_select_ips(struct ksock_peer *peer_ni, __u32 *peerips, int n_peerips)
+ksocknal_select_ips(struct ksock_peer *peer_ni, u32 *peerips, int n_peerips)
 {
 	rwlock_t *global_lock = &ksocknal_data.ksnd_global_lock;
 	struct ksock_net *net = peer_ni->ksnp_ni->ni_data;
@@ -752,8 +752,8 @@ struct ksock_peer *
 	int i;
 	int j;
 	int k;
-	__u32 ip;
-	__u32 xor;
+	u32 ip;
+	u32 xor;
 	int this_netmatch;
 	int best_netmatch;
 	int best_npeers;
@@ -858,7 +858,7 @@ struct ksock_peer *
 
 static void
 ksocknal_create_routes(struct ksock_peer *peer_ni, int port,
-		       __u32 *peer_ipaddrs, int npeer_ipaddrs)
+		       u32 *peer_ipaddrs, int npeer_ipaddrs)
 {
 	struct ksock_route *newroute = NULL;
 	rwlock_t *global_lock = &ksocknal_data.ksnd_global_lock;
@@ -968,7 +968,7 @@ struct ksock_peer *
 {
 	struct ksock_connreq *cr;
 	int rc;
-	__u32 peer_ip;
+	u32 peer_ip;
 	int peer_port;
 
 	rc = lnet_sock_getaddr(sock, 1, &peer_ip, &peer_port);
@@ -995,7 +995,7 @@ struct ksock_peer *
 }
 
 static int
-ksocknal_connecting(struct ksock_peer *peer_ni, __u32 ipaddr)
+ksocknal_connecting(struct ksock_peer *peer_ni, u32 ipaddr)
 {
 	struct ksock_route *route;
 
@@ -1013,7 +1013,7 @@ struct ksock_peer *
 	rwlock_t *global_lock = &ksocknal_data.ksnd_global_lock;
 	LIST_HEAD(zombies);
 	struct lnet_process_id peerid;
-	__u64 incarnation;
+	u64 incarnation;
 	struct ksock_conn *conn;
 	struct ksock_conn *conn2;
 	struct ksock_peer *peer_ni = NULL;
@@ -1714,7 +1714,7 @@ struct ksock_peer *
 
 int
 ksocknal_close_peer_conns_locked(struct ksock_peer *peer_ni,
-				 __u32 ipaddr, int why)
+				 u32 ipaddr, int why)
 {
 	struct ksock_conn *conn;
 	struct list_head *ctmp;
@@ -1737,7 +1737,7 @@ struct ksock_peer *
 ksocknal_close_conn_and_siblings(struct ksock_conn *conn, int why)
 {
 	struct ksock_peer *peer_ni = conn->ksnc_peer;
-	__u32 ipaddr = conn->ksnc_ipaddr;
+	u32 ipaddr = conn->ksnc_ipaddr;
 	int count;
 
 	write_lock_bh(&ksocknal_data.ksnd_global_lock);
@@ -1750,7 +1750,7 @@ struct ksock_peer *
 }
 
 int
-ksocknal_close_matching_conns(struct lnet_process_id id, __u32 ipaddr)
+ksocknal_close_matching_conns(struct lnet_process_id id, u32 ipaddr)
 {
 	struct ksock_peer *peer_ni;
 	struct ksock_peer *pnxt;
@@ -1964,7 +1964,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 }
 
 static int
-ksocknal_add_interface(struct lnet_ni *ni, __u32 ipaddress, __u32 netmask)
+ksocknal_add_interface(struct lnet_ni *ni, u32 ipaddress, u32 netmask)
 {
 	struct ksock_net *net = ni->ni_data;
 	struct ksock_interface *iface;
@@ -2027,7 +2027,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 }
 
 static void
-ksocknal_peer_del_interface_locked(struct ksock_peer *peer_ni, __u32 ipaddr)
+ksocknal_peer_del_interface_locked(struct ksock_peer *peer_ni, u32 ipaddr)
 {
 	struct list_head *tmp;
 	struct list_head *nxt;
@@ -2068,13 +2068,13 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 }
 
 static int
-ksocknal_del_interface(struct lnet_ni *ni, __u32 ipaddress)
+ksocknal_del_interface(struct lnet_ni *ni, u32 ipaddress)
 {
 	struct ksock_net *net = ni->ni_data;
 	int rc = -ENOENT;
 	struct ksock_peer *nxt;
 	struct ksock_peer *peer_ni;
-	__u32 this_ip;
+	u32 this_ip;
 	int i;
 	int j;
 
@@ -2126,7 +2126,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 
 		read_lock(&ksocknal_data.ksnd_global_lock);
 
-		if (data->ioc_count >= (__u32)net->ksnn_ninterfaces) {
+		if (data->ioc_count >= (u32)net->ksnn_ninterfaces) {
 			rc = -ENOENT;
 		} else {
 			rc = 0;
@@ -2152,8 +2152,8 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 					      data->ioc_u32[0]); /* IP address */
 
 	case IOC_LIBCFS_GET_PEER: {
-		__u32 myip = 0;
-		__u32 ip = 0;
+		u32 myip = 0;
+		u32 ip = 0;
 		int port = 0;
 		int conn_count = 0;
 		int share_count = 0;
@@ -2742,7 +2742,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 }
 
 static int
-ksocknal_net_start_threads(struct ksock_net *net, __u32 *cpts, int ncpts)
+ksocknal_net_start_threads(struct ksock_net *net, u32 *cpts, int ncpts)
 {
 	int newif = ksocknal_search_new_ipif(net);
 	int rc;
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
index 297d1e5..a390381 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
@@ -101,8 +101,8 @@ struct ksock_sched_info {
 #define KSOCK_THREAD_SID(id)      ((id) & ((1UL << KSOCK_CPT_SHIFT) - 1))
 
 struct ksock_interface {			/* in-use interface */
-	__u32		ksni_ipaddr;		/* interface's IP address */
-	__u32		ksni_netmask;		/* interface's network mask */
+	u32		ksni_ipaddr;		/* interface's IP address */
+	u32		ksni_netmask;		/* interface's network mask */
 	int		ksni_nroutes;		/* # routes using (active) */
 	int		ksni_npeers;		/* # peers using (passive) */
 	char		ksni_name[IFNAMSIZ];	/* interface name */
@@ -167,7 +167,7 @@ struct ksock_tunables {
 };
 
 struct ksock_net {
-	__u64		  ksnn_incarnation;	/* my epoch */
+	u64		  ksnn_incarnation;	/* my epoch */
 	spinlock_t	  ksnn_lock;		/* serialise */
 	struct list_head	  ksnn_list;		/* chain on global list */
 	int		  ksnn_npeers;		/* # peers */
@@ -326,8 +326,8 @@ struct ksock_conn {
 	atomic_t           ksnc_sock_refcount;/* sock refcount */
 	struct ksock_sched *ksnc_scheduler;	/* who schedules this connection
 						 */
-	__u32              ksnc_myipaddr;     /* my IP */
-	__u32              ksnc_ipaddr;       /* peer_ni's IP */
+	u32              ksnc_myipaddr;     /* my IP */
+	u32              ksnc_ipaddr;       /* peer_ni's IP */
 	int                ksnc_port;         /* peer_ni's port */
 	signed int         ksnc_type:3;       /* type of connection, should be
 					       * signed value
@@ -344,14 +344,14 @@ struct ksock_conn {
 	time64_t	   ksnc_rx_deadline;  /* when (in secs) receive times
 					       * out
 					       */
-	__u8               ksnc_rx_started;   /* started receiving a message */
-	__u8               ksnc_rx_ready;     /* data ready to read */
-	__u8               ksnc_rx_scheduled; /* being progressed */
-	__u8               ksnc_rx_state;     /* what is being read */
+	u8               ksnc_rx_started;   /* started receiving a message */
+	u8               ksnc_rx_ready;     /* data ready to read */
+	u8               ksnc_rx_scheduled; /* being progressed */
+	u8               ksnc_rx_state;     /* what is being read */
 	int                ksnc_rx_nob_left;  /* # bytes to next hdr/body */
 	struct iov_iter    ksnc_rx_to;		/* copy destination */
 	struct kvec        ksnc_rx_iov_space[LNET_MAX_IOV]; /* space for frag descriptors */
-	__u32              ksnc_rx_csum;      /* partial checksum for incoming
+	u32              ksnc_rx_csum;      /* partial checksum for incoming
 					       * data
 					       */
 	void               *ksnc_cookie;      /* rx lnet_finalize passthru arg
@@ -391,8 +391,8 @@ struct ksock_route {
 						* can happen next
 						*/
 	time64_t	  ksnr_retry_interval; /* how long between retries */
-	__u32             ksnr_myipaddr;       /* my IP */
-	__u32             ksnr_ipaddr;         /* IP address to connect to */
+	u32             ksnr_myipaddr;       /* my IP */
+	u32             ksnr_ipaddr;         /* IP address to connect to */
 	int               ksnr_port;           /* port to connect to */
 	unsigned int      ksnr_scheduled:1;    /* scheduled for attention */
 	unsigned int      ksnr_connecting:1;   /* connection establishment in
@@ -422,8 +422,8 @@ struct ksock_peer {
 	int                ksnp_accepting;      /* # passive connections pending
 						 */
 	int                ksnp_error;          /* errno on closing last conn */
-	__u64              ksnp_zc_next_cookie; /* ZC completion cookie */
-	__u64              ksnp_incarnation;    /* latest known peer_ni
+	u64              ksnp_zc_next_cookie; /* ZC completion cookie */
+	u64              ksnp_incarnation;    /* latest known peer_ni
 						 * incarnation
 						 */
 	struct ksock_proto *ksnp_proto;         /* latest known peer_ni
@@ -479,13 +479,13 @@ struct ksock_proto {
 	struct ksock_tx *(*pro_queue_tx_msg)(struct ksock_conn *, struct ksock_tx *);
 
 	/* queue ZC ack on the connection */
-	int        (*pro_queue_tx_zcack)(struct ksock_conn *, struct ksock_tx *, __u64);
+	int        (*pro_queue_tx_zcack)(struct ksock_conn *, struct ksock_tx *, u64);
 
 	/* handle ZC request */
-	int        (*pro_handle_zcreq)(struct ksock_conn *, __u64, int);
+	int        (*pro_handle_zcreq)(struct ksock_conn *, u64, int);
 
 	/* handle ZC ACK */
-	int        (*pro_handle_zcack)(struct ksock_conn *, __u64, __u64);
+	int        (*pro_handle_zcack)(struct ksock_conn *, u64, u64);
 
 	/*
 	 * msg type matches the connection type:
@@ -634,7 +634,7 @@ int ksocknal_recv(struct lnet_ni *ni, void *private, struct lnet_msg *lntmsg,
 		  int delayed, struct iov_iter *to, unsigned int rlen);
 int ksocknal_accept(struct lnet_ni *ni, struct socket *sock);
 
-int ksocknal_add_peer(struct lnet_ni *ni, struct lnet_process_id id, __u32 ip,
+int ksocknal_add_peer(struct lnet_ni *ni, struct lnet_process_id id, u32 ip,
 		      int port);
 struct ksock_peer *ksocknal_find_peer_locked(struct lnet_ni *ni,
 					     struct lnet_process_id id);
@@ -647,9 +647,9 @@ int ksocknal_create_conn(struct lnet_ni *ni, struct ksock_route *route,
 void ksocknal_terminate_conn(struct ksock_conn *conn);
 void ksocknal_destroy_conn(struct ksock_conn *conn);
 int  ksocknal_close_peer_conns_locked(struct ksock_peer *peer_ni,
-				      __u32 ipaddr, int why);
+				      u32 ipaddr, int why);
 int ksocknal_close_conn_and_siblings(struct ksock_conn *conn, int why);
-int ksocknal_close_matching_conns(struct lnet_process_id id, __u32 ipaddr);
+int ksocknal_close_matching_conns(struct lnet_process_id id, u32 ipaddr);
 struct ksock_conn *ksocknal_find_conn_locked(struct ksock_peer *peer_ni,
 					     struct ksock_tx *tx, int nonblk);
 
@@ -657,7 +657,7 @@ int  ksocknal_launch_packet(struct lnet_ni *ni, struct ksock_tx *tx,
 			    struct lnet_process_id id);
 struct ksock_tx *ksocknal_alloc_tx(int type, int size);
 void ksocknal_free_tx(struct ksock_tx *tx);
-struct ksock_tx *ksocknal_alloc_tx_noop(__u64 cookie, int nonblk);
+struct ksock_tx *ksocknal_alloc_tx_noop(u64 cookie, int nonblk);
 void ksocknal_next_tx_carrier(struct ksock_conn *conn);
 void ksocknal_queue_tx_locked(struct ksock_tx *tx, struct ksock_conn *conn);
 void ksocknal_txlist_done(struct lnet_ni *ni, struct list_head *txlist, int error);
@@ -679,7 +679,7 @@ int ksocknal_send_hello(struct lnet_ni *ni, struct ksock_conn *conn,
 int ksocknal_recv_hello(struct lnet_ni *ni, struct ksock_conn *conn,
 			struct ksock_hello_msg *hello,
 			struct lnet_process_id *id,
-			__u64 *incarnation);
+			u64 *incarnation);
 void ksocknal_read_callback(struct ksock_conn *conn);
 void ksocknal_write_callback(struct ksock_conn *conn);
 
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
index 4abf0eb..dd4fb69 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
@@ -64,7 +64,7 @@ struct ksock_tx *
 }
 
 struct ksock_tx *
-ksocknal_alloc_tx_noop(__u64 cookie, int nonblk)
+ksocknal_alloc_tx_noop(u64 cookie, int nonblk)
 {
 	struct ksock_tx *tx;
 
@@ -1126,7 +1126,7 @@ struct ksock_route *
 		}
 
 		if (conn->ksnc_msg.ksm_zc_cookies[1]) {
-			__u64 cookie = 0;
+			u64 cookie = 0;
 
 			LASSERT(conn->ksnc_proto != &ksocknal_protocol_v1x);
 
@@ -1533,7 +1533,7 @@ void ksocknal_write_callback(struct ksock_conn *conn)
 static struct ksock_proto *
 ksocknal_parse_proto_version(struct ksock_hello_msg *hello)
 {
-	__u32 version = 0;
+	u32 version = 0;
 
 	if (hello->kshm_magic == LNET_PROTO_MAGIC)
 		version = hello->kshm_version;
@@ -1614,7 +1614,7 @@ void ksocknal_write_callback(struct ksock_conn *conn)
 ksocknal_recv_hello(struct lnet_ni *ni, struct ksock_conn *conn,
 		    struct ksock_hello_msg *hello,
 		    struct lnet_process_id *peerid,
-		    __u64 *incarnation)
+		    u64 *incarnation)
 {
 	/* Return < 0	fatal error
 	 *	0	  success
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
index 686c2d3..565c50c 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
@@ -175,7 +175,7 @@ static int lustre_csum(struct kvec *v, void *context)
 ksocknal_lib_recv(struct ksock_conn *conn)
 {
 	struct msghdr msg = { .msg_iter = conn->ksnc_rx_to };
-	__u32 saved_csum;
+	u32 saved_csum;
 	int rc;
 
 	rc = sock_recvmsg(conn->ksnc_sock, &msg, MSG_DONTWAIT);
@@ -203,7 +203,7 @@ static int lustre_csum(struct kvec *v, void *context)
 ksocknal_lib_csum_tx(struct ksock_tx *tx)
 {
 	int i;
-	__u32 csum;
+	u32 csum;
 	void *base;
 
 	LASSERT(tx->tx_iov[0].iov_base == &tx->tx_msg);
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
index 54ec5d0..91bed59 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
@@ -68,7 +68,7 @@
 
 static int
 ksocknal_queue_tx_zcack_v2(struct ksock_conn *conn,
-			   struct ksock_tx *tx_ack, __u64 cookie)
+			   struct ksock_tx *tx_ack, u64 cookie)
 {
 	struct ksock_tx *tx = conn->ksnc_tx_carrier;
 
@@ -151,7 +151,7 @@
 
 static int
 ksocknal_queue_tx_zcack_v3(struct ksock_conn *conn,
-			   struct ksock_tx *tx_ack, __u64 cookie)
+			   struct ksock_tx *tx_ack, u64 cookie)
 {
 	struct ksock_tx *tx;
 
@@ -220,7 +220,7 @@
 	/* takes two or more cookies already */
 
 	if (tx->tx_msg.ksm_zc_cookies[0] > tx->tx_msg.ksm_zc_cookies[1]) {
-		__u64   tmp = 0;
+		u64   tmp = 0;
 
 		/* two separated cookies: (a+2, a) or (a+1, a) */
 		LASSERT(tx->tx_msg.ksm_zc_cookies[0] -
@@ -365,7 +365,7 @@
 
 /* (Sink) handle incoming ZC request from sender */
 static int
-ksocknal_handle_zcreq(struct ksock_conn *c, __u64 cookie, int remote)
+ksocknal_handle_zcreq(struct ksock_conn *c, u64 cookie, int remote)
 {
 	struct ksock_peer *peer_ni = c->ksnc_peer;
 	struct ksock_conn *conn;
@@ -409,7 +409,7 @@
 
 /* (Sender) handle ZC_ACK from sink */
 static int
-ksocknal_handle_zcack(struct ksock_conn *conn, __u64 cookie1, __u64 cookie2)
+ksocknal_handle_zcack(struct ksock_conn *conn, u64 cookie1, u64 cookie2)
 {
 	struct ksock_peer *peer_ni = conn->ksnc_peer;
 	struct ksock_tx *tx;
@@ -432,7 +432,7 @@
 
 	list_for_each_entry_safe(tx, tmp, &peer_ni->ksnp_zc_req_list,
 				 tx_zc_list) {
-		__u64 c = tx->tx_msg.ksm_zc_cookies[0];
+		u64 c = tx->tx_msg.ksm_zc_cookies[0];
 
 		if (c == cookie1 || c == cookie2 ||
 		    (cookie1 < c && c < cookie2)) {
@@ -500,7 +500,7 @@
 	hdr->src_nid = cpu_to_le64(hello->kshm_src_nid);
 	hdr->src_pid = cpu_to_le32(hello->kshm_src_pid);
 	hdr->type = cpu_to_le32(LNET_MSG_HELLO);
-	hdr->payload_length = cpu_to_le32(hello->kshm_nips * sizeof(__u32));
+	hdr->payload_length = cpu_to_le32(hello->kshm_nips * sizeof(u32));
 	hdr->msg.hello.type = cpu_to_le32(hello->kshm_ctype);
 	hdr->msg.hello.incarnation = cpu_to_le64(hello->kshm_src_incarnation);
 
@@ -518,7 +518,7 @@
 		hello->kshm_ips[i] = __cpu_to_le32(hello->kshm_ips[i]);
 
 	rc = lnet_sock_write(sock, hello->kshm_ips,
-			     hello->kshm_nips * sizeof(__u32),
+			     hello->kshm_nips * sizeof(u32),
 			     lnet_acceptor_timeout());
 	if (rc) {
 		CNETERR("Error %d sending HELLO payload (%d) to %pI4h/%d\n",
@@ -562,7 +562,7 @@
 		return 0;
 
 	rc = lnet_sock_write(sock, hello->kshm_ips,
-			     hello->kshm_nips * sizeof(__u32),
+			     hello->kshm_nips * sizeof(u32),
 			     lnet_acceptor_timeout());
 	if (rc) {
 		CNETERR("Error %d sending HELLO payload (%d) to %pI4h/%d\n",
@@ -612,7 +612,7 @@
 	hello->kshm_src_incarnation = le64_to_cpu(hdr->msg.hello.incarnation);
 	hello->kshm_ctype           = le32_to_cpu(hdr->msg.hello.type);
 	hello->kshm_nips            = le32_to_cpu(hdr->payload_length) /
-						  sizeof(__u32);
+						  sizeof(u32);
 
 	if (hello->kshm_nips > LNET_INTERFACES_NUM) {
 		CERROR("Bad nips %d from ip %pI4h\n",
@@ -625,7 +625,7 @@
 		goto out;
 
 	rc = lnet_sock_read(sock, hello->kshm_ips,
-			    hello->kshm_nips * sizeof(__u32), timeout);
+			    hello->kshm_nips * sizeof(u32), timeout);
 	if (rc) {
 		CERROR("Error %d reading IPs from ip %pI4h\n",
 		       rc, &conn->ksnc_ipaddr);
@@ -694,7 +694,7 @@
 		return 0;
 
 	rc = lnet_sock_read(sock, hello->kshm_ips,
-			    hello->kshm_nips * sizeof(__u32), timeout);
+			    hello->kshm_nips * sizeof(u32), timeout);
 	if (rc) {
 		CERROR("Error %d reading IPs from ip %pI4h\n",
 		       rc, &conn->ksnc_ipaddr);
-- 
1.8.3.1


* [lustre-devel] [PATCH 03/26] lnet: use kernel types for lnet selftest kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 01/26] lnet: use kernel types for lnet core kernel code James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 02/26] lnet: use kernel types for lnet klnd " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 04/26] ptlrpc: use kernel types for " James Simmons
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The LNet selftest code was originally both a userland and a kernel
implementation. The source still contains many types of the form
__u32, but since this is mostly kernel code, change the types to
the kernel internal types.
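
For illustration only (not part of the patch): the substitution is
purely cosmetic because the kernel typedefs alias the same fixed-width
types. A hypothetical compile-time check such as the one below would
confirm that sizes and alignment are unchanged.

#include <linux/build_bug.h>
#include <linux/types.h>

/* Hypothetical sanity check: u32/u64 and __u32/__u64 are the same
 * types, so swapping them cannot change structure layout or the
 * on-wire message format used by the selftest RPCs.
 */
static inline void lst_type_sanity(void)
{
	BUILD_BUG_ON(sizeof(u32) != sizeof(__u32));
	BUILD_BUG_ON(sizeof(u64) != sizeof(__u64));
	BUILD_BUG_ON(__alignof__(u64) != __alignof__(__u64));
}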

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lnet/selftest/brw_test.c |  20 ++--
 drivers/staging/lustre/lnet/selftest/console.h  |   2 +-
 drivers/staging/lustre/lnet/selftest/rpc.c      |  22 ++--
 drivers/staging/lustre/lnet/selftest/rpc.h      | 132 ++++++++++++------------
 4 files changed, 88 insertions(+), 88 deletions(-)

diff --git a/drivers/staging/lustre/lnet/selftest/brw_test.c b/drivers/staging/lustre/lnet/selftest/brw_test.c
index e372ff3..eb8b1e9 100644
--- a/drivers/staging/lustre/lnet/selftest/brw_test.c
+++ b/drivers/staging/lustre/lnet/selftest/brw_test.c
@@ -153,7 +153,7 @@ static int brw_inject_one_error(void)
 }
 
 static void
-brw_fill_page(struct page *pg, int off, int len, int pattern, __u64 magic)
+brw_fill_page(struct page *pg, int off, int len, int pattern, u64 magic)
 {
 	char *addr = page_address(pg) + off;
 	int i;
@@ -186,10 +186,10 @@ static int brw_inject_one_error(void)
 }
 
 static int
-brw_check_page(struct page *pg, int off, int len, int pattern, __u64 magic)
+brw_check_page(struct page *pg, int off, int len, int pattern, u64 magic)
 {
 	char *addr = page_address(pg) + off;
-	__u64 data = 0; /* make compiler happy */
+	u64 data = 0; /* make compiler happy */
 	int i;
 
 	LASSERT(addr);
@@ -199,13 +199,13 @@ static int brw_inject_one_error(void)
 		return 0;
 
 	if (pattern == LST_BRW_CHECK_SIMPLE) {
-		data = *((__u64 *)addr);
+		data = *((u64 *)addr);
 		if (data != magic)
 			goto bad_data;
 
 		if (len > BRW_MSIZE) {
 			addr += PAGE_SIZE - BRW_MSIZE;
-			data = *((__u64 *)addr);
+			data = *((u64 *)addr);
 			if (data != magic)
 				goto bad_data;
 		}
@@ -230,7 +230,7 @@ static int brw_inject_one_error(void)
 }
 
 static void
-brw_fill_bulk(struct srpc_bulk *bk, int pattern, __u64 magic)
+brw_fill_bulk(struct srpc_bulk *bk, int pattern, u64 magic)
 {
 	int i;
 	struct page *pg;
@@ -246,7 +246,7 @@ static int brw_inject_one_error(void)
 }
 
 static int
-brw_check_bulk(struct srpc_bulk *bk, int pattern, __u64 magic)
+brw_check_bulk(struct srpc_bulk *bk, int pattern, u64 magic)
 {
 	int i;
 	struct page *pg;
@@ -331,7 +331,7 @@ static int brw_inject_one_error(void)
 static void
 brw_client_done_rpc(struct sfw_test_unit *tsu, struct srpc_client_rpc *rpc)
 {
-	__u64 magic = BRW_MAGIC;
+	u64 magic = BRW_MAGIC;
 	struct sfw_test_instance *tsi = tsu->tsu_instance;
 	struct sfw_session *sn = tsi->tsi_batch->bat_session;
 	struct srpc_msg *msg = &rpc->crpc_replymsg;
@@ -397,7 +397,7 @@ static int brw_inject_one_error(void)
 static int
 brw_bulk_ready(struct srpc_server_rpc *rpc, int status)
 {
-	__u64 magic = BRW_MAGIC;
+	u64 magic = BRW_MAGIC;
 	struct srpc_brw_reply *reply = &rpc->srpc_replymsg.msg_body.brw_reply;
 	struct srpc_brw_reqst *reqst;
 	struct srpc_msg *reqstmsg;
@@ -452,7 +452,7 @@ static int brw_inject_one_error(void)
 		__swab64s(&reqst->brw_rpyid);
 		__swab64s(&reqst->brw_bulkid);
 	}
-	LASSERT(reqstmsg->msg_type == (__u32)srpc_service2request(sv->sv_id));
+	LASSERT(reqstmsg->msg_type == (u32)srpc_service2request(sv->sv_id));
 
 	reply->brw_status = 0;
 	rpc->srpc_done = brw_server_rpc_done;
diff --git a/drivers/staging/lustre/lnet/selftest/console.h b/drivers/staging/lustre/lnet/selftest/console.h
index eaad07c..b5709a4 100644
--- a/drivers/staging/lustre/lnet/selftest/console.h
+++ b/drivers/staging/lustre/lnet/selftest/console.h
@@ -153,7 +153,7 @@ struct lstcon_session {
 	unsigned int	    ses_force:1;      /* force creating */
 	unsigned int	    ses_shutdown:1;   /* session is shutting down */
 	unsigned int	    ses_expired:1;    /* console is timedout */
-	__u64		    ses_id_cookie;    /* batch id cookie */
+	u64		    ses_id_cookie;    /* batch id cookie */
 	char		    ses_name[LST_NAME_SIZE];/* session name */
 	struct lstcon_rpc_trans	*ses_ping;		/* session pinger */
 	struct stt_timer	 ses_ping_timer;   /* timer for pinger */
diff --git a/drivers/staging/lustre/lnet/selftest/rpc.c b/drivers/staging/lustre/lnet/selftest/rpc.c
index 26132ab..2a30107 100644
--- a/drivers/staging/lustre/lnet/selftest/rpc.c
+++ b/drivers/staging/lustre/lnet/selftest/rpc.c
@@ -57,7 +57,7 @@ enum srpc_state {
 	struct lnet_handle_eq	 rpc_lnet_eq;	/* _the_ LNet event queue */
 	enum srpc_state	 rpc_state;
 	struct srpc_counters	 rpc_counters;
-	__u64		 rpc_matchbits;	/* matchbits counter */
+	u64		 rpc_matchbits;	/* matchbits counter */
 } srpc_data;
 
 static inline int
@@ -159,10 +159,10 @@ struct srpc_bulk *
 	return bk;
 }
 
-static inline __u64
+static inline u64
 srpc_next_id(void)
 {
-	__u64 id;
+	u64 id;
 
 	spin_lock(&srpc_data.rpc_glock);
 	id = srpc_data.rpc_matchbits++;
@@ -354,7 +354,7 @@ struct srpc_bulk *
 }
 
 static int
-srpc_post_passive_rdma(int portal, int local, __u64 matchbits, void *buf,
+srpc_post_passive_rdma(int portal, int local, u64 matchbits, void *buf,
 		       int len, int options, struct lnet_process_id peer,
 		       struct lnet_handle_md *mdh, struct srpc_event *ev)
 {
@@ -393,7 +393,7 @@ struct srpc_bulk *
 }
 
 static int
-srpc_post_active_rdma(int portal, __u64 matchbits, void *buf, int len,
+srpc_post_active_rdma(int portal, u64 matchbits, void *buf, int len,
 		      int options, struct lnet_process_id peer,
 		      lnet_nid_t self, struct lnet_handle_md *mdh,
 		      struct srpc_event *ev)
@@ -813,7 +813,7 @@ struct srpc_bulk *
 srpc_prepare_reply(struct srpc_client_rpc *rpc)
 {
 	struct srpc_event *ev = &rpc->crpc_replyev;
-	__u64 *id = &rpc->crpc_reqstmsg.msg_body.reqst.rpyid;
+	u64 *id = &rpc->crpc_reqstmsg.msg_body.reqst.rpyid;
 	int rc;
 
 	ev->ev_fired = 0;
@@ -839,7 +839,7 @@ struct srpc_bulk *
 {
 	struct srpc_bulk *bk = &rpc->crpc_bulk;
 	struct srpc_event *ev = &rpc->crpc_bulkev;
-	__u64 *id = &rpc->crpc_reqstmsg.msg_body.reqst.bulkid;
+	u64 *id = &rpc->crpc_reqstmsg.msg_body.reqst.bulkid;
 	int rc;
 	int opt;
 
@@ -872,7 +872,7 @@ struct srpc_bulk *
 {
 	struct srpc_event *ev = &rpc->srpc_ev;
 	struct srpc_bulk *bk = rpc->srpc_bulk;
-	__u64 id = rpc->srpc_reqstbuf->buf_msg.msg_body.reqst.bulkid;
+	u64 id = rpc->srpc_reqstbuf->buf_msg.msg_body.reqst.bulkid;
 	int rc;
 	int opt;
 
@@ -1362,7 +1362,7 @@ struct srpc_client_rpc *
 	struct srpc_buffer *buffer = rpc->srpc_reqstbuf;
 	struct srpc_service_cd *scd = rpc->srpc_scd;
 	struct srpc_service *sv = scd->scd_svc;
-	__u64 rpyid;
+	u64 rpyid;
 	int rc;
 
 	LASSERT(buffer);
@@ -1415,7 +1415,7 @@ struct srpc_client_rpc *
 	LASSERT(!in_interrupt());
 
 	if (ev->status) {
-		__u32 errors;
+		u32 errors;
 
 		spin_lock(&srpc_data.rpc_glock);
 		if (ev->status != -ECANCELED) /* cancellation is not error */
@@ -1604,7 +1604,7 @@ struct srpc_client_rpc *
 
 	/* 1 second pause to avoid timestamp reuse */
 	schedule_timeout_uninterruptible(HZ);
-	srpc_data.rpc_matchbits = ((__u64)ktime_get_real_seconds()) << 48;
+	srpc_data.rpc_matchbits = ((u64)ktime_get_real_seconds()) << 48;
 
 	srpc_data.rpc_state = SRPC_STATE_NONE;
 
diff --git a/drivers/staging/lustre/lnet/selftest/rpc.h b/drivers/staging/lustre/lnet/selftest/rpc.h
index 9ce3367..ae1c07f 100644
--- a/drivers/staging/lustre/lnet/selftest/rpc.h
+++ b/drivers/staging/lustre/lnet/selftest/rpc.h
@@ -66,70 +66,70 @@ enum srpc_msg_type {
  * All srpc_*_reqst_t's 1st field must be matchbits of reply buffer,
  * and 2nd field matchbits of bulk buffer if any.
  *
- * All srpc_*_reply_t's 1st field must be a __u32 status, and 2nd field
+ * All srpc_*_reply_t's 1st field must be a u32 status, and 2nd field
  * session id if needed.
  */
 struct srpc_generic_reqst {
-	__u64			rpyid;		/* reply buffer matchbits */
-	__u64			bulkid;		/* bulk buffer matchbits */
+	u64			rpyid;		/* reply buffer matchbits */
+	u64			bulkid;		/* bulk buffer matchbits */
 } __packed;
 
 struct srpc_generic_reply {
-	__u32			status;
+	u32			status;
 	struct lst_sid		sid;
 } __packed;
 
 /* FRAMEWORK RPCs */
 struct srpc_mksn_reqst {
-	__u64			mksn_rpyid;	/* reply buffer matchbits */
+	u64			mksn_rpyid;	/* reply buffer matchbits */
 	struct lst_sid		mksn_sid;	/* session id */
-	__u32			mksn_force;	/* use brute force */
+	u32			mksn_force;	/* use brute force */
 	char			mksn_name[LST_NAME_SIZE];
 } __packed; /* make session request */
 
 struct srpc_mksn_reply {
-	__u32			mksn_status;	/* session status */
+	u32			mksn_status;	/* session status */
 	struct lst_sid		mksn_sid;	/* session id */
-	__u32			mksn_timeout;	/* session timeout */
+	u32			mksn_timeout;	/* session timeout */
 	char			mksn_name[LST_NAME_SIZE];
 } __packed; /* make session reply */
 
 struct srpc_rmsn_reqst {
-	__u64			rmsn_rpyid;	/* reply buffer matchbits */
+	u64			rmsn_rpyid;	/* reply buffer matchbits */
 	struct lst_sid		rmsn_sid;	/* session id */
 } __packed; /* remove session request */
 
 struct srpc_rmsn_reply {
-	__u32			rmsn_status;
+	u32			rmsn_status;
 	struct lst_sid		rmsn_sid;	/* session id */
 } __packed; /* remove session reply */
 
 struct srpc_join_reqst {
-	__u64			join_rpyid;	/* reply buffer matchbits */
+	u64			join_rpyid;	/* reply buffer matchbits */
 	struct lst_sid		join_sid;	/* session id to join */
 	char			join_group[LST_NAME_SIZE]; /* group name */
 } __packed;
 
 struct srpc_join_reply {
-	__u32			join_status;	/* returned status */
+	u32			join_status;	/* returned status */
 	struct lst_sid		join_sid;	/* session id */
-	__u32			join_timeout;	/* # seconds' inactivity to
+	u32			join_timeout;	/* # seconds' inactivity to
 						 * expire
 						 */
 	char			join_session[LST_NAME_SIZE]; /* session name */
 } __packed;
 
 struct srpc_debug_reqst {
-	__u64			dbg_rpyid;	/* reply buffer matchbits */
+	u64			dbg_rpyid;	/* reply buffer matchbits */
 	struct lst_sid		dbg_sid;	/* session id */
-	__u32			dbg_flags;	/* bitmap of debug */
+	u32			dbg_flags;	/* bitmap of debug */
 } __packed;
 
 struct srpc_debug_reply {
-	__u32			dbg_status;	/* returned code */
+	u32			dbg_status;	/* returned code */
 	struct lst_sid		dbg_sid;	/* session id */
-	__u32			dbg_timeout;	/* session timeout */
-	__u32			dbg_nbatch;	/* # of batches in the node */
+	u32			dbg_timeout;	/* session timeout */
+	u32			dbg_nbatch;	/* # of batches in the node */
 	char			dbg_name[LST_NAME_SIZE]; /* session name */
 } __packed;
 
@@ -138,29 +138,29 @@ struct srpc_debug_reply {
 #define SRPC_BATCH_OPC_QUERY	3
 
 struct srpc_batch_reqst {
-	__u64		   bar_rpyid;	   /* reply buffer matchbits */
+	u64		   bar_rpyid;	   /* reply buffer matchbits */
 	struct lst_sid	   bar_sid;	   /* session id */
 	struct lst_bid	   bar_bid;	   /* batch id */
-	__u32		   bar_opc;	   /* create/start/stop batch */
-	__u32		   bar_testidx;    /* index of test */
-	__u32		   bar_arg;	   /* parameters */
+	u32		   bar_opc;	   /* create/start/stop batch */
+	u32		   bar_testidx;    /* index of test */
+	u32		   bar_arg;	   /* parameters */
 } __packed;
 
 struct srpc_batch_reply {
-	__u32		   bar_status;	   /* status of request */
+	u32		   bar_status;	   /* status of request */
 	struct lst_sid	   bar_sid;	   /* session id */
-	__u32		   bar_active;	   /* # of active tests in batch/test */
-	__u32		   bar_time;	   /* remained time */
+	u32		   bar_active;	   /* # of active tests in batch/test */
+	u32		   bar_time;	   /* remained time */
 } __packed;
 
 struct srpc_stat_reqst {
-	__u64		   str_rpyid;	   /* reply buffer matchbits */
+	u64		   str_rpyid;	   /* reply buffer matchbits */
 	struct lst_sid	   str_sid;	   /* session id */
-	__u32		   str_type;	   /* type of stat */
+	u32		   str_type;	   /* type of stat */
 } __packed;
 
 struct srpc_stat_reply {
-	__u32		   str_status;
+	u32		   str_status;
 	struct lst_sid	   str_sid;
 	struct sfw_counters	str_fw;
 	struct srpc_counters	str_rpc;
@@ -168,36 +168,36 @@ struct srpc_stat_reply {
 } __packed;
 
 struct test_bulk_req {
-	__u32		   blk_opc;	   /* bulk operation code */
-	__u32		   blk_npg;	   /* # of pages */
-	__u32		   blk_flags;	   /* reserved flags */
+	u32		   blk_opc;	   /* bulk operation code */
+	u32		   blk_npg;	   /* # of pages */
+	u32		   blk_flags;	   /* reserved flags */
 } __packed;
 
 struct test_bulk_req_v1 {
-	__u16		   blk_opc;	   /* bulk operation code */
-	__u16		   blk_flags;	   /* data check flags */
-	__u32		   blk_len;	   /* data length */
-	__u32		   blk_offset;	   /* offset */
+	u16		   blk_opc;	   /* bulk operation code */
+	u16		   blk_flags;	   /* data check flags */
+	u32		   blk_len;	   /* data length */
+	u32		   blk_offset;	   /* offset */
 } __packed;
 
 struct test_ping_req {
-	__u32		   png_size;	   /* size of ping message */
-	__u32		   png_flags;	   /* reserved flags */
+	u32		   png_size;	   /* size of ping message */
+	u32		   png_flags;	   /* reserved flags */
 } __packed;
 
 struct srpc_test_reqst {
-	__u64			tsr_rpyid;	/* reply buffer matchbits */
-	__u64			tsr_bulkid;	/* bulk buffer matchbits */
+	u64			tsr_rpyid;	/* reply buffer matchbits */
+	u64			tsr_bulkid;	/* bulk buffer matchbits */
 	struct lst_sid		tsr_sid;	/* session id */
 	struct lst_bid		tsr_bid;	/* batch id */
-	__u32			tsr_service;	/* test type: bulk|ping|... */
-	__u32			tsr_loop;	/* test client loop count or
+	u32			tsr_service;	/* test type: bulk|ping|... */
+	u32			tsr_loop;	/* test client loop count or
 						 * # server buffers needed
 						 */
-	__u32			tsr_concur;	/* concurrency of test */
-	__u8			tsr_is_client;	/* is test client or not */
-	__u8			tsr_stop_onerr; /* stop on error */
-	__u32			tsr_ndest;	/* # of dest nodes */
+	u32			tsr_concur;	/* concurrency of test */
+	u8			tsr_is_client;	/* is test client or not */
+	u8			tsr_stop_onerr; /* stop on error */
+	u32			tsr_ndest;	/* # of dest nodes */
 
 	union {
 		struct test_ping_req	ping;
@@ -207,47 +207,47 @@ struct srpc_test_reqst {
 } __packed;
 
 struct srpc_test_reply {
-	__u32			tsr_status;	/* returned code */
+	u32			tsr_status;	/* returned code */
 	struct lst_sid		tsr_sid;
 } __packed;
 
 /* TEST RPCs */
 struct srpc_ping_reqst {
-	__u64		   pnr_rpyid;
-	__u32		   pnr_magic;
-	__u32		   pnr_seq;
-	__u64		   pnr_time_sec;
-	__u64		   pnr_time_usec;
+	u64		   pnr_rpyid;
+	u32		   pnr_magic;
+	u32		   pnr_seq;
+	u64		   pnr_time_sec;
+	u64		   pnr_time_usec;
 } __packed;
 
 struct srpc_ping_reply {
-	__u32		   pnr_status;
-	__u32		   pnr_magic;
-	__u32		   pnr_seq;
+	u32		   pnr_status;
+	u32		   pnr_magic;
+	u32		   pnr_seq;
 } __packed;
 
 struct srpc_brw_reqst {
-	__u64		   brw_rpyid;	   /* reply buffer matchbits */
-	__u64		   brw_bulkid;	   /* bulk buffer matchbits */
-	__u32		   brw_rw;	   /* read or write */
-	__u32		   brw_len;	   /* bulk data len */
-	__u32		   brw_flags;	   /* bulk data patterns */
+	u64		   brw_rpyid;	   /* reply buffer matchbits */
+	u64		   brw_bulkid;	   /* bulk buffer matchbits */
+	u32		   brw_rw;	   /* read or write */
+	u32		   brw_len;	   /* bulk data len */
+	u32		   brw_flags;	   /* bulk data patterns */
 } __packed; /* bulk r/w request */
 
 struct srpc_brw_reply {
-	__u32		   brw_status;
+	u32		   brw_status;
 } __packed; /* bulk r/w reply */
 
 #define SRPC_MSG_MAGIC		0xeeb0f00d
 #define SRPC_MSG_VERSION	1
 
 struct srpc_msg {
-	__u32	msg_magic;     /* magic number */
-	__u32	msg_version;   /* message version number */
-	__u32	msg_type;      /* type of message body: srpc_msg_type */
-	__u32	msg_reserved0;
-	__u32	msg_reserved1;
-	__u32	msg_ses_feats; /* test session features */
+	u32	msg_magic;     /* magic number */
+	u32	msg_version;   /* message version number */
+	u32	msg_type;      /* type of message body: srpc_msg_type */
+	u32	msg_reserved0;
+	u32	msg_reserved1;
+	u32	msg_ses_feats; /* test session features */
 	union {
 		struct srpc_generic_reqst	reqst;
 		struct srpc_generic_reply	reply;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 04/26] ptlrpc: use kernel types for kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (2 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 03/26] lnet: use kernel types for lnet selftest " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 05/26] lustre: use kernel types for lustre internal headers James Simmons
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The Lustre ptlrpc code originally served as both a userland and kernel
implementation, so the source contains many UAPI types of the form
__u32. Since this is now kernel code, convert those types to the
kernel's internal equivalents (u32, u64, and so on).
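
As a minimal sketch of the conversion (the names below are
hypothetical, not taken from this patch): kernel-only code now uses
the short typedefs from <linux/types.h>, while headers exported to
userspace keep the __-prefixed UAPI types.

	#include <linux/types.h>

	/* hypothetical kernel-internal structure, for illustration only */
	struct example_req_state {
		u32	ers_opc;	/* was __u32 before this series */
		u64	ers_xid;	/* was __u64 before this series */
	};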

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ptlrpc/client.c      |  32 +++---
 drivers/staging/lustre/lustre/ptlrpc/events.c      |  10 +-
 drivers/staging/lustre/lustre/ptlrpc/import.c      |  14 +--
 drivers/staging/lustre/lustre/ptlrpc/layout.c      |  18 ++--
 drivers/staging/lustre/lustre/ptlrpc/llog_client.c |   2 +-
 .../staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c    |  30 +++---
 drivers/staging/lustre/lustre/ptlrpc/niobuf.c      |   4 +-
 .../staging/lustre/lustre/ptlrpc/pack_generic.c    | 112 ++++++++++-----------
 .../staging/lustre/lustre/ptlrpc/ptlrpc_internal.h |   4 +-
 drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |   2 +-
 drivers/staging/lustre/lustre/ptlrpc/recover.c     |   2 +-
 drivers/staging/lustre/lustre/ptlrpc/sec.c         |  20 ++--
 drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c    |   6 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_null.c    |   6 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_plain.c   |  26 ++---
 drivers/staging/lustre/lustre/ptlrpc/service.c     |  16 +--
 16 files changed, 152 insertions(+), 152 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ptlrpc/client.c b/drivers/staging/lustre/lustre/ptlrpc/client.c
index 110bb5d..f4b3875 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/client.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/client.c
@@ -286,7 +286,7 @@ void ptlrpc_free_bulk(struct ptlrpc_bulk_desc *desc)
  */
 void ptlrpc_at_set_req_timeout(struct ptlrpc_request *req)
 {
-	__u32 serv_est;
+	u32 serv_est;
 	int idx;
 	struct imp_at *at;
 
@@ -690,12 +690,12 @@ static inline void ptlrpc_assign_next_xid(struct ptlrpc_request *req)
 }
 
 int ptlrpc_request_bufs_pack(struct ptlrpc_request *request,
-			     __u32 version, int opcode, char **bufs,
+			     u32 version, int opcode, char **bufs,
 			     struct ptlrpc_cli_ctx *ctx)
 {
 	int count;
 	struct obd_import *imp;
-	__u32 *lengths;
+	u32 *lengths;
 	int rc;
 
 	count = req_capsule_filled_sizes(&request->rq_pill, RCL_CLIENT);
@@ -785,7 +785,7 @@ int ptlrpc_request_bufs_pack(struct ptlrpc_request *request,
  * steps if necessary.
  */
 int ptlrpc_request_pack(struct ptlrpc_request *request,
-			__u32 version, int opcode)
+			u32 version, int opcode)
 {
 	int rc;
 
@@ -917,7 +917,7 @@ void ptlrpc_request_free(struct ptlrpc_request *request)
  */
 struct ptlrpc_request *ptlrpc_request_alloc_pack(struct obd_import *imp,
 						 const struct req_format *format,
-						 __u32 version, int opcode)
+						 u32 version, int opcode)
 {
 	struct ptlrpc_request *req = ptlrpc_request_alloc(imp, format);
 	int rc;
@@ -1186,7 +1186,7 @@ static int ptlrpc_import_delay_req(struct obd_import *imp,
  */
 static bool ptlrpc_console_allow(struct ptlrpc_request *req)
 {
-	__u32 opc;
+	u32 opc;
 
 	LASSERT(req->rq_reqmsg);
 	opc = lustre_msg_get_opc(req->rq_reqmsg);
@@ -1226,7 +1226,7 @@ static int ptlrpc_check_status(struct ptlrpc_request *req)
 	if (lustre_msg_get_type(req->rq_repmsg) == PTL_RPC_MSG_ERR) {
 		struct obd_import *imp = req->rq_import;
 		lnet_nid_t nid = imp->imp_connection->c_peer.nid;
-		__u32 opc = lustre_msg_get_opc(req->rq_reqmsg);
+		u32 opc = lustre_msg_get_opc(req->rq_reqmsg);
 
 		/* -EAGAIN is normal when using POSIX flocks */
 		if (ptlrpc_console_allow(req) &&
@@ -1256,7 +1256,7 @@ static void ptlrpc_save_versions(struct ptlrpc_request *req)
 {
 	struct lustre_msg *repmsg = req->rq_repmsg;
 	struct lustre_msg *reqmsg = req->rq_reqmsg;
-	__u64 *versions = lustre_msg_get_versions(repmsg);
+	u64 *versions = lustre_msg_get_versions(repmsg);
 
 	if (lustre_msg_get_flags(req->rq_reqmsg) & MSG_REPLAY)
 		return;
@@ -1267,7 +1267,7 @@ static void ptlrpc_save_versions(struct ptlrpc_request *req)
 	       versions[0], versions[1]);
 }
 
-__u64 ptlrpc_known_replied_xid(struct obd_import *imp)
+u64 ptlrpc_known_replied_xid(struct obd_import *imp)
 {
 	struct ptlrpc_request *req;
 
@@ -2471,7 +2471,7 @@ void ptlrpc_req_finished(struct ptlrpc_request *request)
 /**
  * Returns xid of a \a request
  */
-__u64 ptlrpc_req_xid(struct ptlrpc_request *request)
+u64 ptlrpc_req_xid(struct ptlrpc_request *request)
 {
 	return request->rq_xid;
 }
@@ -3025,7 +3025,7 @@ void ptlrpc_abort_set(struct ptlrpc_request_set *set)
 	}
 }
 
-static __u64 ptlrpc_last_xid;
+static u64 ptlrpc_last_xid;
 static spinlock_t ptlrpc_last_xid_lock;
 
 /**
@@ -3054,7 +3054,7 @@ void ptlrpc_init_xid(void)
 		ptlrpc_last_xid >>= 2;
 		ptlrpc_last_xid |= (1ULL << 61);
 	} else {
-		ptlrpc_last_xid = (__u64)now << 20;
+		ptlrpc_last_xid = (u64)now << 20;
 	}
 
 	/* Always need to be aligned to a power-of-two for multi-bulk BRW */
@@ -3074,9 +3074,9 @@ void ptlrpc_init_xid(void)
  * This is assumed to be true due to the initial ptlrpc_last_xid
  * value also being initialized to a power-of-two value. LU-1431
  */
-__u64 ptlrpc_next_xid(void)
+u64 ptlrpc_next_xid(void)
 {
-	__u64 next;
+	u64 next;
 
 	spin_lock(&ptlrpc_last_xid_lock);
 	next = ptlrpc_last_xid + PTLRPC_BULK_OPS_COUNT;
@@ -3155,11 +3155,11 @@ void ptlrpc_set_bulk_mbits(struct ptlrpc_request *req)
  * Get a glimpse@what next xid value might have been.
  * Returns possible next xid.
  */
-__u64 ptlrpc_sample_next_xid(void)
+u64 ptlrpc_sample_next_xid(void)
 {
 #if BITS_PER_LONG == 32
 	/* need to avoid possible word tearing on 32-bit systems */
-	__u64 next;
+	u64 next;
 
 	spin_lock(&ptlrpc_last_xid_lock);
 	next = ptlrpc_last_xid + PTLRPC_BULK_OPS_COUNT;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/events.c b/drivers/staging/lustre/lustre/ptlrpc/events.c
index ab6dd74..0c16a2c 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/events.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/events.c
@@ -253,9 +253,9 @@ void client_bulk_callback(struct lnet_event *ev)
 static void ptlrpc_req_add_history(struct ptlrpc_service_part *svcpt,
 				   struct ptlrpc_request *req)
 {
-	__u64 sec = req->rq_arrival_time.tv_sec;
-	__u32 usec = req->rq_arrival_time.tv_nsec / NSEC_PER_USEC / 16; /* usec / 16 */
-	__u64 new_seq;
+	u64 sec = req->rq_arrival_time.tv_sec;
+	u32 usec = req->rq_arrival_time.tv_nsec / NSEC_PER_USEC / 16; /* usec / 16 */
+	u64 new_seq;
 
 	/* set sequence ID for request and add it to history list,
 	 * it must be called with hold svcpt::scp_lock
@@ -453,11 +453,11 @@ int ptlrpc_uuid_to_peer(struct obd_uuid *uuid,
 			struct lnet_process_id *peer, lnet_nid_t *self)
 {
 	int best_dist = 0;
-	__u32 best_order = 0;
+	u32 best_order = 0;
 	int count = 0;
 	int rc = -ENOENT;
 	int dist;
-	__u32 order;
+	u32 order;
 	lnet_nid_t dst_nid;
 	lnet_nid_t src_nid;
 
diff --git a/drivers/staging/lustre/lustre/ptlrpc/import.c b/drivers/staging/lustre/lustre/ptlrpc/import.c
index 480c860d..56a0b76 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/import.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/import.c
@@ -51,7 +51,7 @@
 #include "ptlrpc_internal.h"
 
 struct ptlrpc_connect_async_args {
-	 __u64 pcaa_peer_committed;
+	 u64 pcaa_peer_committed;
 	int pcaa_initial_connect;
 };
 
@@ -154,7 +154,7 @@ static void deuuidify(char *uuid, const char *prefix, char **uuid_start,
  *	     (increasing the import->conn_cnt) the older failure should
  *	     not also cause a reconnection.  If zero it forces a reconnect.
  */
-int ptlrpc_set_import_discon(struct obd_import *imp, __u32 conn_cnt)
+int ptlrpc_set_import_discon(struct obd_import *imp, u32 conn_cnt)
 {
 	int rc = 0;
 
@@ -399,7 +399,7 @@ void ptlrpc_pinger_force(struct obd_import *imp)
 }
 EXPORT_SYMBOL(ptlrpc_pinger_force);
 
-void ptlrpc_fail_import(struct obd_import *imp, __u32 conn_cnt)
+void ptlrpc_fail_import(struct obd_import *imp, u32 conn_cnt)
 {
 	LASSERT(!imp->imp_dlm_fake);
 
@@ -547,7 +547,7 @@ static int import_select_connection(struct obd_import *imp)
 /*
  * must be called under imp_lock
  */
-static int ptlrpc_first_transno(struct obd_import *imp, __u64 *transno)
+static int ptlrpc_first_transno(struct obd_import *imp, u64 *transno)
 {
 	struct ptlrpc_request *req;
 
@@ -589,7 +589,7 @@ int ptlrpc_connect_import(struct obd_import *imp)
 	struct obd_device *obd = imp->imp_obd;
 	int initial_connect = 0;
 	int set_transno = 0;
-	__u64 committed_before_reconnect = 0;
+	u64 committed_before_reconnect = 0;
 	struct ptlrpc_request *request;
 	char *bufs[] = { NULL,
 			 obd2cli_tgt(imp->imp_obd),
@@ -686,7 +686,7 @@ int ptlrpc_connect_import(struct obd_import *imp)
 	/* Allow a slightly larger reply for future growth compatibility */
 	req_capsule_set_size(&request->rq_pill, &RMF_CONNECT_DATA, RCL_SERVER,
 			     sizeof(struct obd_connect_data) +
-			     16 * sizeof(__u64));
+			     16 * sizeof(u64));
 	ptlrpc_request_set_replen(request);
 	request->rq_interpret_reply = ptlrpc_connect_interpret;
 
@@ -936,7 +936,7 @@ static int ptlrpc_connect_interpret(const struct lu_env *env,
 	struct ptlrpc_connect_async_args *aa = data;
 	struct obd_import *imp = request->rq_import;
 	struct lustre_handle old_hdl;
-	__u64 old_connect_flags;
+	u64 old_connect_flags;
 	int msg_flags;
 	struct obd_connect_data *ocd;
 	struct obd_export *exp;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/layout.c b/drivers/staging/lustre/lustre/ptlrpc/layout.c
index a155200..2848f2f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/layout.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/layout.c
@@ -768,7 +768,7 @@
 };
 
 struct req_msg_field {
-	const __u32 rmf_flags;
+	const u32 rmf_flags;
 	const char  *rmf_name;
 	/**
 	 * Field length. (-1) means "variable length".  If the
@@ -842,7 +842,7 @@ struct req_msg_field RMF_MGS_CONFIG_RES =
 
 struct req_msg_field RMF_U32 =
 	DEFINE_MSGF("generic u32", 0,
-		    sizeof(__u32), lustre_swab_generic_32s, NULL);
+		    sizeof(u32), lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_U32);
 
 struct req_msg_field RMF_SETINFO_VAL =
@@ -855,7 +855,7 @@ struct req_msg_field RMF_GETINFO_KEY =
 
 struct req_msg_field RMF_GETINFO_VALLEN =
 	DEFINE_MSGF("getinfo_vallen", 0,
-		    sizeof(__u32), lustre_swab_generic_32s, NULL);
+		    sizeof(u32), lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_GETINFO_VALLEN);
 
 struct req_msg_field RMF_GETINFO_VAL =
@@ -864,7 +864,7 @@ struct req_msg_field RMF_GETINFO_VAL =
 
 struct req_msg_field RMF_SEQ_OPC =
 	DEFINE_MSGF("seq_query_opc", 0,
-		    sizeof(__u32), lustre_swab_generic_32s, NULL);
+		    sizeof(u32), lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_SEQ_OPC);
 
 struct req_msg_field RMF_SEQ_RANGE =
@@ -875,7 +875,7 @@ struct req_msg_field RMF_SEQ_RANGE =
 
 struct req_msg_field RMF_FLD_OPC =
 	DEFINE_MSGF("fld_query_opc", 0,
-		    sizeof(__u32), lustre_swab_generic_32s, NULL);
+		    sizeof(u32), lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_FLD_OPC);
 
 struct req_msg_field RMF_FLD_MDFLD =
@@ -1069,12 +1069,12 @@ struct req_msg_field RMF_NIOBUF_REMOTE =
 EXPORT_SYMBOL(RMF_NIOBUF_REMOTE);
 
 struct req_msg_field RMF_RCS =
-	DEFINE_MSGF("niobuf_remote", RMF_F_STRUCT_ARRAY, sizeof(__u32),
+	DEFINE_MSGF("niobuf_remote", RMF_F_STRUCT_ARRAY, sizeof(u32),
 		    lustre_swab_generic_32s, dump_rcs);
 EXPORT_SYMBOL(RMF_RCS);
 
 struct req_msg_field RMF_EAVALS_LENS =
-	DEFINE_MSGF("eavals_lens", RMF_F_STRUCT_ARRAY, sizeof(__u32),
+	DEFINE_MSGF("eavals_lens", RMF_F_STRUCT_ARRAY, sizeof(u32),
 		    lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_EAVALS_LENS);
 
@@ -1130,7 +1130,7 @@ struct req_msg_field RMF_MDS_HSM_USER_ITEM =
 
 struct req_msg_field RMF_MDS_HSM_ARCHIVE =
 	DEFINE_MSGF("hsm_archive", 0,
-		    sizeof(__u32), lustre_swab_generic_32s, NULL);
+		    sizeof(u32), lustre_swab_generic_32s, NULL);
 EXPORT_SYMBOL(RMF_MDS_HSM_ARCHIVE);
 
 struct req_msg_field RMF_MDS_HSM_REQUEST =
@@ -2129,7 +2129,7 @@ u32 req_capsule_msg_size(struct req_capsule *pill, enum req_location loc)
  * This function should not be used for formats which contain variable size
  * fields.
  */
-u32 req_capsule_fmt_size(__u32 magic, const struct req_format *fmt,
+u32 req_capsule_fmt_size(u32 magic, const struct req_format *fmt,
 			 enum req_location loc)
 {
 	size_t i = 0;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/llog_client.c b/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
index 6ddd93c..8ca6959 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
@@ -142,7 +142,7 @@ static int llog_client_open(const struct lu_env *env,
 static int llog_client_next_block(const struct lu_env *env,
 				  struct llog_handle *loghandle,
 				  int *cur_idx, int next_idx,
-				  __u64 *cur_offset, void *buf, int len)
+				  u64 *cur_offset, void *buf, int len)
 {
 	struct obd_import *imp;
 	struct ptlrpc_request *req = NULL;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
index cce86c4..92e3e0f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
@@ -42,7 +42,7 @@
 #include "ptlrpc_internal.h"
 
 static struct ll_rpc_opcode {
-	__u32       opcode;
+	u32       opcode;
 	const char *opname;
 } ll_rpc_opcode_table[LUSTRE_MAX_OPCODES] = {
 	{ OST_REPLY,	"ost_reply" },
@@ -134,7 +134,7 @@
 };
 
 static struct ll_eopcode {
-	__u32       opcode;
+	u32       opcode;
 	const char *opname;
 } ll_eopcode_table[EXTRA_LAST_OPC] = {
 	{ LDLM_GLIMPSE_ENQUEUE, "ldlm_glimpse_enqueue" },
@@ -153,7 +153,7 @@
 	{ BRW_WRITE_BYTES,      "write_bytes" },
 };
 
-const char *ll_opcode2str(__u32 opcode)
+const char *ll_opcode2str(u32 opcode)
 {
 	/* When one of the assertions below fail, chances are that:
 	 *     1) A new opcode was added in include/lustre/lustre_idl.h,
@@ -162,7 +162,7 @@ const char *ll_opcode2str(__u32 opcode)
 	 *	and the opcode_offset() function in
 	 *	ptlrpc_internal.h needs to be modified.
 	 */
-	__u32 offset = opcode_offset(opcode);
+	u32 offset = opcode_offset(opcode);
 
 	LASSERTF(offset < LUSTRE_MAX_OPCODES,
 		 "offset %u >= LUSTRE_MAX_OPCODES %u\n",
@@ -173,7 +173,7 @@ const char *ll_opcode2str(__u32 opcode)
 	return ll_rpc_opcode_table[offset].opname;
 }
 
-static const char *ll_eopcode2str(__u32 opcode)
+static const char *ll_eopcode2str(u32 opcode)
 {
 	LASSERT(ll_eopcode_table[opcode].opcode == opcode);
 	return ll_eopcode_table[opcode].opname;
@@ -231,7 +231,7 @@ static const char *ll_eopcode2str(__u32 opcode)
 				     ll_eopcode2str(i), units);
 	}
 	for (i = 0; i < LUSTRE_MAX_OPCODES; i++) {
-		__u32 opcode = ll_rpc_opcode_table[i].opcode;
+		u32 opcode = ll_rpc_opcode_table[i].opcode;
 
 		lprocfs_counter_init(svc_stats,
 				     EXTRA_MAX_OPCODES + i, svc_counter_config,
@@ -709,14 +709,14 @@ static ssize_t ptlrpc_lprocfs_nrs_seq_write(struct file *file,
 
 struct ptlrpc_srh_iterator {
 	int			srhi_idx;
-	__u64			srhi_seq;
+	u64			srhi_seq;
 	struct ptlrpc_request	*srhi_req;
 };
 
 static int
 ptlrpc_lprocfs_svc_req_history_seek(struct ptlrpc_service_part *svcpt,
 				    struct ptlrpc_srh_iterator *srhi,
-				    __u64 seq)
+				    u64 seq)
 {
 	struct list_head *e;
 	struct ptlrpc_request *req;
@@ -772,7 +772,7 @@ struct ptlrpc_srh_iterator {
 /* convert seq_file pos to cpt */
 #define PTLRPC_REQ_POS2CPT(svc, pos)			\
 	((svc)->srv_cpt_bits == 0 ? 0 :			\
-	 (__u64)(pos) >> (64 - (svc)->srv_cpt_bits))
+	 (u64)(pos) >> (64 - (svc)->srv_cpt_bits))
 
 /* make up seq_file pos from cpt */
 #define PTLRPC_REQ_CPT2POS(svc, cpt)			\
@@ -788,8 +788,8 @@ struct ptlrpc_srh_iterator {
 /* convert position to sequence */
 #define PTLRPC_REQ_POS2SEQ(svc, pos)			\
 	((svc)->srv_cpt_bits == 0 ? (pos) :		\
-	 ((__u64)(pos) << (svc)->srv_cpt_bits) |	\
-	 ((__u64)(pos) >> (64 - (svc)->srv_cpt_bits)))
+	 ((u64)(pos) << (svc)->srv_cpt_bits) |	\
+	 ((u64)(pos) >> (64 - (svc)->srv_cpt_bits)))
 
 static void *
 ptlrpc_lprocfs_svc_req_history_start(struct seq_file *s, loff_t *pos)
@@ -801,7 +801,7 @@ struct ptlrpc_srh_iterator {
 	int				rc;
 	int				i;
 
-	if (sizeof(loff_t) != sizeof(__u64)) { /* can't support */
+	if (sizeof(loff_t) != sizeof(u64)) { /* can't support */
 		CWARN("Failed to read request history because size of loff_t %d can't match size of u64\n",
 		      (int)sizeof(loff_t));
 		return NULL;
@@ -852,7 +852,7 @@ struct ptlrpc_srh_iterator {
 	struct ptlrpc_service *svc = s->private;
 	struct ptlrpc_srh_iterator *srhi = iter;
 	struct ptlrpc_service_part *svcpt;
-	__u64 seq;
+	u64 seq;
 	int rc;
 	int i;
 
@@ -1120,7 +1120,7 @@ void ptlrpc_lprocfs_register_obd(struct obd_device *obddev)
 void ptlrpc_lprocfs_rpc_sent(struct ptlrpc_request *req, long amount)
 {
 	struct lprocfs_stats *svc_stats;
-	__u32 op = lustre_msg_get_opc(req->rq_reqmsg);
+	u32 op = lustre_msg_get_opc(req->rq_reqmsg);
 	int opc = opcode_offset(op);
 
 	svc_stats = req->rq_import->imp_obd->obd_svc_stats;
@@ -1243,7 +1243,7 @@ int lprocfs_wr_import(struct file *file, const char __user *buffer,
 	uuid = kbuf + prefix_len;
 	ptr = strstr(uuid, "::");
 	if (ptr) {
-		__u32 inst;
+		u32 inst;
 		char *endptr;
 
 		*ptr = 0;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
index 7e7db24..d3044a7 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
@@ -48,7 +48,7 @@
 static int ptl_send_buf(struct lnet_handle_md *mdh, void *base, int len,
 			enum lnet_ack_req ack, struct ptlrpc_cb_id *cbid,
 			lnet_nid_t self, struct lnet_process_id peer_id,
-			int portal, __u64 xid, unsigned int offset,
+			int portal, u64 xid, unsigned int offset,
 			struct lnet_handle_md *bulk_cookie)
 {
 	int rc;
@@ -530,7 +530,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
 	 * from the resend for reply timeout.
 	 */
 	if (request->rq_nr_resend && list_empty(&request->rq_unreplied_list)) {
-		__u64 min_xid = 0;
+		u64 min_xid = 0;
 		/*
 		 * resend for EINPROGRESS, allocate new xid to avoid reply
 		 * reconstruction
diff --git a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
index 5fadd5e..1fadba2 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
@@ -58,7 +58,7 @@ static inline u32 lustre_msg_hdr_size_v2(u32 count)
 				       lm_buflens[count]));
 }
 
-u32 lustre_msg_hdr_size(__u32 magic, u32 count)
+u32 lustre_msg_hdr_size(u32 magic, u32 count)
 {
 	switch (magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -102,7 +102,7 @@ u32 lustre_msg_early_size(void)
 		 * XXX Remove this whenever we drop interoperability with such
 		 *     client.
 		 */
-		__u32 pblen = sizeof(struct ptlrpc_body_v2);
+		u32 pblen = sizeof(struct ptlrpc_body_v2);
 
 		size = lustre_msg_size(LUSTRE_MSG_MAGIC_V2, 1, &pblen);
 	}
@@ -110,7 +110,7 @@ u32 lustre_msg_early_size(void)
 }
 EXPORT_SYMBOL(lustre_msg_early_size);
 
-u32 lustre_msg_size_v2(int count, __u32 *lengths)
+u32 lustre_msg_size_v2(int count, u32 *lengths)
 {
 	u32 size;
 	int i;
@@ -130,9 +130,9 @@ u32 lustre_msg_size_v2(int count, __u32 *lengths)
  *       target then the first buffer will be stripped because the ptlrpc
  *       data is part of the lustre_msg_v1 header. b=14043
  */
-u32 lustre_msg_size(__u32 magic, int count, __u32 *lens)
+u32 lustre_msg_size(u32 magic, int count, u32 *lens)
 {
-	__u32 size[] = { sizeof(struct ptlrpc_body) };
+	u32 size[] = { sizeof(struct ptlrpc_body) };
 
 	if (!lens) {
 		LASSERT(count == 1);
@@ -165,7 +165,7 @@ u32 lustre_packed_msg_size(struct lustre_msg *msg)
 	}
 }
 
-void lustre_init_msg_v2(struct lustre_msg_v2 *msg, int count, __u32 *lens,
+void lustre_init_msg_v2(struct lustre_msg_v2 *msg, int count, u32 *lens,
 			char **bufs)
 {
 	char *ptr;
@@ -193,7 +193,7 @@ void lustre_init_msg_v2(struct lustre_msg_v2 *msg, int count, __u32 *lens,
 EXPORT_SYMBOL(lustre_init_msg_v2);
 
 static int lustre_pack_request_v2(struct ptlrpc_request *req,
-				  int count, __u32 *lens, char **bufs)
+				  int count, u32 *lens, char **bufs)
 {
 	int reqlen, rc;
 
@@ -210,10 +210,10 @@ static int lustre_pack_request_v2(struct ptlrpc_request *req,
 	return 0;
 }
 
-int lustre_pack_request(struct ptlrpc_request *req, __u32 magic, int count,
-			__u32 *lens, char **bufs)
+int lustre_pack_request(struct ptlrpc_request *req, u32 magic, int count,
+			u32 *lens, char **bufs)
 {
-	__u32 size[] = { sizeof(struct ptlrpc_body) };
+	u32 size[] = { sizeof(struct ptlrpc_body) };
 
 	if (!lens) {
 		LASSERT(count == 1);
@@ -297,7 +297,7 @@ void lustre_put_emerg_rs(struct ptlrpc_reply_state *rs)
 }
 
 int lustre_pack_reply_v2(struct ptlrpc_request *req, int count,
-			 __u32 *lens, char **bufs, int flags)
+			 u32 *lens, char **bufs, int flags)
 {
 	struct ptlrpc_reply_state *rs;
 	int msg_len, rc;
@@ -338,11 +338,11 @@ int lustre_pack_reply_v2(struct ptlrpc_request *req, int count,
 }
 EXPORT_SYMBOL(lustre_pack_reply_v2);
 
-int lustre_pack_reply_flags(struct ptlrpc_request *req, int count, __u32 *lens,
+int lustre_pack_reply_flags(struct ptlrpc_request *req, int count, u32 *lens,
 			    char **bufs, int flags)
 {
 	int rc = 0;
-	__u32 size[] = { sizeof(struct ptlrpc_body) };
+	u32 size[] = { sizeof(struct ptlrpc_body) };
 
 	if (!lens) {
 		LASSERT(count == 1);
@@ -367,7 +367,7 @@ int lustre_pack_reply_flags(struct ptlrpc_request *req, int count, __u32 *lens,
 	return rc;
 }
 
-int lustre_pack_reply(struct ptlrpc_request *req, int count, __u32 *lens,
+int lustre_pack_reply(struct ptlrpc_request *req, int count, u32 *lens,
 		      char **bufs)
 {
 	return lustre_pack_reply_flags(req, count, lens, bufs, 0);
@@ -749,7 +749,7 @@ static inline struct ptlrpc_body *lustre_msg_ptlrpc_body(struct lustre_msg *msg)
 				 sizeof(struct ptlrpc_body_v2));
 }
 
-__u32 lustre_msghdr_get_flags(struct lustre_msg *msg)
+u32 lustre_msghdr_get_flags(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -762,7 +762,7 @@ __u32 lustre_msghdr_get_flags(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msghdr_get_flags);
 
-void lustre_msghdr_set_flags(struct lustre_msg *msg, __u32 flags)
+void lustre_msghdr_set_flags(struct lustre_msg *msg, u32 flags)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -773,7 +773,7 @@ void lustre_msghdr_set_flags(struct lustre_msg *msg, __u32 flags)
 	}
 }
 
-__u32 lustre_msg_get_flags(struct lustre_msg *msg)
+u32 lustre_msg_get_flags(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -841,7 +841,7 @@ void lustre_msg_clear_flags(struct lustre_msg *msg, u32 flags)
 }
 EXPORT_SYMBOL(lustre_msg_clear_flags);
 
-__u32 lustre_msg_get_op_flags(struct lustre_msg *msg)
+u32 lustre_msg_get_op_flags(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -892,7 +892,7 @@ struct lustre_handle *lustre_msg_get_handle(struct lustre_msg *msg)
 	}
 }
 
-__u32 lustre_msg_get_type(struct lustre_msg *msg)
+u32 lustre_msg_get_type(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -926,7 +926,7 @@ void lustre_msg_add_version(struct lustre_msg *msg, u32 version)
 	}
 }
 
-__u32 lustre_msg_get_opc(struct lustre_msg *msg)
+u32 lustre_msg_get_opc(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -946,7 +946,7 @@ __u32 lustre_msg_get_opc(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_opc);
 
-__u16 lustre_msg_get_tag(struct lustre_msg *msg)
+u16 lustre_msg_get_tag(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -965,7 +965,7 @@ __u16 lustre_msg_get_tag(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_tag);
 
-__u64 lustre_msg_get_last_committed(struct lustre_msg *msg)
+u64 lustre_msg_get_last_committed(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -984,7 +984,7 @@ __u64 lustre_msg_get_last_committed(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_last_committed);
 
-__u64 *lustre_msg_get_versions(struct lustre_msg *msg)
+u64 *lustre_msg_get_versions(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1003,7 +1003,7 @@ __u64 *lustre_msg_get_versions(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_versions);
 
-__u64 lustre_msg_get_transno(struct lustre_msg *msg)
+u64 lustre_msg_get_transno(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1043,7 +1043,7 @@ int lustre_msg_get_status(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_status);
 
-__u64 lustre_msg_get_slv(struct lustre_msg *msg)
+u64 lustre_msg_get_slv(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1061,7 +1061,7 @@ __u64 lustre_msg_get_slv(struct lustre_msg *msg)
 	}
 }
 
-void lustre_msg_set_slv(struct lustre_msg *msg, __u64 slv)
+void lustre_msg_set_slv(struct lustre_msg *msg, u64 slv)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1080,7 +1080,7 @@ void lustre_msg_set_slv(struct lustre_msg *msg, __u64 slv)
 	}
 }
 
-__u32 lustre_msg_get_limit(struct lustre_msg *msg)
+u32 lustre_msg_get_limit(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1098,7 +1098,7 @@ __u32 lustre_msg_get_limit(struct lustre_msg *msg)
 	}
 }
 
-void lustre_msg_set_limit(struct lustre_msg *msg, __u64 limit)
+void lustre_msg_set_limit(struct lustre_msg *msg, u64 limit)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1117,7 +1117,7 @@ void lustre_msg_set_limit(struct lustre_msg *msg, __u64 limit)
 	}
 }
 
-__u32 lustre_msg_get_conn_cnt(struct lustre_msg *msg)
+u32 lustre_msg_get_conn_cnt(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1136,7 +1136,7 @@ __u32 lustre_msg_get_conn_cnt(struct lustre_msg *msg)
 }
 EXPORT_SYMBOL(lustre_msg_get_conn_cnt);
 
-__u32 lustre_msg_get_magic(struct lustre_msg *msg)
+u32 lustre_msg_get_magic(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -1147,7 +1147,7 @@ __u32 lustre_msg_get_magic(struct lustre_msg *msg)
 	}
 }
 
-__u32 lustre_msg_get_timeout(struct lustre_msg *msg)
+u32 lustre_msg_get_timeout(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1165,7 +1165,7 @@ __u32 lustre_msg_get_timeout(struct lustre_msg *msg)
 	}
 }
 
-__u32 lustre_msg_get_service_time(struct lustre_msg *msg)
+u32 lustre_msg_get_service_time(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1183,7 +1183,7 @@ __u32 lustre_msg_get_service_time(struct lustre_msg *msg)
 	}
 }
 
-__u32 lustre_msg_get_cksum(struct lustre_msg *msg)
+u32 lustre_msg_get_cksum(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -1194,12 +1194,12 @@ __u32 lustre_msg_get_cksum(struct lustre_msg *msg)
 	}
 }
 
-__u32 lustre_msg_calc_cksum(struct lustre_msg *msg)
+u32 lustre_msg_calc_cksum(struct lustre_msg *msg)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
 		struct ptlrpc_body *pb = lustre_msg_ptlrpc_body(msg);
-		__u32 crc;
+		u32 crc;
 		unsigned int hsize = 4;
 
 		cfs_crypto_hash_digest(CFS_HASH_ALG_CRC32, (unsigned char *)pb,
@@ -1229,7 +1229,7 @@ void lustre_msg_set_handle(struct lustre_msg *msg, struct lustre_handle *handle)
 	}
 }
 
-void lustre_msg_set_type(struct lustre_msg *msg, __u32 type)
+void lustre_msg_set_type(struct lustre_msg *msg, u32 type)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1244,7 +1244,7 @@ void lustre_msg_set_type(struct lustre_msg *msg, __u32 type)
 	}
 }
 
-void lustre_msg_set_opc(struct lustre_msg *msg, __u32 opc)
+void lustre_msg_set_opc(struct lustre_msg *msg, u32 opc)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1274,7 +1274,7 @@ void lustre_msg_set_last_xid(struct lustre_msg *msg, u64 last_xid)
 	}
 }
 
-void lustre_msg_set_tag(struct lustre_msg *msg, __u16 tag)
+void lustre_msg_set_tag(struct lustre_msg *msg, u16 tag)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1290,7 +1290,7 @@ void lustre_msg_set_tag(struct lustre_msg *msg, __u16 tag)
 }
 EXPORT_SYMBOL(lustre_msg_set_tag);
 
-void lustre_msg_set_versions(struct lustre_msg *msg, __u64 *versions)
+void lustre_msg_set_versions(struct lustre_msg *msg, u64 *versions)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1309,7 +1309,7 @@ void lustre_msg_set_versions(struct lustre_msg *msg, __u64 *versions)
 }
 EXPORT_SYMBOL(lustre_msg_set_versions);
 
-void lustre_msg_set_transno(struct lustre_msg *msg, __u64 transno)
+void lustre_msg_set_transno(struct lustre_msg *msg, u64 transno)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1325,7 +1325,7 @@ void lustre_msg_set_transno(struct lustre_msg *msg, __u64 transno)
 }
 EXPORT_SYMBOL(lustre_msg_set_transno);
 
-void lustre_msg_set_status(struct lustre_msg *msg, __u32 status)
+void lustre_msg_set_status(struct lustre_msg *msg, u32 status)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1341,7 +1341,7 @@ void lustre_msg_set_status(struct lustre_msg *msg, __u32 status)
 }
 EXPORT_SYMBOL(lustre_msg_set_status);
 
-void lustre_msg_set_conn_cnt(struct lustre_msg *msg, __u32 conn_cnt)
+void lustre_msg_set_conn_cnt(struct lustre_msg *msg, u32 conn_cnt)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1356,7 +1356,7 @@ void lustre_msg_set_conn_cnt(struct lustre_msg *msg, __u32 conn_cnt)
 	}
 }
 
-void lustre_msg_set_timeout(struct lustre_msg *msg, __u32 timeout)
+void lustre_msg_set_timeout(struct lustre_msg *msg, u32 timeout)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1371,7 +1371,7 @@ void lustre_msg_set_timeout(struct lustre_msg *msg, __u32 timeout)
 	}
 }
 
-void lustre_msg_set_service_time(struct lustre_msg *msg, __u32 service_time)
+void lustre_msg_set_service_time(struct lustre_msg *msg, u32 service_time)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1390,7 +1390,7 @@ void lustre_msg_set_jobid(struct lustre_msg *msg, char *jobid)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
-		__u32 opc = lustre_msg_get_opc(msg);
+		u32 opc = lustre_msg_get_opc(msg);
 		struct ptlrpc_body *pb;
 
 		/* Don't set jobid for ldlm ast RPCs, they've been shrunk.
@@ -1416,7 +1416,7 @@ void lustre_msg_set_jobid(struct lustre_msg *msg, char *jobid)
 }
 EXPORT_SYMBOL(lustre_msg_set_jobid);
 
-void lustre_msg_set_cksum(struct lustre_msg *msg, __u32 cksum)
+void lustre_msg_set_cksum(struct lustre_msg *msg, u32 cksum)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2:
@@ -1427,7 +1427,7 @@ void lustre_msg_set_cksum(struct lustre_msg *msg, __u32 cksum)
 	}
 }
 
-void lustre_msg_set_mbits(struct lustre_msg *msg, __u64 mbits)
+void lustre_msg_set_mbits(struct lustre_msg *msg, u64 mbits)
 {
 	switch (msg->lm_magic) {
 	case LUSTRE_MSG_MAGIC_V2: {
@@ -1677,7 +1677,7 @@ void lustre_swab_ost_last_id(u64 *id)
 	__swab64s(id);
 }
 
-void lustre_swab_generic_32s(__u32 *val)
+void lustre_swab_generic_32s(u32 *val)
 {
 	__swab32s(val);
 }
@@ -1784,14 +1784,14 @@ void lustre_swab_mgs_target_info(struct mgs_target_info *mti)
 	__swab32s(&mti->mti_flags);
 	__swab32s(&mti->mti_instance);
 	__swab32s(&mti->mti_nid_count);
-	BUILD_BUG_ON(sizeof(lnet_nid_t) != sizeof(__u64));
+	BUILD_BUG_ON(sizeof(lnet_nid_t) != sizeof(u64));
 	for (i = 0; i < MTI_NIDS_MAX; i++)
 		__swab64s(&mti->mti_nids[i]);
 }
 
 void lustre_swab_mgs_nidtbl_entry(struct mgs_nidtbl_entry *entry)
 {
-	__u8 i;
+	u8 i;
 
 	__swab64s(&entry->mne_version);
 	__swab32s(&entry->mne_instance);
@@ -1800,13 +1800,13 @@ void lustre_swab_mgs_nidtbl_entry(struct mgs_nidtbl_entry *entry)
 
 	/* mne_nid_(count|type) must be one byte size because we're gonna
 	 * access it w/o swapping. */
-	BUILD_BUG_ON(sizeof(entry->mne_nid_count) != sizeof(__u8));
-	BUILD_BUG_ON(sizeof(entry->mne_nid_type) != sizeof(__u8));
+	BUILD_BUG_ON(sizeof(entry->mne_nid_count) != sizeof(u8));
+	BUILD_BUG_ON(sizeof(entry->mne_nid_type) != sizeof(u8));
 
 	/* remove this assertion if ipv6 is supported. */
 	LASSERT(entry->mne_nid_type == 0);
 	for (i = 0; i < entry->mne_nid_count; i++) {
-		BUILD_BUG_ON(sizeof(lnet_nid_t) != sizeof(__u64));
+		BUILD_BUG_ON(sizeof(lnet_nid_t) != sizeof(u64));
 		__swab64s(&entry->u.nids[i]);
 	}
 }
@@ -1877,7 +1877,7 @@ static void lustre_swab_fiemap_extent(struct fiemap_extent *fm_extent)
 
 void lustre_swab_fiemap(struct fiemap *fiemap)
 {
-	__u32 i;
+	u32 i;
 
 	__swab64s(&fiemap->fm_start);
 	__swab64s(&fiemap->fm_length);
@@ -2171,7 +2171,7 @@ void dump_rniobuf(struct niobuf_remote *nb)
 
 static void dump_obdo(struct obdo *oa)
 {
-	__u32 valid = oa->o_valid;
+	u32 valid = oa->o_valid;
 
 	CDEBUG(D_RPCTRACE, "obdo: o_valid = %08x\n", valid);
 	if (valid & OBD_MD_FLID)
@@ -2234,7 +2234,7 @@ void dump_ost_body(struct ost_body *ob)
 	dump_obdo(&ob->oa);
 }
 
-void dump_rcs(__u32 *rc)
+void dump_rcs(u32 *rc)
 {
 	CDEBUG(D_RPCTRACE, "rmf_rcs: %d\n", *rc);
 }
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
index da42b99..10c2520 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
@@ -73,7 +73,7 @@ void ptlrpc_set_add_new_req(struct ptlrpcd_ctl *pc,
 void ptlrpc_resend_req(struct ptlrpc_request *request);
 void ptlrpc_set_bulk_mbits(struct ptlrpc_request *req);
 void ptlrpc_assign_next_xid_nolock(struct ptlrpc_request *req);
-__u64 ptlrpc_known_replied_xid(struct obd_import *imp);
+u64 ptlrpc_known_replied_xid(struct obd_import *imp);
 void ptlrpc_add_unreplied(struct ptlrpc_request *req);
 
 /* events.c */
@@ -83,7 +83,7 @@ void ptlrpc_set_add_new_req(struct ptlrpcd_ctl *pc,
 void ptlrpc_request_handle_notconn(struct ptlrpc_request *req);
 void lustre_assert_wire_constants(void);
 int ptlrpc_import_in_recovery(struct obd_import *imp);
-int ptlrpc_set_import_discon(struct obd_import *imp, __u32 conn_cnt);
+int ptlrpc_set_import_discon(struct obd_import *imp, u32 conn_cnt);
 int ptlrpc_replay_next(struct obd_import *imp, int *inflight);
 void ptlrpc_initiate_recovery(struct obd_import *imp);
 
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
index 4bd0d9d..e39c38a 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
@@ -672,7 +672,7 @@ static int ptlrpcd_init(void)
 	int j;
 	int rc = 0;
 	struct cfs_cpt_table *cptable;
-	__u32 *cpts = NULL;
+	u32 *cpts = NULL;
 	int ncpts;
 	int cpt;
 	struct ptlrpcd *pd;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/recover.c b/drivers/staging/lustre/lustre/ptlrpc/recover.c
index 9d369f8..ed769a4 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/recover.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/recover.c
@@ -67,7 +67,7 @@ int ptlrpc_replay_next(struct obd_import *imp, int *inflight)
 {
 	int rc = 0;
 	struct ptlrpc_request *req = NULL, *pos;
-	__u64 last_transno;
+	u64 last_transno;
 
 	*inflight = 0;
 
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec.c b/drivers/staging/lustre/lustre/ptlrpc/sec.c
index 53f4d4f..165082a 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec.c
@@ -64,7 +64,7 @@
 
 int sptlrpc_register_policy(struct ptlrpc_sec_policy *policy)
 {
-	__u16 number = policy->sp_policy;
+	u16 number = policy->sp_policy;
 
 	LASSERT(policy->sp_name);
 	LASSERT(policy->sp_cops);
@@ -88,7 +88,7 @@ int sptlrpc_register_policy(struct ptlrpc_sec_policy *policy)
 
 int sptlrpc_unregister_policy(struct ptlrpc_sec_policy *policy)
 {
-	__u16 number = policy->sp_policy;
+	u16 number = policy->sp_policy;
 
 	LASSERT(number < SPTLRPC_POLICY_MAX);
 
@@ -109,13 +109,13 @@ int sptlrpc_unregister_policy(struct ptlrpc_sec_policy *policy)
 EXPORT_SYMBOL(sptlrpc_unregister_policy);
 
 static
-struct ptlrpc_sec_policy *sptlrpc_wireflavor2policy(__u32 flavor)
+struct ptlrpc_sec_policy *sptlrpc_wireflavor2policy(u32 flavor)
 {
 	static DEFINE_MUTEX(load_mutex);
 	static atomic_t loaded = ATOMIC_INIT(0);
 	struct ptlrpc_sec_policy *policy;
-	__u16 number = SPTLRPC_FLVR_POLICY(flavor);
-	__u16 flag = 0;
+	u16 number = SPTLRPC_FLVR_POLICY(flavor);
+	u16 flag = 0;
 
 	if (number >= SPTLRPC_POLICY_MAX)
 		return NULL;
@@ -150,7 +150,7 @@ struct ptlrpc_sec_policy *sptlrpc_wireflavor2policy(__u32 flavor)
 	return policy;
 }
 
-__u32 sptlrpc_name2flavor_base(const char *name)
+u32 sptlrpc_name2flavor_base(const char *name)
 {
 	if (!strcmp(name, "null"))
 		return SPTLRPC_FLVR_NULL;
@@ -169,9 +169,9 @@ __u32 sptlrpc_name2flavor_base(const char *name)
 }
 EXPORT_SYMBOL(sptlrpc_name2flavor_base);
 
-const char *sptlrpc_flavor2name_base(__u32 flvr)
+const char *sptlrpc_flavor2name_base(u32 flvr)
 {
-	__u32   base = SPTLRPC_FLVR_BASE(flvr);
+	u32   base = SPTLRPC_FLVR_BASE(flvr);
 
 	if (base == SPTLRPC_FLVR_BASE(SPTLRPC_FLVR_NULL))
 		return "null";
@@ -226,7 +226,7 @@ char *sptlrpc_flavor2name(struct sptlrpc_flavor *sf, char *buf, int bufsize)
 }
 EXPORT_SYMBOL(sptlrpc_flavor2name);
 
-static char *sptlrpc_secflags2str(__u32 flags, char *buf, int bufsize)
+static char *sptlrpc_secflags2str(u32 flags, char *buf, int bufsize)
 {
 	buf[0] = '\0';
 
@@ -2280,7 +2280,7 @@ int sptlrpc_unpack_user_desc(struct lustre_msg *msg, int offset, int swabbed)
 		return -EINVAL;
 	}
 
-	if (sizeof(*pud) + pud->pud_ngroups * sizeof(__u32) >
+	if (sizeof(*pud) + pud->pud_ngroups * sizeof(u32) >
 	    msg->lm_buflens[offset]) {
 		CERROR("%u groups are claimed but bufsize only %u\n",
 		       pud->pud_ngroups, msg->lm_buflens[offset]);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
index dbd6c74..93dcb6d 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
@@ -485,12 +485,12 @@ void sptlrpc_enc_pool_fini(void)
 	[BULK_HASH_ALG_SHA512]	= CFS_HASH_ALG_SHA512,
 };
 
-const char *sptlrpc_get_hash_name(__u8 hash_alg)
+const char *sptlrpc_get_hash_name(u8 hash_alg)
 {
 	return cfs_crypto_hash_name(cfs_hash_alg_id[hash_alg]);
 }
 
-__u8 sptlrpc_get_hash_alg(const char *algname)
+u8 sptlrpc_get_hash_alg(const char *algname)
 {
 	return cfs_crypto_hash_alg(algname);
 }
@@ -532,7 +532,7 @@ int bulk_sec_desc_unpack(struct lustre_msg *msg, int offset, int swabbed)
 }
 EXPORT_SYMBOL(bulk_sec_desc_unpack);
 
-int sptlrpc_get_bulk_checksum(struct ptlrpc_bulk_desc *desc, __u8 alg,
+int sptlrpc_get_bulk_checksum(struct ptlrpc_bulk_desc *desc, u8 alg,
 			      void *buf, int buflen)
 {
 	struct ahash_request *hdesc;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
index ecc387d..6933a53 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
@@ -57,7 +57,7 @@
 static inline
 void null_encode_sec_part(struct lustre_msg *msg, enum lustre_sec_part sp)
 {
-	msg->lm_secflvr |= (((__u32)sp) & 0xFF) << 24;
+	msg->lm_secflvr |= (((u32)sp) & 0xFF) << 24;
 }
 
 static inline
@@ -91,7 +91,7 @@ int null_ctx_sign(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req)
 static
 int null_ctx_verify(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req)
 {
-	__u32 cksums, cksumc;
+	u32 cksums, cksumc;
 
 	LASSERT(req->rq_repdata);
 
@@ -361,7 +361,7 @@ int null_authorize(struct ptlrpc_request *req)
 		else
 			req->rq_reply_off = 0;
 	} else {
-		__u32 cksum;
+		u32 cksum;
 
 		cksum = lustre_msg_calc_cksum(rs->rs_repbuf);
 		lustre_msg_set_cksum(rs->rs_repbuf, cksum);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
index ead1df7..0a31ff4 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
@@ -76,15 +76,15 @@ static inline struct plain_sec *sec2plsec(struct ptlrpc_sec *sec)
 #define PLAIN_FL_BULK		   (0x02)
 
 struct plain_header {
-	__u8	    ph_ver;	    /* 0 */
-	__u8	    ph_flags;
-	__u8	    ph_sp;	     /* source */
-	__u8	    ph_bulk_hash_alg;  /* complete flavor desc */
-	__u8	    ph_pad[4];
+	u8	    ph_ver;	    /* 0 */
+	u8	    ph_flags;
+	u8	    ph_sp;	     /* source */
+	u8	    ph_bulk_hash_alg;  /* complete flavor desc */
+	u8	    ph_pad[4];
 };
 
 struct plain_bulk_token {
-	__u8	    pbt_hash[8];
+	u8	    pbt_hash[8];
 };
 
 #define PLAIN_BSD_SIZE \
@@ -118,7 +118,7 @@ static int plain_unpack_bsd(struct lustre_msg *msg, int swabbed)
 }
 
 static int plain_generate_bulk_csum(struct ptlrpc_bulk_desc *desc,
-				    __u8 hash_alg,
+				    u8 hash_alg,
 				    struct plain_bulk_token *token)
 {
 	if (hash_alg == BULK_HASH_ALG_NULL)
@@ -130,7 +130,7 @@ static int plain_generate_bulk_csum(struct ptlrpc_bulk_desc *desc,
 }
 
 static int plain_verify_bulk_csum(struct ptlrpc_bulk_desc *desc,
-				  __u8 hash_alg,
+				  u8 hash_alg,
 				  struct plain_bulk_token *tokenr)
 {
 	struct plain_bulk_token tokenv;
@@ -216,7 +216,7 @@ int plain_ctx_verify(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req)
 {
 	struct lustre_msg *msg = req->rq_repdata;
 	struct plain_header *phdr;
-	__u32 cksum;
+	u32 cksum;
 	int swabbed;
 
 	if (msg->lm_bufcount != PLAIN_PACK_SEGMENTS) {
@@ -543,7 +543,7 @@ int plain_alloc_reqbuf(struct ptlrpc_sec *sec,
 		       struct ptlrpc_request *req,
 		       int msgsize)
 {
-	__u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
+	u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
 	int alloc_len;
 
 	buflens[PLAIN_PACK_HDR_OFF] = sizeof(struct plain_header);
@@ -603,7 +603,7 @@ int plain_alloc_repbuf(struct ptlrpc_sec *sec,
 		       struct ptlrpc_request *req,
 		       int msgsize)
 {
-	__u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
+	u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
 	int alloc_len;
 
 	buflens[PLAIN_PACK_HDR_OFF] = sizeof(struct plain_header);
@@ -790,7 +790,7 @@ int plain_accept(struct ptlrpc_request *req)
 int plain_alloc_rs(struct ptlrpc_request *req, int msgsize)
 {
 	struct ptlrpc_reply_state *rs;
-	__u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
+	u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
 	int rs_size = sizeof(*rs);
 
 	LASSERT(msgsize % 8 == 0);
@@ -1001,7 +1001,7 @@ int plain_svc_wrap_bulk(struct ptlrpc_request *req,
 
 int sptlrpc_plain_init(void)
 {
-	__u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
+	u32 buflens[PLAIN_PACK_SEGMENTS] = { 0, };
 	int rc;
 
 	buflens[PLAIN_PACK_MSG_OFF] = lustre_msg_early_size();
diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
index 6a5a9c5..1030f65 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/service.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
@@ -507,7 +507,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
 		INIT_LIST_HEAD(&array->paa_reqs_array[index]);
 
 	array->paa_reqs_count =
-		kzalloc_node(sizeof(__u32) * size, GFP_NOFS,
+		kzalloc_node(sizeof(u32) * size, GFP_NOFS,
 			     cfs_cpt_spread_node(svc->srv_cptable, cpt));
 	if (!array->paa_reqs_count)
 		goto free_reqs_array;
@@ -555,7 +555,7 @@ struct ptlrpc_service *
 	struct ptlrpc_service *service;
 	struct ptlrpc_service_part *svcpt;
 	struct cfs_cpt_table *cptable;
-	__u32 *cpts = NULL;
+	u32 *cpts = NULL;
 	int ncpts;
 	int cpt;
 	int rc;
@@ -925,7 +925,7 @@ static int ptlrpc_check_req(struct ptlrpc_request *req)
 static void ptlrpc_at_set_timer(struct ptlrpc_service_part *svcpt)
 {
 	struct ptlrpc_at_array *array = &svcpt->scp_at_array;
-	__s32 next;
+	s32 next;
 
 	if (array->paa_count == 0) {
 		del_timer(&svcpt->scp_at_timer);
@@ -933,7 +933,7 @@ static void ptlrpc_at_set_timer(struct ptlrpc_service_part *svcpt)
 	}
 
 	/* Set timer for closest deadline */
-	next = (__s32)(array->paa_deadline - ktime_get_real_seconds() -
+	next = (s32)(array->paa_deadline - ktime_get_real_seconds() -
 		       at_early_margin);
 	if (next <= 0) {
 		ptlrpc_at_timer(&svcpt->scp_at_timer);
@@ -950,7 +950,7 @@ static int ptlrpc_at_add_timed(struct ptlrpc_request *req)
 	struct ptlrpc_service_part *svcpt = req->rq_rqbd->rqbd_svcpt;
 	struct ptlrpc_at_array *array = &svcpt->scp_at_array;
 	struct ptlrpc_request *rq = NULL;
-	__u32 index;
+	u32 index;
 
 	if (AT_OFF)
 		return 0;
@@ -1158,7 +1158,7 @@ static void ptlrpc_at_check_timed(struct ptlrpc_service_part *svcpt)
 	struct ptlrpc_at_array *array = &svcpt->scp_at_array;
 	struct ptlrpc_request *rq, *n;
 	struct list_head work_list;
-	__u32 index, count;
+	u32 index, count;
 	time64_t deadline;
 	time64_t now = ktime_get_real_seconds();
 	long delay;
@@ -1478,7 +1478,7 @@ static bool ptlrpc_server_normal_pending(struct ptlrpc_service_part *svcpt,
 {
 	struct ptlrpc_service *svc = svcpt->scp_service;
 	struct ptlrpc_request *req;
-	__u32 deadline;
+	u32 deadline;
 	int rc;
 
 	spin_lock(&svcpt->scp_lock);
@@ -1757,7 +1757,7 @@ static bool ptlrpc_server_normal_pending(struct ptlrpc_service_part *svcpt,
 	       (request->rq_repmsg ?
 		lustre_msg_get_status(request->rq_repmsg) : -999));
 	if (likely(svc->srv_stats && request->rq_reqmsg)) {
-		__u32 op = lustre_msg_get_opc(request->rq_reqmsg);
+		u32 op = lustre_msg_get_opc(request->rq_reqmsg);
 		int opc = opcode_offset(op);
 
 		if (opc > 0 && !(op == LDLM_ENQUEUE || op == MDS_REINT)) {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 05/26] lustre: use kernel types for lustre internal headers
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (3 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 04/26] ptlrpc: use kernel types for " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 06/26] ldlm: use kernel types for kernel code James Simmons
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The Lustre internal headers were originally shared between the user
land and kernel implementations, so the source still contains many
types of the form __u32. Since this is almost entirely kernel-only
code, change those types to their kernel equivalents (u32, u64, ...).
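
For reference, the kernel-only names are plain typedefs of the UAPI
spellings (roughly as defined in include/asm-generic/int-ll64.h), so
this is purely a naming change with no change in size or layout:

	typedef __u32 u32;	/* unsigned 32-bit, kernel spelling */
	typedef __u64 u64;	/* unsigned 64-bit */
	typedef __s32 s32;	/* signed 32-bit */
	typedef __s64 s64;	/* signed 64-bit */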

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/cl_object.h  |  10 +-
 .../staging/lustre/lustre/include/lprocfs_status.h |  28 ++--
 drivers/staging/lustre/lustre/include/lu_object.h  |  46 +++----
 .../staging/lustre/lustre/include/lustre_debug.h   |   4 +-
 .../staging/lustre/lustre/include/lustre_disk.h    |   6 +-
 drivers/staging/lustre/lustre/include/lustre_dlm.h |  66 +++++-----
 .../staging/lustre/lustre/include/lustre_export.h  |  14 +-
 drivers/staging/lustre/lustre/include/lustre_fid.h |  34 ++---
 drivers/staging/lustre/lustre/include/lustre_fld.h |   6 +-
 drivers/staging/lustre/lustre/include/lustre_ha.h  |   2 +-
 .../staging/lustre/lustre/include/lustre_handles.h |   4 +-
 .../staging/lustre/lustre/include/lustre_import.h  |  28 ++--
 .../staging/lustre/lustre/include/lustre_intent.h  |   8 +-
 drivers/staging/lustre/lustre/include/lustre_lmv.h |  26 ++--
 drivers/staging/lustre/lustre/include/lustre_log.h |   8 +-
 drivers/staging/lustre/lustre/include/lustre_mdc.h |   2 +-
 drivers/staging/lustre/lustre/include/lustre_net.h | 144 ++++++++++-----------
 .../lustre/lustre/include/lustre_nrs_fifo.h        |   4 +-
 .../lustre/lustre/include/lustre_req_layout.h      |   4 +-
 drivers/staging/lustre/lustre/include/lustre_sec.h |  92 ++++++-------
 .../staging/lustre/lustre/include/lustre_swab.h    |   6 +-
 drivers/staging/lustre/lustre/include/obd.h        |  84 ++++++------
 drivers/staging/lustre/lustre/include/obd_class.h  |  28 ++--
 23 files changed, 327 insertions(+), 327 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h
index 41b32b7..3109c04 100644
--- a/drivers/staging/lustre/lustre/include/cl_object.h
+++ b/drivers/staging/lustre/lustre/include/cl_object.h
@@ -153,7 +153,7 @@ struct cl_attr {
 	 *
 	 * \todo XXX An interface for block size is needed.
 	 */
-	__u64  cat_blocks;
+	u64  cat_blocks;
 	/**
 	 * User identifier for quota purposes.
 	 */
@@ -164,7 +164,7 @@ struct cl_attr {
 	gid_t  cat_gid;
 
 	/* nlink of the directory */
-	__u64  cat_nlink;
+	u64  cat_nlink;
 
 	/* Project identifier for quota purpose. */
 	u32	cat_projid;
@@ -1151,14 +1151,14 @@ struct cl_lock_descr {
 	/** Index of the last page (inclusive) protected by this lock. */
 	pgoff_t	   cld_end;
 	/** Group ID, for group lock */
-	__u64	     cld_gid;
+	u64	     cld_gid;
 	/** Lock mode. */
 	enum cl_lock_mode cld_mode;
 	/**
 	 * flags to enqueue lock. A combination of bit-flags from
 	 * enum cl_enq_flags.
 	 */
-	__u32	     cld_enq_flags;
+	u32	     cld_enq_flags;
 };
 
 #define DDESCR "%s(%d):[%lu, %lu]:%x"
@@ -2438,7 +2438,7 @@ void cl_sync_io_note(const struct lu_env *env, struct cl_sync_io *anchor,
  */
 
 struct lu_env *cl_env_get(u16 *refcheck);
-struct lu_env *cl_env_alloc(u16 *refcheck, __u32 tags);
+struct lu_env *cl_env_alloc(u16 *refcheck, u32 tags);
 void cl_env_put(struct lu_env *env, u16 *refcheck);
 unsigned int cl_env_cache_purge(unsigned int nr);
 struct lu_env *cl_env_percpu_get(void);
diff --git a/drivers/staging/lustre/lustre/include/lprocfs_status.h b/drivers/staging/lustre/lustre/include/lprocfs_status.h
index c22ae3d..7649040 100644
--- a/drivers/staging/lustre/lustre/include/lprocfs_status.h
+++ b/drivers/staging/lustre/lustre/include/lprocfs_status.h
@@ -136,7 +136,7 @@ enum {
 	LPROCFS_TYPE_CYCLE	= 0x0800,
 };
 
-#define LC_MIN_INIT ((~(__u64)0) >> 1)
+#define LC_MIN_INIT ((~(u64)0) >> 1)
 
 struct lprocfs_counter_header {
 	unsigned int		lc_config;
@@ -145,17 +145,17 @@ struct lprocfs_counter_header {
 };
 
 struct lprocfs_counter {
-	__s64	lc_count;
-	__s64	lc_min;
-	__s64	lc_max;
-	__s64	lc_sumsquare;
+	s64	lc_count;
+	s64	lc_min;
+	s64	lc_max;
+	s64	lc_sumsquare;
 	/*
 	 * Every counter has lc_array_sum[0], while lc_array_sum[1] is only
 	 * for irq context counter, i.e. stats with
 	 * LPROCFS_STATS_FLAG_IRQ_SAFE flag, its counter need
 	 * lc_array_sum[1]
 	 */
-	__s64	lc_array_sum[1];
+	s64	lc_array_sum[1];
 };
 
 #define lc_sum		lc_array_sum[0]
@@ -163,7 +163,7 @@ struct lprocfs_counter {
 
 struct lprocfs_percpu {
 #ifndef __GNUC__
-	__s64			pad;
+	s64			pad;
 #endif
 	struct lprocfs_counter lp_cntr[0];
 };
@@ -210,7 +210,7 @@ struct lprocfs_stats {
 #define OPC_RANGE(seg) (seg ## _LAST_OPC - seg ## _FIRST_OPC)
 
 /* Pack all opcodes down into a single monotonically increasing index */
-static inline int opcode_offset(__u32 opc)
+static inline int opcode_offset(u32 opc)
 {
 	if (opc < OST_LAST_OPC) {
 		 /* OST opcode */
@@ -394,7 +394,7 @@ void lprocfs_stats_unlock(struct lprocfs_stats *stats,
 
 	/* irq safe stats need lc_array_sum[1] */
 	if ((stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE) != 0)
-		percpusize += stats->ls_num * sizeof(__s64);
+		percpusize += stats->ls_num * sizeof(s64);
 
 	if ((stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) == 0)
 		percpusize = L1_CACHE_ALIGN(percpusize);
@@ -411,7 +411,7 @@ void lprocfs_stats_unlock(struct lprocfs_stats *stats,
 	cntr = &stats->ls_percpu[cpuid]->lp_cntr[index];
 
 	if ((stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE) != 0)
-		cntr = (void *)cntr + index * sizeof(__s64);
+		cntr = (void *)cntr + index * sizeof(s64);
 
 	return cntr;
 }
@@ -431,11 +431,11 @@ void lprocfs_stats_unlock(struct lprocfs_stats *stats,
 #define lprocfs_counter_decr(stats, idx) \
 	lprocfs_counter_sub(stats, idx, 1)
 
-__s64 lprocfs_read_helper(struct lprocfs_counter *lc,
+s64 lprocfs_read_helper(struct lprocfs_counter *lc,
 			  struct lprocfs_counter_header *header,
 			  enum lprocfs_stats_flags flags,
 			  enum lprocfs_fields_flags field);
-__u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
+u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
 			      enum lprocfs_fields_flags field);
 
 extern struct lprocfs_stats *
@@ -485,10 +485,10 @@ int lprocfs_wr_pinger_recov(struct file *file, const char __user *buffer,
 int lprocfs_write_helper(const char __user *buffer, unsigned long count,
 			 int *val);
 int lprocfs_write_u64_helper(const char __user *buffer,
-			     unsigned long count, __u64 *val);
+			     unsigned long count, u64 *val);
 int lprocfs_write_frac_u64_helper(const char __user *buffer,
 				  unsigned long count,
-				  __u64 *val, int mult);
+				  u64 *val, int mult);
 char *lprocfs_find_named_value(const char *buffer, const char *name,
 			       size_t *count);
 void lprocfs_oh_tally(struct obd_histogram *oh, unsigned int value);
diff --git a/drivers/staging/lustre/lustre/include/lu_object.h b/drivers/staging/lustre/lustre/include/lu_object.h
index 47f8021..3e663a9 100644
--- a/drivers/staging/lustre/lustre/include/lu_object.h
+++ b/drivers/staging/lustre/lustre/include/lu_object.h
@@ -309,7 +309,7 @@ struct lu_device_type {
 	/**
 	 * Tag bits. Taken from enum lu_device_tag. Never modified once set.
 	 */
-	__u32				   ldt_tags;
+	u32				   ldt_tags;
 	/**
 	 * Name of this class. Unique system-wide. Never modified once set.
 	 */
@@ -325,7 +325,7 @@ struct lu_device_type {
 	/**
 	 * \todo XXX: temporary: context tags used by obd_*() calls.
 	 */
-	__u32				   ldt_ctx_tags;
+	u32				   ldt_ctx_tags;
 	/**
 	 * Number of existing device type instances.
 	 */
@@ -392,7 +392,7 @@ static inline int lu_device_is_md(const struct lu_device *d)
  */
 struct lu_attr {
 	/** size in bytes */
-	__u64	  la_size;
+	u64	  la_size;
 	/** modification time in seconds since Epoch */
 	s64	  la_mtime;
 	/** access time in seconds since Epoch */
@@ -400,29 +400,29 @@ struct lu_attr {
 	/** change time in seconds since Epoch */
 	s64	  la_ctime;
 	/** 512-byte blocks allocated to object */
-	__u64	  la_blocks;
+	u64	  la_blocks;
 	/** permission bits and file type */
-	__u32	  la_mode;
+	u32	  la_mode;
 	/** owner id */
-	__u32	  la_uid;
+	u32	  la_uid;
 	/** group id */
-	__u32	  la_gid;
+	u32	  la_gid;
 	/** object flags */
-	__u32	  la_flags;
+	u32	  la_flags;
 	/** number of persistent references to this object */
-	__u32	  la_nlink;
+	u32	  la_nlink;
 	/** blk bits of the object*/
-	__u32	  la_blkbits;
+	u32	  la_blkbits;
 	/** blk size of the object*/
-	__u32	  la_blksize;
+	u32	  la_blksize;
 	/** real device */
-	__u32	  la_rdev;
+	u32	  la_rdev;
 	/**
 	 * valid bits
 	 *
 	 * \see enum la_valid
 	 */
-	__u64	  la_valid;
+	u64	  la_valid;
 };
 
 /** Bit-mask of valid attributes */
@@ -522,7 +522,7 @@ struct lu_object_header {
 	 * Common object attributes, cached for efficiency. From enum
 	 * lu_object_header_attr.
 	 */
-	__u32		  loh_attr;
+	u32		  loh_attr;
 	/**
 	 * Linkage into per-site hash table. Protected by lu_site::ls_guard.
 	 */
@@ -812,7 +812,7 @@ static inline int lu_object_assert_not_exists(const struct lu_object *o)
 /**
  * Attr of this object.
  */
-static inline __u32 lu_object_attr(const struct lu_object *o)
+static inline u32 lu_object_attr(const struct lu_object *o)
 {
 	LASSERT(lu_object_exists(o) != 0);
 	return o->lo_header->loh_attr;
@@ -849,13 +849,13 @@ static inline void lu_object_ref_del_at(struct lu_object *o,
 /** input params, should be filled out by mdt */
 struct lu_rdpg {
 	/** hash */
-	__u64		   rp_hash;
+	u64		   rp_hash;
 	/** count in bytes */
 	unsigned int	    rp_count;
 	/** number of pages */
 	unsigned int	    rp_npages;
 	/** requested attr */
-	__u32		   rp_attrs;
+	u32		   rp_attrs;
 	/** pointers to pages */
 	struct page	   **rp_pages;
 };
@@ -912,7 +912,7 @@ struct lu_context {
 	 * of tags has non-empty intersection with one for key. Tags are taken
 	 * from enum lu_context_tag.
 	 */
-	__u32		  lc_tags;
+	u32		  lc_tags;
 	enum lu_context_state  lc_state;
 	/**
 	 * Pointer to the home service thread. NULL for other execution
@@ -1049,7 +1049,7 @@ struct lu_context_key {
 	/**
 	 * Set of tags for which values of this key are to be instantiated.
 	 */
-	__u32 lct_tags;
+	u32 lct_tags;
 	/**
 	 * Value constructor. This is called when new value is created for a
 	 * context. Returns pointer to new value of error pointer.
@@ -1194,7 +1194,7 @@ void *lu_context_key_get(const struct lu_context *ctx,
 	LU_TYPE_START(mod, __VA_ARGS__);	\
 	LU_TYPE_STOP(mod, __VA_ARGS__)
 
-int lu_context_init(struct lu_context *ctx, __u32 tags);
+int lu_context_init(struct lu_context *ctx, u32 tags);
 void lu_context_fini(struct lu_context *ctx);
 void lu_context_enter(struct lu_context *ctx);
 void lu_context_exit(struct lu_context *ctx);
@@ -1224,7 +1224,7 @@ struct lu_env {
 	struct lu_context *le_ses;
 };
 
-int lu_env_init(struct lu_env *env, __u32 tags);
+int lu_env_init(struct lu_env *env, u32 tags);
 void lu_env_fini(struct lu_env *env);
 int lu_env_refill(struct lu_env *env);
 
@@ -1293,8 +1293,8 @@ struct lu_kmem_descr {
 int  lu_kmem_init(struct lu_kmem_descr *caches);
 void lu_kmem_fini(struct lu_kmem_descr *caches);
 
-extern __u32 lu_context_tags_default;
-extern __u32 lu_session_tags_default;
+extern u32 lu_context_tags_default;
+extern u32 lu_session_tags_default;
 
 /** @} lu */
 #endif /* __LUSTRE_LU_OBJECT_H */
diff --git a/drivers/staging/lustre/lustre/include/lustre_debug.h b/drivers/staging/lustre/lustre/include/lustre_debug.h
index 721a81f..b9414fc 100644
--- a/drivers/staging/lustre/lustre/include/lustre_debug.h
+++ b/drivers/staging/lustre/lustre/include/lustre_debug.h
@@ -44,8 +44,8 @@
 
 /* lib/debug.c */
 int dump_req(struct ptlrpc_request *req);
-int block_debug_setup(void *addr, int len, __u64 off, __u64 id);
-int block_debug_check(char *who, void *addr, int len, __u64 off, __u64 id);
+int block_debug_setup(void *addr, int len, u64 off, u64 id);
+int block_debug_check(char *who, void *addr, int len, u64 off, u64 id);
 
 /** @} debug */
 
diff --git a/drivers/staging/lustre/lustre/include/lustre_disk.h b/drivers/staging/lustre/lustre/include/lustre_disk.h
index 7b6421d..091a09f 100644
--- a/drivers/staging/lustre/lustre/include/lustre_disk.h
+++ b/drivers/staging/lustre/lustre/include/lustre_disk.h
@@ -70,8 +70,8 @@
 
 /* gleaned from the mount command - no persistent info here */
 struct lustre_mount_data {
-	__u32	lmd_magic;
-	__u32	lmd_flags;	/* lustre mount flags */
+	u32	lmd_magic;
+	u32	lmd_flags;	/* lustre mount flags */
 	int	lmd_mgs_failnodes; /* mgs failover node count */
 	int	lmd_exclude_count;
 	int	lmd_recovery_time_soft;
@@ -84,7 +84,7 @@ struct lustre_mount_data {
 				 * _device_ mount options)
 				 */
 	char	*lmd_params;	/* lustre params */
-	__u32	*lmd_exclude;	/* array of OSTs to ignore */
+	u32	*lmd_exclude;	/* array of OSTs to ignore */
 	char	*lmd_mgs;	/* MGS nid */
 	char	*lmd_osd_type;	/* OSD type */
 	char    *lmd_nidnet;	/* network to restrict this client to */
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm.h b/drivers/staging/lustre/lustre/include/lustre_dlm.h
index e2bbcaa..7c12087 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm.h
@@ -243,9 +243,9 @@ struct ldlm_pool {
 	/** Cancel rate per T. */
 	atomic_t		pl_cancel_rate;
 	/** Server lock volume (SLV). Protected by pl_lock. */
-	__u64			pl_server_lock_volume;
+	u64			pl_server_lock_volume;
 	/** Current biggest client lock volume. Protected by pl_lock. */
-	__u64			pl_client_lock_volume;
+	u64			pl_client_lock_volume;
 	/** Lock volume factor. SLV on client is calculated as following:
 	 *  server_slv * lock_volume_factor.
 	 */
@@ -377,10 +377,10 @@ struct ldlm_namespace {
 	 * Namespace connect flags supported by server (may be changed via
 	 * sysfs, LRU resize may be disabled/enabled).
 	 */
-	__u64			ns_connect_flags;
+	u64			ns_connect_flags;
 
 	/** Client side original connect flags supported by server. */
-	__u64			ns_orig_connect_flags;
+	u64			ns_orig_connect_flags;
 
 	/* namespace debugfs dir entry */
 	struct dentry		*ns_debugfs_entry;
@@ -494,7 +494,7 @@ typedef int (*ldlm_blocking_callback)(struct ldlm_lock *lock,
 				      struct ldlm_lock_desc *new, void *data,
 				      int flag);
 /** Type for completion callback function of a lock. */
-typedef int (*ldlm_completion_callback)(struct ldlm_lock *lock, __u64 flags,
+typedef int (*ldlm_completion_callback)(struct ldlm_lock *lock, u64 flags,
 					void *data);
 /** Type for glimpse callback function of a lock. */
 typedef int (*ldlm_glimpse_callback)(struct ldlm_lock *lock, void *data);
@@ -503,7 +503,7 @@ typedef int (*ldlm_completion_callback)(struct ldlm_lock *lock, __u64 flags,
 struct ldlm_glimpse_work {
 	struct ldlm_lock	*gl_lock; /* lock to glimpse */
 	struct list_head		 gl_list; /* linkage to other gl work structs */
-	__u32			 gl_flags;/* see LDLM_GL_WORK_* below */
+	u32			 gl_flags;/* see LDLM_GL_WORK_* below */
 	union ldlm_gl_desc	*gl_desc; /* glimpse descriptor to be packed in
 					   * glimpse callback request
 					   */
@@ -538,12 +538,12 @@ enum ldlm_cancel_flags {
 };
 
 struct ldlm_flock {
-	__u64 start;
-	__u64 end;
-	__u64 owner;
-	__u64 blocking_owner;
+	u64 start;
+	u64 end;
+	u64 owner;
+	u64 blocking_owner;
 	struct obd_export *blocking_export;
-	__u32 pid;
+	u32 pid;
 };
 
 union ldlm_policy_data {
@@ -566,7 +566,7 @@ enum lvb_type {
 /**
  * LDLM_GID_ANY is used to match any group id in ldlm_lock_match().
  */
-#define LDLM_GID_ANY	((__u64)-1)
+#define LDLM_GID_ANY	((u64)-1)
 
 /**
  * LDLM lock structure
@@ -621,7 +621,7 @@ struct ldlm_lock {
 	 * Interval-tree node for ldlm_extent.
 	 */
 	struct rb_node		l_rb;
-	__u64			__subtree_last;
+	u64			__subtree_last;
 
 	/**
 	 * Requested mode.
@@ -681,14 +681,14 @@ struct ldlm_lock {
 	 * Lock state flags. Protected by lr_lock.
 	 * \see lustre_dlm_flags.h where the bits are defined.
 	 */
-	__u64			l_flags;
+	u64			l_flags;
 
 	/**
 	 * Lock r/w usage counters.
 	 * Protected by lr_lock.
 	 */
-	__u32			l_readers;
-	__u32			l_writers;
+	u32			l_readers;
+	u32			l_writers;
 	/**
 	 * If the lock is granted, a process sleeps on this waitq to learn when
 	 * it's no longer in use.  If the lock is not granted, a process sleeps
@@ -720,7 +720,7 @@ struct ldlm_lock {
 	/**
 	 * Temporary storage for a LVB received during an enqueue operation.
 	 */
-	__u32			l_lvb_len;
+	u32			l_lvb_len;
 	void			*l_lvb_data;
 
 	/** Private storage for lock user. Opaque to LDLM. */
@@ -735,7 +735,7 @@ struct ldlm_lock {
 	 * Used by Commit on Share (COS) code. Currently only used for
 	 * inodebits locks on MDS.
 	 */
-	__u64			l_client_cookie;
+	u64			l_client_cookie;
 
 	/**
 	 * List item for locks waiting for cancellation from clients.
@@ -756,7 +756,7 @@ struct ldlm_lock {
 	unsigned long		l_callback_timeout;
 
 	/** Local PID of process which created this lock. */
-	__u32			l_pid;
+	u32			l_pid;
 
 	/**
 	 * Number of times blocking AST was sent for this lock.
@@ -1007,7 +1007,7 @@ void _ldlm_lock_debug(struct ldlm_lock *lock,
 	}								    \
 } while (0)
 
-typedef int (*ldlm_processing_policy)(struct ldlm_lock *lock, __u64 *flags,
+typedef int (*ldlm_processing_policy)(struct ldlm_lock *lock, u64 *flags,
 				      int first_enq, enum ldlm_error *err,
 				      struct list_head *work_list);
 
@@ -1034,10 +1034,10 @@ int ldlm_resource_iterate(struct ldlm_namespace *, const struct ldlm_res_id *,
 int ldlm_replay_locks(struct obd_import *imp);
 
 /* ldlm_flock.c */
-int ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data);
+int ldlm_flock_completion_ast(struct ldlm_lock *lock, u64 flags, void *data);
 
 /* ldlm_extent.c */
-__u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, __u64 old_kms);
+u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, u64 old_kms);
 
 struct ldlm_callback_suite {
 	ldlm_completion_callback lcs_completion;
@@ -1053,7 +1053,7 @@ struct ldlm_callback_suite {
 /* ldlm_lock.c */
 void ldlm_lock2handle(const struct ldlm_lock *lock,
 		      struct lustre_handle *lockh);
-struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *, __u64 flags);
+struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *, u64 flags);
 void ldlm_cancel_callback(struct ldlm_lock *);
 int ldlm_lock_remove_from_lru(struct ldlm_lock *);
 int ldlm_lock_set_data(const struct lustre_handle *lockh, void *data);
@@ -1070,7 +1070,7 @@ static inline struct ldlm_lock *ldlm_handle2lock(const struct lustre_handle *h)
 	lu_ref_del(&lock->l_reference, "handle", current)
 
 static inline struct ldlm_lock *
-ldlm_handle2lock_long(const struct lustre_handle *h, __u64 flags)
+ldlm_handle2lock_long(const struct lustre_handle *h, u64 flags)
 {
 	struct ldlm_lock *lock;
 
@@ -1154,13 +1154,13 @@ void ldlm_lock_decref_and_cancel(const struct lustre_handle *lockh,
 void ldlm_lock_fail_match_locked(struct ldlm_lock *lock);
 void ldlm_lock_allow_match(struct ldlm_lock *lock);
 void ldlm_lock_allow_match_locked(struct ldlm_lock *lock);
-enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
+enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, u64 flags,
 			       const struct ldlm_res_id *,
 			       enum ldlm_type type, union ldlm_policy_data *,
 			       enum ldlm_mode mode, struct lustre_handle *,
 			       int unref);
 enum ldlm_mode ldlm_revalidate_lock_handle(const struct lustre_handle *lockh,
-					   __u64 *bits);
+					   u64 *bits);
 void ldlm_lock_cancel(struct ldlm_lock *lock);
 void ldlm_lock_dump_handle(int level, const struct lustre_handle *);
 void ldlm_unlink_lock_skiplist(struct ldlm_lock *req);
@@ -1170,7 +1170,7 @@ struct ldlm_namespace *
 ldlm_namespace_new(struct obd_device *obd, char *name,
 		   enum ldlm_side client, enum ldlm_appetite apt,
 		   enum ldlm_ns_type ns_type);
-int ldlm_namespace_cleanup(struct ldlm_namespace *ns, __u64 flags);
+int ldlm_namespace_cleanup(struct ldlm_namespace *ns, u64 flags);
 void ldlm_namespace_free_prior(struct ldlm_namespace *ns,
 			       struct obd_import *imp,
 			       int force);
@@ -1213,7 +1213,7 @@ int ldlm_lock_change_resource(struct ldlm_namespace *, struct ldlm_lock *,
  * processing.
  * @{
  */
-int ldlm_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data);
+int ldlm_completion_ast(struct ldlm_lock *lock, u64 flags, void *data);
 /** @} ldlm_local_ast */
 
 /** \defgroup ldlm_cli_api API to operate on locks from actual LDLM users.
@@ -1224,8 +1224,8 @@ int ldlm_lock_change_resource(struct ldlm_namespace *, struct ldlm_lock *,
 int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp,
 		     struct ldlm_enqueue_info *einfo,
 		     const struct ldlm_res_id *res_id,
-		     union ldlm_policy_data const *policy, __u64 *flags,
-		     void *lvb, __u32 lvb_len, enum lvb_type lvb_type,
+		     union ldlm_policy_data const *policy, u64 *flags,
+		     void *lvb, u32 lvb_len, enum lvb_type lvb_type,
 		     struct lustre_handle *lockh, int async);
 int ldlm_prep_enqueue_req(struct obd_export *exp,
 			  struct ptlrpc_request *req,
@@ -1237,9 +1237,9 @@ int ldlm_prep_elc_req(struct obd_export *exp,
 		      struct list_head *cancels, int count);
 
 int ldlm_cli_enqueue_fini(struct obd_export *exp, struct ptlrpc_request *req,
-			  enum ldlm_type type, __u8 with_policy,
+			  enum ldlm_type type, u8 with_policy,
 			  enum ldlm_mode mode,
-			  __u64 *flags, void *lvb, __u32 lvb_len,
+			  u64 *flags, void *lvb, u32 lvb_len,
 			  const struct lustre_handle *lockh, int rc);
 int ldlm_cli_update_pool(struct ptlrpc_request *req);
 int ldlm_cli_cancel(const struct lustre_handle *lockh,
@@ -1255,7 +1255,7 @@ int ldlm_cli_cancel_unused_resource(struct ldlm_namespace *ns,
 int ldlm_cancel_resource_local(struct ldlm_resource *res,
 			       struct list_head *cancels,
 			       union ldlm_policy_data *policy,
-			       enum ldlm_mode mode, __u64 lock_flags,
+			       enum ldlm_mode mode, u64 lock_flags,
 			       enum ldlm_cancel_flags cancel_flags,
 			       void *opaque);
 int ldlm_cli_cancel_list_local(struct list_head *cancels, int count,
diff --git a/drivers/staging/lustre/lustre/include/lustre_export.h b/drivers/staging/lustre/lustre/include/lustre_export.h
index 79ad5aa..1c70259 100644
--- a/drivers/staging/lustre/lustre/include/lustre_export.h
+++ b/drivers/staging/lustre/lustre/include/lustre_export.h
@@ -101,12 +101,12 @@ struct obd_export {
 	/** Active connection */
 	struct ptlrpc_connection *exp_connection;
 	/** Connection count value from last successful reconnect rpc */
-	__u32		     exp_conn_cnt;
+	u32		     exp_conn_cnt;
 	struct list_head		exp_outstanding_replies;
 	struct list_head		exp_uncommitted_replies;
 	spinlock_t		  exp_uncommitted_replies_lock;
 	/** Last committed transno for this export */
-	__u64		     exp_last_committed;
+	u64		     exp_last_committed;
 	/** On replay all requests waiting for replay are linked here */
 	struct list_head		exp_req_replay_queue;
 	/**
@@ -139,12 +139,12 @@ struct obd_export {
 	spinlock_t		  exp_bl_list_lock;
 };
 
-static inline __u64 *exp_connect_flags_ptr(struct obd_export *exp)
+static inline u64 *exp_connect_flags_ptr(struct obd_export *exp)
 {
 	return &exp->exp_connect_data.ocd_connect_flags;
 }
 
-static inline __u64 exp_connect_flags(struct obd_export *exp)
+static inline u64 exp_connect_flags(struct obd_export *exp)
 {
 	return *exp_connect_flags_ptr(exp);
 }
@@ -219,7 +219,7 @@ static inline bool imp_connect_lvb_type(struct obd_import *imp)
 		return false;
 }
 
-static inline __u64 exp_connect_ibits(struct obd_export *exp)
+static inline u64 exp_connect_ibits(struct obd_export *exp)
 {
 	struct obd_connect_data *ocd;
 
@@ -239,9 +239,9 @@ static inline bool imp_connect_disp_stripe(struct obd_import *imp)
 
 #define KKUC_CT_DATA_MAGIC	0x092013cea
 struct kkuc_ct_data {
-	__u32		kcd_magic;
+	u32		kcd_magic;
 	struct obd_uuid	kcd_uuid;
-	__u32		kcd_archive;
+	u32		kcd_archive;
 };
 
 /** @} export */
diff --git a/drivers/staging/lustre/lustre/include/lustre_fid.h b/drivers/staging/lustre/lustre/include/lustre_fid.h
index 094ad28..f0afa8d 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fid.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fid.h
@@ -235,14 +235,14 @@ enum local_oid {
 	SLAVE_LLOG_CATALOGS_OID	= 4124UL,
 };
 
-static inline void lu_local_obj_fid(struct lu_fid *fid, __u32 oid)
+static inline void lu_local_obj_fid(struct lu_fid *fid, u32 oid)
 {
 	fid->f_seq = FID_SEQ_LOCAL_FILE;
 	fid->f_oid = oid;
 	fid->f_ver = 0;
 }
 
-static inline void lu_local_name_obj_fid(struct lu_fid *fid, __u32 oid)
+static inline void lu_local_name_obj_fid(struct lu_fid *fid, u32 oid)
 {
 	fid->f_seq = FID_SEQ_LOCAL_NAME;
 	fid->f_oid = oid;
@@ -290,13 +290,13 @@ static inline int fid_is_quota(const struct lu_fid *fid)
 	       fid_seq(fid) == FID_SEQ_QUOTA_GLB;
 }
 
-static inline int fid_seq_in_fldb(__u64 seq)
+static inline int fid_seq_in_fldb(u64 seq)
 {
 	return fid_seq_is_igif(seq) || fid_seq_is_norm(seq) ||
 	       fid_seq_is_root(seq) || fid_seq_is_dot(seq);
 }
 
-static inline void lu_last_id_fid(struct lu_fid *fid, __u64 seq, __u32 ost_idx)
+static inline void lu_last_id_fid(struct lu_fid *fid, u64 seq, u32 ost_idx)
 {
 	if (fid_seq_is_mdt0(seq)) {
 		fid->f_seq = fid_idif_seq(0, ost_idx);
@@ -352,7 +352,7 @@ struct lu_client_seq {
 	 * Sequence width, that is how many objects may be allocated in one
 	 * sequence. Default value for it is LUSTRE_SEQ_MAX_WIDTH.
 	 */
-	__u64		   lcs_width;
+	u64		   lcs_width;
 
 	/* wait queue for fid allocation and update indicator */
 	wait_queue_head_t	     lcs_waitq;
@@ -409,8 +409,8 @@ static inline bool fid_res_name_eq(const struct lu_fid *fid,
 fid_extract_from_res_name(struct lu_fid *fid, const struct ldlm_res_id *res)
 {
 	fid->f_seq = res->name[LUSTRE_RES_ID_SEQ_OFF];
-	fid->f_oid = (__u32)(res->name[LUSTRE_RES_ID_VER_OID_OFF]);
-	fid->f_ver = (__u32)(res->name[LUSTRE_RES_ID_VER_OID_OFF] >> 32);
+	fid->f_oid = (u32)(res->name[LUSTRE_RES_ID_VER_OID_OFF]);
+	fid->f_ver = (u32)(res->name[LUSTRE_RES_ID_VER_OID_OFF] >> 32);
 	LASSERT(fid_res_name_eq(fid, res));
 }
 
@@ -435,9 +435,9 @@ static inline void fid_extract_from_quota_res(struct lu_fid *glb_fid,
 {
 	fid_extract_from_res_name(glb_fid, res);
 	qid->qid_fid.f_seq = res->name[LUSTRE_RES_ID_QUOTA_SEQ_OFF];
-	qid->qid_fid.f_oid = (__u32)res->name[LUSTRE_RES_ID_QUOTA_VER_OID_OFF];
+	qid->qid_fid.f_oid = (u32)res->name[LUSTRE_RES_ID_QUOTA_VER_OID_OFF];
 	qid->qid_fid.f_ver =
-		(__u32)(res->name[LUSTRE_RES_ID_QUOTA_VER_OID_OFF] >> 32);
+		(u32)(res->name[LUSTRE_RES_ID_QUOTA_VER_OID_OFF] >> 32);
 }
 
 static inline void
@@ -500,7 +500,7 @@ static inline int ostid_res_name_eq(const struct ost_id *oi,
  * Note: we need check oi_seq to decide where to set oi_id,
  * so oi_seq should always be set ahead of oi_id.
  */
-static inline int ostid_set_id(struct ost_id *oi, __u64 oid)
+static inline int ostid_set_id(struct ost_id *oi, u64 oid)
 {
 	if (fid_seq_is_mdt0(oi->oi.oi_seq)) {
 		if (oid >= IDIF_MAX_OID)
@@ -569,10 +569,10 @@ static inline void ost_fid_build_resid(const struct lu_fid *fid,
  * the time between re-used inode numbers is very long - 2^40 SEQ numbers,
  * or about 2^40 client mounts, if clients create less than 2^24 files/mount.
  */
-static inline __u64 fid_flatten(const struct lu_fid *fid)
+static inline u64 fid_flatten(const struct lu_fid *fid)
 {
-	__u64 ino;
-	__u64 seq;
+	u64 ino;
+	u64 seq;
 
 	if (fid_is_igif(fid)) {
 		ino = lu_igif_ino(fid);
@@ -586,7 +586,7 @@ static inline __u64 fid_flatten(const struct lu_fid *fid)
 	return ino ? ino : fid_oid(fid);
 }
 
-static inline __u32 fid_hash(const struct lu_fid *f, int bits)
+static inline u32 fid_hash(const struct lu_fid *f, int bits)
 {
 	/* all objects with same id and different versions will belong to same
 	 * collisions list.
@@ -597,10 +597,10 @@ static inline __u32 fid_hash(const struct lu_fid *f, int bits)
 /**
  * map fid to 32 bit value for ino on 32bit systems.
  */
-static inline __u32 fid_flatten32(const struct lu_fid *fid)
+static inline u32 fid_flatten32(const struct lu_fid *fid)
 {
-	__u32 ino;
-	__u64 seq;
+	u32 ino;
+	u64 seq;
 
 	if (fid_is_igif(fid)) {
 		ino = lu_igif_ino(fid);
diff --git a/drivers/staging/lustre/lustre/include/lustre_fld.h b/drivers/staging/lustre/lustre/include/lustre_fld.h
index f42122a..4bcabf7 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fld.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fld.h
@@ -62,7 +62,7 @@ struct lu_fld_target {
 	struct list_head	       ft_chain;
 	struct obd_export       *ft_exp;
 	struct lu_server_fld    *ft_srv;
-	__u64		    ft_idx;
+	u64		    ft_idx;
 };
 
 struct lu_server_fld {
@@ -115,7 +115,7 @@ int fld_client_init(struct lu_client_fld *fld,
 void fld_client_flush(struct lu_client_fld *fld);
 
 int fld_client_lookup(struct lu_client_fld *fld, u64 seq, u32 *mds,
-		      __u32 flags, const struct lu_env *env);
+		      u32 flags, const struct lu_env *env);
 
 int fld_client_create(struct lu_client_fld *fld,
 		      struct lu_seq_range *range,
@@ -128,7 +128,7 @@ int fld_client_add_target(struct lu_client_fld *fld,
 			  struct lu_fld_target *tar);
 
 int fld_client_del_target(struct lu_client_fld *fld,
-			  __u64 idx);
+			  u64 idx);
 
 void fld_client_debugfs_fini(struct lu_client_fld *fld);
 
diff --git a/drivers/staging/lustre/lustre/include/lustre_ha.h b/drivers/staging/lustre/lustre/include/lustre_ha.h
index cbd6898..af92a56 100644
--- a/drivers/staging/lustre/lustre/include/lustre_ha.h
+++ b/drivers/staging/lustre/lustre/include/lustre_ha.h
@@ -53,7 +53,7 @@
 void ptlrpc_activate_import(struct obd_import *imp);
 void ptlrpc_deactivate_import(struct obd_import *imp);
 void ptlrpc_invalidate_import(struct obd_import *imp);
-void ptlrpc_fail_import(struct obd_import *imp, __u32 conn_cnt);
+void ptlrpc_fail_import(struct obd_import *imp, u32 conn_cnt);
 void ptlrpc_pinger_force(struct obd_import *imp);
 
 /** @} ha */
diff --git a/drivers/staging/lustre/lustre/include/lustre_handles.h b/drivers/staging/lustre/lustre/include/lustre_handles.h
index 3556ce8..84f70f3 100644
--- a/drivers/staging/lustre/lustre/include/lustre_handles.h
+++ b/drivers/staging/lustre/lustre/include/lustre_handles.h
@@ -64,7 +64,7 @@ struct portals_handle_ops {
  */
 struct portals_handle {
 	struct list_head			h_link;
-	__u64				h_cookie;
+	u64				h_cookie;
 	const void			*h_owner;
 	struct portals_handle_ops	*h_ops;
 
@@ -81,7 +81,7 @@ struct portals_handle {
 void class_handle_hash(struct portals_handle *,
 		       struct portals_handle_ops *ops);
 void class_handle_unhash(struct portals_handle *);
-void *class_handle2object(__u64 cookie, const void *owner);
+void *class_handle2object(u64 cookie, const void *owner);
 void class_handle_free_cb(struct rcu_head *rcu);
 int class_handle_init(void);
 void class_handle_cleanup(void);
diff --git a/drivers/staging/lustre/lustre/include/lustre_import.h b/drivers/staging/lustre/lustre/include/lustre_import.h
index 8a8a125..db075be 100644
--- a/drivers/staging/lustre/lustre/include/lustre_import.h
+++ b/drivers/staging/lustre/lustre/include/lustre_import.h
@@ -69,10 +69,10 @@ struct adaptive_timeout {
 
 struct ptlrpc_at_array {
 	struct list_head       *paa_reqs_array; /** array to hold requests */
-	__u32	     paa_size;       /** the size of array */
-	__u32	     paa_count;      /** the total count of reqs */
+	u32	     paa_size;       /** the size of array */
+	u32	     paa_count;      /** the total count of reqs */
 	time64_t     paa_deadline;   /** the earliest deadline of reqs */
-	__u32	    *paa_reqs_count; /** the count of reqs in each entry */
+	u32	    *paa_reqs_count; /** the count of reqs in each entry */
 };
 
 #define IMP_AT_MAX_PORTALS 8
@@ -137,7 +137,7 @@ struct obd_import_conn {
 	/**
 	 * Time (64 bit jiffies) of last connection attempt on this connection
 	 */
-	__u64		     oic_last_attempt;
+	u64		     oic_last_attempt;
 };
 
 /* state history */
@@ -190,7 +190,7 @@ struct obd_import {
 	/** List of not replied requests */
 	struct list_head	imp_unreplied_list;
 	/** Known maximal replied XID */
-	__u64			imp_known_replied_xid;
+	u64			imp_known_replied_xid;
 
 	/** obd device for this import */
 	struct obd_device	*imp_obd;
@@ -227,23 +227,23 @@ struct obd_import {
 	/** Current import generation. Incremented on every reconnect */
 	int		       imp_generation;
 	/** Incremented every time we send reconnection request */
-	__u32		     imp_conn_cnt;
+	u32		     imp_conn_cnt;
        /**
 	* \see ptlrpc_free_committed remembers imp_generation value here
 	* after a check to save on unnecessary replay list iterations
 	*/
 	int		       imp_last_generation_checked;
 	/** Last transno we replayed */
-	__u64		     imp_last_replay_transno;
+	u64		     imp_last_replay_transno;
 	/** Last transno committed on remote side */
-	__u64		     imp_peer_committed_transno;
+	u64		     imp_peer_committed_transno;
 	/**
 	 * \see ptlrpc_free_committed remembers last_transno since its last
 	 * check here and if last_transno did not change since last run of
 	 * ptlrpc_free_committed and import generation is the same, we can
 	 * skip looking for requests to remove from replay list as optimisation
 	 */
-	__u64		     imp_last_transno_checked;
+	u64		     imp_last_transno_checked;
 	/**
 	 * Remote export handle. This is how remote side knows what export
 	 * we are talking to. Filled from response to connect request
@@ -252,7 +252,7 @@ struct obd_import {
 	/** When to perform next ping. time in jiffies. */
 	unsigned long		imp_next_ping;
 	/** When we last successfully connected. time in 64bit jiffies */
-	__u64		     imp_last_success_conn;
+	u64		     imp_last_success_conn;
 
 	/** List of all possible connection for import. */
 	struct list_head		imp_conn_list;
@@ -304,14 +304,14 @@ struct obd_import {
 				  imp_connect_tried:1,
 				 /* connected but not FULL yet */
 				 imp_connected:1;
-	__u32		     imp_connect_op;
+	u32		     imp_connect_op;
 	struct obd_connect_data   imp_connect_data;
-	__u64		     imp_connect_flags_orig;
+	u64		     imp_connect_flags_orig;
 	u64			imp_connect_flags2_orig;
 	int		       imp_connect_error;
 
-	__u32		     imp_msg_magic;
-	__u32		     imp_msghdr_flags;       /* adjusted based on server capability */
+	u32		     imp_msg_magic;
+	u32		     imp_msghdr_flags;       /* adjusted based on server capability */
 
 	struct imp_at	     imp_at;		 /* adaptive timeout data */
 	time64_t	     imp_last_reply_time;    /* for health check */
diff --git a/drivers/staging/lustre/lustre/include/lustre_intent.h b/drivers/staging/lustre/lustre/include/lustre_intent.h
index 51e5c0e..3f26d7a 100644
--- a/drivers/staging/lustre/lustre/include/lustre_intent.h
+++ b/drivers/staging/lustre/lustre/include/lustre_intent.h
@@ -41,14 +41,14 @@
 struct lookup_intent {
 	int		it_op;
 	int		it_create_mode;
-	__u64		it_flags;
+	u64		it_flags;
 	int		it_disposition;
 	int		it_status;
-	__u64		it_lock_handle;
-	__u64		it_lock_bits;
+	u64		it_lock_handle;
+	u64		it_lock_bits;
 	int		it_lock_mode;
 	int		it_remote_lock_mode;
-	__u64	   it_remote_lock_handle;
+	u64	   it_remote_lock_handle;
 	struct ptlrpc_request *it_request;
 	unsigned int    it_lock_set:1;
 };
diff --git a/drivers/staging/lustre/lustre/include/lustre_lmv.h b/drivers/staging/lustre/lustre/include/lustre_lmv.h
index 080ec1f..c4f05d2 100644
--- a/drivers/staging/lustre/lustre/include/lustre_lmv.h
+++ b/drivers/staging/lustre/lustre/include/lustre_lmv.h
@@ -42,13 +42,13 @@ struct lmv_oinfo {
 };
 
 struct lmv_stripe_md {
-	__u32	lsm_md_magic;
-	__u32	lsm_md_stripe_count;
-	__u32	lsm_md_master_mdt_index;
-	__u32	lsm_md_hash_type;
-	__u32	lsm_md_layout_version;
-	__u32	lsm_md_default_count;
-	__u32	lsm_md_default_index;
+	u32	lsm_md_magic;
+	u32	lsm_md_stripe_count;
+	u32	lsm_md_master_mdt_index;
+	u32	lsm_md_hash_type;
+	u32	lsm_md_layout_version;
+	u32	lsm_md_default_count;
+	u32	lsm_md_default_index;
 	char	lsm_md_pool_name[LOV_MAXPOOLNAME + 1];
 	struct lmv_oinfo lsm_md_oinfo[0];
 };
@@ -56,7 +56,7 @@ struct lmv_stripe_md {
 static inline bool
 lsm_md_eq(const struct lmv_stripe_md *lsm1, const struct lmv_stripe_md *lsm2)
 {
-	__u32 idx;
+	u32 idx;
 
 	if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
 	    lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
@@ -82,7 +82,7 @@ struct lmv_stripe_md {
 static inline void lmv1_le_to_cpu(struct lmv_mds_md_v1 *lmv_dst,
 				  const struct lmv_mds_md_v1 *lmv_src)
 {
-	__u32 i;
+	u32 i;
 
 	lmv_dst->lmv_magic = le32_to_cpu(lmv_src->lmv_magic);
 	lmv_dst->lmv_stripe_count = le32_to_cpu(lmv_src->lmv_stripe_count);
@@ -126,18 +126,18 @@ static inline void lmv_le_to_cpu(union lmv_mds_md *lmv_dst,
 static inline unsigned int
 lmv_hash_fnv1a(unsigned int count, const char *name, int namelen)
 {
-	__u64 hash;
+	u64 hash;
 
 	hash = lustre_hash_fnv_1a_64(name, namelen);
 
 	return do_div(hash, count);
 }
 
-static inline int lmv_name_to_stripe_index(__u32 lmv_hash_type,
+static inline int lmv_name_to_stripe_index(u32 lmv_hash_type,
 					   unsigned int stripe_count,
 					   const char *name, int namelen)
 {
-	__u32 hash_type = lmv_hash_type & LMV_HASH_TYPE_MASK;
+	u32 hash_type = lmv_hash_type & LMV_HASH_TYPE_MASK;
 	int idx;
 
 	LASSERT(namelen > 0);
@@ -165,7 +165,7 @@ static inline int lmv_name_to_stripe_index(__u32 lmv_hash_type,
 	return idx;
 }
 
-static inline bool lmv_is_known_hash_type(__u32 type)
+static inline bool lmv_is_known_hash_type(u32 type)
 {
 	return (type & LMV_HASH_TYPE_MASK) == LMV_HASH_TYPE_FNV_1A_64 ||
 	       (type & LMV_HASH_TYPE_MASK) == LMV_HASH_TYPE_ALL_CHARS;
diff --git a/drivers/staging/lustre/lustre/include/lustre_log.h b/drivers/staging/lustre/lustre/include/lustre_log.h
index 07f4e60..4ba4501 100644
--- a/drivers/staging/lustre/lustre/include/lustre_log.h
+++ b/drivers/staging/lustre/lustre/include/lustre_log.h
@@ -143,7 +143,7 @@ int llog_setup(const struct lu_env *env, struct obd_device *obd,
 
 struct llog_operations {
 	int (*lop_next_block)(const struct lu_env *env, struct llog_handle *h,
-			      int *curr_idx, int next_idx, __u64 *offset,
+			      int *curr_idx, int next_idx, u64 *offset,
 			      void *buf, int len);
 	int (*lop_prev_block)(const struct lu_env *env, struct llog_handle *h,
 			      int prev_idx, void *buf, int len);
@@ -218,7 +218,7 @@ struct llog_handle {
 	size_t			 lgh_hdr_size;
 	int			 lgh_last_idx;
 	int			 lgh_cur_idx; /* used during llog_process */
-	__u64			 lgh_cur_offset; /* used during llog_process */
+	u64			 lgh_cur_offset; /* used during llog_process */
 	struct llog_ctxt	*lgh_ctxt;
 	union {
 		struct plain_handle_data	 phd;
@@ -250,7 +250,7 @@ struct llog_ctxt {
 	 * llog chunk size, and llog record size can not be bigger than
 	 * loc_chunk_size
 	 */
-	__u32			loc_chunk_size;
+	u32			loc_chunk_size;
 };
 
 #define LLOG_PROC_BREAK 0x0001
@@ -348,7 +348,7 @@ static inline int llog_ctxt_null(struct obd_device *obd, int index)
 
 static inline int llog_next_block(const struct lu_env *env,
 				  struct llog_handle *loghandle, int *cur_idx,
-				  int next_idx, __u64 *cur_offset, void *buf,
+				  int next_idx, u64 *cur_offset, void *buf,
 				  int len)
 {
 	struct llog_operations *lop;
diff --git a/drivers/staging/lustre/lustre/include/lustre_mdc.h b/drivers/staging/lustre/lustre/include/lustre_mdc.h
index 6ac7fc4..c1fb324 100644
--- a/drivers/staging/lustre/lustre/include/lustre_mdc.h
+++ b/drivers/staging/lustre/lustre/include/lustre_mdc.h
@@ -204,7 +204,7 @@ static inline void mdc_update_max_ea_from_body(struct obd_export *exp,
 		if (cli->cl_max_mds_easize < body->mbo_max_mdsize)
 			cli->cl_max_mds_easize = body->mbo_max_mdsize;
 
-		def_easize = min_t(__u32, body->mbo_max_mdsize,
+		def_easize = min_t(u32, body->mbo_max_mdsize,
 				   OBD_MAX_DEFAULT_EA_SIZE);
 		cli->cl_default_mds_easize = def_easize;
 	}
diff --git a/drivers/staging/lustre/lustre/include/lustre_net.h b/drivers/staging/lustre/lustre/include/lustre_net.h
index 8e34766..050a7ec 100644
--- a/drivers/staging/lustre/lustre/include/lustre_net.h
+++ b/drivers/staging/lustre/lustre/include/lustre_net.h
@@ -95,7 +95,7 @@
  * use the negotiated per-client ocd_brw_size to determine the bulk
  * RPC count.
  */
-#define PTLRPC_BULK_OPS_MASK	(~((__u64)PTLRPC_BULK_OPS_COUNT - 1))
+#define PTLRPC_BULK_OPS_MASK	(~((u64)PTLRPC_BULK_OPS_COUNT - 1))
 
 /**
  * Define maxima for bulk I/O.
@@ -304,9 +304,9 @@ struct ptlrpc_connection {
 /** Client definition for PortalRPC */
 struct ptlrpc_client {
 	/** What lnet portal does this client send messages to by default */
-	__u32		   cli_request_portal;
+	u32		   cli_request_portal;
 	/** What portal do we expect replies on */
-	__u32		   cli_reply_portal;
+	u32		   cli_reply_portal;
 	/** Name of the client */
 	char		   *cli_name;
 };
@@ -327,7 +327,7 @@ struct ptlrpc_client {
 	 * least big enough for that.
 	 */
 	void      *pointer_arg[11];
-	__u64      space[7];
+	u64      space[7];
 };
 
 struct ptlrpc_request_set;
@@ -455,11 +455,11 @@ struct ptlrpc_reply_state {
 	/** Size of the state */
 	int		    rs_size;
 	/** opcode */
-	__u32		  rs_opc;
+	u32		  rs_opc;
 	/** Transaction number */
-	__u64		  rs_transno;
+	u64		  rs_transno;
 	/** xid */
-	__u64		  rs_xid;
+	u64		  rs_xid;
 	struct obd_export     *rs_export;
 	struct ptlrpc_service_part *rs_svcpt;
 	/** Lnet metadata handle for the reply */
@@ -667,7 +667,7 @@ struct ptlrpc_srv_req {
 	/** server-side history, used for debuging purposes. */
 	struct list_head		sr_hist_list;
 	/** history sequence # */
-	__u64				sr_hist_seq;
+	u64				sr_hist_seq;
 	/** the index of service's srv_at_array into which request is linked */
 	time64_t			sr_at_index;
 	/** authed uid */
@@ -809,9 +809,9 @@ struct ptlrpc_request {
 	/** Reply message - server response */
 	struct lustre_msg *rq_repmsg;
 	/** Transaction number */
-	__u64 rq_transno;
+	u64 rq_transno;
 	/** xid */
-	__u64 rq_xid;
+	u64 rq_xid;
 	/** bulk match bits */
 	u64				rq_mbits;
 	/**
@@ -871,8 +871,8 @@ struct ptlrpc_request {
 	/** @} */
 
 	/** Fields that help to see if request and reply were swabbed or not */
-	__u32 rq_req_swab_mask;
-	__u32 rq_rep_swab_mask;
+	u32 rq_req_swab_mask;
+	u32 rq_rep_swab_mask;
 
 	/** how many early replies (for stats) */
 	int rq_early_count;
@@ -1214,7 +1214,7 @@ struct ptlrpc_bulk_desc {
 	/** {put,get}{source,sink}{kvec,kiov} */
 	enum ptlrpc_bulk_op_type bd_type;
 	/** LNet portal for this bulk */
-	__u32 bd_portal;
+	u32 bd_portal;
 	/** Server side - export this bulk created for */
 	struct obd_export *bd_export;
 	/** Client side - import this bulk was sent on */
@@ -1282,7 +1282,7 @@ struct ptlrpc_thread {
 	 * thread-private data (preallocated memory)
 	 */
 	void *t_data;
-	__u32 t_flags;
+	u32 t_flags;
 	/**
 	 * service thread index, from ptlrpc_start_threads
 	 */
@@ -1329,23 +1329,23 @@ static inline int thread_is_running(struct ptlrpc_thread *thread)
 	return !!(thread->t_flags & SVC_RUNNING);
 }
 
-static inline void thread_clear_flags(struct ptlrpc_thread *thread, __u32 flags)
+static inline void thread_clear_flags(struct ptlrpc_thread *thread, u32 flags)
 {
 	thread->t_flags &= ~flags;
 }
 
-static inline void thread_set_flags(struct ptlrpc_thread *thread, __u32 flags)
+static inline void thread_set_flags(struct ptlrpc_thread *thread, u32 flags)
 {
 	thread->t_flags = flags;
 }
 
-static inline void thread_add_flags(struct ptlrpc_thread *thread, __u32 flags)
+static inline void thread_add_flags(struct ptlrpc_thread *thread, u32 flags)
 {
 	thread->t_flags |= flags;
 }
 
 static inline int thread_test_and_clear_flags(struct ptlrpc_thread *thread,
-					      __u32 flags)
+					      u32 flags)
 {
 	if (thread->t_flags & flags) {
 		thread->t_flags &= ~flags;
@@ -1459,14 +1459,14 @@ struct ptlrpc_service {
 	/** # buffers to allocate in 1 group */
 	int			     srv_nbuf_per_group;
 	/** Local portal on which to receive requests */
-	__u32			   srv_req_portal;
+	u32			   srv_req_portal;
 	/** Portal on the client to send replies to */
-	__u32			   srv_rep_portal;
+	u32			   srv_rep_portal;
 	/**
 	 * Tags for lu_context associated with this thread, see struct
 	 * lu_context.
 	 */
-	__u32			   srv_ctx_tags;
+	u32			   srv_ctx_tags;
 	/** soft watchdog timeout multiplier */
 	int			     srv_watchdog_factor;
 	/** under unregister_service */
@@ -1477,7 +1477,7 @@ struct ptlrpc_service {
 	/** number of CPTs this service bound on */
 	int				srv_ncpts;
 	/** CPTs array this service bound on */
-	__u32				*srv_cpts;
+	u32				*srv_cpts;
 	/** 2^srv_cptab_bits >= cfs_cpt_numbert(srv_cptable) */
 	int				srv_cpt_bits;
 	/** CPT table this service is running over */
@@ -1561,9 +1561,9 @@ struct ptlrpc_service_part {
 	/** # request buffers in history */
 	int				scp_hist_nrqbds;
 	/** sequence number for request */
-	__u64				scp_hist_seq;
+	u64				scp_hist_seq;
 	/** highest seq culled from history */
-	__u64				scp_hist_seq_culled;
+	u64				scp_hist_seq_culled;
 
 	/**
 	 * serialize the following fields, used for processing requests
@@ -1849,12 +1849,12 @@ struct ptlrpc_request *ptlrpc_request_alloc_pool(struct obd_import *imp,
 						 const struct req_format *);
 void ptlrpc_request_free(struct ptlrpc_request *request);
 int ptlrpc_request_pack(struct ptlrpc_request *request,
-			__u32 version, int opcode);
+			u32 version, int opcode);
 struct ptlrpc_request *ptlrpc_request_alloc_pack(struct obd_import *,
 						 const struct req_format *,
-						 __u32, int);
+						 u32, int);
 int ptlrpc_request_bufs_pack(struct ptlrpc_request *request,
-			     __u32 version, int opcode, char **bufs,
+			     u32 version, int opcode, char **bufs,
 			     struct ptlrpc_cli_ctx *ctx);
 void ptlrpc_req_finished(struct ptlrpc_request *request);
 struct ptlrpc_request *ptlrpc_request_addref(struct ptlrpc_request *req);
@@ -1896,9 +1896,9 @@ static inline void ptlrpc_release_bulk_page_pin(struct ptlrpc_bulk_desc *desc)
 
 void ptlrpc_retain_replayable_request(struct ptlrpc_request *req,
 				      struct obd_import *imp);
-__u64 ptlrpc_next_xid(void);
-__u64 ptlrpc_sample_next_xid(void);
-__u64 ptlrpc_req_xid(struct ptlrpc_request *request);
+u64 ptlrpc_next_xid(void);
+u64 ptlrpc_sample_next_xid(void);
+u64 ptlrpc_req_xid(struct ptlrpc_request *request);
 
 /* Set of routines to run a function in ptlrpcd context */
 void *ptlrpcd_alloc_work(struct obd_import *imp,
@@ -1945,7 +1945,7 @@ struct ptlrpc_service_thr_conf {
 	/* set NUMA node affinity for service threads */
 	unsigned int			tc_cpu_affinity;
 	/* Tags for lu_context associated with service thread */
-	__u32				tc_ctx_tags;
+	u32				tc_ctx_tags;
 };
 
 struct ptlrpc_service_cpt_conf {
@@ -2016,24 +2016,24 @@ void ptlrpc_buf_set_swabbed(struct ptlrpc_request *req, const int inout,
 int ptlrpc_unpack_rep_msg(struct ptlrpc_request *req, int len);
 int ptlrpc_unpack_req_msg(struct ptlrpc_request *req, int len);
 
-void lustre_init_msg_v2(struct lustre_msg_v2 *msg, int count, __u32 *lens,
+void lustre_init_msg_v2(struct lustre_msg_v2 *msg, int count, u32 *lens,
 			char **bufs);
-int lustre_pack_request(struct ptlrpc_request *, __u32 magic, int count,
-			__u32 *lens, char **bufs);
-int lustre_pack_reply(struct ptlrpc_request *, int count, __u32 *lens,
+int lustre_pack_request(struct ptlrpc_request *, u32 magic, int count,
+			u32 *lens, char **bufs);
+int lustre_pack_reply(struct ptlrpc_request *, int count, u32 *lens,
 		      char **bufs);
 int lustre_pack_reply_v2(struct ptlrpc_request *req, int count,
-			 __u32 *lens, char **bufs, int flags);
+			 u32 *lens, char **bufs, int flags);
 #define LPRFL_EARLY_REPLY 1
-int lustre_pack_reply_flags(struct ptlrpc_request *, int count, __u32 *lens,
+int lustre_pack_reply_flags(struct ptlrpc_request *, int count, u32 *lens,
 			    char **bufs, int flags);
 int lustre_shrink_msg(struct lustre_msg *msg, int segment,
 		      unsigned int newlen, int move_data);
 void lustre_free_reply_state(struct ptlrpc_reply_state *rs);
 int __lustre_unpack_msg(struct lustre_msg *m, int len);
-u32 lustre_msg_hdr_size(__u32 magic, u32 count);
-u32 lustre_msg_size(__u32 magic, int count, __u32 *lengths);
-u32 lustre_msg_size_v2(int count, __u32 *lengths);
+u32 lustre_msg_hdr_size(u32 magic, u32 count);
+u32 lustre_msg_size(u32 magic, int count, u32 *lengths);
+u32 lustre_msg_size_v2(int count, u32 *lengths);
 u32 lustre_packed_msg_size(struct lustre_msg *msg);
 u32 lustre_msg_early_size(void);
 void *lustre_msg_buf_v2(struct lustre_msg_v2 *m, u32 n, u32 min_size);
@@ -2041,48 +2041,48 @@ int lustre_shrink_msg(struct lustre_msg *msg, int segment,
 u32 lustre_msg_buflen(struct lustre_msg *m, u32 n);
 u32 lustre_msg_bufcount(struct lustre_msg *m);
 char *lustre_msg_string(struct lustre_msg *m, u32 n, u32 max_len);
-__u32 lustre_msghdr_get_flags(struct lustre_msg *msg);
-void lustre_msghdr_set_flags(struct lustre_msg *msg, __u32 flags);
-__u32 lustre_msg_get_flags(struct lustre_msg *msg);
+u32 lustre_msghdr_get_flags(struct lustre_msg *msg);
+void lustre_msghdr_set_flags(struct lustre_msg *msg, u32 flags);
+u32 lustre_msg_get_flags(struct lustre_msg *msg);
 void lustre_msg_add_flags(struct lustre_msg *msg, u32 flags);
 void lustre_msg_set_flags(struct lustre_msg *msg, u32 flags);
 void lustre_msg_clear_flags(struct lustre_msg *msg, u32 flags);
-__u32 lustre_msg_get_op_flags(struct lustre_msg *msg);
+u32 lustre_msg_get_op_flags(struct lustre_msg *msg);
 void lustre_msg_add_op_flags(struct lustre_msg *msg, u32 flags);
 struct lustre_handle *lustre_msg_get_handle(struct lustre_msg *msg);
-__u32 lustre_msg_get_type(struct lustre_msg *msg);
+u32 lustre_msg_get_type(struct lustre_msg *msg);
 void lustre_msg_add_version(struct lustre_msg *msg, u32 version);
-__u32 lustre_msg_get_opc(struct lustre_msg *msg);
-__u16 lustre_msg_get_tag(struct lustre_msg *msg);
-__u64 lustre_msg_get_last_committed(struct lustre_msg *msg);
-__u64 *lustre_msg_get_versions(struct lustre_msg *msg);
-__u64 lustre_msg_get_transno(struct lustre_msg *msg);
-__u64 lustre_msg_get_slv(struct lustre_msg *msg);
-__u32 lustre_msg_get_limit(struct lustre_msg *msg);
-void lustre_msg_set_slv(struct lustre_msg *msg, __u64 slv);
-void lustre_msg_set_limit(struct lustre_msg *msg, __u64 limit);
+u32 lustre_msg_get_opc(struct lustre_msg *msg);
+u16 lustre_msg_get_tag(struct lustre_msg *msg);
+u64 lustre_msg_get_last_committed(struct lustre_msg *msg);
+u64 *lustre_msg_get_versions(struct lustre_msg *msg);
+u64 lustre_msg_get_transno(struct lustre_msg *msg);
+u64 lustre_msg_get_slv(struct lustre_msg *msg);
+u32 lustre_msg_get_limit(struct lustre_msg *msg);
+void lustre_msg_set_slv(struct lustre_msg *msg, u64 slv);
+void lustre_msg_set_limit(struct lustre_msg *msg, u64 limit);
 int lustre_msg_get_status(struct lustre_msg *msg);
-__u32 lustre_msg_get_conn_cnt(struct lustre_msg *msg);
-__u32 lustre_msg_get_magic(struct lustre_msg *msg);
-__u32 lustre_msg_get_timeout(struct lustre_msg *msg);
-__u32 lustre_msg_get_service_time(struct lustre_msg *msg);
-__u32 lustre_msg_get_cksum(struct lustre_msg *msg);
-__u32 lustre_msg_calc_cksum(struct lustre_msg *msg);
+u32 lustre_msg_get_conn_cnt(struct lustre_msg *msg);
+u32 lustre_msg_get_magic(struct lustre_msg *msg);
+u32 lustre_msg_get_timeout(struct lustre_msg *msg);
+u32 lustre_msg_get_service_time(struct lustre_msg *msg);
+u32 lustre_msg_get_cksum(struct lustre_msg *msg);
+u32 lustre_msg_calc_cksum(struct lustre_msg *msg);
 void lustre_msg_set_handle(struct lustre_msg *msg,
 			   struct lustre_handle *handle);
-void lustre_msg_set_type(struct lustre_msg *msg, __u32 type);
-void lustre_msg_set_opc(struct lustre_msg *msg, __u32 opc);
+void lustre_msg_set_type(struct lustre_msg *msg, u32 type);
+void lustre_msg_set_opc(struct lustre_msg *msg, u32 opc);
 void lustre_msg_set_last_xid(struct lustre_msg *msg, u64 last_xid);
-void lustre_msg_set_tag(struct lustre_msg *msg, __u16 tag);
-void lustre_msg_set_versions(struct lustre_msg *msg, __u64 *versions);
-void lustre_msg_set_transno(struct lustre_msg *msg, __u64 transno);
-void lustre_msg_set_status(struct lustre_msg *msg, __u32 status);
-void lustre_msg_set_conn_cnt(struct lustre_msg *msg, __u32 conn_cnt);
+void lustre_msg_set_tag(struct lustre_msg *msg, u16 tag);
+void lustre_msg_set_versions(struct lustre_msg *msg, u64 *versions);
+void lustre_msg_set_transno(struct lustre_msg *msg, u64 transno);
+void lustre_msg_set_status(struct lustre_msg *msg, u32 status);
+void lustre_msg_set_conn_cnt(struct lustre_msg *msg, u32 conn_cnt);
 void ptlrpc_request_set_replen(struct ptlrpc_request *req);
-void lustre_msg_set_timeout(struct lustre_msg *msg, __u32 timeout);
-void lustre_msg_set_service_time(struct lustre_msg *msg, __u32 service_time);
+void lustre_msg_set_timeout(struct lustre_msg *msg, u32 timeout);
+void lustre_msg_set_service_time(struct lustre_msg *msg, u32 service_time);
 void lustre_msg_set_jobid(struct lustre_msg *msg, char *jobid);
-void lustre_msg_set_cksum(struct lustre_msg *msg, __u32 cksum);
+void lustre_msg_set_cksum(struct lustre_msg *msg, u32 cksum);
 void lustre_msg_set_mbits(struct lustre_msg *msg, u64 mbits);
 
 static inline void
@@ -2244,7 +2244,7 @@ static inline void ptlrpc_req_drop_rs(struct ptlrpc_request *req)
 	req->rq_repmsg = NULL;
 }
 
-static inline __u32 lustre_request_magic(struct ptlrpc_request *req)
+static inline u32 lustre_request_magic(struct ptlrpc_request *req)
 {
 	return lustre_msg_get_magic(req->rq_reqmsg);
 }
@@ -2355,7 +2355,7 @@ int ptlrpc_del_timeout_client(struct list_head *obd_list,
  * procfs output related functions
  * @{
  */
-const char *ll_opcode2str(__u32 opcode);
+const char *ll_opcode2str(u32 opcode);
 void ptlrpc_lprocfs_register_obd(struct obd_device *obd);
 void ptlrpc_lprocfs_unregister_obd(struct obd_device *obd);
 void ptlrpc_lprocfs_brw(struct ptlrpc_request *req, int bytes);
diff --git a/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h b/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
index b70d97d..0db4345f 100644
--- a/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
+++ b/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
@@ -59,12 +59,12 @@ struct nrs_fifo_head {
 	/**
 	 * For debugging purposes.
 	 */
-	__u64				fh_sequence;
+	u64				fh_sequence;
 };
 
 struct nrs_fifo_req {
 	struct list_head	fr_list;
-	__u64			fr_sequence;
+	u64			fr_sequence;
 };
 
 /** @} fifo */
diff --git a/drivers/staging/lustre/lustre/include/lustre_req_layout.h b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
index 3387ab2..2aba99f 100644
--- a/drivers/staging/lustre/lustre/include/lustre_req_layout.h
+++ b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
@@ -66,7 +66,7 @@ struct req_capsule {
 	struct ptlrpc_request   *rc_req;
 	const struct req_format *rc_fmt;
 	enum req_location	rc_loc;
-	__u32		    rc_area[RCL_NR][REQ_MAX_FIELD_NR];
+	u32		    rc_area[RCL_NR][REQ_MAX_FIELD_NR];
 };
 
 void req_capsule_init(struct req_capsule *pill, struct ptlrpc_request *req,
@@ -105,7 +105,7 @@ u32 req_capsule_get_size(const struct req_capsule *pill,
 			 const struct req_msg_field *field,
 			 enum req_location loc);
 u32 req_capsule_msg_size(struct req_capsule *pill, enum req_location loc);
-u32 req_capsule_fmt_size(__u32 magic, const struct req_format *fmt,
+u32 req_capsule_fmt_size(u32 magic, const struct req_format *fmt,
 			 enum req_location loc);
 void req_capsule_extend(struct req_capsule *pill, const struct req_format *fmt);
 
diff --git a/drivers/staging/lustre/lustre/include/lustre_sec.h b/drivers/staging/lustre/lustre/include/lustre_sec.h
index 43ff594..c622c8d 100644
--- a/drivers/staging/lustre/lustre/include/lustre_sec.h
+++ b/drivers/staging/lustre/lustre/include/lustre_sec.h
@@ -139,37 +139,37 @@ enum sptlrpc_bulk_service {
 #define FLVR_BULK_SVC_OFFSET	    (16)
 
 #define MAKE_FLVR(policy, mech, svc, btype, bsvc)		       \
-	(((__u32)(policy) << FLVR_POLICY_OFFSET) |		      \
-	 ((__u32)(mech) << FLVR_MECH_OFFSET) |			  \
-	 ((__u32)(svc) << FLVR_SVC_OFFSET) |			    \
-	 ((__u32)(btype) << FLVR_BULK_TYPE_OFFSET) |		    \
-	 ((__u32)(bsvc) << FLVR_BULK_SVC_OFFSET))
+	(((u32)(policy) << FLVR_POLICY_OFFSET) |		      \
+	 ((u32)(mech) << FLVR_MECH_OFFSET) |			  \
+	 ((u32)(svc) << FLVR_SVC_OFFSET) |			    \
+	 ((u32)(btype) << FLVR_BULK_TYPE_OFFSET) |		    \
+	 ((u32)(bsvc) << FLVR_BULK_SVC_OFFSET))
 
 /*
  * extraction
  */
 #define SPTLRPC_FLVR_POLICY(flavor)				     \
-	((((__u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xF)
+	((((u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xF)
 #define SPTLRPC_FLVR_MECH(flavor)				       \
-	((((__u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xF)
+	((((u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xF)
 #define SPTLRPC_FLVR_SVC(flavor)					\
-	((((__u32)(flavor)) >> FLVR_SVC_OFFSET) & 0xF)
+	((((u32)(flavor)) >> FLVR_SVC_OFFSET) & 0xF)
 #define SPTLRPC_FLVR_BULK_TYPE(flavor)				  \
-	((((__u32)(flavor)) >> FLVR_BULK_TYPE_OFFSET) & 0xF)
+	((((u32)(flavor)) >> FLVR_BULK_TYPE_OFFSET) & 0xF)
 #define SPTLRPC_FLVR_BULK_SVC(flavor)				   \
-	((((__u32)(flavor)) >> FLVR_BULK_SVC_OFFSET) & 0xF)
+	((((u32)(flavor)) >> FLVR_BULK_SVC_OFFSET) & 0xF)
 
 #define SPTLRPC_FLVR_BASE(flavor)				       \
-	((((__u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xFFF)
+	((((u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xFFF)
 #define SPTLRPC_FLVR_BASE_SUB(flavor)				   \
-	((((__u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xFF)
+	((((u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xFF)
 
 /*
  * gss subflavors
  */
 #define MAKE_BASE_SUBFLVR(mech, svc)				    \
-	((__u32)(mech) |						\
-	 ((__u32)(svc) << (FLVR_SVC_OFFSET - FLVR_MECH_OFFSET)))
+	((u32)(mech) |						\
+	 ((u32)(svc) << (FLVR_SVC_OFFSET - FLVR_MECH_OFFSET)))
 
 #define SPTLRPC_SUBFLVR_KRB5N					   \
 	MAKE_BASE_SUBFLVR(SPTLRPC_MECH_GSS_KRB5, SPTLRPC_SVC_NULL)
@@ -222,17 +222,17 @@ enum sptlrpc_bulk_service {
 
 #define SPTLRPC_FLVR_DEFAULT	    SPTLRPC_FLVR_NULL
 
-#define SPTLRPC_FLVR_INVALID	    ((__u32)0xFFFFFFFF)
-#define SPTLRPC_FLVR_ANY		((__u32)0xFFF00000)
+#define SPTLRPC_FLVR_INVALID	    ((u32)0xFFFFFFFF)
+#define SPTLRPC_FLVR_ANY		((u32)0xFFF00000)
 
 /**
  * extract the useful part from wire flavor
  */
-#define WIRE_FLVR(wflvr)		(((__u32)(wflvr)) & 0x000FFFFF)
+#define WIRE_FLVR(wflvr)		(((u32)(wflvr)) & 0x000FFFFF)
 
 /** @} flavor */
 
-static inline void flvr_set_svc(__u32 *flvr, __u32 svc)
+static inline void flvr_set_svc(u32 *flvr, u32 svc)
 {
 	LASSERT(svc < SPTLRPC_SVC_MAX);
 	*flvr = MAKE_FLVR(SPTLRPC_FLVR_POLICY(*flvr),
@@ -242,7 +242,7 @@ static inline void flvr_set_svc(__u32 *flvr, __u32 svc)
 			  SPTLRPC_FLVR_BULK_SVC(*flvr));
 }
 
-static inline void flvr_set_bulk_svc(__u32 *flvr, __u32 svc)
+static inline void flvr_set_bulk_svc(u32 *flvr, u32 svc)
 {
 	LASSERT(svc < SPTLRPC_BULK_SVC_MAX);
 	*flvr = MAKE_FLVR(SPTLRPC_FLVR_POLICY(*flvr),
@@ -253,7 +253,7 @@ static inline void flvr_set_bulk_svc(__u32 *flvr, __u32 svc)
 }
 
 struct bulk_spec_hash {
-	__u8    hash_alg;
+	u8    hash_alg;
 };
 
 /**
@@ -264,11 +264,11 @@ struct sptlrpc_flavor {
 	/**
 	 * wire flavor, should be renamed to sf_wire.
 	 */
-	__u32   sf_rpc;
+	u32   sf_rpc;
 	/**
 	 * general flags of PTLRPC_SEC_FL_*
 	 */
-	__u32   sf_flags;
+	u32   sf_flags;
 	/**
 	 * rpc flavor specification
 	 */
@@ -303,10 +303,10 @@ enum lustre_sec_part {
  * two Lustre parts.
  */
 struct sptlrpc_rule {
-	__u32		   sr_netid;   /* LNET network ID */
-	__u8		    sr_from;    /* sec_part */
-	__u8		    sr_to;      /* sec_part */
-	__u16		   sr_padding;
+	u32		   sr_netid;   /* LNET network ID */
+	u8		    sr_from;    /* sec_part */
+	u8		    sr_to;      /* sec_part */
+	u16		   sr_padding;
 	struct sptlrpc_flavor   sr_flvr;
 };
 
@@ -757,7 +757,7 @@ struct ptlrpc_sec_sops {
 struct ptlrpc_sec_policy {
 	struct module		   *sp_owner;
 	char			   *sp_name;
-	__u16			   sp_policy; /* policy number */
+	u16			   sp_policy; /* policy number */
 	struct ptlrpc_sec_cops	 *sp_cops;   /* client ops */
 	struct ptlrpc_sec_sops	 *sp_sops;   /* server ops */
 };
@@ -819,13 +819,13 @@ struct ptlrpc_svc_ctx {
 #define LUSTRE_MAX_GROUPS	       (128)
 
 struct ptlrpc_user_desc {
-	__u32	   pud_uid;
-	__u32	   pud_gid;
-	__u32	   pud_fsuid;
-	__u32	   pud_fsgid;
-	__u32	   pud_cap;
-	__u32	   pud_ngroups;
-	__u32	   pud_groups[0];
+	u32	   pud_uid;
+	u32	   pud_gid;
+	u32	   pud_fsuid;
+	u32	   pud_fsgid;
+	u32	   pud_cap;
+	u32	   pud_ngroups;
+	u32	   pud_groups[0];
 };
 
 /*
@@ -843,20 +843,20 @@ enum sptlrpc_bulk_hash_alg {
 	BULK_HASH_ALG_MAX
 };
 
-const char *sptlrpc_get_hash_name(__u8 hash_alg);
-__u8 sptlrpc_get_hash_alg(const char *algname);
+const char *sptlrpc_get_hash_name(u8 hash_alg);
+u8 sptlrpc_get_hash_alg(const char *algname);
 
 enum {
 	BSD_FL_ERR      = 1,
 };
 
 struct ptlrpc_bulk_sec_desc {
-	__u8	    bsd_version;    /* 0 */
-	__u8	    bsd_type;       /* SPTLRPC_BULK_XXX */
-	__u8	    bsd_svc;	/* SPTLRPC_BULK_SVC_XXXX */
-	__u8	    bsd_flags;      /* flags */
-	__u32	   bsd_nob;	/* nob of bulk data */
-	__u8	    bsd_data[0];    /* policy-specific token */
+	u8	    bsd_version;    /* 0 */
+	u8	    bsd_type;       /* SPTLRPC_BULK_XXX */
+	u8	    bsd_svc;	/* SPTLRPC_BULK_SVC_XXXX */
+	u8	    bsd_flags;      /* flags */
+	u32	   bsd_nob;	/* nob of bulk data */
+	u8	    bsd_data[0];    /* policy-specific token */
 };
 
 /*
@@ -887,8 +887,8 @@ void _sptlrpc_enlarge_msg_inplace(struct lustre_msg *msg,
 int sptlrpc_register_policy(struct ptlrpc_sec_policy *policy);
 int sptlrpc_unregister_policy(struct ptlrpc_sec_policy *policy);
 
-__u32 sptlrpc_name2flavor_base(const char *name);
-const char *sptlrpc_flavor2name_base(__u32 flvr);
+u32 sptlrpc_name2flavor_base(const char *name);
+const char *sptlrpc_flavor2name_base(u32 flvr);
 char *sptlrpc_flavor2name_bulk(struct sptlrpc_flavor *sf,
 			       char *buf, int bufsize);
 char *sptlrpc_flavor2name(struct sptlrpc_flavor *sf, char *buf, int bufsize);
@@ -1047,7 +1047,7 @@ int sptlrpc_cli_unwrap_bulk_write(struct ptlrpc_request *req,
 				  struct ptlrpc_bulk_desc *desc);
 
 /* bulk helpers (internal use only by policies) */
-int sptlrpc_get_bulk_checksum(struct ptlrpc_bulk_desc *desc, __u8 alg,
+int sptlrpc_get_bulk_checksum(struct ptlrpc_bulk_desc *desc, u8 alg,
 			      void *buf, int buflen);
 
 int bulk_sec_desc_unpack(struct lustre_msg *msg, int offset, int swabbed);
@@ -1055,7 +1055,7 @@ int sptlrpc_get_bulk_checksum(struct ptlrpc_bulk_desc *desc, __u8 alg,
 /* user descriptor helpers */
 static inline int sptlrpc_user_desc_size(int ngroups)
 {
-	return sizeof(struct ptlrpc_user_desc) + ngroups * sizeof(__u32);
+	return sizeof(struct ptlrpc_user_desc) + ngroups * sizeof(u32);
 }
 
 int sptlrpc_current_user_desc_size(void);
diff --git a/drivers/staging/lustre/lustre/include/lustre_swab.h b/drivers/staging/lustre/lustre/include/lustre_swab.h
index 6939ac1..1758dd9 100644
--- a/drivers/staging/lustre/lustre/include/lustre_swab.h
+++ b/drivers/staging/lustre/lustre/include/lustre_swab.h
@@ -62,7 +62,7 @@
 void lustre_swab_ost_lvb(struct ost_lvb *lvb);
 void lustre_swab_obd_quotactl(struct obd_quotactl *q);
 void lustre_swab_lquota_lvb(struct lquota_lvb *lvb);
-void lustre_swab_generic_32s(__u32 *val);
+void lustre_swab_generic_32s(u32 *val);
 void lustre_swab_mdt_body(struct mdt_body *b);
 void lustre_swab_mdt_ioepoch(struct mdt_ioepoch *b);
 void lustre_swab_mdt_rec_setattr(struct mdt_rec_setattr *sa);
@@ -79,7 +79,7 @@
 void lustre_swab_mgs_config_body(struct mgs_config_body *body);
 void lustre_swab_mgs_config_res(struct mgs_config_res *body);
 void lustre_swab_ost_body(struct ost_body *b);
-void lustre_swab_ost_last_id(__u64 *id);
+void lustre_swab_ost_last_id(u64 *id);
 void lustre_swab_fiemap(struct fiemap *fiemap);
 void lustre_swab_lov_user_md_v1(struct lov_user_md_v1 *lum);
 void lustre_swab_lov_user_md_v3(struct lov_user_md_v3 *lum);
@@ -107,6 +107,6 @@ void lustre_swab_lov_user_md_objects(struct lov_user_ost_data *lod,
 void dump_rniobuf(struct niobuf_remote *rnb);
 void dump_ioo(struct obd_ioobj *nb);
 void dump_ost_body(struct ost_body *ob);
-void dump_rcs(__u32 *rc);
+void dump_rcs(u32 *rc);
 
 #endif
diff --git a/drivers/staging/lustre/lustre/include/obd.h b/drivers/staging/lustre/lustre/include/obd.h
index a5dc573..0bb3cf8 100644
--- a/drivers/staging/lustre/lustre/include/obd.h
+++ b/drivers/staging/lustre/lustre/include/obd.h
@@ -55,7 +55,7 @@
 struct osc_async_rc {
 	int     ar_rc;
 	int     ar_force_sync;
-	__u64   ar_min_xid;
+	u64   ar_min_xid;
 };
 
 struct lov_oinfo {		 /* per-stripe data structure */
@@ -64,12 +64,12 @@ struct lov_oinfo {		 /* per-stripe data structure */
 	int loi_ost_gen;	   /* generation of this loi_ost_idx */
 
 	unsigned long loi_kms_valid:1;
-	__u64 loi_kms;	     /* known minimum size */
+	u64 loi_kms;	     /* known minimum size */
 	struct ost_lvb loi_lvb;
 	struct osc_async_rc     loi_ar;
 };
 
-static inline void loi_kms_set(struct lov_oinfo *oinfo, __u64 kms)
+static inline void loi_kms_set(struct lov_oinfo *oinfo, u64 kms)
 {
 	oinfo->loi_kms = kms;
 	oinfo->loi_kms_valid = 1;
@@ -85,7 +85,7 @@ static inline void loi_kms_set(struct lov_oinfo *oinfo, __u64 kms)
 /* obd info for a particular level (lov, osc). */
 struct obd_info {
 	/* OBD_STATFS_* flags */
-	__u64		   oi_flags;
+	u64		   oi_flags;
 	/* lsm data specific for every OSC. */
 	struct lov_stripe_md   *oi_md;
 	/* statfs data specific for every OSC, if needed at all. */
@@ -243,13 +243,13 @@ struct client_obd {
 	struct list_head	       cl_loi_hp_ready_list;
 	struct list_head	       cl_loi_write_list;
 	struct list_head	       cl_loi_read_list;
-	__u32			 cl_r_in_flight;
-	__u32			 cl_w_in_flight;
+	u32			 cl_r_in_flight;
+	u32			 cl_w_in_flight;
 	/* just a sum of the loi/lop pending numbers to be exported by sysfs */
 	atomic_t	     cl_pending_w_pages;
 	atomic_t	     cl_pending_r_pages;
-	__u32			 cl_max_pages_per_rpc;
-	__u32			 cl_max_rpcs_in_flight;
+	u32			 cl_max_pages_per_rpc;
+	u32			 cl_max_rpcs_in_flight;
 	struct obd_histogram     cl_read_rpc_hist;
 	struct obd_histogram     cl_write_rpc_hist;
 	struct obd_histogram     cl_read_page_hist;
@@ -322,7 +322,7 @@ struct client_obd {
 	unsigned int		 cl_checksum:1,	/* 0 = disabled, 1 = enabled */
 				 cl_checksum_dump:1; /* same */
 	/* supported checksum types that are worked out at connect time */
-	__u32		    cl_supp_cksum_types;
+	u32		    cl_supp_cksum_types;
 	/* checksum algorithm to be used */
 	enum cksum_type	     cl_cksum_type;
 
@@ -347,7 +347,7 @@ struct client_obd {
 #define obd2cli_tgt(obd) ((char *)(obd)->u.cli.cl_target_uuid.uuid)
 
 struct obd_id_info {
-	__u32   idx;
+	u32   idx;
 	u64	*data;
 };
 
@@ -356,12 +356,12 @@ struct echo_client_obd {
 	spinlock_t		ec_lock;
 	struct list_head	   ec_objects;
 	struct list_head	   ec_locks;
-	__u64		ec_unique;
+	u64		ec_unique;
 };
 
 /* Generic subset of OSTs */
 struct ost_pool {
-	__u32	      *op_array;      /* array of index of lov_obd->lov_tgts */
+	u32	      *op_array;      /* array of index of lov_obd->lov_tgts */
 	unsigned int	op_count;      /* number of OSTs in the array */
 	unsigned int	op_size;       /* allocated size of lp_array */
 	struct rw_semaphore op_rw_sem;     /* to protect ost_pool use */
@@ -375,8 +375,8 @@ struct lov_tgt_desc {
 	struct obd_uuid     ltd_uuid;
 	struct obd_device  *ltd_obd;
 	struct obd_export  *ltd_exp;
-	__u32	       ltd_gen;
-	__u32	       ltd_index;   /* index in lov_obd->tgts */
+	u32	       ltd_gen;
+	u32	       ltd_index;   /* index in lov_obd->tgts */
 	unsigned long       ltd_active:1,/* is this target up for requests */
 			    ltd_activate:1,/* should  target be activated */
 			    ltd_reap:1;  /* should this target be deleted */
@@ -389,8 +389,8 @@ struct lov_obd {
 	struct mutex		lov_lock;
 	struct obd_connect_data lov_ocd;
 	atomic_t	    lov_refcount;
-	__u32		   lov_death_row;/* tgts scheduled to be deleted */
-	__u32		   lov_tgt_size;   /* size of tgts array */
+	u32		   lov_death_row;/* tgts scheduled to be deleted */
+	u32		   lov_tgt_size;   /* size of tgts array */
 	int		     lov_connects;
 	int		     lov_pool_count;
 	struct rhashtable	lov_pools_hash_body; /* used for key access */
@@ -433,10 +433,10 @@ struct lmv_obd {
 };
 
 struct niobuf_local {
-	__u64		lnb_file_offset;
-	__u32		lnb_page_offset;
-	__u32		lnb_len;
-	__u32		lnb_flags;
+	u64		lnb_file_offset;
+	u32		lnb_page_offset;
+	u32		lnb_len;
+	u32		lnb_flags;
 	int		lnb_rc;
 	struct page	*lnb_page;
 	void		*lnb_data;
@@ -576,7 +576,7 @@ struct obd_device {
 	spinlock_t		obd_dev_lock; /* protect OBD bitfield above */
 	spinlock_t		obd_osfs_lock;
 	struct obd_statfs	obd_osfs;       /* locked by obd_osfs_lock */
-	__u64			obd_osfs_age;
+	u64			obd_osfs_age;
 	u64			obd_last_committed;
 	struct mutex		obd_dev_mutex;
 	struct lvfs_run_ctxt	obd_lvfs_ctxt;
@@ -728,9 +728,9 @@ struct md_op_data {
 	size_t			op_namelen;
 	struct lmv_stripe_md   *op_mea1;
 	struct lmv_stripe_md   *op_mea2;
-	__u32		   op_suppgids[2];
-	__u32		   op_fsuid;
-	__u32		   op_fsgid;
+	u32		   op_suppgids[2];
+	u32		   op_fsuid;
+	u32		   op_fsgid;
 	kernel_cap_t	       op_cap;
 	void		   *op_data;
 	size_t			op_data_size;
@@ -739,16 +739,16 @@ struct md_op_data {
 	struct iattr	    op_attr;
 	enum op_xvalid		op_xvalid;	/* eXtra validity flags */
 	unsigned int	    op_attr_flags;
-	__u64		   op_valid;
+	u64		   op_valid;
 	loff_t		  op_attr_blocks;
 
-	__u32		   op_flags;
+	u32		   op_flags;
 
 	/* Various operation flags. */
 	enum mds_op_bias        op_bias;
 
 	/* Used by readdir */
-	__u64		   op_offset;
+	u64		   op_offset;
 
 	/* used to transfer info between the stacks of MD client
 	 * see enum op_cli_flags
@@ -756,7 +756,7 @@ struct md_op_data {
 	enum md_cli_flags	op_cli_flags;
 
 	/* File object data version for HSM release, on client */
-	__u64			op_data_version;
+	u64			op_data_version;
 	struct lustre_handle	op_lease_handle;
 
 	/* File security context, for creates. */
@@ -765,7 +765,7 @@ struct md_op_data {
 	u32			op_file_secctx_size;
 
 	/* default stripe offset */
-	__u32			op_default_stripe_offset;
+	u32			op_default_stripe_offset;
 
 	u32			op_projid;
 };
@@ -795,10 +795,10 @@ struct obd_ops {
 	int (*iocontrol)(unsigned int cmd, struct obd_export *exp, int len,
 			 void *karg, void __user *uarg);
 	int (*get_info)(const struct lu_env *env, struct obd_export *,
-			__u32 keylen, void *key, __u32 *vallen, void *val);
+			u32 keylen, void *key, u32 *vallen, void *val);
 	int (*set_info_async)(const struct lu_env *, struct obd_export *,
-			      __u32 keylen, void *key,
-			      __u32 vallen, void *val,
+			      u32 keylen, void *key,
+			      u32 vallen, void *val,
 			      struct ptlrpc_request_set *set);
 	int (*setup)(struct obd_device *dev, struct lustre_cfg *cfg);
 	int (*precleanup)(struct obd_device *dev);
@@ -838,9 +838,9 @@ struct obd_ops {
 	 * about this.
 	 */
 	int (*statfs)(const struct lu_env *, struct obd_export *exp,
-		      struct obd_statfs *osfs, __u64 max_age, __u32 flags);
+		      struct obd_statfs *osfs, u64 max_age, u32 flags);
 	int (*statfs_async)(struct obd_export *exp, struct obd_info *oinfo,
-			    __u64 max_age, struct ptlrpc_request_set *set);
+			    u64 max_age, struct ptlrpc_request_set *set);
 	int (*create)(const struct lu_env *env, struct obd_export *exp,
 		      struct obdo *oa);
 	int (*destroy)(const struct lu_env *env, struct obd_export *exp,
@@ -908,7 +908,7 @@ struct obd_client_handle {
 	struct lu_fid		 och_fid;
 	struct md_open_data	*och_mod;
 	struct lustre_handle	 och_lease_handle; /* open lock for lease */
-	__u32			 och_magic;
+	u32			 och_magic;
 	fmode_t			 och_flags;
 };
 
@@ -925,10 +925,10 @@ struct md_ops {
 		     struct md_open_data *, struct ptlrpc_request **);
 	int (*create)(struct obd_export *, struct md_op_data *,
 		      const void *, size_t, umode_t, uid_t, gid_t,
-		      kernel_cap_t, __u64, struct ptlrpc_request **);
+		      kernel_cap_t, u64, struct ptlrpc_request **);
 	int (*enqueue)(struct obd_export *, struct ldlm_enqueue_info *,
 		       const union ldlm_policy_data *, struct md_op_data *,
-		       struct lustre_handle *, __u64);
+		       struct lustre_handle *, u64);
 	int (*getattr)(struct obd_export *, struct md_op_data *,
 		       struct ptlrpc_request **);
 	int (*getattr_name)(struct obd_export *, struct md_op_data *,
@@ -936,7 +936,7 @@ struct md_ops {
 	int (*intent_lock)(struct obd_export *, struct md_op_data *,
 			   struct lookup_intent *,
 			   struct ptlrpc_request **,
-			   ldlm_blocking_callback, __u64);
+			   ldlm_blocking_callback, u64);
 	int (*link)(struct obd_export *, struct md_op_data *,
 		    struct ptlrpc_request **);
 	int (*rename)(struct obd_export *, struct md_op_data *,
@@ -947,7 +947,7 @@ struct md_ops {
 	int (*fsync)(struct obd_export *, const struct lu_fid *,
 		     struct ptlrpc_request **);
 	int (*read_page)(struct obd_export *, struct md_op_data *,
-			 struct md_callback *cb_op, __u64 hash_offset,
+			 struct md_callback *cb_op, u64 hash_offset,
 			 struct page **ppage);
 	int (*unlink)(struct obd_export *, struct md_op_data *,
 		      struct ptlrpc_request **);
@@ -977,9 +977,9 @@ struct md_ops {
 	int (*clear_open_replay_data)(struct obd_export *,
 				      struct obd_client_handle *);
 	int (*set_lock_data)(struct obd_export *, const struct lustre_handle *,
-			     void *, __u64 *);
+			     void *, u64 *);
 
-	enum ldlm_mode (*lock_match)(struct obd_export *, __u64,
+	enum ldlm_mode (*lock_match)(struct obd_export *, u64,
 				     const struct lu_fid *, enum ldlm_type,
 				     union ldlm_policy_data *, enum ldlm_mode,
 				     struct lustre_handle *);
@@ -997,7 +997,7 @@ struct md_ops {
 				    struct md_enqueue_info *);
 
 	int (*revalidate_lock)(struct obd_export *, struct lookup_intent *,
-			       struct lu_fid *, __u64 *bits);
+			       struct lu_fid *, u64 *bits);
 
 	int (*unpackmd)(struct obd_export *exp, struct lmv_stripe_md **plsm,
 			const union lmv_mds_md *lmv, size_t lmv_size);
diff --git a/drivers/staging/lustre/lustre/include/obd_class.h b/drivers/staging/lustre/lustre/include/obd_class.h
index cc00915..b64ba8b 100644
--- a/drivers/staging/lustre/lustre/include/obd_class.h
+++ b/drivers/staging/lustre/lustre/include/obd_class.h
@@ -92,8 +92,8 @@ int obd_connect_flags2str(char *page, int count, u64 flags, u64 flags2,
 
 int obd_get_request_slot(struct client_obd *cli);
 void obd_put_request_slot(struct client_obd *cli);
-__u32 obd_get_max_rpcs_in_flight(struct client_obd *cli);
-int obd_set_max_rpcs_in_flight(struct client_obd *cli, __u32 max);
+u32 obd_get_max_rpcs_in_flight(struct client_obd *cli);
+int obd_set_max_rpcs_in_flight(struct client_obd *cli, u32 max);
 int obd_set_max_mod_rpcs_in_flight(struct client_obd *cli, u16 max);
 int obd_mod_rpc_stats_seq_show(struct client_obd *cli, struct seq_file *seq);
 
@@ -356,8 +356,8 @@ static inline int class_devno_max(void)
 }
 
 static inline int obd_get_info(const struct lu_env *env,
-			       struct obd_export *exp, __u32 keylen,
-			       void *key, __u32 *vallen, void *val)
+			       struct obd_export *exp, u32 keylen,
+			       void *key, u32 *vallen, void *val)
 {
 	int rc;
 
@@ -873,7 +873,7 @@ static inline int obd_destroy_export(struct obd_export *exp)
  */
 static inline int obd_statfs_async(struct obd_export *exp,
 				   struct obd_info *oinfo,
-				   __u64 max_age,
+				   u64 max_age,
 				   struct ptlrpc_request_set *rqset)
 {
 	int rc = 0;
@@ -909,8 +909,8 @@ static inline int obd_statfs_async(struct obd_export *exp,
 }
 
 static inline int obd_statfs_rqset(struct obd_export *exp,
-				   struct obd_statfs *osfs, __u64 max_age,
-				   __u32 flags)
+				   struct obd_statfs *osfs, u64 max_age,
+				   u32 flags)
 {
 	struct ptlrpc_request_set *set = NULL;
 	struct obd_info oinfo = {
@@ -936,8 +936,8 @@ static inline int obd_statfs_rqset(struct obd_export *exp,
  * target.  Use a value of "jiffies + HZ" to guarantee freshness.
  */
 static inline int obd_statfs(const struct lu_env *env, struct obd_export *exp,
-			     struct obd_statfs *osfs, __u64 max_age,
-			     __u32 flags)
+			     struct obd_statfs *osfs, u64 max_age,
+			     u32 flags)
 {
 	struct obd_device *obd = exp->exp_obd;
 	int rc = 0;
@@ -1509,7 +1509,7 @@ static inline int md_clear_open_replay_data(struct obd_export *exp,
 
 static inline int md_set_lock_data(struct obd_export *exp,
 				   const struct lustre_handle *lockh,
-				   void *data, __u64 *bits)
+				   void *data, u64 *bits)
 {
 	int rc;
 
@@ -1537,7 +1537,7 @@ static inline int md_cancel_unused(struct obd_export *exp,
 						flags, opaque);
 }
 
-static inline enum ldlm_mode md_lock_match(struct obd_export *exp, __u64 flags,
+static inline enum ldlm_mode md_lock_match(struct obd_export *exp, u64 flags,
 					   const struct lu_fid *fid,
 					   enum ldlm_type type,
 					   union ldlm_policy_data *policy,
@@ -1583,7 +1583,7 @@ static inline int md_intent_getattr_async(struct obd_export *exp,
 
 static inline int md_revalidate_lock(struct obd_export *exp,
 				     struct lookup_intent *it,
-				     struct lu_fid *fid, __u64 *bits)
+				     struct lu_fid *fid, u64 *bits)
 {
 	int rc;
 
@@ -1669,9 +1669,9 @@ static inline void class_uuid_unparse(class_uuid_t uu, struct obd_uuid *out)
 
 /* lustre_peer.c    */
 int lustre_uuid_to_peer(const char *uuid, lnet_nid_t *peer_nid, int index);
-int class_add_uuid(const char *uuid, __u64 nid);
+int class_add_uuid(const char *uuid, u64 nid);
 int class_del_uuid(const char *uuid);
-int class_check_uuid(struct obd_uuid *uuid, __u64 nid);
+int class_check_uuid(struct obd_uuid *uuid, u64 nid);
 
 /* class_obd.c */
 extern char obd_jobid_node[];
-- 
1.8.3.1


* [lustre-devel] [PATCH 06/26] ldlm: use kernel types for kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (4 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 05/26] lustre: use kernel types for lustre internal headers James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 07/26] obdclass: " James Simmons
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The Lustre ldlm code was originally both a user-land and a kernel
implementation. The source still contains many types of the form
__u32, but since this is mostly kernel code, change those types to
the kernel's internal types.
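
As an illustration only (a hypothetical sketch, not code taken from
this patch), the conversion is a pure spelling change; u64/u32 are
typedefs of the UAPI __u64/__u32, so structure layout and generated
code are untouched:

  #include <linux/types.h>

  /* before: UAPI spellings in kernel-only code */
  struct example_extent_old {
          __u64   start;
          __u64   end;
          __u32   flags;
  };

  /* after: kernel-internal spellings, same layout and behaviour */
  struct example_extent {
          u64     start;
          u64     end;
          u32     flags;
  };

  static inline u64 example_extent_len(const struct example_extent *ex)
  {
          return ex->end - ex->start + 1;
  }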

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_extent.c   |  8 +++---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c    |  2 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_internal.h | 12 ++++----
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     | 14 +++++-----
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     | 22 +++++++--------
 drivers/staging/lustre/lustre/ldlm/ldlm_request.c  | 32 +++++++++++-----------
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c | 22 +++++++--------
 7 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c b/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
index 225c023..99aef0b 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
@@ -57,7 +57,7 @@
 
 #define START(node) ((node)->l_policy_data.l_extent.start)
 #define LAST(node) ((node)->l_policy_data.l_extent.end)
-INTERVAL_TREE_DEFINE(struct ldlm_lock, l_rb, __u64, __subtree_last,
+INTERVAL_TREE_DEFINE(struct ldlm_lock, l_rb, u64, __subtree_last,
 		     START, LAST, static, extent);
 
 /* When a lock is cancelled by a client, the KMS may undergo change if this
@@ -66,11 +66,11 @@
  *
  * NB: A lock on [x,y] protects a KMS of up to y + 1 bytes!
  */
-__u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, __u64 old_kms)
+u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, u64 old_kms)
 {
 	struct ldlm_resource *res = lock->l_resource;
 	struct ldlm_lock *lck;
-	__u64 kms = 0;
+	u64 kms = 0;
 
 	/* don't let another thread in ldlm_extent_shift_kms race in
 	 * just after we finish and take our lock into account in its
@@ -192,7 +192,7 @@ void ldlm_extent_policy_local_to_wire(const union ldlm_policy_data *lpolicy,
 }
 
 void ldlm_extent_search(struct rb_root_cached *root,
-			__u64 start, __u64 end,
+			u64 start, u64 end,
 			bool (*matches)(struct ldlm_lock *lock, void *data),
 			void *data)
 {
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 94f3b1e..baa5b3a 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -312,7 +312,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
  * \retval <0   : failure
  */
 int
-ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
+ldlm_flock_completion_ast(struct ldlm_lock *lock, u64 flags, void *data)
 {
 	struct file_lock		*getlk = lock->l_ast_data;
 	int				rc = 0;
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
index b64e2be0..d8dcf8a 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
@@ -136,10 +136,10 @@ struct ldlm_lock *
 ldlm_lock_create(struct ldlm_namespace *ns, const struct ldlm_res_id *id,
 		 enum ldlm_type type, enum ldlm_mode mode,
 		 const struct ldlm_callback_suite *cbs,
-		 void *data, __u32 lvb_len, enum lvb_type lvb_type);
+		 void *data, u32 lvb_len, enum lvb_type lvb_type);
 enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
 				  struct ldlm_lock **lock, void *cookie,
-				  __u64 *flags);
+				  u64 *flags);
 void ldlm_lock_addref_internal(struct ldlm_lock *lock, enum ldlm_mode mode);
 void ldlm_lock_addref_internal_nolock(struct ldlm_lock *lock,
 				      enum ldlm_mode mode);
@@ -176,7 +176,7 @@ void ldlm_handle_bl_callback(struct ldlm_namespace *ns,
 void ldlm_extent_add_lock(struct ldlm_resource *res, struct ldlm_lock *lock);
 void ldlm_extent_unlink_lock(struct ldlm_lock *lock);
 void ldlm_extent_search(struct rb_root_cached *root,
-			__u64 start, __u64 end,
+			u64 start, u64 end,
 			bool (*matches)(struct ldlm_lock *lock, void *data),
 			void *data);
 
@@ -195,9 +195,9 @@ struct ldlm_state {
 };
 
 /* ldlm_pool.c */
-__u64 ldlm_pool_get_slv(struct ldlm_pool *pl);
-void ldlm_pool_set_clv(struct ldlm_pool *pl, __u64 clv);
-__u32 ldlm_pool_get_lvf(struct ldlm_pool *pl);
+u64 ldlm_pool_get_slv(struct ldlm_pool *pl);
+void ldlm_pool_set_clv(struct ldlm_pool *pl, u64 clv);
+u32 ldlm_pool_get_lvf(struct ldlm_pool *pl);
 
 int ldlm_init(void);
 void ldlm_exit(void);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index ebdfc11..e726e76 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -508,7 +508,7 @@ void ldlm_lock2handle(const struct ldlm_lock *lock, struct lustre_handle *lockh)
  *	      Return NULL if flag already set
  */
 struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *handle,
-				     __u64 flags)
+				     u64 flags)
 {
 	struct ldlm_lock *lock;
 
@@ -1043,7 +1043,7 @@ struct lock_match_data {
 	struct ldlm_lock	*lmd_lock;
 	enum ldlm_mode		*lmd_mode;
 	union ldlm_policy_data	*lmd_policy;
-	__u64			 lmd_flags;
+	u64			 lmd_flags;
 	int			 lmd_unref;
 };
 
@@ -1250,7 +1250,7 @@ void ldlm_lock_allow_match(struct ldlm_lock *lock)
  * keep caller code unchanged), the context failure will be discovered by
  * caller sometime later.
  */
-enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
+enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, u64 flags,
 			       const struct ldlm_res_id *res_id,
 			       enum ldlm_type type,
 			       union ldlm_policy_data *policy,
@@ -1313,7 +1313,7 @@ enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
 	if (lock) {
 		ldlm_lock2handle(lock, lockh);
 		if ((flags & LDLM_FL_LVB_READY) && !ldlm_is_lvb_ready(lock)) {
-			__u64 wait_flags = LDLM_FL_LVB_READY |
+			u64 wait_flags = LDLM_FL_LVB_READY |
 				LDLM_FL_DESTROYED | LDLM_FL_FAIL_NOTIFIED;
 
 			if (lock->l_completion_ast) {
@@ -1381,7 +1381,7 @@ enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
 EXPORT_SYMBOL(ldlm_lock_match);
 
 enum ldlm_mode ldlm_revalidate_lock_handle(const struct lustre_handle *lockh,
-					   __u64 *bits)
+					   u64 *bits)
 {
 	struct ldlm_lock *lock;
 	enum ldlm_mode mode = 0;
@@ -1519,7 +1519,7 @@ struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
 				   enum ldlm_type type,
 				   enum ldlm_mode mode,
 				   const struct ldlm_callback_suite *cbs,
-				   void *data, __u32 lvb_len,
+				   void *data, u32 lvb_len,
 				   enum lvb_type lvb_type)
 {
 	struct ldlm_lock *lock;
@@ -1580,7 +1580,7 @@ struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
  */
 enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
 				  struct ldlm_lock **lockp,
-				  void *cookie, __u64 *flags)
+				  void *cookie, u64 *flags)
 {
 	struct ldlm_lock *lock = *lockp;
 	struct ldlm_resource *res = lock->l_resource;
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
index 36d14ee..e94d8a3 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
@@ -137,22 +137,22 @@
  */
 #define LDLM_POOL_SLV_SHIFT (10)
 
-static inline __u64 dru(__u64 val, __u32 shift, int round_up)
+static inline u64 dru(u64 val, u32 shift, int round_up)
 {
 	return (val + (round_up ? (1 << shift) - 1 : 0)) >> shift;
 }
 
-static inline __u64 ldlm_pool_slv_max(__u32 L)
+static inline u64 ldlm_pool_slv_max(u32 L)
 {
 	/*
 	 * Allow to have all locks for 1 client for 10 hrs.
 	 * Formula is the following: limit * 10h / 1 client.
 	 */
-	__u64 lim = (__u64)L *  LDLM_POOL_MAX_AGE / 1;
+	u64 lim = (u64)L *  LDLM_POOL_MAX_AGE / 1;
 	return lim;
 }
 
-static inline __u64 ldlm_pool_slv_min(__u32 L)
+static inline u64 ldlm_pool_slv_min(u32 L)
 {
 	return 1;
 }
@@ -212,7 +212,7 @@ static inline int ldlm_pool_t2gsp(unsigned int t)
 static void ldlm_pool_recalc_stats(struct ldlm_pool *pl)
 {
 	int grant_plan = pl->pl_grant_plan;
-	__u64 slv = pl->pl_server_lock_volume;
+	u64 slv = pl->pl_server_lock_volume;
 	int granted = atomic_read(&pl->pl_granted);
 	int grant_rate = atomic_read(&pl->pl_grant_rate);
 	int cancel_rate = atomic_read(&pl->pl_cancel_rate);
@@ -430,8 +430,8 @@ static int lprocfs_pool_state_seq_show(struct seq_file *m, void *unused)
 	int granted, grant_rate, cancel_rate;
 	int grant_speed, lvf;
 	struct ldlm_pool *pl = m->private;
-	__u64 slv, clv;
-	__u32 limit;
+	u64 slv, clv;
+	u32 limit;
 
 	spin_lock(&pl->pl_lock);
 	slv = pl->pl_server_lock_volume;
@@ -739,9 +739,9 @@ void ldlm_pool_del(struct ldlm_pool *pl, struct ldlm_lock *lock)
  *
  * \pre ->pl_lock is not locked.
  */
-__u64 ldlm_pool_get_slv(struct ldlm_pool *pl)
+u64 ldlm_pool_get_slv(struct ldlm_pool *pl)
 {
-	__u64 slv;
+	u64 slv;
 
 	spin_lock(&pl->pl_lock);
 	slv = pl->pl_server_lock_volume;
@@ -754,7 +754,7 @@ __u64 ldlm_pool_get_slv(struct ldlm_pool *pl)
  *
  * \pre ->pl_lock is not locked.
  */
-void ldlm_pool_set_clv(struct ldlm_pool *pl, __u64 clv)
+void ldlm_pool_set_clv(struct ldlm_pool *pl, u64 clv)
 {
 	spin_lock(&pl->pl_lock);
 	pl->pl_client_lock_volume = clv;
@@ -764,7 +764,7 @@ void ldlm_pool_set_clv(struct ldlm_pool *pl, __u64 clv)
 /**
  * Returns current LVF from \a pl.
  */
-__u32 ldlm_pool_get_lvf(struct ldlm_pool *pl)
+u32 ldlm_pool_get_lvf(struct ldlm_pool *pl)
 {
 	return atomic_read(&pl->pl_lock_volume_factor);
 }
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
index c09359a..a7fe8c6 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
@@ -104,7 +104,7 @@ static int ldlm_request_bufsize(int count, int type)
 	return sizeof(struct ldlm_request) + avail;
 }
 
-static void ldlm_expired_completion_wait(struct ldlm_lock *lock, __u32 conn_cnt)
+static void ldlm_expired_completion_wait(struct ldlm_lock *lock, u32 conn_cnt)
 {
 	struct obd_import *imp;
 	struct obd_device *obd;
@@ -213,13 +213,13 @@ static int ldlm_completion_tail(struct ldlm_lock *lock, void *data)
  * or penultimate cases happen in some other thread.
  *
  */
-int ldlm_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
+int ldlm_completion_ast(struct ldlm_lock *lock, u64 flags, void *data)
 {
 	/* XXX ALLOCATE - 160 bytes */
 	struct obd_device *obd;
 	struct obd_import *imp = NULL;
-	__u32 timeout;
-	__u32 conn_cnt = 0;
+	u32 timeout;
+	u32 conn_cnt = 0;
 	int rc = 0;
 
 	if (flags == LDLM_FL_WAIT_NOREPROC) {
@@ -337,9 +337,9 @@ static void failed_lock_cleanup(struct ldlm_namespace *ns,
  * Called after receiving reply from server.
  */
 int ldlm_cli_enqueue_fini(struct obd_export *exp, struct ptlrpc_request *req,
-			  enum ldlm_type type, __u8 with_policy,
+			  enum ldlm_type type, u8 with_policy,
 			  enum ldlm_mode mode,
-			  __u64 *flags, void *lvb, __u32 lvb_len,
+			  u64 *flags, void *lvb, u32 lvb_len,
 			  const struct lustre_handle *lockh, int rc)
 {
 	struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
@@ -670,8 +670,8 @@ static struct ptlrpc_request *ldlm_enqueue_pack(struct obd_export *exp,
 int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp,
 		     struct ldlm_enqueue_info *einfo,
 		     const struct ldlm_res_id *res_id,
-		     union ldlm_policy_data const *policy, __u64 *flags,
-		     void *lvb, __u32 lvb_len, enum lvb_type lvb_type,
+		     union ldlm_policy_data const *policy, u64 *flags,
+		     void *lvb, u32 lvb_len, enum lvb_type lvb_type,
 		     struct lustre_handle *lockh, int async)
 {
 	struct ldlm_namespace *ns;
@@ -792,9 +792,9 @@ int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp,
  * \retval LDLM_FL_CANCELING otherwise;
  * \retval LDLM_FL_BL_AST if there is a need for a separate CANCEL RPC.
  */
-static __u64 ldlm_cli_cancel_local(struct ldlm_lock *lock)
+static u64 ldlm_cli_cancel_local(struct ldlm_lock *lock)
 {
-	__u64 rc = LDLM_FL_LOCAL_ONLY;
+	u64 rc = LDLM_FL_LOCAL_ONLY;
 
 	if (lock->l_conn_export) {
 		bool local_only;
@@ -960,8 +960,8 @@ static inline struct ldlm_pool *ldlm_imp2pl(struct obd_import *imp)
 int ldlm_cli_update_pool(struct ptlrpc_request *req)
 {
 	struct obd_device *obd;
-	__u64 new_slv;
-	__u32 new_limit;
+	u64 new_slv;
+	u32 new_limit;
 
 	if (unlikely(!req->rq_import || !req->rq_import->imp_obd ||
 		     !imp_connect_lru_resize(req->rq_import))) {
@@ -1014,7 +1014,7 @@ int ldlm_cli_cancel(const struct lustre_handle *lockh,
 {
 	struct obd_export *exp;
 	int avail, flags, count = 1;
-	__u64 rc = 0;
+	u64 rc = 0;
 	struct ldlm_namespace *ns;
 	struct ldlm_lock *lock;
 	LIST_HEAD(cancels);
@@ -1080,7 +1080,7 @@ int ldlm_cli_cancel_list_local(struct list_head *cancels, int count,
 	LIST_HEAD(head);
 	struct ldlm_lock *lock, *next;
 	int left = 0, bl_ast = 0;
-	__u64 rc;
+	u64 rc;
 
 	left = count;
 	list_for_each_entry_safe(lock, next, cancels, l_bl_ast) {
@@ -1169,7 +1169,7 @@ static enum ldlm_policy_res ldlm_cancel_lrur_policy(struct ldlm_namespace *ns,
 {
 	unsigned long cur = jiffies;
 	struct ldlm_pool *pl = &ns->ns_pool;
-	__u64 slv, lvf, lv;
+	u64 slv, lvf, lv;
 	unsigned long la;
 
 	/* Stop LRU processing when we reach past @count or have checked all
@@ -1562,7 +1562,7 @@ int ldlm_cancel_lru(struct ldlm_namespace *ns, int nr,
 int ldlm_cancel_resource_local(struct ldlm_resource *res,
 			       struct list_head *cancels,
 			       union ldlm_policy_data *policy,
-			       enum ldlm_mode mode, __u64 lock_flags,
+			       enum ldlm_mode mode, u64 lock_flags,
 			       enum ldlm_cancel_flags cancel_flags,
 			       void *opaque)
 {
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
index 11c0b88..e0b9918 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
@@ -132,7 +132,7 @@ static ssize_t resource_count_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_namespace *ns = container_of(kobj, struct ldlm_namespace,
 						 ns_kobj);
-	__u64		  res = 0;
+	u64		  res = 0;
 	struct cfs_hash_bd	  bd;
 	int		    i;
 
@@ -148,7 +148,7 @@ static ssize_t lock_count_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_namespace *ns = container_of(kobj, struct ldlm_namespace,
 						 ns_kobj);
-	__u64		  locks;
+	u64		  locks;
 
 	locks = lprocfs_stats_collector(ns->ns_stats, LDLM_NSS_LOCKS,
 					LPROCFS_FIELDS_FLAGS_SUM);
@@ -172,7 +172,7 @@ static ssize_t lru_size_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_namespace *ns = container_of(kobj, struct ldlm_namespace,
 						 ns_kobj);
-	__u32 *nr = &ns->ns_max_unused;
+	u32 *nr = &ns->ns_max_unused;
 
 	if (ns_connect_lru_resize(ns))
 		nr = &ns->ns_nr_unused;
@@ -421,12 +421,12 @@ static unsigned int ldlm_res_hop_fid_hash(struct cfs_hash *hs,
 {
 	const struct ldlm_res_id *id = key;
 	struct lu_fid       fid;
-	__u32	       hash;
-	__u32	       val;
+	u32	       hash;
+	u32	       val;
 
 	fid.f_seq = id->name[LUSTRE_RES_ID_SEQ_OFF];
-	fid.f_oid = (__u32)id->name[LUSTRE_RES_ID_VER_OID_OFF];
-	fid.f_ver = (__u32)(id->name[LUSTRE_RES_ID_VER_OID_OFF] >> 32);
+	fid.f_oid = (u32)id->name[LUSTRE_RES_ID_VER_OID_OFF];
+	fid.f_ver = (u32)(id->name[LUSTRE_RES_ID_VER_OID_OFF] >> 32);
 
 	hash = fid_flatten32(&fid);
 	hash += (hash >> 4) + (hash << 12); /* mixing oid and seq */
@@ -694,7 +694,7 @@ struct ldlm_namespace *ldlm_namespace_new(struct obd_device *obd, char *name,
  * locks with refs.
  */
 static void cleanup_resource(struct ldlm_resource *res, struct list_head *q,
-			     __u64 flags)
+			     u64 flags)
 {
 	int rc = 0;
 	bool local_only = !!(flags & LDLM_FL_LOCAL_ONLY);
@@ -764,7 +764,7 @@ static int ldlm_resource_clean(struct cfs_hash *hs, struct cfs_hash_bd *bd,
 			       struct hlist_node *hnode, void *arg)
 {
 	struct ldlm_resource *res = cfs_hash_object(hs, hnode);
-	__u64 flags = *(__u64 *)arg;
+	u64 flags = *(u64 *)arg;
 
 	cleanup_resource(res, &res->lr_granted, flags);
 	cleanup_resource(res, &res->lr_waiting, flags);
@@ -795,7 +795,7 @@ static int ldlm_resource_complain(struct cfs_hash *hs, struct cfs_hash_bd *bd,
  * evicted and all of its state needs to be destroyed.
  * Also used during shutdown.
  */
-int ldlm_namespace_cleanup(struct ldlm_namespace *ns, __u64 flags)
+int ldlm_namespace_cleanup(struct ldlm_namespace *ns, u64 flags)
 {
 	if (!ns) {
 		CDEBUG(D_INFO, "NULL ns, skipping cleanup\n");
@@ -1048,7 +1048,7 @@ struct ldlm_resource *
 	struct hlist_node     *hnode;
 	struct ldlm_resource *res = NULL;
 	struct cfs_hash_bd	 bd;
-	__u64		 version;
+	u64		 version;
 	int		      ns_refcount = 0;
 	int rc;
 
-- 
1.8.3.1


* [lustre-devel] [PATCH 07/26] obdclass: use kernel types for kernel code
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (5 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 06/26] ldlm: use kernel types for kernel code James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 08/26] lustre: convert remaining code to kernel types James Simmons
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The Lustre obdclass code was originally both a user-land and a kernel
implementation. The source still contains many types of the form
__u32, but since this is mostly kernel code, change those types to
the kernel's internal types.
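
For reference, a minimal sketch (not part of this patch, with the
kernel header contents paraphrased) of why the substitution is safe:
the in-kernel spellings are typedefs of the UAPI ones, so a cast such
as the one below compiles identically either way.

  /* include/uapi/asm-generic/int-ll64.h:  typedef unsigned int __u32;
   * include/linux/types.h:                typedef __u32 u32;
   */
  #include <linux/types.h>

  static inline u32 example_low32(u64 val)
  {
          return (u32)(val & 0xffffffff);  /* was written as (__u32)(...) */
  }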

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/obdclass/cl_io.c     |  2 +-
 drivers/staging/lustre/lustre/obdclass/cl_lock.c   |  2 +-
 drivers/staging/lustre/lustre/obdclass/cl_object.c |  2 +-
 drivers/staging/lustre/lustre/obdclass/class_obd.c | 14 +++----
 drivers/staging/lustre/lustre/obdclass/debug.c     | 16 ++++----
 drivers/staging/lustre/lustre/obdclass/genops.c    | 10 ++---
 drivers/staging/lustre/lustre/obdclass/llog_swab.c | 14 +++----
 .../lustre/lustre/obdclass/lprocfs_counters.c      |  2 +-
 .../lustre/lustre/obdclass/lprocfs_status.c        | 46 +++++++++++-----------
 drivers/staging/lustre/lustre/obdclass/lu_object.c | 26 ++++++------
 .../lustre/lustre/obdclass/lustre_handles.c        |  4 +-
 .../staging/lustre/lustre/obdclass/lustre_peer.c   |  4 +-
 .../staging/lustre/lustre/obdclass/obd_config.c    |  4 +-
 drivers/staging/lustre/lustre/obdclass/obd_mount.c |  8 ++--
 14 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/drivers/staging/lustre/lustre/obdclass/cl_io.c b/drivers/staging/lustre/lustre/obdclass/cl_io.c
index 84c7710..d3f2455 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_io.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_io.c
@@ -205,7 +205,7 @@ int cl_io_rw_init(const struct lu_env *env, struct cl_io *io,
 
 	LU_OBJECT_HEADER(D_VFSTRACE, env, &io->ci_obj->co_lu,
 			 "io range: %u [%llu, %llu) %u %u\n",
-			 iot, (__u64)pos, (__u64)pos + count,
+			 iot, (u64)pos, (u64)pos + count,
 			 io->u.ci_rw.crw_nonblock, io->u.ci_wr.wr_append);
 	io->u.ci_rw.crw_pos    = pos;
 	io->u.ci_rw.crw_count  = count;
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_lock.c b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
index 23c1609..425ca9c 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_lock.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
@@ -180,7 +180,7 @@ int cl_lock_request(const struct lu_env *env, struct cl_io *io,
 		    struct cl_lock *lock)
 {
 	struct cl_sync_io *anchor = NULL;
-	__u32 enq_flags = lock->cll_descr.cld_enq_flags;
+	u32 enq_flags = lock->cll_descr.cld_enq_flags;
 	int rc;
 
 	rc = cl_lock_init(env, lock, io);
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c
index b2bf570..5b59a71 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c
@@ -592,7 +592,7 @@ static void cl_env_init0(struct cl_env *cle, void *debug)
 	cl_env_inc(CS_busy);
 }
 
-static struct lu_env *cl_env_new(__u32 ctx_tags, __u32 ses_tags, void *debug)
+static struct lu_env *cl_env_new(u32 ctx_tags, u32 ses_tags, void *debug)
 {
 	struct lu_env *env;
 	struct cl_env *cle;
diff --git a/drivers/staging/lustre/lustre/obdclass/class_obd.c b/drivers/staging/lustre/lustre/obdclass/class_obd.c
index 75345dd..e130cf7 100644
--- a/drivers/staging/lustre/lustre/obdclass/class_obd.c
+++ b/drivers/staging/lustre/lustre/obdclass/class_obd.c
@@ -120,7 +120,7 @@ int lustre_get_jobid(char *jobid)
 }
 EXPORT_SYMBOL(lustre_get_jobid);
 
-static int class_resolve_dev_name(__u32 len, const char *name)
+static int class_resolve_dev_name(u32 len, const char *name)
 {
 	int rc;
 	int dev;
@@ -594,19 +594,19 @@ static long obd_class_ioctl(struct file *filp, unsigned int cmd,
 
 static int obd_init_checks(void)
 {
-	__u64 u64val, div64val;
+	u64 u64val, div64val;
 	char buf[64];
 	int len, ret = 0;
 
 	CDEBUG(D_INFO, "LPU64=%s, LPD64=%s, LPX64=%s\n", "%llu", "%lld",
 	       "%#llx");
 
-	CDEBUG(D_INFO, "OBD_OBJECT_EOF = %#llx\n", (__u64)OBD_OBJECT_EOF);
+	CDEBUG(D_INFO, "OBD_OBJECT_EOF = %#llx\n", (u64)OBD_OBJECT_EOF);
 
 	u64val = OBD_OBJECT_EOF;
 	CDEBUG(D_INFO, "u64val OBD_OBJECT_EOF = %#llx\n", u64val);
 	if (u64val != OBD_OBJECT_EOF) {
-		CERROR("__u64 %#llx(%d) != 0xffffffffffffffff\n",
+		CERROR("u64 %#llx(%d) != 0xffffffffffffffff\n",
 		       u64val, (int)sizeof(u64val));
 		ret = -EINVAL;
 	}
@@ -619,12 +619,12 @@ static int obd_init_checks(void)
 	div64val = OBD_OBJECT_EOF;
 	CDEBUG(D_INFO, "u64val OBD_OBJECT_EOF = %#llx\n", u64val);
 	if (u64val != OBD_OBJECT_EOF) {
-		CERROR("__u64 %#llx(%d) != 0xffffffffffffffff\n",
+		CERROR("u64 %#llx(%d) != 0xffffffffffffffff\n",
 		       u64val, (int)sizeof(u64val));
 		ret = -EOVERFLOW;
 	}
 	if (u64val >> 8 != OBD_OBJECT_EOF >> 8) {
-		CERROR("__u64 %#llx(%d) != 0xffffffffffffffff\n",
+		CERROR("u64 %#llx(%d) != 0xffffffffffffffff\n",
 		       u64val, (int)sizeof(u64val));
 		return -EOVERFLOW;
 	}
@@ -654,7 +654,7 @@ static int obd_init_checks(void)
 	}
 	if ((u64val & ~PAGE_MASK) >= PAGE_SIZE) {
 		CWARN("mask failed: u64val %llu >= %llu\n", u64val,
-		      (__u64)PAGE_SIZE);
+		      (u64)PAGE_SIZE);
 		ret = -EINVAL;
 	}
 
diff --git a/drivers/staging/lustre/lustre/obdclass/debug.c b/drivers/staging/lustre/lustre/obdclass/debug.c
index 2156a82..2e526c7 100644
--- a/drivers/staging/lustre/lustre/obdclass/debug.c
+++ b/drivers/staging/lustre/lustre/obdclass/debug.c
@@ -43,8 +43,8 @@
 #include <lustre_debug.h>
 #include <lustre_net.h>
 
-#define LPDS sizeof(__u64)
-int block_debug_setup(void *addr, int len, __u64 off, __u64 id)
+#define LPDS sizeof(u64)
+int block_debug_setup(void *addr, int len, u64 off, u64 id)
 {
 	LASSERT(addr);
 
@@ -58,9 +58,9 @@ int block_debug_setup(void *addr, int len, __u64 off, __u64 id)
 }
 EXPORT_SYMBOL(block_debug_setup);
 
-int block_debug_check(char *who, void *addr, int end, __u64 off, __u64 id)
+int block_debug_check(char *who, void *addr, int end, u64 off, u64 id)
 {
-	__u64 ne_off;
+	u64 ne_off;
 	int err = 0;
 
 	LASSERT(addr);
@@ -69,24 +69,24 @@ int block_debug_check(char *who, void *addr, int end, __u64 off, __u64 id)
 	id = le64_to_cpu(id);
 	if (memcmp(addr, (char *)&ne_off, LPDS)) {
 		CDEBUG(D_ERROR, "%s: id %#llx offset %llu off: %#llx != %#llx\n",
-		       who, id, off, *(__u64 *)addr, ne_off);
+		       who, id, off, *(u64 *)addr, ne_off);
 		err = -EINVAL;
 	}
 	if (memcmp(addr + LPDS, (char *)&id, LPDS)) {
 		CDEBUG(D_ERROR, "%s: id %#llx offset %llu id: %#llx != %#llx\n",
-		       who, id, off, *(__u64 *)(addr + LPDS), id);
+		       who, id, off, *(u64 *)(addr + LPDS), id);
 		err = -EINVAL;
 	}
 
 	addr += end - LPDS - LPDS;
 	if (memcmp(addr, (char *)&ne_off, LPDS)) {
 		CDEBUG(D_ERROR, "%s: id %#llx offset %llu end off: %#llx != %#llx\n",
-		       who, id, off, *(__u64 *)addr, ne_off);
+		       who, id, off, *(u64 *)addr, ne_off);
 		err = -EINVAL;
 	}
 	if (memcmp(addr + LPDS, (char *)&id, LPDS)) {
 		CDEBUG(D_ERROR, "%s: id %#llx offset %llu end id: %#llx != %#llx\n",
-		       who, id, off, *(__u64 *)(addr + LPDS), id);
+		       who, id, off, *(u64 *)(addr + LPDS), id);
 		err = -EINVAL;
 	}
 
diff --git a/drivers/staging/lustre/lustre/obdclass/genops.c b/drivers/staging/lustre/lustre/obdclass/genops.c
index 03df181..3d4d6e1 100644
--- a/drivers/staging/lustre/lustre/obdclass/genops.c
+++ b/drivers/staging/lustre/lustre/obdclass/genops.c
@@ -1366,17 +1366,17 @@ void obd_put_request_slot(struct client_obd *cli)
 }
 EXPORT_SYMBOL(obd_put_request_slot);
 
-__u32 obd_get_max_rpcs_in_flight(struct client_obd *cli)
+u32 obd_get_max_rpcs_in_flight(struct client_obd *cli)
 {
 	return cli->cl_max_rpcs_in_flight;
 }
 EXPORT_SYMBOL(obd_get_max_rpcs_in_flight);
 
-int obd_set_max_rpcs_in_flight(struct client_obd *cli, __u32 max)
+int obd_set_max_rpcs_in_flight(struct client_obd *cli, u32 max)
 {
 	struct obd_request_slot_waiter *orsw;
 	const char *typ_name;
-	__u32 old;
+	u32 old;
 	int diff;
 	int rc;
 	int i;
@@ -1424,7 +1424,7 @@ int obd_set_max_rpcs_in_flight(struct client_obd *cli, __u32 max)
 }
 EXPORT_SYMBOL(obd_set_max_rpcs_in_flight);
 
-int obd_set_max_mod_rpcs_in_flight(struct client_obd *cli, __u16 max)
+int obd_set_max_mod_rpcs_in_flight(struct client_obd *cli, u16 max)
 {
 	struct obd_connect_data *ocd;
 	u16 maxmodrpcs;
@@ -1564,7 +1564,7 @@ static inline bool obd_skip_mod_rpc_slot(const struct lookup_intent *it)
  * Returns the tag to be set in the request message. Tag 0
  * is reserved for non-modifying requests.
  */
-u16 obd_get_mod_rpc_slot(struct client_obd *cli, __u32 opc,
+u16 obd_get_mod_rpc_slot(struct client_obd *cli, u32 opc,
 			 struct lookup_intent *it)
 {
 	bool close_req = false;
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_swab.c b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
index f18330f..fddc1ea 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_swab.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
@@ -358,12 +358,12 @@ void lustre_swab_lustre_cfg(struct lustre_cfg *lcfg)
 
 /* used only for compatibility with old on-disk cfg_marker data */
 struct cfg_marker32 {
-	__u32   cm_step;
-	__u32   cm_flags;
-	__u32   cm_vers;
-	__u32   padding;
-	__u32   cm_createtime;
-	__u32   cm_canceltime;
+	u32   cm_step;
+	u32   cm_flags;
+	u32   cm_vers;
+	u32   padding;
+	u32   cm_createtime;
+	u32   cm_canceltime;
 	char    cm_tgtname[MTI_NAME_MAXLEN];
 	char    cm_comment[MTI_NAME_MAXLEN];
 };
@@ -381,7 +381,7 @@ void lustre_swab_cfg_marker(struct cfg_marker *marker, int swab, int size)
 		__swab32s(&marker->cm_vers);
 	}
 	if (size == sizeof(*cm32)) {
-		__u32 createtime, canceltime;
+		u32 createtime, canceltime;
 		/* There was a problem with the original declaration of
 		 * cfg_marker on 32-bit systems because it used time_t as
 		 * a wire protocol structure, and didn't verify this in
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
index 85f09af..77bc66f 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
@@ -82,7 +82,7 @@ void lprocfs_counter_add(struct lprocfs_stats *stats, int idx, long amount)
 			percpu_cntr->lc_sum += amount;
 
 		if (header->lc_config & LPROCFS_CNTR_STDDEV)
-			percpu_cntr->lc_sumsquare += (__s64)amount * amount;
+			percpu_cntr->lc_sumsquare += (s64)amount * amount;
 		if (amount < percpu_cntr->lc_min)
 			percpu_cntr->lc_min = amount;
 		if (amount > percpu_cntr->lc_max)
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
index acfea7a..cc70402 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
@@ -118,7 +118,7 @@
 int obd_connect_flags2str(char *page, int count, u64 flags, u64 flags2,
 			  const char *sep)
 {
-	__u64 mask;
+	u64 mask;
 	int i, ret = 0;
 
 	BUILD_BUG_ON(ARRAY_SIZE(obd_connect_names) < 65);
@@ -385,8 +385,8 @@ static ssize_t kbytestotal_show(struct kobject *kobj, struct attribute *attr,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
 	if (!rc) {
-		__u32 blk_size = osfs.os_bsize >> 10;
-		__u64 result = osfs.os_blocks;
+		u32 blk_size = osfs.os_bsize >> 10;
+		u64 result = osfs.os_blocks;
 
 		while (blk_size >>= 1)
 			result <<= 1;
@@ -408,8 +408,8 @@ static ssize_t kbytesfree_show(struct kobject *kobj, struct attribute *attr,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
 	if (!rc) {
-		__u32 blk_size = osfs.os_bsize >> 10;
-		__u64 result = osfs.os_bfree;
+		u32 blk_size = osfs.os_bsize >> 10;
+		u64 result = osfs.os_bfree;
 
 		while (blk_size >>= 1)
 			result <<= 1;
@@ -431,8 +431,8 @@ static ssize_t kbytesavail_show(struct kobject *kobj, struct attribute *attr,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
 	if (!rc) {
-		__u32 blk_size = osfs.os_bsize >> 10;
-		__u64 result = osfs.os_bavail;
+		u32 blk_size = osfs.os_bsize >> 10;
+		u64 result = osfs.os_bavail;
 
 		while (blk_size >>= 1)
 			result <<= 1;
@@ -702,7 +702,7 @@ static int obd_import_flags2str(struct obd_import *imp, struct seq_file *m)
 static void obd_connect_seq_flags2str(struct seq_file *m, u64 flags,
 				      u64 flags2, const char *sep)
 {
-	__u64 mask;
+	u64 mask;
 	int i;
 	bool first = true;
 
@@ -813,8 +813,8 @@ int lprocfs_rd_import(struct seq_file *m, void *data)
 	header = &obd->obd_svc_stats->ls_cnt_header[PTLRPC_REQWAIT_CNTR];
 	lprocfs_stats_collect(obd->obd_svc_stats, PTLRPC_REQWAIT_CNTR, &ret);
 	if (ret.lc_count != 0) {
-		/* first argument to do_div MUST be __u64 */
-		__u64 sum = ret.lc_sum;
+		/* first argument to do_div MUST be u64 */
+		u64 sum = ret.lc_sum;
 
 		do_div(sum, ret.lc_count);
 		ret.lc_sum = sum;
@@ -861,8 +861,8 @@ int lprocfs_rd_import(struct seq_file *m, void *data)
 				      PTLRPC_LAST_CNTR + BRW_READ_BYTES + rw,
 				      &ret);
 		if (ret.lc_sum > 0 && ret.lc_count > 0) {
-			/* first argument to do_div MUST be __u64 */
-			__u64 sum = ret.lc_sum;
+			/* first argument to do_div MUST be u64 */
+			u64 sum = ret.lc_sum;
 
 			do_div(sum, ret.lc_count);
 			ret.lc_sum = sum;
@@ -877,8 +877,8 @@ int lprocfs_rd_import(struct seq_file *m, void *data)
 		header = &obd->obd_svc_stats->ls_cnt_header[j];
 		lprocfs_stats_collect(obd->obd_svc_stats, j, &ret);
 		if (ret.lc_sum > 0 && ret.lc_count != 0) {
-			/* first argument to do_div MUST be __u64 */
-			__u64 sum = ret.lc_sum;
+			/* first argument to do_div MUST be u64 */
+			u64 sum = ret.lc_sum;
 
 			do_div(sum, ret.lc_count);
 			ret.lc_sum = sum;
@@ -994,7 +994,7 @@ int lprocfs_rd_timeouts(struct seq_file *m, void *data)
 int lprocfs_rd_connect_flags(struct seq_file *m, void *data)
 {
 	struct obd_device *obd = data;
-	__u64 flags, flags2;
+	u64 flags, flags2;
 	int rc;
 
 	rc = lprocfs_climp_check(obd);
@@ -1217,13 +1217,13 @@ void lprocfs_free_stats(struct lprocfs_stats **statsh)
 }
 EXPORT_SYMBOL(lprocfs_free_stats);
 
-__u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
+u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
 			      enum lprocfs_fields_flags field)
 {
 	unsigned int i;
 	unsigned int  num_cpu;
 	unsigned long flags     = 0;
-	__u64         ret       = 0;
+	u64         ret       = 0;
 
 	LASSERT(stats);
 
@@ -1471,12 +1471,12 @@ void ldebugfs_free_md_stats(struct obd_device *obd)
 }
 EXPORT_SYMBOL(ldebugfs_free_md_stats);
 
-__s64 lprocfs_read_helper(struct lprocfs_counter *lc,
+s64 lprocfs_read_helper(struct lprocfs_counter *lc,
 			  struct lprocfs_counter_header *header,
 			  enum lprocfs_stats_flags flags,
 			  enum lprocfs_fields_flags field)
 {
-	__s64 ret = 0;
+	s64 ret = 0;
 
 	if (!lc || !header)
 		return 0;
@@ -1521,17 +1521,17 @@ int lprocfs_write_helper(const char __user *buffer, unsigned long count,
 EXPORT_SYMBOL(lprocfs_write_helper);
 
 int lprocfs_write_u64_helper(const char __user *buffer, unsigned long count,
-			     __u64 *val)
+			     u64 *val)
 {
 	return lprocfs_write_frac_u64_helper(buffer, count, val, 1);
 }
 EXPORT_SYMBOL(lprocfs_write_u64_helper);
 
 int lprocfs_write_frac_u64_helper(const char __user *buffer,
-				  unsigned long count, __u64 *val, int mult)
+				  unsigned long count, u64 *val, int mult)
 {
 	char kernbuf[22], *end, *pbuf;
-	__u64 whole, frac = 0, units;
+	u64 whole, frac = 0, units;
 	unsigned int frac_d = 1;
 	int sign = 1;
 
@@ -1557,7 +1557,7 @@ int lprocfs_write_frac_u64_helper(const char __user *buffer,
 
 		pbuf = end + 1;
 
-		/* need to limit frac_d to a __u32 */
+		/* need to limit frac_d to a u32 */
 		if (strlen(pbuf) > 10)
 			pbuf[10] = '\0';
 
diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
index c513e55..a132d87 100644
--- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
@@ -106,7 +106,7 @@ enum {
 MODULE_PARM_DESC(lu_cache_nr, "Maximum number of objects in lu_object cache");
 
 static void lu_object_free(const struct lu_env *env, struct lu_object *o);
-static __u32 ls_stats_read(struct lprocfs_stats *stats, int idx);
+static u32 ls_stats_read(struct lprocfs_stats *stats, int idx);
 
 wait_queue_head_t *
 lu_site_wq_from_fid(struct lu_site *site, struct lu_fid *fid)
@@ -590,7 +590,7 @@ void lu_object_print(const struct lu_env *env, void *cookie,
 static struct lu_object *htable_lookup(struct lu_site *s,
 				       struct cfs_hash_bd *bd,
 				       const struct lu_fid *f,
-				       __u64 *version)
+				       u64 *version)
 {
 	struct lu_site_bkt_data *bkt;
 	struct lu_object_header *h;
@@ -643,18 +643,18 @@ static struct lu_object *lu_object_find(const struct lu_env *env,
  */
 static void lu_object_limit(const struct lu_env *env, struct lu_device *dev)
 {
-	__u64 size, nr;
+	u64 size, nr;
 
 	if (lu_cache_nr == LU_CACHE_NR_UNLIMITED)
 		return;
 
 	size = cfs_hash_size_get(dev->ld_site->ls_obj_hash);
-	nr = (__u64)lu_cache_nr;
+	nr = (u64)lu_cache_nr;
 	if (size <= nr)
 		return;
 
 	lu_site_purge_objects(env, dev->ld_site,
-			      min_t(__u64, size - nr, LU_CACHE_NR_MAX_ADJUST),
+			      min_t(u64, size - nr, LU_CACHE_NR_MAX_ADJUST),
 			      false);
 }
 
@@ -675,7 +675,7 @@ struct lu_object *lu_object_find_at(const struct lu_env *env,
 	struct lu_site	*s;
 	struct cfs_hash	    *hs;
 	struct cfs_hash_bd	  bd;
-	__u64		  version = 0;
+	u64		  version = 0;
 
 	/*
 	 * This uses standard index maintenance protocol:
@@ -884,7 +884,7 @@ static unsigned int lu_obj_hop_hash(struct cfs_hash *hs,
 				    const void *key, unsigned int mask)
 {
 	struct lu_fid  *fid = (struct lu_fid *)key;
-	__u32	   hash;
+	u32	   hash;
 
 	hash = fid_flatten32(fid);
 	hash += (hash >> 4) + (hash << 12); /* mixing oid and seq */
@@ -1593,7 +1593,7 @@ static int keys_init(struct lu_context *ctx)
 /**
  * Initialize context data-structure. Create values for all keys.
  */
-int lu_context_init(struct lu_context *ctx, __u32 tags)
+int lu_context_init(struct lu_context *ctx, u32 tags)
 {
 	int	rc;
 
@@ -1705,10 +1705,10 @@ int lu_context_refill(struct lu_context *ctx)
  * predefined when the lu_device type are registered, during the module probe
  * phase.
  */
-__u32 lu_context_tags_default;
-__u32 lu_session_tags_default;
+u32 lu_context_tags_default;
+u32 lu_session_tags_default;
 
-int lu_env_init(struct lu_env *env, __u32 tags)
+int lu_env_init(struct lu_env *env, u32 tags)
 {
 	int result;
 
@@ -1939,12 +1939,12 @@ void lu_global_fini(void)
 	lu_ref_global_fini();
 }
 
-static __u32 ls_stats_read(struct lprocfs_stats *stats, int idx)
+static u32 ls_stats_read(struct lprocfs_stats *stats, int idx)
 {
 	struct lprocfs_counter ret;
 
 	lprocfs_stats_collect(stats, idx, &ret);
-	return (__u32)ret.lc_count;
+	return (u32)ret.lc_count;
 }
 
 /**
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
index cdc8dc1..b296877 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
@@ -42,7 +42,7 @@
 #include <lustre_handles.h>
 #include <lustre_lib.h>
 
-static __u64 handle_base;
+static u64 handle_base;
 #define HANDLE_INCR 7
 static spinlock_t handle_base_lock;
 
@@ -132,7 +132,7 @@ void class_handle_unhash(struct portals_handle *h)
 }
 EXPORT_SYMBOL(class_handle_unhash);
 
-void *class_handle2object(__u64 cookie, const void *owner)
+void *class_handle2object(u64 cookie, const void *owner)
 {
 	struct handle_bucket *bucket;
 	struct portals_handle *h;
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
index 5705b0a..8e7f3a8 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
@@ -80,7 +80,7 @@ int lustre_uuid_to_peer(const char *uuid, lnet_nid_t *peer_nid, int index)
 /* Add a nid to a niduuid.  Multiple nids can be added to a single uuid;
  * LNET will choose the best one.
  */
-int class_add_uuid(const char *uuid, __u64 nid)
+int class_add_uuid(const char *uuid, u64 nid)
 {
 	struct uuid_nid_data *data, *entry;
 	int found = 0;
@@ -173,7 +173,7 @@ int class_del_uuid(const char *uuid)
 }
 
 /* check if @nid exists in nid list of @uuid */
-int class_check_uuid(struct obd_uuid *uuid, __u64 nid)
+int class_check_uuid(struct obd_uuid *uuid, u64 nid)
 {
 	struct uuid_nid_data *entry;
 	int found = 0;
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_config.c b/drivers/staging/lustre/lustre/obdclass/obd_config.c
index 7d00ca4..887afda 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_config.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_config.c
@@ -168,7 +168,7 @@ static int parse_nid(char *buf, void *value, int quiet)
 
 static int parse_net(char *buf, void *value)
 {
-	__u32 *net = value;
+	u32 *net = value;
 
 	*net = libcfs_str2net(buf);
 	CDEBUG(D_INFO, "Net %s\n", libcfs_net2str(*net));
@@ -1415,7 +1415,7 @@ int class_config_llog_handler(const struct lu_env *env,
 		 */
 		if (lcfg->lcfg_nal != 0 &&      /* pre-newconfig log? */
 		    (lcfg->lcfg_nid >> 32) == 0) {
-			__u32 addr = (__u32)(lcfg->lcfg_nid & 0xffffffff);
+			u32 addr = (u32)(lcfg->lcfg_nid & 0xffffffff);
 
 			lcfg_new->lcfg_nid =
 				LNET_MKNID(LNET_MKNET(lcfg->lcfg_nal, 0), addr);
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_mount.c b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
index db5e1b5..eab3216 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_mount.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
@@ -269,7 +269,7 @@ int lustre_start_mgc(struct super_block *sb)
 		if (lmd_is_client(lsi->lsi_lmd)) {
 			int has_ir;
 			int vallen = sizeof(*data);
-			__u32 *flags = &lsi->lsi_lmd->lmd_flags;
+			u32 *flags = &lsi->lsi_lmd->lmd_flags;
 
 			rc = obd_get_info(NULL, obd->obd_self_export,
 					  strlen(KEY_CONN_DATA), KEY_CONN_DATA,
@@ -621,7 +621,7 @@ static int server_name2fsname(const char *svname, char *fsname,
  * rc < 0  on error
  * if endptr isn't NULL it is set to end of name
  */
-static int server_name2index(const char *svname, __u32 *idx,
+static int server_name2index(const char *svname, u32 *idx,
 			     const char **endptr)
 {
 	unsigned long index;
@@ -721,7 +721,7 @@ int lustre_check_exclusion(struct super_block *sb, char *svname)
 {
 	struct lustre_sb_info *lsi = s2lsi(sb);
 	struct lustre_mount_data *lmd = lsi->lsi_lmd;
-	__u32 index;
+	u32 index;
 	int i, rc;
 
 	rc = server_name2index(svname, &index, NULL);
@@ -745,7 +745,7 @@ int lustre_check_exclusion(struct super_block *sb, char *svname)
 static int lmd_make_exclusion(struct lustre_mount_data *lmd, const char *ptr)
 {
 	const char *s1 = ptr, *s2;
-	__u32 index = 0, *exclude_list;
+	u32 index = 0, *exclude_list;
 	int rc = 0, devmax;
 
 	/* The shortest an ost name can be is 8 chars: -OST0000.
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 08/26] lustre: convert remaining code to kernel types
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (6 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 07/26] obdclass: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 09/26] lustre: cleanup white spaces in fid and fld layer James Simmons
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

Convert the remaining Lustre kernel code to use the proper kernel
types.
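
As a reference point, a minimal sketch of what this conversion amounts to
(the struct and function below are made up for illustration and are not
part of this patch): kernel-internal code uses the short typedefs from
<linux/types.h>, while the double-underscore __uXX/__sXX spellings stay
reserved for the UAPI headers shared with user space.

  #include <linux/types.h>

  /* hypothetical example, only to show the type mapping */
  struct example_stats {
  	u64	es_count;	/* was __u64 */
  	u32	es_flags;	/* was __u32 */
  	s64	es_delta;	/* was __s64 */
  };

  /* the arithmetic is unchanged; only the spelling of the type differs */
  static u32 example_cookie_hash(u64 cookie)
  {
  	return (u32)(cookie ^ (cookie >> 32));
  }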

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/fid/fid_lib.c        |  2 +-
 drivers/staging/lustre/lustre/fid/fid_request.c    |  4 +-
 drivers/staging/lustre/lustre/fid/lproc_fid.c      |  4 +-
 drivers/staging/lustre/lustre/fld/fld_cache.c      |  4 +-
 drivers/staging/lustre/lustre/fld/fld_internal.h   | 12 ++--
 drivers/staging/lustre/lustre/fld/fld_request.c    |  8 +--
 drivers/staging/lustre/lustre/llite/dir.c          | 30 ++++-----
 drivers/staging/lustre/lustre/llite/file.c         | 46 +++++++-------
 drivers/staging/lustre/lustre/llite/lcommon_misc.c |  4 +-
 .../staging/lustre/lustre/llite/llite_internal.h   | 74 +++++++++++-----------
 drivers/staging/lustre/lustre/llite/llite_lib.c    | 12 ++--
 drivers/staging/lustre/lustre/llite/llite_mmap.c   |  2 +-
 drivers/staging/lustre/lustre/llite/llite_nfs.c    | 12 ++--
 drivers/staging/lustre/lustre/llite/lproc_llite.c  |  6 +-
 drivers/staging/lustre/lustre/llite/namei.c        | 16 ++---
 drivers/staging/lustre/lustre/llite/range_lock.c   |  4 +-
 drivers/staging/lustre/lustre/llite/range_lock.h   | 10 +--
 drivers/staging/lustre/lustre/llite/rw.c           | 10 +--
 drivers/staging/lustre/lustre/llite/statahead.c    | 26 ++++----
 drivers/staging/lustre/lustre/llite/vvp_internal.h |  2 +-
 drivers/staging/lustre/lustre/llite/vvp_io.c       | 12 ++--
 drivers/staging/lustre/lustre/llite/xattr_cache.c  |  6 +-
 drivers/staging/lustre/lustre/lmv/lmv_intent.c     |  8 +--
 drivers/staging/lustre/lustre/lmv/lmv_internal.h   |  2 +-
 drivers/staging/lustre/lustre/lmv/lmv_obd.c        | 50 +++++++--------
 .../staging/lustre/lustre/lov/lov_cl_internal.h    |  4 +-
 drivers/staging/lustre/lustre/lov/lov_dev.c        | 10 +--
 drivers/staging/lustre/lustre/lov/lov_internal.h   | 20 +++---
 drivers/staging/lustre/lustre/lov/lov_merge.c      |  8 +--
 drivers/staging/lustre/lustre/lov/lov_obd.c        | 46 +++++++-------
 drivers/staging/lustre/lustre/lov/lov_pack.c       |  2 +-
 drivers/staging/lustre/lustre/lov/lov_pool.c       |  4 +-
 drivers/staging/lustre/lustre/lov/lov_request.c    |  4 +-
 drivers/staging/lustre/lustre/mdc/mdc_internal.h   | 24 +++----
 drivers/staging/lustre/lustre/mdc/mdc_lib.c        | 22 +++----
 drivers/staging/lustre/lustre/mdc/mdc_locks.c      | 20 +++---
 drivers/staging/lustre/lustre/mdc/mdc_reint.c      |  6 +-
 drivers/staging/lustre/lustre/mdc/mdc_request.c    | 42 ++++++------
 drivers/staging/lustre/lustre/mgc/mgc_request.c    | 16 ++---
 .../staging/lustre/lustre/obdecho/echo_client.c    |  6 +-
 .../staging/lustre/lustre/obdecho/echo_internal.h  |  2 +-
 drivers/staging/lustre/lustre/osc/osc_cache.c      | 10 +--
 .../staging/lustre/lustre/osc/osc_cl_internal.h    |  2 +-
 drivers/staging/lustre/lustre/osc/osc_internal.h   |  6 +-
 drivers/staging/lustre/lustre/osc/osc_io.c         | 10 +--
 drivers/staging/lustre/lustre/osc/osc_lock.c       | 12 ++--
 drivers/staging/lustre/lustre/osc/osc_object.c     |  2 +-
 drivers/staging/lustre/lustre/osc/osc_request.c    | 38 +++++------
 48 files changed, 341 insertions(+), 341 deletions(-)

diff --git a/drivers/staging/lustre/lustre/fid/fid_lib.c b/drivers/staging/lustre/lustre/fid/fid_lib.c
index ac52b37..6b06847 100644
--- a/drivers/staging/lustre/lustre/fid/fid_lib.c
+++ b/drivers/staging/lustre/lustre/fid/fid_lib.c
@@ -60,7 +60,7 @@
  */
 const struct lu_seq_range LUSTRE_SEQ_SPACE_RANGE = {
 	.lsr_start	= FID_SEQ_NORMAL,
-	.lsr_end	= (__u64)~0ULL,
+	.lsr_end	= (u64)~0ULL,
 };
 
 /* Zero range, used for init and other purposes. */
diff --git a/drivers/staging/lustre/lustre/fid/fid_request.c b/drivers/staging/lustre/lustre/fid/fid_request.c
index f91242c..3f79f22 100644
--- a/drivers/staging/lustre/lustre/fid/fid_request.c
+++ b/drivers/staging/lustre/lustre/fid/fid_request.c
@@ -52,13 +52,13 @@
 static struct dentry *seq_debugfs_dir;
 
 static int seq_client_rpc(struct lu_client_seq *seq,
-			  struct lu_seq_range *output, __u32 opc,
+			  struct lu_seq_range *output, u32 opc,
 			  const char *opcname)
 {
 	struct obd_export     *exp = seq->lcs_exp;
 	struct ptlrpc_request *req;
 	struct lu_seq_range   *out, *in;
-	__u32                 *op;
+	u32                 *op;
 	unsigned int           debug_mask;
 	int                    rc;
 
diff --git a/drivers/staging/lustre/lustre/fid/lproc_fid.c b/drivers/staging/lustre/lustre/fid/lproc_fid.c
index aa2df68..d583778 100644
--- a/drivers/staging/lustre/lustre/fid/lproc_fid.c
+++ b/drivers/staging/lustre/lustre/fid/lproc_fid.c
@@ -50,7 +50,7 @@
 #include "fid_internal.h"
 
 /* Format: [0x64BIT_INT - 0x64BIT_INT] + 32 bytes just in case */
-#define MAX_FID_RANGE_STRLEN (32 + 2 * 2 * sizeof(__u64))
+#define MAX_FID_RANGE_STRLEN (32 + 2 * 2 * sizeof(u64))
 /*
  * Note: this function is only used for testing; it is not safe for production
  * use.
@@ -143,7 +143,7 @@
 			     size_t count, loff_t *off)
 {
 	struct lu_client_seq *seq;
-	__u64  max;
+	u64  max;
 	int rc, val;
 
 	seq = ((struct seq_file *)file->private_data)->private;
diff --git a/drivers/staging/lustre/lustre/fld/fld_cache.c b/drivers/staging/lustre/lustre/fld/fld_cache.c
index a7415c9..749d33b 100644
--- a/drivers/staging/lustre/lustre/fld/fld_cache.c
+++ b/drivers/staging/lustre/lustre/fld/fld_cache.c
@@ -94,7 +94,7 @@ struct fld_cache *fld_cache_init(const char *name,
  */
 void fld_cache_fini(struct fld_cache *cache)
 {
-	__u64 pct;
+	u64 pct;
 
 	LASSERT(cache);
 	fld_cache_flush(cache);
@@ -383,7 +383,7 @@ static int fld_cache_insert_nolock(struct fld_cache *cache,
 	struct list_head *prev = NULL;
 	const u64 new_start  = f_new->fce_range.lsr_start;
 	const u64 new_end  = f_new->fce_range.lsr_end;
-	__u32 new_flags  = f_new->fce_range.lsr_flags;
+	u32 new_flags  = f_new->fce_range.lsr_flags;
 
 	/*
 	 * Duplicate entries are eliminated in insert op.
diff --git a/drivers/staging/lustre/lustre/fld/fld_internal.h b/drivers/staging/lustre/lustre/fld/fld_internal.h
index e1d6aaa..66a0fb6 100644
--- a/drivers/staging/lustre/lustre/fld/fld_internal.h
+++ b/drivers/staging/lustre/lustre/fld/fld_internal.h
@@ -63,15 +63,15 @@
 #include <lustre_fld.h>
 
 struct fld_stats {
-	__u64   fst_count;
-	__u64   fst_cache;
-	__u64   fst_inflight;
+	u64   fst_count;
+	u64   fst_cache;
+	u64   fst_inflight;
 };
 
 struct lu_fld_hash {
 	const char	      *fh_name;
-	int (*fh_hash_func)(struct lu_client_fld *, __u64);
-	struct lu_fld_target *(*fh_scan_func)(struct lu_client_fld *, __u64);
+	int (*fh_hash_func)(struct lu_client_fld *, u64);
+	struct lu_fld_target *(*fh_scan_func)(struct lu_client_fld *, u64);
 };
 
 struct fld_cache_entry {
@@ -130,7 +130,7 @@ enum {
 extern struct lu_fld_hash fld_hash[];
 
 int fld_client_rpc(struct obd_export *exp,
-		   struct lu_seq_range *range, __u32 fld_op,
+		   struct lu_seq_range *range, u32 fld_op,
 		   struct ptlrpc_request **reqp);
 
 extern struct lprocfs_vars fld_client_debugfs_list[];
diff --git a/drivers/staging/lustre/lustre/fld/fld_request.c b/drivers/staging/lustre/lustre/fld/fld_request.c
index 7b0365b..8a915b9 100644
--- a/drivers/staging/lustre/lustre/fld/fld_request.c
+++ b/drivers/staging/lustre/lustre/fld/fld_request.c
@@ -193,7 +193,7 @@ int fld_client_add_target(struct lu_client_fld *fld,
 EXPORT_SYMBOL(fld_client_add_target);
 
 /* Remove export from FLD */
-int fld_client_del_target(struct lu_client_fld *fld, __u64 idx)
+int fld_client_del_target(struct lu_client_fld *fld, u64 idx)
 {
 	struct lu_fld_target *target, *tmp;
 
@@ -303,12 +303,12 @@ void fld_client_fini(struct lu_client_fld *fld)
 EXPORT_SYMBOL(fld_client_fini);
 
 int fld_client_rpc(struct obd_export *exp,
-		   struct lu_seq_range *range, __u32 fld_op,
+		   struct lu_seq_range *range, u32 fld_op,
 		   struct ptlrpc_request **reqp)
 {
 	struct ptlrpc_request *req = NULL;
 	struct lu_seq_range   *prange;
-	__u32		 *op;
+	u32		 *op;
 	int		    rc = 0;
 	struct obd_import     *imp;
 
@@ -383,7 +383,7 @@ int fld_client_rpc(struct obd_export *exp,
 }
 
 int fld_client_lookup(struct lu_client_fld *fld, u64 seq, u32 *mds,
-		      __u32 flags, const struct lu_env *env)
+		      u32 flags, const struct lu_env *env)
 {
 	struct lu_seq_range res = { 0 };
 	struct lu_fld_target *target;
diff --git a/drivers/staging/lustre/lustre/llite/dir.c b/drivers/staging/lustre/lustre/llite/dir.c
index 2459f5c..4520344 100644
--- a/drivers/staging/lustre/lustre/llite/dir.c
+++ b/drivers/staging/lustre/lustre/llite/dir.c
@@ -138,7 +138,7 @@
  *
  */
 struct page *ll_get_dir_page(struct inode *dir, struct md_op_data *op_data,
-			     __u64 offset)
+			     u64 offset)
 {
 	struct md_callback cb_op;
 	struct page *page;
@@ -180,9 +180,9 @@ void ll_release_page(struct inode *inode, struct page *page, bool remove)
  * IF_* flag should be converted to particular OS file type in
  * platform llite module.
  */
-static __u16 ll_dirent_type_get(struct lu_dirent *ent)
+static u16 ll_dirent_type_get(struct lu_dirent *ent)
 {
-	__u16 type = 0;
+	u16 type = 0;
 	struct luda_type *lt;
 	int len = 0;
 
@@ -197,11 +197,11 @@ static __u16 ll_dirent_type_get(struct lu_dirent *ent)
 	return type;
 }
 
-int ll_dir_read(struct inode *inode, __u64 *ppos, struct md_op_data *op_data,
+int ll_dir_read(struct inode *inode, u64 *ppos, struct md_op_data *op_data,
 		struct dir_context *ctx)
 {
 	struct ll_sb_info    *sbi	= ll_i2sbi(inode);
-	__u64		   pos		= *ppos;
+	u64		   pos		= *ppos;
 	bool is_api32 = ll_need_32bit_api(sbi);
 	int		   is_hash64 = sbi->ll_flags & LL_SBI_64BIT_HASH;
 	struct page	  *page;
@@ -213,8 +213,8 @@ int ll_dir_read(struct inode *inode, __u64 *ppos, struct md_op_data *op_data,
 	while (rc == 0 && !done) {
 		struct lu_dirpage *dp;
 		struct lu_dirent  *ent;
-		__u64 hash;
-		__u64 next;
+		u64 hash;
+		u64 next;
 
 		if (IS_ERR(page)) {
 			rc = PTR_ERR(page);
@@ -225,11 +225,11 @@ int ll_dir_read(struct inode *inode, __u64 *ppos, struct md_op_data *op_data,
 		dp = page_address(page);
 		for (ent = lu_dirent_start(dp); ent && !done;
 		     ent = lu_dirent_next(ent)) {
-			__u16	  type;
+			u16	  type;
 			int	    namelen;
 			struct lu_fid  fid;
-			__u64	  lhash;
-			__u64	  ino;
+			u64	  lhash;
+			u64	  ino;
 
 			hash = le64_to_cpu(ent->lde_hash);
 			if (hash < pos)
@@ -294,7 +294,7 @@ static int ll_readdir(struct file *filp, struct dir_context *ctx)
 	struct inode		*inode	= file_inode(filp);
 	struct ll_file_data	*lfd	= LUSTRE_FPRIVATE(filp);
 	struct ll_sb_info	*sbi	= ll_i2sbi(inode);
-	__u64 pos = lfd ? lfd->lfd_pos : 0;
+	u64 pos = lfd ? lfd->lfd_pos : 0;
 	int			hash64	= sbi->ll_flags & LL_SBI_64BIT_HASH;
 	bool api32 = ll_need_32bit_api(sbi);
 	struct md_op_data *op_data;
@@ -327,7 +327,7 @@ static int ll_readdir(struct file *filp, struct dir_context *ctx)
 		 */
 		if (file_dentry(filp)->d_parent &&
 		    file_dentry(filp)->d_parent->d_inode) {
-			__u64 ibits = MDS_INODELOCK_UPDATE;
+			u64 ibits = MDS_INODELOCK_UPDATE;
 			struct inode *parent;
 
 			parent = file_dentry(filp)->d_parent->d_inode;
@@ -760,7 +760,7 @@ static int ll_ioc_copy_start(struct super_block *sb, struct hsm_copy *copy)
 	/* For archive request, we need to read the current file version. */
 	if (copy->hc_hai.hai_action == HSMA_ARCHIVE) {
 		struct inode	*inode;
-		__u64		 data_version = 0;
+		u64		 data_version = 0;
 
 		/* Get inode for this fid */
 		inode = search_inode_for_lustre(sb, &copy->hc_hai.hai_fid);
@@ -845,7 +845,7 @@ static int ll_ioc_copy_end(struct super_block *sb, struct hsm_copy *copy)
 	     (copy->hc_hai.hai_action == HSMA_RESTORE)) &&
 	    (copy->hc_errval == 0)) {
 		struct inode	*inode;
-		__u64		 data_version = 0;
+		u64		 data_version = 0;
 
 		/* Get lsm for this fid */
 		inode = search_inode_for_lustre(sb, &copy->hc_hai.hai_fid);
@@ -1522,7 +1522,7 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case LL_IOC_FID2MDTIDX: {
 		struct obd_export *exp = ll_i2mdexp(inode);
 		struct lu_fid fid;
-		__u32 index;
+		u32 index;
 
 		if (copy_from_user(&fid, (const struct lu_fid __user *)arg,
 				   sizeof(fid)))
diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index 94574b7..f71e273 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -158,7 +158,7 @@ static int ll_close_inode_openhandle(struct inode *inode,
 	case MDS_HSM_RELEASE:
 		LASSERT(data);
 		op_data->op_bias |= MDS_HSM_RELEASE;
-		op_data->op_data_version = *(__u64 *)data;
+		op_data->op_data_version = *(u64 *)data;
 		op_data->op_lease_handle = och->och_lease_handle;
 		op_data->op_attr.ia_valid |= ATTR_SIZE;
 		op_data->op_xvalid |= OP_XVALID_BLOCKS;
@@ -200,7 +200,7 @@ int ll_md_real_close(struct inode *inode, fmode_t fmode)
 	struct ll_inode_info *lli = ll_i2info(inode);
 	struct obd_client_handle **och_p;
 	struct obd_client_handle *och;
-	__u64 *och_usecount;
+	u64 *och_usecount;
 	int rc = 0;
 
 	if (fmode & FMODE_WRITE) {
@@ -243,7 +243,7 @@ static int ll_md_close(struct inode *inode, struct file *file)
 	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
 	struct ll_inode_info *lli = ll_i2info(inode);
 	int lockmode;
-	__u64 flags = LDLM_FL_BLOCK_GRANTED | LDLM_FL_TEST_LOCK;
+	u64 flags = LDLM_FL_BLOCK_GRANTED | LDLM_FL_TEST_LOCK;
 	struct lustre_handle lockh;
 	union ldlm_policy_data policy = {
 		.l_inodebits = { MDS_INODELOCK_OPEN }
@@ -491,7 +491,7 @@ int ll_file_open(struct inode *inode, struct file *file)
 	struct lookup_intent *it, oit = { .it_op = IT_OPEN,
 					  .it_flags = file->f_flags };
 	struct obd_client_handle **och_p = NULL;
-	__u64 *och_usecount = NULL;
+	u64 *och_usecount = NULL;
 	struct ll_file_data *fd;
 	int rc = 0;
 
@@ -813,7 +813,7 @@ static int ll_lease_och_release(struct inode *inode, struct file *file)
  */
 static struct obd_client_handle *
 ll_lease_open(struct inode *inode, struct file *file, fmode_t fmode,
-	      __u64 open_flags)
+	      u64 open_flags)
 {
 	struct lookup_intent it = { .it_op = IT_OPEN };
 	struct ll_sb_info *sbi = ll_i2sbi(inode);
@@ -1366,7 +1366,7 @@ static ssize_t ll_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 }
 
 int ll_lov_setstripe_ea_info(struct inode *inode, struct dentry *dentry,
-			     __u64 flags, struct lov_user_md *lum,
+			     u64 flags, struct lov_user_md *lum,
 			     int lum_size)
 {
 	struct lookup_intent oit = {
@@ -1483,7 +1483,7 @@ int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename,
 static int ll_lov_setea(struct inode *inode, struct file *file,
 			void __user *arg)
 {
-	__u64			 flags = MDS_OPEN_HAS_OBJS | FMODE_WRITE;
+	u64			 flags = MDS_OPEN_HAS_OBJS | FMODE_WRITE;
 	struct lov_user_md	*lump;
 	int			 lum_size = sizeof(struct lov_user_md) +
 					    sizeof(struct lov_user_ost_data);
@@ -1530,7 +1530,7 @@ static int ll_lov_setstripe(struct inode *inode, struct file *file,
 	struct lov_user_md __user *lum = (struct lov_user_md __user *)arg;
 	struct lov_user_md *klum;
 	int lum_size, rc;
-	__u64 flags = FMODE_WRITE;
+	u64 flags = FMODE_WRITE;
 
 	rc = ll_copy_user_md(lum, &klum);
 	if (rc < 0)
@@ -1828,7 +1828,7 @@ int ll_fid2path(struct inode *inode, void __user *arg)
  *		LL_DV_RD_FLUSH: flush dirty pages, LCK_PR on OSTs
  *		LL_DV_WR_FLUSH: drop all caching pages, LCK_PW on OSTs
  */
-int ll_data_version(struct inode *inode, __u64 *data_version, int flags)
+int ll_data_version(struct inode *inode, u64 *data_version, int flags)
 {
 	struct cl_object *obj = ll_i2info(inode)->lli_clob;
 	struct lu_env *env;
@@ -1876,7 +1876,7 @@ int ll_hsm_release(struct inode *inode)
 {
 	struct lu_env *env;
 	struct obd_client_handle *och = NULL;
-	__u64 data_version = 0;
+	u64 data_version = 0;
 	int rc;
 	u16 refcheck;
 
@@ -1933,8 +1933,8 @@ static int ll_swap_layouts(struct file *file1, struct file *file2,
 {
 	struct mdc_swap_layouts	 msl;
 	struct md_op_data	*op_data;
-	__u32			 gid;
-	__u64			 dv;
+	u32			 gid;
+	u64			 dv;
 	struct ll_swap_stack	*llss = NULL;
 	int			 rc;
 
@@ -2186,7 +2186,7 @@ static int ll_file_futimes_3(struct file *file, const struct ll_futimes_3 *lfu)
  * all that data into each client cache with fadvise() may not be, due to
  * much more data being sent to the client.
  */
-static int ll_ladvise(struct inode *inode, struct file *file, __u64 flags,
+static int ll_ladvise(struct inode *inode, struct file *file, u64 flags,
 		      struct llapi_lu_ladvise *ladvise)
 {
 	struct cl_ladvise_io *lio;
@@ -2869,7 +2869,7 @@ int ll_fsync(struct file *file, loff_t start, loff_t end, int datasync)
 	struct lustre_handle lockh = {0};
 	union ldlm_policy_data flock = { { 0 } };
 	int fl_type = file_lock->fl_type;
-	__u64 flags = 0;
+	u64 flags = 0;
 	int rc;
 	int rc2 = 0;
 
@@ -3179,7 +3179,7 @@ int ll_migrate(struct inode *parent, struct file *file, int mdtidx,
  * \param l_req_mode [IN] searched lock mode
  * \retval boolean, true iff all bits are found
  */
-int ll_have_md_lock(struct inode *inode, __u64 *bits,
+int ll_have_md_lock(struct inode *inode, u64 *bits,
 		    enum ldlm_mode l_req_mode)
 {
 	struct lustre_handle lockh;
@@ -3187,7 +3187,7 @@ int ll_have_md_lock(struct inode *inode, __u64 *bits,
 	enum ldlm_mode mode = (l_req_mode == LCK_MINMODE) ?
 			      (LCK_CR | LCK_CW | LCK_PR | LCK_PW) : l_req_mode;
 	struct lu_fid *fid;
-	__u64 flags;
+	u64 flags;
 	int i;
 
 	if (!inode)
@@ -3220,8 +3220,8 @@ int ll_have_md_lock(struct inode *inode, __u64 *bits,
 	return *bits == 0;
 }
 
-enum ldlm_mode ll_take_md_lock(struct inode *inode, __u64 bits,
-			       struct lustre_handle *lockh, __u64 flags,
+enum ldlm_mode ll_take_md_lock(struct inode *inode, u64 bits,
+			       struct lustre_handle *lockh, u64 flags,
 			       enum ldlm_mode mode)
 {
 	union ldlm_policy_data policy = { .l_inodebits = { bits } };
@@ -3261,7 +3261,7 @@ static int ll_inode_revalidate_fini(struct inode *inode, int rc)
 	return rc;
 }
 
-static int __ll_inode_revalidate(struct dentry *dentry, __u64 ibits)
+static int __ll_inode_revalidate(struct dentry *dentry, u64 ibits)
 {
 	struct inode *inode = d_inode(dentry);
 	struct ptlrpc_request *req = NULL;
@@ -3371,7 +3371,7 @@ static int ll_merge_md_attr(struct inode *inode)
 	return 0;
 }
 
-static int ll_inode_revalidate(struct dentry *dentry, __u64 ibits)
+static int ll_inode_revalidate(struct dentry *dentry, u64 ibits)
 {
 	struct inode *inode = d_inode(dentry);
 	int rc;
@@ -3453,7 +3453,7 @@ int ll_getattr(const struct path *path, struct kstat *stat,
 }
 
 static int ll_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
-		     __u64 start, __u64 len)
+		     u64 start, u64 len)
 {
 	int rc;
 	size_t num_bytes;
@@ -3888,7 +3888,7 @@ static int ll_layout_intent(struct inode *inode, struct layout_intent *intent)
  * is finished, this function should be called again to verify that layout
  * is not changed during IO time.
  */
-int ll_layout_refresh(struct inode *inode, __u32 *gen)
+int ll_layout_refresh(struct inode *inode, u32 *gen)
 {
 	struct ll_inode_info *lli = ll_i2info(inode);
 	struct ll_sb_info *sbi = ll_i2sbi(inode);
@@ -3963,7 +3963,7 @@ int ll_layout_write_intent(struct inode *inode, u64 start, u64 end)
 /**
  *  This function send a restore request to the MDT
  */
-int ll_layout_restore(struct inode *inode, loff_t offset, __u64 length)
+int ll_layout_restore(struct inode *inode, loff_t offset, u64 length)
 {
 	struct hsm_user_request	*hur;
 	int			 len, rc;
diff --git a/drivers/staging/lustre/lustre/llite/lcommon_misc.c b/drivers/staging/lustre/lustre/llite/lcommon_misc.c
index 80563a2..75156d8 100644
--- a/drivers/staging/lustre/lustre/llite/lcommon_misc.c
+++ b/drivers/staging/lustre/lustre/llite/lcommon_misc.c
@@ -85,7 +85,7 @@ int cl_ocd_update(struct obd_device *host, struct obd_device *watched,
 {
 	struct lustre_client_ocd *lco;
 	struct client_obd	*cli;
-	__u64 flags;
+	u64 flags;
 	int   result;
 
 	if (!strcmp(watched->obd_type->typ_name, LUSTRE_OSC_NAME) &&
@@ -122,7 +122,7 @@ int cl_get_grouplock(struct cl_object *obj, unsigned long gid, int nonblock,
 	struct cl_io	   *io;
 	struct cl_lock	 *lock;
 	struct cl_lock_descr   *descr;
-	__u32		   enqflags;
+	u32		   enqflags;
 	u16 refcheck;
 	int		     rc;
 
diff --git a/drivers/staging/lustre/lustre/llite/llite_internal.h b/drivers/staging/lustre/lustre/llite/llite_internal.h
index c680a49..bf7e46f 100644
--- a/drivers/staging/lustre/lustre/llite/llite_internal.h
+++ b/drivers/staging/lustre/lustre/llite/llite_internal.h
@@ -113,7 +113,7 @@ enum ll_file_flags {
 };
 
 struct ll_inode_info {
-	__u32				lli_inode_magic;
+	u32				lli_inode_magic;
 
 	spinlock_t			lli_lock;
 	unsigned long			lli_flags;
@@ -130,9 +130,9 @@ struct ll_inode_info {
 	struct obd_client_handle       *lli_mds_read_och;
 	struct obd_client_handle       *lli_mds_write_och;
 	struct obd_client_handle       *lli_mds_exec_och;
-	__u64			   lli_open_fd_read_count;
-	__u64			   lli_open_fd_write_count;
-	__u64			   lli_open_fd_exec_count;
+	u64			   lli_open_fd_read_count;
+	u64			   lli_open_fd_write_count;
+	u64			   lli_open_fd_exec_count;
 	/* Protects access to och pointers and their usage counters */
 	struct mutex			lli_och_mutex;
 
@@ -184,7 +184,7 @@ struct ll_inode_info {
 			 * "dmv" and gets the rest of the default layout itself
 			 * (count, hash, etc).
 			 */
-			__u32				lli_def_stripe_offset;
+			u32				lli_def_stripe_offset;
 		};
 
 		/* for non-directory */
@@ -204,7 +204,7 @@ struct ll_inode_info {
 			struct rw_semaphore		lli_glimpse_sem;
 			unsigned long			lli_glimpse_time;
 			struct list_head		lli_agl_list;
-			__u64				lli_agl_index;
+			u64				lli_agl_index;
 
 			/* for writepage() only to communicate to fsync */
 			int				lli_async_rc;
@@ -236,7 +236,7 @@ struct ll_inode_info {
 	/* mutex to request for layout lock exclusively. */
 	struct mutex			lli_layout_mutex;
 	/* Layout version, protected by lli_layout_lock */
-	__u32				lli_layout_gen;
+	u32				lli_layout_gen;
 	spinlock_t			lli_layout_lock;
 
 	u32				lli_projid;	/* project id */
@@ -246,9 +246,9 @@ struct ll_inode_info {
 	struct list_head		lli_xattrs;/* ll_xattr_entry->xe_list */
 };
 
-static inline __u32 ll_layout_version_get(struct ll_inode_info *lli)
+static inline u32 ll_layout_version_get(struct ll_inode_info *lli)
 {
-	__u32 gen;
+	u32 gen;
 
 	spin_lock(&lli->lli_layout_lock);
 	gen = lli->lli_layout_gen;
@@ -257,7 +257,7 @@ static inline __u32 ll_layout_version_get(struct ll_inode_info *lli)
 	return gen;
 }
 
-static inline void ll_layout_version_set(struct ll_inode_info *lli, __u32 gen)
+static inline void ll_layout_version_set(struct ll_inode_info *lli, u32 gen)
 {
 	spin_lock(&lli->lli_layout_lock);
 	lli->lli_layout_gen = gen;
@@ -267,7 +267,7 @@ static inline void ll_layout_version_set(struct ll_inode_info *lli, __u32 gen)
 int ll_xattr_cache_destroy(struct inode *inode);
 
 int ll_xattr_cache_get(struct inode *inode, const char *name,
-		       char *buffer, size_t size, __u64 valid);
+		       char *buffer, size_t size, u64 valid);
 
 static inline bool obd_connect_has_secctx(struct obd_connect_data *data)
 {
@@ -466,7 +466,7 @@ struct lustre_client_ocd {
 	 * (LOVs) this mount is connected to. This field is updated by
 	 * cl_ocd_update() under ->lco_lock.
 	 */
-	__u64			 lco_flags;
+	u64			 lco_flags;
 	struct mutex		 lco_lock;
 	struct obd_export	*lco_md_exp;
 	struct obd_export	*lco_dt_exp;
@@ -646,8 +646,8 @@ struct ll_readahead_state {
 struct ll_file_data {
 	struct ll_readahead_state fd_ras;
 	struct ll_grouplock fd_grouplock;
-	__u64 lfd_pos;
-	__u32 fd_flags;
+	u64 lfd_pos;
+	u32 fd_flags;
 	fmode_t fd_omode;
 	/* openhandle if lease exists for this file.
 	 * Borrow lli->lli_och_mutex to protect assignment
@@ -672,8 +672,8 @@ static inline struct inode *ll_info2i(struct ll_inode_info *lli)
 	return &lli->lli_vfs_inode;
 }
 
-__u32 ll_i2suppgid(struct inode *i);
-void ll_i2gids(__u32 *suppgids, struct inode *i1, struct inode *i2);
+u32 ll_i2suppgid(struct inode *i);
+void ll_i2gids(u32 *suppgids, struct inode *i1, struct inode *i2);
 
 static inline bool ll_need_32bit_api(struct ll_sb_info *sbi)
 {
@@ -764,12 +764,12 @@ enum {
 /* llite/dir.c */
 extern const struct file_operations ll_dir_operations;
 extern const struct inode_operations ll_dir_inode_operations;
-int ll_dir_read(struct inode *inode, __u64 *ppos, struct md_op_data *op_data,
+int ll_dir_read(struct inode *inode, u64 *ppos, struct md_op_data *op_data,
 		struct dir_context *ctx);
 int ll_get_mdt_idx(struct inode *inode);
 int ll_get_mdt_idx_by_fid(struct ll_sb_info *sbi, const struct lu_fid *fid);
 struct page *ll_get_dir_page(struct inode *dir, struct md_op_data *op_data,
-			     __u64 offset);
+			     u64 offset);
 void ll_release_page(struct inode *inode, struct page *page, bool remove);
 
 /* llite/namei.c */
@@ -803,10 +803,10 @@ void ll_cl_add(struct file *file, const struct lu_env *env, struct cl_io *io,
 extern const struct file_operations ll_file_operations_flock;
 extern const struct file_operations ll_file_operations_noflock;
 extern const struct inode_operations ll_file_inode_operations;
-int ll_have_md_lock(struct inode *inode, __u64 *bits,
+int ll_have_md_lock(struct inode *inode, u64 *bits,
 		    enum ldlm_mode l_req_mode);
-enum ldlm_mode ll_take_md_lock(struct inode *inode, __u64 bits,
-			       struct lustre_handle *lockh, __u64 flags,
+enum ldlm_mode ll_take_md_lock(struct inode *inode, u64 bits,
+			       struct lustre_handle *lockh, u64 flags,
 			       enum ldlm_mode mode);
 int ll_file_open(struct inode *inode, struct file *file);
 int ll_file_release(struct inode *inode, struct file *file);
@@ -832,7 +832,7 @@ int ll_ioctl_fsgetxattr(struct inode *inode, unsigned int cmd,
 int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 			unsigned long arg);
 int ll_lov_setstripe_ea_info(struct inode *inode, struct dentry *dentry,
-			     __u64 flags, struct lov_user_md *lum,
+			     u64 flags, struct lov_user_md *lum,
 			     int lum_size);
 int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename,
 			     struct lov_mds_md **lmm, int *lmm_size,
@@ -844,7 +844,7 @@ int ll_dir_getstripe(struct inode *inode, void **lmmp, int *lmm_size,
 int ll_fsync(struct file *file, loff_t start, loff_t end, int data);
 int ll_merge_attr(const struct lu_env *env, struct inode *inode);
 int ll_fid2path(struct inode *inode, void __user *arg);
-int ll_data_version(struct inode *inode, __u64 *data_version, int flags);
+int ll_data_version(struct inode *inode, u64 *data_version, int flags);
 int ll_hsm_release(struct inode *inode);
 int ll_hsm_state_set(struct inode *inode, struct hsm_state_set *hss);
 
@@ -904,7 +904,7 @@ enum {
 struct md_op_data *ll_prep_md_op_data(struct md_op_data *op_data,
 				      struct inode *i1, struct inode *i2,
 				      const char *name, size_t namelen,
-				      u32 mode, __u32 opc, void *data);
+				      u32 mode, u32 opc, void *data);
 void ll_finish_md_op_data(struct md_op_data *op_data);
 int ll_get_obd_name(struct inode *inode, unsigned int cmd, unsigned long arg);
 char *ll_get_fsname(struct super_block *sb, char *buf, int buflen);
@@ -935,7 +935,7 @@ static inline ssize_t ll_lov_user_md_size(const struct lov_user_md *lum)
 
 /* llite/llite_nfs.c */
 extern const struct export_operations lustre_export_operations;
-__u32 get_uuid2int(const char *name, int len);
+u32 get_uuid2int(const char *name, int len);
 void get_uuid2fsid(const char *name, int len, __kernel_fsid_t *fsid);
 struct inode *search_inode_for_lustre(struct super_block *sb,
 				      const struct lu_fid *fid);
@@ -995,7 +995,7 @@ static inline struct vvp_io_args *ll_env_args(const struct lu_env *env)
 
 /* llite/llite_mmap.c */
 
-int ll_teardown_mmaps(struct address_space *mapping, __u64 first, __u64 last);
+int ll_teardown_mmaps(struct address_space *mapping, u64 first, u64 last);
 int ll_file_mmap(struct file *file, struct vm_area_struct *vma);
 void policy_from_vma(union ldlm_policy_data *policy, struct vm_area_struct *vma,
 		     unsigned long addr, size_t count);
@@ -1074,7 +1074,7 @@ static inline loff_t ll_file_maxbytes(struct inode *inode)
 
 ssize_t ll_listxattr(struct dentry *dentry, char *buffer, size_t size);
 int ll_xattr_list(struct inode *inode, const char *name, int type,
-		  void *buffer, size_t size, __u64 valid);
+		  void *buffer, size_t size, u64 valid);
 const struct xattr_handler *get_xattr_type(const char *name);
 
 /**
@@ -1114,16 +1114,16 @@ struct ll_statahead_info {
 					     * refcount
 					     */
 	unsigned int	    sai_max;	/* max ahead of lookup */
-	__u64		   sai_sent;       /* stat requests sent count */
-	__u64		   sai_replied;    /* stat requests which received
+	u64		   sai_sent;       /* stat requests sent count */
+	u64		   sai_replied;    /* stat requests which received
 					    * reply
 					    */
-	__u64		   sai_index;      /* index of statahead entry */
-	__u64		   sai_index_wait; /* index of entry which is the
+	u64		   sai_index;      /* index of statahead entry */
+	u64		   sai_index_wait; /* index of entry which is the
 					    * caller is waiting for
 					    */
-	__u64		   sai_hit;	/* hit count */
-	__u64		   sai_miss;       /* miss count:
+	u64		   sai_hit;	/* hit count */
+	u64		   sai_miss;       /* miss count:
 					    * for "ls -al" case, it includes
 					    * hidden dentry miss;
 					    * for "ls -l" case, it does not
@@ -1249,7 +1249,7 @@ static inline int ll_file_nolock(const struct file *file)
 }
 
 static inline void ll_set_lock_data(struct obd_export *exp, struct inode *inode,
-				    struct lookup_intent *it, __u64 *bits)
+				    struct lookup_intent *it, u64 *bits)
 {
 	if (!it->it_lock_set) {
 		struct lustre_handle handle;
@@ -1318,8 +1318,8 @@ static inline void d_lustre_revalidate(struct dentry *dentry)
 }
 
 int ll_layout_conf(struct inode *inode, const struct cl_object_conf *conf);
-int ll_layout_refresh(struct inode *inode, __u32 *gen);
-int ll_layout_restore(struct inode *inode, loff_t start, __u64 length);
+int ll_layout_refresh(struct inode *inode, u32 *gen);
+int ll_layout_restore(struct inode *inode, loff_t start, u64 length);
 int ll_layout_write_intent(struct inode *inode, u64 start, u64 end);
 
 int ll_xattr_init(void);
@@ -1341,6 +1341,6 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr,
 void cl_inode_fini(struct inode *inode);
 
 u64 cl_fid_build_ino(const struct lu_fid *fid, bool api32);
-__u32 cl_fid_build_gen(const struct lu_fid *fid);
+u32 cl_fid_build_gen(const struct lu_fid *fid);
 
 #endif /* LLITE_INTERNAL_H */
diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index 7b1226b..88b08dd 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -2142,7 +2142,7 @@ int ll_remount_fs(struct super_block *sb, int *flags, char *data)
 	struct ll_sb_info *sbi = ll_s2sbi(sb);
 	char *profilenm = get_profile_name(sb);
 	int err;
-	__u32 read_only;
+	u32 read_only;
 
 	if ((bool)(*flags & SB_RDONLY) != sb_rdonly(sb)) {
 		read_only = *flags & SB_RDONLY;
@@ -2297,7 +2297,7 @@ int ll_obd_statfs(struct inode *inode, void __user *arg)
 	struct obd_export *exp;
 	char *buf = NULL;
 	struct obd_ioctl_data *data = NULL;
-	__u32 type;
+	u32 type;
 	int len = 0, rc;
 
 	if (!inode) {
@@ -2322,15 +2322,15 @@ int ll_obd_statfs(struct inode *inode, void __user *arg)
 		goto out_statfs;
 	}
 
-	if (data->ioc_inllen1 != sizeof(__u32) ||
-	    data->ioc_inllen2 != sizeof(__u32) ||
+	if (data->ioc_inllen1 != sizeof(u32) ||
+	    data->ioc_inllen2 != sizeof(u32) ||
 	    data->ioc_plen1 != sizeof(struct obd_statfs) ||
 	    data->ioc_plen2 != sizeof(struct obd_uuid)) {
 		rc = -EINVAL;
 		goto out_statfs;
 	}
 
-	memcpy(&type, data->ioc_inlbuf1, sizeof(__u32));
+	memcpy(&type, data->ioc_inlbuf1, sizeof(u32));
 	if (type & LL_STATFS_LMV) {
 		exp = sbi->ll_md_exp;
 	} else if (type & LL_STATFS_LOV) {
@@ -2352,7 +2352,7 @@ int ll_obd_statfs(struct inode *inode, void __user *arg)
 struct md_op_data *ll_prep_md_op_data(struct md_op_data *op_data,
 				      struct inode *i1, struct inode *i2,
 				      const char *name, size_t namelen,
-				      u32 mode, __u32 opc, void *data)
+				      u32 mode, u32 opc, void *data)
 {
 	if (!name) {
 		/* Do not reuse namelen for something else. */
diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c
index 33e23ee..c6e9f10 100644
--- a/drivers/staging/lustre/lustre/llite/llite_mmap.c
+++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -466,7 +466,7 @@ static void ll_vm_close(struct vm_area_struct *vma)
 /* XXX put nice comment here.  talk about __free_pte -> dirty pages and
  * nopage's reference passing to the pte
  */
-int ll_teardown_mmaps(struct address_space *mapping, __u64 first, __u64 last)
+int ll_teardown_mmaps(struct address_space *mapping, u64 first, u64 last)
 {
 	int rc = -ENOENT;
 
diff --git a/drivers/staging/lustre/lustre/llite/llite_nfs.c b/drivers/staging/lustre/lustre/llite/llite_nfs.c
index 5e91e83..7c5c9b8 100644
--- a/drivers/staging/lustre/lustre/llite/llite_nfs.c
+++ b/drivers/staging/lustre/lustre/llite/llite_nfs.c
@@ -42,12 +42,12 @@
 #include "llite_internal.h"
 #include <linux/exportfs.h>
 
-__u32 get_uuid2int(const char *name, int len)
+u32 get_uuid2int(const char *name, int len)
 {
-	__u32 key0 = 0x12a3fe2d, key1 = 0x37abe8f9;
+	u32 key0 = 0x12a3fe2d, key1 = 0x37abe8f9;
 
 	while (len--) {
-		__u32 key = key1 + (key0 ^ (*name++ * 7152373));
+		u32 key = key1 + (key0 ^ (*name++ * 7152373));
 
 		if (key & 0x80000000)
 			key -= 0x7fffffff;
@@ -59,7 +59,7 @@ __u32 get_uuid2int(const char *name, int len)
 
 void get_uuid2fsid(const char *name, int len, __kernel_fsid_t *fsid)
 {
-	__u64 key = 0, key0 = 0x12a3fe2d, key1 = 0x37abe8f9;
+	u64 key = 0, key0 = 0x12a3fe2d, key1 = 0x37abe8f9;
 
 	while (len--) {
 		key = key1 + (key0 ^ (*name++ * 7152373));
@@ -186,7 +186,7 @@ struct lustre_nfs_fid {
  * 2 -- contains child file handle and parent file handle;
  * 255 -- error.
  */
-static int ll_encode_fh(struct inode *inode, __u32 *fh, int *plen,
+static int ll_encode_fh(struct inode *inode, u32 *fh, int *plen,
 			struct inode *parent)
 {
 	int fileid_len = sizeof(struct lustre_nfs_fid) / 4;
@@ -243,7 +243,7 @@ static int ll_get_name(struct dentry *dentry, char *name,
 		.ctx.actor = ll_nfs_get_name_filldir,
 	};
 	struct md_op_data *op_data;
-	__u64 pos = 0;
+	u64 pos = 0;
 
 	if (!dir || !S_ISDIR(dir->i_mode)) {
 		rc = -ENOTDIR;
diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c
index 9404bb7..672de81 100644
--- a/drivers/staging/lustre/lustre/llite/lproc_llite.c
+++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c
@@ -1232,8 +1232,8 @@ static void llite_kobj_release(struct kobject *kobj)
 };
 
 static const struct llite_file_opcode {
-	__u32       opcode;
-	__u32       type;
+	u32       opcode;
+	u32       type;
 	const char *opname;
 } llite_opcode_table[LPROC_LL_FILE_OPCODES] = {
 	/* file operation */
@@ -1352,7 +1352,7 @@ int ll_debugfs_register_super(struct super_block *sb, const char *name)
 
 	/* do counter init */
 	for (id = 0; id < LPROC_LL_FILE_OPCODES; id++) {
-		__u32 type = llite_opcode_table[id].type;
+		u32 type = llite_opcode_table[id].type;
 		void *ptr = NULL;
 
 		if (type & LPROCFS_TYPE_REGS)
diff --git a/drivers/staging/lustre/lustre/llite/namei.c b/drivers/staging/lustre/lustre/llite/namei.c
index b5b46f7..8bdf947 100644
--- a/drivers/staging/lustre/lustre/llite/namei.c
+++ b/drivers/staging/lustre/lustre/llite/namei.c
@@ -191,7 +191,7 @@ int ll_md_blocking_ast(struct ldlm_lock *lock, struct ldlm_lock_desc *desc,
 		break;
 	case LDLM_CB_CANCELING: {
 		struct inode *inode = ll_inode_from_resource_lock(lock);
-		__u64 bits = lock->l_policy_data.l_inodebits.bits;
+		u64 bits = lock->l_policy_data.l_inodebits.bits;
 
 		/* Inode is set to lock->l_resource->lr_lvb_inode
 		 * for mdc - bug 24555
@@ -348,12 +348,12 @@ int ll_md_blocking_ast(struct ldlm_lock *lock, struct ldlm_lock_desc *desc,
 	return 0;
 }
 
-__u32 ll_i2suppgid(struct inode *i)
+u32 ll_i2suppgid(struct inode *i)
 {
 	if (in_group_p(i->i_gid))
-		return (__u32)from_kgid(&init_user_ns, i->i_gid);
+		return (u32)from_kgid(&init_user_ns, i->i_gid);
 	else
-		return (__u32)(-1);
+		return (u32)(-1);
 }
 
 /* Pack the required supplementary groups into the supplied groups array.
@@ -361,7 +361,7 @@ __u32 ll_i2suppgid(struct inode *i)
  * instead pack one or more groups from the user's supplementary group
  * array in case it might be useful.  Not needed if doing an MDS-side upcall.
  */
-void ll_i2gids(__u32 *suppgids, struct inode *i1, struct inode *i2)
+void ll_i2gids(u32 *suppgids, struct inode *i1, struct inode *i2)
 {
 	LASSERT(i1);
 
@@ -454,7 +454,7 @@ static int ll_lookup_it_finish(struct ptlrpc_request *request,
 			       struct inode *parent, struct dentry **de)
 {
 	struct inode *inode = NULL;
-	__u64 bits = 0;
+	u64 bits = 0;
 	int rc = 0;
 	struct dentry *alias;
 
@@ -536,7 +536,7 @@ static struct dentry *ll_lookup_it(struct inode *parent, struct dentry *dentry,
 	struct ptlrpc_request *req = NULL;
 	struct md_op_data *op_data = NULL;
 	struct inode *inode;
-	__u32 opc;
+	u32 opc;
 	int rc;
 
 	if (dentry->d_name.len > ll_i2sbi(parent)->ll_namelen)
@@ -901,7 +901,7 @@ void ll_update_times(struct ptlrpc_request *request, struct inode *inode)
 
 static int ll_new_node(struct inode *dir, struct dentry *dentry,
 		       const char *tgt, umode_t mode, int rdev,
-		       __u32 opc)
+		       u32 opc)
 {
 	struct ptlrpc_request *request = NULL;
 	struct md_op_data *op_data;
diff --git a/drivers/staging/lustre/lustre/llite/range_lock.c b/drivers/staging/lustre/lustre/llite/range_lock.c
index d37da8e..c1f0e1e 100644
--- a/drivers/staging/lustre/lustre/llite/range_lock.c
+++ b/drivers/staging/lustre/lustre/llite/range_lock.c
@@ -42,7 +42,7 @@
 #define START(node) ((node)->rl_start)
 #define LAST(node)  ((node)->rl_last)
 
-INTERVAL_TREE_DEFINE(struct range_lock, rl_rb, __u64, __subtree_last,
+INTERVAL_TREE_DEFINE(struct range_lock, rl_rb, u64, __subtree_last,
 		     START, LAST, static, range);
 /**
  * Initialize a range lock tree
@@ -69,7 +69,7 @@ void range_lock_tree_init(struct range_lock_tree *tree)
  * Pre:  Caller should have allocated the range lock node.
  * Post: The range lock node is meant to cover [start, end] region
  */
-int range_lock_init(struct range_lock *lock, __u64 start, __u64 end)
+int range_lock_init(struct range_lock *lock, u64 start, u64 end)
 {
 	RB_CLEAR_NODE(&lock->rl_rb);
 
diff --git a/drivers/staging/lustre/lustre/llite/range_lock.h b/drivers/staging/lustre/lustre/llite/range_lock.h
index 2a0704d..10566da 100644
--- a/drivers/staging/lustre/lustre/llite/range_lock.h
+++ b/drivers/staging/lustre/lustre/llite/range_lock.h
@@ -42,8 +42,8 @@
 
 struct range_lock {
 	struct rb_node		rl_rb;
-	__u64			rl_start, rl_last;
-	__u64			__subtree_last;
+	u64			rl_start, rl_last;
+	u64			__subtree_last;
 	/**
 	 * Process to enqueue this lock.
 	 */
@@ -56,17 +56,17 @@ struct range_lock {
 	 * Sequence number of range lock. This number is used to determine
 	 * the order in which the locks are queued; this is required for range_cancel().
 	 */
-	__u64			rl_sequence;
+	u64			rl_sequence;
 };
 
 struct range_lock_tree {
 	struct rb_root_cached	rlt_root;
 	spinlock_t		rlt_lock;	/* protect range lock tree */
-	__u64			rlt_sequence;
+	u64			rlt_sequence;
 };
 
 void range_lock_tree_init(struct range_lock_tree *tree);
-int range_lock_init(struct range_lock *lock, __u64 start, __u64 end);
+int range_lock_init(struct range_lock *lock, u64 start, u64 end);
 int  range_lock(struct range_lock_tree *tree, struct range_lock *lock);
 void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
 #endif
diff --git a/drivers/staging/lustre/lustre/llite/rw.c b/drivers/staging/lustre/lustre/llite/rw.c
index 55d8b31..e207d7c 100644
--- a/drivers/staging/lustre/lustre/llite/rw.c
+++ b/drivers/staging/lustre/lustre/llite/rw.c
@@ -272,8 +272,8 @@ static inline int stride_io_mode(struct ll_readahead_state *ras)
 stride_pg_count(pgoff_t st_off, unsigned long st_len, unsigned long st_pgs,
 		unsigned long off, unsigned long length)
 {
-	__u64 start = off > st_off ? off - st_off : 0;
-	__u64 end = off + length > st_off ? off + length - st_off : 0;
+	u64 start = off > st_off ? off - st_off : 0;
+	u64 end = off + length > st_off ? off + length - st_off : 0;
 	unsigned long start_left = 0;
 	unsigned long end_left = 0;
 	unsigned long pg_count;
@@ -308,7 +308,7 @@ static inline int stride_io_mode(struct ll_readahead_state *ras)
 
 static int ria_page_count(struct ra_io_arg *ria)
 {
-	__u64 length = ria->ria_end >= ria->ria_start ?
+	u64 length = ria->ria_end >= ria->ria_start ?
 		       ria->ria_end - ria->ria_start + 1 : 0;
 
 	return stride_pg_count(ria->ria_stoff, ria->ria_length,
@@ -445,7 +445,7 @@ static int ll_readahead(const struct lu_env *env, struct cl_io *io,
 	struct ra_io_arg *ria = &lti->lti_ria;
 	struct cl_object *clob;
 	int ret = 0;
-	__u64 kms;
+	u64 kms;
 
 	clob = io->ci_obj;
 	inode = vvp_object_inode(clob);
@@ -764,7 +764,7 @@ static void ras_update(struct ll_sb_info *sbi, struct inode *inode,
 	 * ras_requests and thus can never trigger this behavior.
 	 */
 	if (ras->ras_requests >= 2 && !ras->ras_request_index) {
-		__u64 kms_pages;
+		u64 kms_pages;
 
 		kms_pages = (i_size_read(inode) + PAGE_SIZE - 1) >>
 			    PAGE_SHIFT;
diff --git a/drivers/staging/lustre/lustre/llite/statahead.c b/drivers/staging/lustre/lustre/llite/statahead.c
index 24c2335c..6f5c7ab 100644
--- a/drivers/staging/lustre/lustre/llite/statahead.c
+++ b/drivers/staging/lustre/lustre/llite/statahead.c
@@ -65,9 +65,9 @@ struct sa_entry {
 	/* link into sai hash table locally */
 	struct list_head	      se_hash;
 	/* entry index in the sai */
-	__u64		   se_index;
+	u64		   se_index;
 	/* low layer ldlm lock handle */
-	__u64		   se_handle;
+	u64		   se_handle;
 	/* entry status */
 	enum se_stat		se_state;
 	/* entry size, contains name */
@@ -163,15 +163,15 @@ static inline int sa_low_hit(struct ll_statahead_info *sai)
  * if the given index is behind of statahead window more than
  * SA_OMITTED_ENTRY_MAX, then it is old.
  */
-static inline int is_omitted_entry(struct ll_statahead_info *sai, __u64 index)
+static inline int is_omitted_entry(struct ll_statahead_info *sai, u64 index)
 {
-	return ((__u64)sai->sai_max + index + SA_OMITTED_ENTRY_MAX <
+	return ((u64)sai->sai_max + index + SA_OMITTED_ENTRY_MAX <
 		 sai->sai_index);
 }
 
 /* allocate sa_entry and hash it to allow scanner process to find it */
 static struct sa_entry *
-sa_alloc(struct dentry *parent, struct ll_statahead_info *sai, __u64 index,
+sa_alloc(struct dentry *parent, struct ll_statahead_info *sai, u64 index,
 	 const char *name, int len, const struct lu_fid *fid)
 {
 	struct ll_inode_info *lli;
@@ -309,7 +309,7 @@ static void sa_free(struct ll_statahead_info *sai, struct sa_entry *entry)
 __sa_make_ready(struct ll_statahead_info *sai, struct sa_entry *entry, int ret)
 {
 	struct list_head *pos = &sai->sai_entries;
-	__u64 index = entry->se_index;
+	u64 index = entry->se_index;
 	struct sa_entry *se;
 
 	LASSERT(!sa_ready(entry));
@@ -492,7 +492,7 @@ static void ll_sai_put(struct ll_statahead_info *sai)
 static void ll_agl_trigger(struct inode *inode, struct ll_statahead_info *sai)
 {
 	struct ll_inode_info *lli   = ll_i2info(inode);
-	__u64		 index = lli->lli_agl_index;
+	u64		 index = lli->lli_agl_index;
 	int		   rc;
 
 	LASSERT(list_empty(&lli->lli_agl_list));
@@ -665,7 +665,7 @@ static int ll_statahead_interpret(struct ptlrpc_request *req,
 	struct ll_inode_info     *lli = ll_i2info(dir);
 	struct ll_statahead_info *sai = lli->lli_sai;
 	struct sa_entry *entry = (struct sa_entry *)minfo->mi_cbdata;
-	__u64 handle = 0;
+	u64 handle = 0;
 
 	if (it_disposition(it, DISP_LOOKUP_NEG))
 		rc = -ENOENT;
@@ -963,7 +963,7 @@ static int ll_statahead_thread(void *arg)
 	struct ll_sb_info	*sbi    = ll_i2sbi(dir);
 	struct ll_statahead_info *sai = lli->lli_sai;
 	struct page	      *page = NULL;
-	__u64		     pos    = 0;
+	u64		     pos    = 0;
 	int		       first  = 0;
 	int		       rc     = 0;
 	struct md_op_data *op_data;
@@ -998,7 +998,7 @@ static int ll_statahead_thread(void *arg)
 		     ent && sai->sai_task && !sa_low_hit(sai);
 		     ent = lu_dirent_next(ent)) {
 			struct lu_fid fid;
-			__u64 hash;
+			u64 hash;
 			int namelen;
 			char *name;
 
@@ -1228,7 +1228,7 @@ static int is_first_dirent(struct inode *dir, struct dentry *dentry)
 	const struct qstr  *target = &dentry->d_name;
 	struct md_op_data *op_data;
 	struct page	  *page;
-	__u64		 pos    = 0;
+	u64		 pos    = 0;
 	int		   dot_de;
 	int rc = LS_NOT_FIRST_DE;
 
@@ -1259,7 +1259,7 @@ static int is_first_dirent(struct inode *dir, struct dentry *dentry)
 		dp = page_address(page);
 		for (ent = lu_dirent_start(dp); ent;
 		     ent = lu_dirent_next(ent)) {
-			__u64 hash;
+			u64 hash;
 			int namelen;
 			char *name;
 
@@ -1425,7 +1425,7 @@ static int revalidate_statahead_dentry(struct inode *dir,
 		struct inode *inode = entry->se_inode;
 		struct lookup_intent it = { .it_op = IT_GETATTR,
 					    .it_lock_handle = entry->se_handle };
-		__u64 bits;
+		u64 bits;
 
 		rc = md_revalidate_lock(ll_i2mdexp(dir), &it,
 					ll_inode2fid(inode), &bits);
diff --git a/drivers/staging/lustre/lustre/llite/vvp_internal.h b/drivers/staging/lustre/lustre/llite/vvp_internal.h
index 70d62bf..e8712d8 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_internal.h
+++ b/drivers/staging/lustre/lustre/llite/vvp_internal.h
@@ -100,7 +100,7 @@ struct vvp_io {
 	/**
 	 * Layout version when this IO is initialized
 	 */
-	__u32			vui_layout_gen;
+	u32			vui_layout_gen;
 	/**
 	 * File descriptor against which IO is done.
 	 */
diff --git a/drivers/staging/lustre/lustre/llite/vvp_io.c b/drivers/staging/lustre/lustre/llite/vvp_io.c
index d9f02ae..26a7897 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_io.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_io.c
@@ -188,7 +188,7 @@ static int vvp_prep_size(const struct lu_env *env, struct cl_object *obj,
 			i_size_write(inode, kms);
 			CDEBUG(D_VFSTRACE, DFID " updating i_size %llu\n",
 			       PFID(lu_object_fid(&obj->co_lu)),
-			       (__u64)i_size_read(inode));
+			       (u64)i_size_read(inode));
 		}
 	}
 
@@ -204,7 +204,7 @@ static int vvp_prep_size(const struct lu_env *env, struct cl_object *obj,
  */
 
 static int vvp_io_one_lock_index(const struct lu_env *env, struct cl_io *io,
-				 __u32 enqflags, enum cl_lock_mode mode,
+				 u32 enqflags, enum cl_lock_mode mode,
 			  pgoff_t start, pgoff_t end)
 {
 	struct vvp_io          *vio   = vvp_env_io(env);
@@ -234,7 +234,7 @@ static int vvp_io_one_lock_index(const struct lu_env *env, struct cl_io *io,
 }
 
 static int vvp_io_one_lock(const struct lu_env *env, struct cl_io *io,
-			   __u32 enqflags, enum cl_lock_mode mode,
+			   u32 enqflags, enum cl_lock_mode mode,
 			   loff_t start, loff_t end)
 {
 	struct cl_object *obj = io->ci_obj;
@@ -355,7 +355,7 @@ static void vvp_io_fini(const struct lu_env *env, const struct cl_io_slice *ios)
 	}
 
 	if (!io->ci_ignore_layout && io->ci_verify_layout) {
-		__u32 gen = 0;
+		u32 gen = 0;
 
 		/* check layout version */
 		ll_layout_refresh(inode, &gen);
@@ -590,8 +590,8 @@ static int vvp_io_setattr_lock(const struct lu_env *env,
 			       const struct cl_io_slice *ios)
 {
 	struct cl_io  *io  = ios->cis_io;
-	__u64 new_size;
-	__u32 enqflags = 0;
+	u64 new_size;
+	u32 enqflags = 0;
 
 	if (cl_io_is_trunc(io)) {
 		new_size = io->u.ci_setattr.sa_attr.lvb_size;
diff --git a/drivers/staging/lustre/lustre/llite/xattr_cache.c b/drivers/staging/lustre/lustre/llite/xattr_cache.c
index 5da69ba0..bb235e0 100644
--- a/drivers/staging/lustre/lustre/llite/xattr_cache.c
+++ b/drivers/staging/lustre/lustre/llite/xattr_cache.c
@@ -338,7 +338,7 @@ static int ll_xattr_cache_refill(struct inode *inode)
 	const char *xdata, *xval, *xtail, *xvtail;
 	struct ll_inode_info *lli = ll_i2info(inode);
 	struct mdt_body *body;
-	__u32 *xsizes;
+	u32 *xsizes;
 	int rc, i;
 
 	rc = ll_xattr_find_get_lock(inode, &oit, &req);
@@ -373,7 +373,7 @@ static int ll_xattr_cache_refill(struct inode *inode)
 	xval = req_capsule_server_sized_get(&req->rq_pill, &RMF_EAVALS,
 					    body->mbo_aclsize);
 	xsizes = req_capsule_server_sized_get(&req->rq_pill, &RMF_EAVALS_LENS,
-					      body->mbo_max_mdsize * sizeof(__u32));
+					      body->mbo_max_mdsize * sizeof(u32));
 	if (!xdata || !xval || !xsizes) {
 		CERROR("wrong setxattr reply\n");
 		rc = -EPROTO;
@@ -458,7 +458,7 @@ static int ll_xattr_cache_refill(struct inode *inode)
  * \retval -ENODATA no such attr or the list is empty
  */
 int ll_xattr_cache_get(struct inode *inode, const char *name, char *buffer,
-		       size_t size, __u64 valid)
+		       size_t size, u64 valid)
 {
 	struct ll_inode_info *lli = ll_i2info(inode);
 	int rc = 0;
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_intent.c b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
index ba6410e..bc364b6 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_intent.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
@@ -52,7 +52,7 @@ static int lmv_intent_remote(struct obd_export *exp, struct lookup_intent *it,
 			     const struct lu_fid *parent_fid,
 			     struct ptlrpc_request **reqp,
 			     ldlm_blocking_callback cb_blocking,
-			     __u64 extra_lock_flags)
+			     u64 extra_lock_flags)
 {
 	struct obd_device	*obd = exp->exp_obd;
 	struct lmv_obd		*lmv = &obd->u.lmv;
@@ -262,7 +262,7 @@ static int lmv_intent_open(struct obd_export *exp, struct md_op_data *op_data,
 			   struct lookup_intent *it,
 			   struct ptlrpc_request **reqp,
 			   ldlm_blocking_callback cb_blocking,
-			   __u64 extra_lock_flags)
+			   u64 extra_lock_flags)
 {
 	struct obd_device	*obd = exp->exp_obd;
 	struct lmv_obd		*lmv = &obd->u.lmv;
@@ -353,7 +353,7 @@ static int lmv_intent_lookup(struct obd_export *exp,
 			     struct lookup_intent *it,
 			     struct ptlrpc_request **reqp,
 			     ldlm_blocking_callback cb_blocking,
-			     __u64 extra_lock_flags)
+			     u64 extra_lock_flags)
 {
 	struct lmv_stripe_md *lsm = op_data->op_mea1;
 	struct obd_device      *obd = exp->exp_obd;
@@ -475,7 +475,7 @@ static int lmv_intent_lookup(struct obd_export *exp,
 int lmv_intent_lock(struct obd_export *exp, struct md_op_data *op_data,
 		    struct lookup_intent *it, struct ptlrpc_request **reqp,
 		    ldlm_blocking_callback cb_blocking,
-		    __u64 extra_lock_flags)
+		    u64 extra_lock_flags)
 {
 	int		rc;
 
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_internal.h b/drivers/staging/lustre/lustre/lmv/lmv_internal.h
index f2c41c7..c0881ff 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_internal.h
+++ b/drivers/staging/lustre/lustre/lmv/lmv_internal.h
@@ -46,7 +46,7 @@
 int lmv_intent_lock(struct obd_export *exp, struct md_op_data *op_data,
 		    struct lookup_intent *it, struct ptlrpc_request **reqp,
 		    ldlm_blocking_callback cb_blocking,
-		    __u64 extra_lock_flags);
+		    u64 extra_lock_flags);
 
 int lmv_fld_lookup(struct lmv_obd *lmv, const struct lu_fid *fid, u32 *mds);
 int __lmv_fid_alloc(struct lmv_obd *lmv, struct lu_fid *fid, u32 mds);
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_obd.c b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
index 7e4ffeb..65ae944 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_obd.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
@@ -369,7 +369,7 @@ static void lmv_del_target(struct lmv_obd *lmv, int index)
 }
 
 static int lmv_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
-			  __u32 index, int gen)
+			  u32 index, int gen)
 {
 	struct lmv_obd      *lmv = &obd->u.lmv;
 	struct obd_device *mdc_obd;
@@ -401,8 +401,8 @@ static int lmv_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
 	if (index >= lmv->tgts_size) {
 		/* We need to reallocate the lmv target array. */
 		struct lmv_tgt_desc **newtgts, **old = NULL;
-		__u32 newsize = 1;
-		__u32 oldsize = 0;
+		u32 newsize = 1;
+		u32 oldsize = 0;
 
 		while (newsize < index + 1)
 			newsize <<= 1;
@@ -757,7 +757,7 @@ static int lmv_hsm_ct_unregister(struct lmv_obd *lmv, unsigned int cmd, int len,
 				 struct lustre_kernelcomm *lk,
 				 void __user *uarg)
 {
-	__u32 i;
+	u32 i;
 
 	/* unregister request (call from llapi_hsm_copytool_fini) */
 	for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
@@ -783,7 +783,7 @@ static int lmv_hsm_ct_register(struct lmv_obd *lmv, unsigned int cmd, int len,
 			       struct lustre_kernelcomm *lk, void __user *uarg)
 {
 	struct file *filp;
-	__u32 i, j;
+	u32 i, j;
 	int err;
 	bool any_set = false;
 	struct kkuc_ct_data kcd = {
@@ -873,9 +873,9 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 		struct obd_ioctl_data *data = karg;
 		struct obd_device *mdc_obd;
 		struct obd_statfs stat_buf = {0};
-		__u32 index;
+		u32 index;
 
-		memcpy(&index, data->ioc_inlbuf2, sizeof(__u32));
+		memcpy(&index, data->ioc_inlbuf2, sizeof(u32));
 		if (index >= count)
 			return -ENODEV;
 
@@ -971,7 +971,7 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 		 * Note: this is from llite(see ll_dir_ioctl()), @uarg does not
 		 * point to user space memory for FID2MDTIDX.
 		 */
-		*(__u32 *)uarg = mdt_index;
+		*(u32 *)uarg = mdt_index;
 		break;
 	}
 	case OBD_IOC_FID2PATH: {
@@ -1292,7 +1292,7 @@ static int lmv_process_config(struct obd_device *obd, u32 len, void *buf)
 	struct lustre_cfg	*lcfg = buf;
 	struct obd_uuid		obd_uuid;
 	int			gen;
-	__u32			index;
+	u32			index;
 	int			rc;
 
 	switch (lcfg->lcfg_command) {
@@ -1327,7 +1327,7 @@ static int lmv_process_config(struct obd_device *obd, u32 len, void *buf)
 }
 
 static int lmv_statfs(const struct lu_env *env, struct obd_export *exp,
-		      struct obd_statfs *osfs, __u64 max_age, __u32 flags)
+		      struct obd_statfs *osfs, u64 max_age, u32 flags)
 {
 	struct obd_device     *obd = class_exp2obd(exp);
 	struct lmv_obd	*lmv = &obd->u.lmv;
@@ -1585,7 +1585,7 @@ struct lmv_tgt_desc*
 static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 		      const void *data, size_t datalen, umode_t mode,
 		      uid_t uid, gid_t gid, kernel_cap_t cap_effective,
-		      __u64 rdev, struct ptlrpc_request **request)
+		      u64 rdev, struct ptlrpc_request **request)
 {
 	struct obd_device       *obd = exp->exp_obd;
 	struct lmv_obd	  *lmv = &obd->u.lmv;
@@ -1639,7 +1639,7 @@ static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 static int
 lmv_enqueue(struct obd_export *exp, struct ldlm_enqueue_info *einfo,
 	    const union ldlm_policy_data *policy, struct md_op_data *op_data,
-	    struct lustre_handle *lockh, __u64 extra_lock_flags)
+	    struct lustre_handle *lockh, u64 extra_lock_flags)
 {
 	struct obd_device	*obd = exp->exp_obd;
 	struct lmv_obd	   *lmv = &obd->u.lmv;
@@ -2029,7 +2029,7 @@ static int lmv_fsync(struct obd_export *exp, const struct lu_fid *fid,
 static int lmv_get_min_striped_entry(struct obd_export *exp,
 				     struct md_op_data *op_data,
 				     struct md_callback *cb_op,
-				     __u64 hash_offset, int *stripe_offset,
+				     u64 hash_offset, int *stripe_offset,
 				     struct lu_dirent **entp,
 				     struct page **ppage)
 {
@@ -2046,7 +2046,7 @@ static int lmv_get_min_striped_entry(struct obd_export *exp,
 
 	stripe_count = lsm->lsm_md_stripe_count;
 	for (i = 0; i < stripe_count; i++) {
-		__u64 stripe_hash = hash_offset;
+		u64 stripe_hash = hash_offset;
 		struct lu_dirent *ent = NULL;
 		struct page *page = NULL;
 		struct lu_dirpage *dp;
@@ -2167,12 +2167,12 @@ static int lmv_get_min_striped_entry(struct obd_export *exp,
 static int lmv_read_striped_page(struct obd_export *exp,
 				 struct md_op_data *op_data,
 				 struct md_callback *cb_op,
-				 __u64 offset, struct page **ppage)
+				 u64 offset, struct page **ppage)
 {
 	struct inode *master_inode = op_data->op_data;
 	struct lu_fid master_fid = op_data->op_fid1;
-	__u64 hash_offset = offset;
-	__u32 ldp_flags;
+	u64 hash_offset = offset;
+	u32 ldp_flags;
 	struct page *min_ent_page = NULL;
 	struct page *ent_page = NULL;
 	struct lu_dirent *min_ent = NULL;
@@ -2203,7 +2203,7 @@ static int lmv_read_striped_page(struct obd_export *exp,
 	ent = area;
 	last_ent = ent;
 	do {
-		__u16 ent_size;
+		u16 ent_size;
 
 		/* Find the minimum entry from all sub-stripes */
 		rc = lmv_get_min_striped_entry(exp, op_data, cb_op, hash_offset,
@@ -2295,7 +2295,7 @@ static int lmv_read_striped_page(struct obd_export *exp,
 }
 
 static int lmv_read_page(struct obd_export *exp, struct md_op_data *op_data,
-			 struct md_callback *cb_op, __u64 offset,
+			 struct md_callback *cb_op, u64 offset,
 			 struct page **ppage)
 {
 	struct lmv_stripe_md *lsm = op_data->op_mea1;
@@ -2517,7 +2517,7 @@ static int lmv_precleanup(struct obd_device *obd)
  * \retval negative	negated errno on failure
  */
 static int lmv_get_info(const struct lu_env *env, struct obd_export *exp,
-			__u32 keylen, void *key, __u32 *vallen, void *val)
+			u32 keylen, void *key, u32 *vallen, void *val)
 {
 	struct obd_device       *obd;
 	struct lmv_obd	  *lmv;
@@ -2534,7 +2534,7 @@ static int lmv_get_info(const struct lu_env *env, struct obd_export *exp,
 	if (keylen >= strlen("remote_flag") && !strcmp(key, "remote_flag")) {
 		int i;
 
-		LASSERT(*vallen == sizeof(__u32));
+		LASSERT(*vallen == sizeof(u32));
 		for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
 			struct lmv_tgt_desc *tgt = lmv->tgts[i];
 
@@ -2780,7 +2780,7 @@ static int lmv_cancel_unused(struct obd_export *exp, const struct lu_fid *fid,
 
 static int lmv_set_lock_data(struct obd_export *exp,
 			     const struct lustre_handle *lockh,
-			     void *data, __u64 *bits)
+			     void *data, u64 *bits)
 {
 	struct lmv_obd	  *lmv = &exp->exp_obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
@@ -2791,7 +2791,7 @@ static int lmv_set_lock_data(struct obd_export *exp,
 	return md_set_lock_data(tgt->ltd_exp, lockh, data, bits);
 }
 
-static enum ldlm_mode lmv_lock_match(struct obd_export *exp, __u64 flags,
+static enum ldlm_mode lmv_lock_match(struct obd_export *exp, u64 flags,
 				     const struct lu_fid *fid,
 				     enum ldlm_type type,
 				     union ldlm_policy_data *policy,
@@ -2925,7 +2925,7 @@ static int lmv_intent_getattr_async(struct obd_export *exp,
 }
 
 static int lmv_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
-			       struct lu_fid *fid, __u64 *bits)
+			       struct lu_fid *fid, u64 *bits)
 {
 	struct obd_device       *obd = exp->exp_obd;
 	struct lmv_obd	  *lmv = &obd->u.lmv;
@@ -2967,7 +2967,7 @@ static int lmv_quotactl(struct obd_device *unused, struct obd_export *exp,
 	struct lmv_obd      *lmv = &obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
 	int rc = 0;
-	__u64 curspace = 0, curinodes = 0;
+	u64 curspace = 0, curinodes = 0;
 	u32 i;
 
 	if (!tgt || !tgt->ltd_exp || !tgt->ltd_active ||
diff --git a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
index 5d4c83b..d83b8de 100644
--- a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
@@ -98,9 +98,9 @@ struct lov_device {
 	struct cl_device	  ld_cl;
 	struct lov_obd	   *ld_lov;
 	/** size of lov_device::ld_target[] array */
-	__u32		     ld_target_nr;
+	u32		     ld_target_nr;
 	struct lovsub_device    **ld_target;
-	__u32		     ld_flags;
+	u32		     ld_flags;
 };
 
 /**
diff --git a/drivers/staging/lustre/lustre/lov/lov_dev.c b/drivers/staging/lustre/lustre/lov/lov_dev.c
index abf2ede..67d30fb 100644
--- a/drivers/staging/lustre/lustre/lov/lov_dev.c
+++ b/drivers/staging/lustre/lustre/lov/lov_dev.c
@@ -218,7 +218,7 @@ static struct lu_device *lov_device_free(const struct lu_env *env,
 }
 
 static void lov_cl_del_target(const struct lu_env *env, struct lu_device *dev,
-			      __u32 index)
+			      u32 index)
 {
 	struct lov_device *ld = lu2lov_dev(dev);
 
@@ -231,8 +231,8 @@ static void lov_cl_del_target(const struct lu_env *env, struct lu_device *dev,
 static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 {
 	int   result;
-	__u32 tgt_size;
-	__u32 sub_size;
+	u32 tgt_size;
+	u32 sub_size;
 
 	result = 0;
 	tgt_size = dev->ld_lov->lov_tgt_size;
@@ -257,7 +257,7 @@ static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 }
 
 static int lov_cl_add_target(const struct lu_env *env, struct lu_device *dev,
-			     __u32 index)
+			     u32 index)
 {
 	struct obd_device    *obd = dev->ld_obd;
 	struct lov_device    *ld  = lu2lov_dev(dev);
@@ -304,7 +304,7 @@ static int lov_process_config(const struct lu_env *env,
 	int cmd;
 	int rc;
 	int gen;
-	__u32 index;
+	u32 index;
 
 	lov_tgts_getref(obd);
 
diff --git a/drivers/staging/lustre/lustre/lov/lov_internal.h b/drivers/staging/lustre/lustre/lov/lov_internal.h
index 2b31c99..9708f1b 100644
--- a/drivers/staging/lustre/lustre/lov/lov_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_internal.h
@@ -241,7 +241,7 @@ struct lov_request_set {
 
 /* lov_merge.c */
 int lov_merge_lvb_kms(struct lov_stripe_md *lsm, int index,
-		      struct ost_lvb *lvb, __u64 *kms_place);
+		      struct ost_lvb *lvb, u64 *kms_place);
 
 /* lov_offset.c */
 u64 lov_stripe_size(struct lov_stripe_md *lsm, int index, u64 ost_size,
@@ -267,17 +267,17 @@ int lov_prep_statfs_set(struct obd_device *obd, struct obd_info *oinfo,
 void lov_stripe_lock(struct lov_stripe_md *md);
 void lov_stripe_unlock(struct lov_stripe_md *md);
 void lov_fix_desc(struct lov_desc *desc);
-void lov_fix_desc_stripe_size(__u64 *val);
-void lov_fix_desc_stripe_count(__u32 *val);
-void lov_fix_desc_pattern(__u32 *val);
-void lov_fix_desc_qos_maxage(__u32 *val);
+void lov_fix_desc_stripe_size(u64 *val);
+void lov_fix_desc_stripe_count(u32 *val);
+void lov_fix_desc_pattern(u32 *val);
+void lov_fix_desc_qos_maxage(u32 *val);
 u16 lov_get_stripe_count(struct lov_obd *lov, u32 magic, u16 stripe_count);
-int lov_connect_obd(struct obd_device *obd, __u32 index, int activate,
+int lov_connect_obd(struct obd_device *obd, u32 index, int activate,
 		    struct obd_connect_data *data);
 int lov_setup(struct obd_device *obd, struct lustre_cfg *lcfg);
 int lov_process_config_base(struct obd_device *obd, struct lustre_cfg *lcfg,
-			    __u32 *indexp, int *genp);
-int lov_del_target(struct obd_device *obd, __u32 index,
+			    u32 *indexp, int *genp);
+int lov_del_target(struct obd_device *obd, u32 index,
 		   struct obd_uuid *uuidp, int gen);
 
 /* lov_pack.c */
@@ -303,8 +303,8 @@ struct lov_stripe_md *lov_unpackmd(struct lov_obd *lov, void *buf,
 /* ost_pool methods */
 int lov_ost_pool_init(struct ost_pool *op, unsigned int count);
 int lov_ost_pool_extend(struct ost_pool *op, unsigned int min_count);
-int lov_ost_pool_add(struct ost_pool *op, __u32 idx, unsigned int min_count);
-int lov_ost_pool_remove(struct ost_pool *op, __u32 idx);
+int lov_ost_pool_add(struct ost_pool *op, u32 idx, unsigned int min_count);
+int lov_ost_pool_remove(struct ost_pool *op, u32 idx);
 int lov_ost_pool_free(struct ost_pool *op);
 
 /* high level pool methods */
diff --git a/drivers/staging/lustre/lustre/lov/lov_merge.c b/drivers/staging/lustre/lustre/lov/lov_merge.c
index 79edc26..ab0ba12 100644
--- a/drivers/staging/lustre/lustre/lov/lov_merge.c
+++ b/drivers/staging/lustre/lustre/lov/lov_merge.c
@@ -42,12 +42,12 @@
  * uptodate time on the local client.
  */
 int lov_merge_lvb_kms(struct lov_stripe_md *lsm, int index,
-		      struct ost_lvb *lvb, __u64 *kms_place)
+		      struct ost_lvb *lvb, u64 *kms_place)
 {
 	struct lov_stripe_md_entry *lse = lsm->lsm_entries[index];
-	__u64 size = 0;
-	__u64 kms = 0;
-	__u64 blocks = 0;
+	u64 size = 0;
+	u64 kms = 0;
+	u64 blocks = 0;
 	s64 current_mtime = lvb->lvb_mtime;
 	s64 current_atime = lvb->lvb_atime;
 	s64 current_ctime = lvb->lvb_ctime;
diff --git a/drivers/staging/lustre/lustre/lov/lov_obd.c b/drivers/staging/lustre/lustre/lov/lov_obd.c
index 6959b91..109dd69 100644
--- a/drivers/staging/lustre/lustre/lov/lov_obd.c
+++ b/drivers/staging/lustre/lustre/lov/lov_obd.c
@@ -123,7 +123,7 @@ static int lov_set_osc_active(struct obd_device *obd, struct obd_uuid *uuid,
 static int lov_notify(struct obd_device *obd, struct obd_device *watched,
 		      enum obd_notify_event ev);
 
-int lov_connect_obd(struct obd_device *obd, __u32 index, int activate,
+int lov_connect_obd(struct obd_device *obd, u32 index, int activate,
 		    struct obd_connect_data *data)
 {
 	struct lov_obd *lov = &obd->u.lov;
@@ -477,7 +477,7 @@ static int lov_notify(struct obd_device *obd, struct obd_device *watched,
 }
 
 static int lov_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
-			  __u32 index, int gen, int active)
+			  u32 index, int gen, int active)
 {
 	struct lov_obd *lov = &obd->u.lov;
 	struct lov_tgt_desc *tgt;
@@ -511,9 +511,9 @@ static int lov_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
 	if (index >= lov->lov_tgt_size) {
 		/* We need to reallocate the lov target array. */
 		struct lov_tgt_desc **newtgts, **old = NULL;
-		__u32 newsize, oldsize = 0;
+		u32 newsize, oldsize = 0;
 
-		newsize = max_t(__u32, lov->lov_tgt_size, 2);
+		newsize = max_t(u32, lov->lov_tgt_size, 2);
 		while (newsize < index + 1)
 			newsize <<= 1;
 		newtgts = kcalloc(newsize, sizeof(*newtgts), GFP_NOFS);
@@ -609,7 +609,7 @@ static int lov_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
 }
 
 /* Schedule a target for deletion */
-int lov_del_target(struct obd_device *obd, __u32 index,
+int lov_del_target(struct obd_device *obd, u32 index,
 		   struct obd_uuid *uuidp, int gen)
 {
 	struct lov_obd *lov = &obd->u.lov;
@@ -681,7 +681,7 @@ static void __lov_del_obd(struct obd_device *obd, struct lov_tgt_desc *tgt)
 		class_manual_cleanup(osc_obd);
 }
 
-void lov_fix_desc_stripe_size(__u64 *val)
+void lov_fix_desc_stripe_size(u64 *val)
 {
 	if (*val < LOV_MIN_STRIPE_SIZE) {
 		if (*val != 0)
@@ -695,13 +695,13 @@ void lov_fix_desc_stripe_size(__u64 *val)
 	}
 }
 
-void lov_fix_desc_stripe_count(__u32 *val)
+void lov_fix_desc_stripe_count(u32 *val)
 {
 	if (*val == 0)
 		*val = 1;
 }
 
-void lov_fix_desc_pattern(__u32 *val)
+void lov_fix_desc_pattern(u32 *val)
 {
 	/* from lov_setstripe */
 	if ((*val != 0) && (*val != LOV_PATTERN_RAID0)) {
@@ -710,7 +710,7 @@ void lov_fix_desc_pattern(__u32 *val)
 	}
 }
 
-void lov_fix_desc_qos_maxage(__u32 *val)
+void lov_fix_desc_qos_maxage(u32 *val)
 {
 	if (*val == 0)
 		*val = LOV_DESC_QOS_MAXAGE_DEFAULT;
@@ -843,7 +843,7 @@ static int lov_cleanup(struct obd_device *obd)
 }
 
 int lov_process_config_base(struct obd_device *obd, struct lustre_cfg *lcfg,
-			    __u32 *indexp, int *genp)
+			    u32 *indexp, int *genp)
 {
 	struct obd_uuid obd_uuid;
 	int cmd;
@@ -853,7 +853,7 @@ int lov_process_config_base(struct obd_device *obd, struct lustre_cfg *lcfg,
 	case LCFG_LOV_ADD_OBD:
 	case LCFG_LOV_ADD_INA:
 	case LCFG_LOV_DEL_OBD: {
-		__u32 index;
+		u32 index;
 		int gen;
 		/* lov_modify_tgts add  0:lov_mdsA  1:ost1_UUID  2:0  3:1 */
 		if (LUSTRE_CFG_BUFLEN(lcfg, 1) > sizeof(obd_uuid.uuid)) {
@@ -923,7 +923,7 @@ int lov_process_config_base(struct obd_device *obd, struct lustre_cfg *lcfg,
 }
 
 static int lov_statfs_async(struct obd_export *exp, struct obd_info *oinfo,
-			    __u64 max_age, struct ptlrpc_request_set *rqset)
+			    u64 max_age, struct ptlrpc_request_set *rqset)
 {
 	struct obd_device      *obd = class_exp2obd(exp);
 	struct lov_request_set *set;
@@ -961,7 +961,7 @@ static int lov_statfs_async(struct obd_export *exp, struct obd_info *oinfo,
 }
 
 static int lov_statfs(const struct lu_env *env, struct obd_export *exp,
-		      struct obd_statfs *osfs, __u64 max_age, __u32 flags)
+		      struct obd_statfs *osfs, u64 max_age, u32 flags)
 {
 	struct ptlrpc_request_set *set = NULL;
 	struct obd_info oinfo = {
@@ -998,10 +998,10 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 		struct obd_ioctl_data *data = karg;
 		struct obd_device *osc_obd;
 		struct obd_statfs stat_buf = {0};
-		__u32 index;
-		__u32 flags;
+		u32 index;
+		u32 flags;
 
-		memcpy(&index, data->ioc_inlbuf2, sizeof(__u32));
+		memcpy(&index, data->ioc_inlbuf2, sizeof(u32));
 		if (index >= count)
 			return -ENODEV;
 
@@ -1021,7 +1021,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 				       sizeof(struct obd_uuid))))
 			return -EFAULT;
 
-		memcpy(&flags, data->ioc_inlbuf1, sizeof(__u32));
+		memcpy(&flags, data->ioc_inlbuf1, sizeof(u32));
 		flags = flags & LL_STATFS_NODELAY ? OBD_STATFS_NODELAY : 0;
 
 		/* got statfs data */
@@ -1040,7 +1040,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 		struct obd_ioctl_data *data;
 		struct lov_desc *desc;
 		char *buf = NULL;
-		__u32 *genp;
+		u32 *genp;
 
 		len = 0;
 		if (obd_ioctl_getdata(&buf, &len, uarg))
@@ -1058,7 +1058,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 			return -EINVAL;
 		}
 
-		if (sizeof(__u32) * count > data->ioc_inllen3) {
+		if (sizeof(u32) * count > data->ioc_inllen3) {
 			kvfree(buf);
 			return -EINVAL;
 		}
@@ -1067,7 +1067,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 		memcpy(desc, &lov->desc, sizeof(*desc));
 
 		uuidp = (struct obd_uuid *)data->ioc_inlbuf2;
-		genp = (__u32 *)data->ioc_inlbuf3;
+		genp = (u32 *)data->ioc_inlbuf3;
 		/* the uuid will be empty for deleted OSTs */
 		for (i = 0; i < count; i++, uuidp++, genp++) {
 			if (!lov->lov_tgts[i])
@@ -1172,7 +1172,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 }
 
 static int lov_get_info(const struct lu_env *env, struct obd_export *exp,
-			__u32 keylen, void *key, __u32 *vallen, void *val)
+			u32 keylen, void *key, u32 *vallen, void *val)
 {
 	struct obd_device *obddev = class_exp2obd(exp);
 	struct lov_obd *lov = &obddev->u.lov;
@@ -1283,8 +1283,8 @@ static int lov_quotactl(struct obd_device *obd, struct obd_export *exp,
 {
 	struct lov_obd      *lov = &obd->u.lov;
 	struct lov_tgt_desc *tgt;
-	__u64		curspace = 0;
-	__u64		bhardlimit = 0;
+	u64		curspace = 0;
+	u64		bhardlimit = 0;
 	int		  i, rc = 0;
 
 	if (oqctl->qc_cmd != Q_GETOQUOTA &&
diff --git a/drivers/staging/lustre/lustre/lov/lov_pack.c b/drivers/staging/lustre/lustre/lov/lov_pack.c
index 0a6bb1e..fde5160 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pack.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pack.c
@@ -252,7 +252,7 @@ ssize_t lov_lsm_pack(const struct lov_stripe_md *lsm, void *buf,
 /* Find the max stripecount we should use */
 u16 lov_get_stripe_count(struct lov_obd *lov, u32 magic, u16 stripe_count)
 {
-	__u32 max_stripes = LOV_MAX_STRIPE_COUNT_OLD;
+	u32 max_stripes = LOV_MAX_STRIPE_COUNT_OLD;
 
 	if (!stripe_count)
 		stripe_count = lov->desc.ld_default_stripe_count;
diff --git a/drivers/staging/lustre/lustre/lov/lov_pool.c b/drivers/staging/lustre/lustre/lov/lov_pool.c
index b90fb1c..177f5a5 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pool.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pool.c
@@ -273,7 +273,7 @@ int lov_ost_pool_extend(struct ost_pool *op, unsigned int min_count)
 	return 0;
 }
 
-int lov_ost_pool_add(struct ost_pool *op, __u32 idx, unsigned int min_count)
+int lov_ost_pool_add(struct ost_pool *op, u32 idx, unsigned int min_count)
 {
 	int rc = 0, i;
 
@@ -298,7 +298,7 @@ int lov_ost_pool_add(struct ost_pool *op, __u32 idx, unsigned int min_count)
 	return rc;
 }
 
-int lov_ost_pool_remove(struct ost_pool *op, __u32 idx)
+int lov_ost_pool_remove(struct ost_pool *op, u32 idx)
 {
 	int i;
 
diff --git a/drivers/staging/lustre/lustre/lov/lov_request.c b/drivers/staging/lustre/lustre/lov/lov_request.c
index d13e8d1..45dca36 100644
--- a/drivers/staging/lustre/lustre/lov/lov_request.c
+++ b/drivers/staging/lustre/lustre/lov/lov_request.c
@@ -136,7 +136,7 @@ static int lov_check_and_wait_active(struct lov_obd *lov, int ost_idx)
 	return rc;
 }
 
-#define LOV_U64_MAX ((__u64)~0ULL)
+#define LOV_U64_MAX ((u64)~0ULL)
 #define LOV_SUM_MAX(tot, add)					   \
 	do {							    \
 		if ((tot) + (add) < (tot))			      \
@@ -188,7 +188,7 @@ static void lov_update_statfs(struct obd_statfs *osfs,
 			      int success)
 {
 	int shift = 0, quit = 0;
-	__u64 tmp;
+	u64 tmp;
 
 	if (success == 0) {
 		memcpy(osfs, lov_sfs, sizeof(*lov_sfs));
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_internal.h b/drivers/staging/lustre/lustre/mdc/mdc_internal.h
index 6da9046..2b849e8 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_internal.h
+++ b/drivers/staging/lustre/lustre/mdc/mdc_internal.h
@@ -39,20 +39,20 @@
 int mdc_tunables_init(struct obd_device *obd);
 
 void mdc_pack_body(struct ptlrpc_request *req, const struct lu_fid *fid,
-		   __u64 valid, size_t ea_size, __u32 suppgid, u32 flags);
+		   u64 valid, size_t ea_size, u32 suppgid, u32 flags);
 void mdc_swap_layouts_pack(struct ptlrpc_request *req,
 			   struct md_op_data *op_data);
-void mdc_readdir_pack(struct ptlrpc_request *req, __u64 pgoff, size_t size,
+void mdc_readdir_pack(struct ptlrpc_request *req, u64 pgoff, size_t size,
 		      const struct lu_fid *fid);
-void mdc_getattr_pack(struct ptlrpc_request *req, __u64 valid, u32 flags,
+void mdc_getattr_pack(struct ptlrpc_request *req, u64 valid, u32 flags,
 		      struct md_op_data *data, size_t ea_size);
 void mdc_setattr_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 		      void *ea, size_t ealen);
 void mdc_create_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 		     const void *data, size_t datalen, umode_t mode, uid_t uid,
-		     gid_t gid, kernel_cap_t capability, __u64 rdev);
+		     gid_t gid, kernel_cap_t capability, u64 rdev);
 void mdc_open_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
-		   umode_t mode, __u64 rdev, __u64 flags, const void *data,
+		   umode_t mode, u64 rdev, u64 flags, const void *data,
 		   size_t datalen);
 void mdc_file_secctx_pack(struct ptlrpc_request *req,
 			  const char *secctx_name,
@@ -68,7 +68,7 @@ void mdc_rename_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 /* mdc/mdc_locks.c */
 int mdc_set_lock_data(struct obd_export *exp,
 		      const struct lustre_handle *lockh,
-		      void *data, __u64 *bits);
+		      void *data, u64 *bits);
 
 int mdc_null_inode(struct obd_export *exp, const struct lu_fid *fid);
 
@@ -77,7 +77,7 @@ int mdc_intent_lock(struct obd_export *exp,
 		    struct lookup_intent *it,
 		    struct ptlrpc_request **reqp,
 		    ldlm_blocking_callback cb_blocking,
-		    __u64 extra_lock_flags);
+		    u64 extra_lock_flags);
 
 int mdc_enqueue(struct obd_export *exp, struct ldlm_enqueue_info *einfo,
 		const union ldlm_policy_data *policy,
@@ -86,7 +86,7 @@ int mdc_enqueue(struct obd_export *exp, struct ldlm_enqueue_info *einfo,
 
 int mdc_resource_get_unused(struct obd_export *exp, const struct lu_fid *fid,
 			    struct list_head *cancels, enum ldlm_mode  mode,
-			    __u64 bits);
+			    u64 bits);
 /* mdc/mdc_request.c */
 int mdc_fid_alloc(const struct lu_env *env, struct obd_export *exp,
 		  struct lu_fid *fid, struct md_op_data *op_data);
@@ -101,7 +101,7 @@ int mdc_set_open_replay_data(struct obd_export *exp,
 
 int mdc_create(struct obd_export *exp, struct md_op_data *op_data,
 	       const void *data, size_t datalen, umode_t mode, uid_t uid,
-	       gid_t gid, kernel_cap_t capability, __u64 rdev,
+	       gid_t gid, kernel_cap_t capability, u64 rdev,
 	       struct ptlrpc_request **request);
 int mdc_link(struct obd_export *exp, struct md_op_data *op_data,
 	     struct ptlrpc_request **request);
@@ -118,12 +118,12 @@ int mdc_cancel_unused(struct obd_export *exp, const struct lu_fid *fid,
 		      enum ldlm_cancel_flags flags, void *opaque);
 
 int mdc_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
-			struct lu_fid *fid, __u64 *bits);
+			struct lu_fid *fid, u64 *bits);
 
 int mdc_intent_getattr_async(struct obd_export *exp,
 			     struct md_enqueue_info *minfo);
 
-enum ldlm_mode mdc_lock_match(struct obd_export *exp, __u64 flags,
+enum ldlm_mode mdc_lock_match(struct obd_export *exp, u64 flags,
 			      const struct lu_fid *fid, enum ldlm_type type,
 			      union ldlm_policy_data *policy,
 			      enum ldlm_mode mode,
@@ -141,7 +141,7 @@ static inline int mdc_prep_elc_req(struct obd_export *exp,
 				 count);
 }
 
-static inline unsigned long hash_x_index(__u64 hash, int hash64)
+static inline unsigned long hash_x_index(u64 hash, int hash64)
 {
 	if (BITS_PER_LONG == 32 && hash64)
 		hash >>= 32;
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_lib.c b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
index a1b1e75..3dfc863 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_lib.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
@@ -42,7 +42,7 @@ static void set_mrc_cr_flags(struct mdt_rec_create *mrc, u64 flags)
 	mrc->cr_flags_h = (u32)(flags >> 32);
 }
 
-static void __mdc_pack_body(struct mdt_body *b, __u32 suppgid)
+static void __mdc_pack_body(struct mdt_body *b, u32 suppgid)
 {
 	b->mbo_suppgid = suppgid;
 	b->mbo_uid = from_kuid(&init_user_ns, current_uid());
@@ -65,7 +65,7 @@ void mdc_swap_layouts_pack(struct ptlrpc_request *req,
 }
 
 void mdc_pack_body(struct ptlrpc_request *req, const struct lu_fid *fid,
-		   __u64 valid, size_t ea_size, __u32 suppgid, u32 flags)
+		   u64 valid, size_t ea_size, u32 suppgid, u32 flags)
 {
 	struct mdt_body *b = req_capsule_client_get(&req->rq_pill,
 						    &RMF_MDT_BODY);
@@ -134,7 +134,7 @@ void mdc_file_secctx_pack(struct ptlrpc_request *req, const char *secctx_name,
 	memcpy(buf, secctx, buf_size);
 }
 
-void mdc_readdir_pack(struct ptlrpc_request *req, __u64 pgoff, size_t size,
+void mdc_readdir_pack(struct ptlrpc_request *req, u64 pgoff, size_t size,
 		      const struct lu_fid *fid)
 {
 	struct mdt_body *b = req_capsule_client_get(&req->rq_pill,
@@ -151,11 +151,11 @@ void mdc_readdir_pack(struct ptlrpc_request *req, __u64 pgoff, size_t size,
 void mdc_create_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 		     const void *data, size_t datalen, umode_t mode,
 		     uid_t uid, gid_t gid, kernel_cap_t cap_effective,
-		     __u64 rdev)
+		     u64 rdev)
 {
 	struct mdt_rec_create	*rec;
 	char			*tmp;
-	__u64			 flags;
+	u64			 flags;
 
 	BUILD_BUG_ON(sizeof(struct mdt_rec_reint) != sizeof(struct mdt_rec_create));
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
@@ -189,9 +189,9 @@ void mdc_create_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 			     op_data->op_file_secctx_size);
 }
 
-static inline __u64 mds_pack_open_flags(__u64 flags)
+static inline u64 mds_pack_open_flags(u64 flags)
 {
-	__u64 cr_flags = (flags & (FMODE_READ | FMODE_WRITE |
+	u64 cr_flags = (flags & (FMODE_READ | FMODE_WRITE |
 				   MDS_OPEN_FL_INTERNAL));
 	if (flags & O_CREAT)
 		cr_flags |= MDS_OPEN_CREAT;
@@ -218,12 +218,12 @@ static inline __u64 mds_pack_open_flags(__u64 flags)
 
 /* packing of MDS records */
 void mdc_open_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
-		   umode_t mode, __u64 rdev, __u64 flags, const void *lmm,
+		   umode_t mode, u64 rdev, u64 flags, const void *lmm,
 		   size_t lmmlen)
 {
 	struct mdt_rec_create *rec;
 	char *tmp;
-	__u64 cr_flags;
+	u64 cr_flags;
 
 	BUILD_BUG_ON(sizeof(struct mdt_rec_reint) != sizeof(struct mdt_rec_create));
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
@@ -268,7 +268,7 @@ void mdc_open_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 
 static inline u64 attr_pack(unsigned int ia_valid, enum op_xvalid ia_xvalid)
 {
-	__u64 sa_valid = 0;
+	u64 sa_valid = 0;
 
 	if (ia_valid & ATTR_MODE)
 		sa_valid |= MDS_ATTR_MODE;
@@ -483,7 +483,7 @@ void mdc_rename_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 	}
 }
 
-void mdc_getattr_pack(struct ptlrpc_request *req, __u64 valid, u32 flags,
+void mdc_getattr_pack(struct ptlrpc_request *req, u64 valid, u32 flags,
 		      struct md_op_data *op_data, size_t ea_size)
 {
 	struct mdt_body *b = req_capsule_client_get(&req->rq_pill,
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_locks.c b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
index a60959d..e16dce6 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_locks.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
@@ -96,7 +96,7 @@ int it_open_error(int phase, struct lookup_intent *it)
 
 /* this must be called on a lockh that is known to have a referenced lock */
 int mdc_set_lock_data(struct obd_export *exp, const struct lustre_handle *lockh,
-		      void *data, __u64 *bits)
+		      void *data, u64 *bits)
 {
 	struct ldlm_lock *lock;
 	struct inode *new_inode = data;
@@ -131,7 +131,7 @@ int mdc_set_lock_data(struct obd_export *exp, const struct lustre_handle *lockh,
 	return 0;
 }
 
-enum ldlm_mode mdc_lock_match(struct obd_export *exp, __u64 flags,
+enum ldlm_mode mdc_lock_match(struct obd_export *exp, u64 flags,
 			      const struct lu_fid *fid, enum ldlm_type type,
 			      union ldlm_policy_data *policy,
 			      enum ldlm_mode mode,
@@ -319,7 +319,7 @@ static int mdc_save_lovea(struct ptlrpc_request *req,
 
 	/* pack the intent */
 	lit = req_capsule_client_get(&req->rq_pill, &RMF_LDLM_INTENT);
-	lit->opc = (__u64)it->it_op;
+	lit->opc = (u64)it->it_op;
 
 	/* pack the intended request */
 	mdc_open_pack(req, op_data, it->it_create_mode, 0, it->it_flags, lmm,
@@ -423,7 +423,7 @@ static struct ptlrpc_request *mdc_intent_unlink_pack(struct obd_export *exp,
 
 	/* pack the intent */
 	lit = req_capsule_client_get(&req->rq_pill, &RMF_LDLM_INTENT);
-	lit->opc = (__u64)it->it_op;
+	lit->opc = (u64)it->it_op;
 
 	/* pack the intended request */
 	mdc_unlink_pack(req, op_data);
@@ -463,7 +463,7 @@ static struct ptlrpc_request *mdc_intent_getattr_pack(struct obd_export *exp,
 
 	/* pack the intent */
 	lit = req_capsule_client_get(&req->rq_pill, &RMF_LDLM_INTENT);
-	lit->opc = (__u64)it->it_op;
+	lit->opc = (u64)it->it_op;
 
 	if (obddev->u.cli.cl_default_mds_easize > 0)
 		easize = obddev->u.cli.cl_default_mds_easize;
@@ -504,7 +504,7 @@ static struct ptlrpc_request *mdc_intent_layout_pack(struct obd_export *exp,
 
 	/* pack the intent */
 	lit = req_capsule_client_get(&req->rq_pill, &RMF_LDLM_INTENT);
-	lit->opc = (__u64)it->it_op;
+	lit->opc = (u64)it->it_op;
 
 	/* pack the layout intent request */
 	layout = req_capsule_client_get(&req->rq_pill, &RMF_LAYOUT_INTENT);
@@ -1031,7 +1031,7 @@ static int mdc_finish_intent_lock(struct obd_export *exp,
 }
 
 int mdc_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
-			struct lu_fid *fid, __u64 *bits)
+			struct lu_fid *fid, u64 *bits)
 {
 	/* We could just return 1 immediately, but since we should only
 	 * be called in revalidate_it if we already have a lock, let's
@@ -1126,7 +1126,7 @@ int mdc_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
  */
 int mdc_intent_lock(struct obd_export *exp, struct md_op_data *op_data,
 		    struct lookup_intent *it, struct ptlrpc_request **reqp,
-		    ldlm_blocking_callback cb_blocking, __u64 extra_lock_flags)
+		    ldlm_blocking_callback cb_blocking, u64 extra_lock_flags)
 {
 	struct ldlm_enqueue_info einfo = {
 		.ei_type	= LDLM_IBITS,
@@ -1192,7 +1192,7 @@ static int mdc_intent_getattr_async_interpret(const struct lu_env *env,
 	struct lustre_handle     *lockh;
 	struct obd_device	*obddev;
 	struct ldlm_reply	 *lockrep;
-	__u64		     flags = LDLM_FL_HAS_INTENT;
+	u64		     flags = LDLM_FL_HAS_INTENT;
 
 	it    = &minfo->mi_it;
 	lockh = &minfo->mi_lockh;
@@ -1240,7 +1240,7 @@ int mdc_intent_getattr_async(struct obd_export *exp,
 		.l_inodebits = { MDS_INODELOCK_LOOKUP | MDS_INODELOCK_UPDATE }
 	};
 	int		      rc = 0;
-	__u64		    flags = LDLM_FL_HAS_INTENT;
+	u64		    flags = LDLM_FL_HAS_INTENT;
 
 	CDEBUG(D_DLMTRACE,
 	       "name: %.*s in inode " DFID ", intent: %s flags %#Lo\n",
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_reint.c b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
index bdffe6d..765c908 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_reint.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
@@ -64,7 +64,7 @@ static int mdc_reint(struct ptlrpc_request *request, int level)
  */
 int mdc_resource_get_unused(struct obd_export *exp, const struct lu_fid *fid,
 			    struct list_head *cancels, enum ldlm_mode mode,
-			    __u64 bits)
+			    u64 bits)
 {
 	struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
 	union ldlm_policy_data policy = {};
@@ -103,7 +103,7 @@ int mdc_setattr(struct obd_export *exp, struct md_op_data *op_data,
 	LIST_HEAD(cancels);
 	struct ptlrpc_request *req;
 	int count = 0, rc;
-	__u64 bits;
+	u64 bits;
 
 	bits = MDS_INODELOCK_UPDATE;
 	if (op_data->op_attr.ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID))
@@ -151,7 +151,7 @@ int mdc_setattr(struct obd_export *exp, struct md_op_data *op_data,
 int mdc_create(struct obd_export *exp, struct md_op_data *op_data,
 	       const void *data, size_t datalen, umode_t mode,
 	       uid_t uid, gid_t gid, kernel_cap_t cap_effective,
-	       __u64 rdev, struct ptlrpc_request **request)
+	       u64 rdev, struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
 	int level, rc;
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 09b30ef..1aee1c5 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -278,7 +278,7 @@ static int mdc_xattr_common(struct obd_export *exp,
 			    int opcode, u64 valid,
 			    const char *xattr_name, const char *input,
 			    int input_size, int output_size, int flags,
-			    __u32 suppgid, struct ptlrpc_request **request)
+			    u32 suppgid, struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
 	int   xattr_namelen = 0;
@@ -594,7 +594,7 @@ void mdc_replay_open(struct ptlrpc_request *req)
 
 	close_req = mod->mod_close_req;
 	if (close_req) {
-		__u32 opc = lustre_msg_get_opc(close_req->rq_reqmsg);
+		u32 opc = lustre_msg_get_opc(close_req->rq_reqmsg);
 		struct mdt_ioepoch *epoch;
 
 		LASSERT(opc == MDS_CLOSE);
@@ -977,8 +977,8 @@ static void mdc_release_page(struct page *page, int remove)
 	put_page(page);
 }
 
-static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
-				    __u64 *start, __u64 *end, int hash64)
+static struct page *mdc_page_locate(struct address_space *mapping, u64 *hash,
+				    u64 *start, u64 *end, int hash64)
 {
 	/*
 	 * Complement of hash is used as an index so that
@@ -1107,8 +1107,8 @@ static void mdc_adjust_dirpages(struct page **pages, int cfs_pgs, int lu_pgs)
 
 	for (i = 0; i < cfs_pgs; i++) {
 		struct lu_dirpage *dp = kmap(pages[i]);
-		__u64 hash_end = le64_to_cpu(dp->ldp_hash_end);
-		__u32 flags = le32_to_cpu(dp->ldp_flags);
+		u64 hash_end = le64_to_cpu(dp->ldp_hash_end);
+		u32 flags = le32_to_cpu(dp->ldp_flags);
 		struct lu_dirpage *first = dp;
 
 		while (--lu_pgs > 0) {
@@ -1159,7 +1159,7 @@ static void mdc_adjust_dirpages(struct page **pages, int cfs_pgs, int lu_pgs)
 /* parameters for readdir page */
 struct readpage_param {
 	struct md_op_data	*rp_mod;
-	__u64			rp_off;
+	u64			rp_off;
 	int			rp_hash64;
 	struct obd_export	*rp_exp;
 	struct md_callback	*rp_cb;
@@ -1234,7 +1234,7 @@ static int mdc_read_page_remote(void *data, struct page *page0)
 	CDEBUG(D_CACHE, "read %d/%d pages\n", rd_pgs, npages);
 	for (i = 1; i < npages; i++) {
 		unsigned long offset;
-		__u64 hash;
+		u64 hash;
 		int ret;
 
 		page = page_pool[i];
@@ -1285,7 +1285,7 @@ static int mdc_read_page_remote(void *data, struct page *page0)
  *			errno(<0) get the page failed
  */
 static int mdc_read_page(struct obd_export *exp, struct md_op_data *op_data,
-			 struct md_callback *cb_op, __u64 hash_offset,
+			 struct md_callback *cb_op, u64 hash_offset,
 			 struct page **ppage)
 {
 	struct lookup_intent it = { .it_op = IT_READDIR };
@@ -1293,8 +1293,8 @@ static int mdc_read_page(struct obd_export *exp, struct md_op_data *op_data,
 	struct inode *dir = op_data->op_data;
 	struct address_space *mapping;
 	struct lu_dirpage *dp;
-	__u64 start = 0;
-	__u64 end = 0;
+	u64 start = 0;
+	u64 end = 0;
 	struct lustre_handle lockh;
 	struct ptlrpc_request *enq_req = NULL;
 	struct readpage_param rp_param;
@@ -1418,7 +1418,7 @@ static int mdc_read_page(struct obd_export *exp, struct md_op_data *op_data,
 
 static int mdc_statfs(const struct lu_env *env,
 		      struct obd_export *exp, struct obd_statfs *osfs,
-		      __u64 max_age, __u32 flags)
+		      u64 max_age, u32 flags)
 {
 	struct obd_device     *obd = class_exp2obd(exp);
 	struct ptlrpc_request *req;
@@ -1476,7 +1476,7 @@ static int mdc_statfs(const struct lu_env *env,
 
 static int mdc_ioc_fid2path(struct obd_export *exp, struct getinfo_fid2path *gf)
 {
-	__u32 keylen, vallen;
+	u32 keylen, vallen;
 	void *key;
 	int rc;
 
@@ -1567,9 +1567,9 @@ static int mdc_ioc_hsm_progress(struct obd_export *exp,
 	return rc;
 }
 
-static int mdc_ioc_hsm_ct_register(struct obd_import *imp, __u32 archives)
+static int mdc_ioc_hsm_ct_register(struct obd_import *imp, u32 archives)
 {
-	__u32			*archive_mask;
+	u32			*archive_mask;
 	struct ptlrpc_request	*req;
 	int			 rc;
 
@@ -1967,7 +1967,7 @@ static int mdc_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 	case IOC_OBD_STATFS: {
 		struct obd_statfs stat_buf = {0};
 
-		if (*((__u32 *)data->ioc_inlbuf2) != 0) {
+		if (*((u32 *)data->ioc_inlbuf2) != 0) {
 			rc = -ENODEV;
 			goto out;
 		}
@@ -2056,7 +2056,7 @@ static int mdc_get_info_rpc(struct obd_export *exp,
 	req_capsule_set_size(&req->rq_pill, &RMF_GETINFO_KEY,
 			     RCL_CLIENT, keylen);
 	req_capsule_set_size(&req->rq_pill, &RMF_GETINFO_VALLEN,
-			     RCL_CLIENT, sizeof(__u32));
+			     RCL_CLIENT, sizeof(u32));
 
 	rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_GET_INFO);
 	if (rc) {
@@ -2067,7 +2067,7 @@ static int mdc_get_info_rpc(struct obd_export *exp,
 	tmp = req_capsule_client_get(&req->rq_pill, &RMF_GETINFO_KEY);
 	memcpy(tmp, key, keylen);
 	tmp = req_capsule_client_get(&req->rq_pill, &RMF_GETINFO_VALLEN);
-	memcpy(tmp, &vallen, sizeof(__u32));
+	memcpy(tmp, &vallen, sizeof(u32));
 
 	req_capsule_set_size(&req->rq_pill, &RMF_GETINFO_VAL,
 			     RCL_SERVER, vallen);
@@ -2119,7 +2119,7 @@ static void lustre_swab_hal(struct hsm_action_list *h)
 static void lustre_swab_kuch(struct kuc_hdr *l)
 {
 	__swab16s(&l->kuc_magic);
-	/* __u8 l->kuc_transport */
+	/* u8 l->kuc_transport */
 	__swab16s(&l->kuc_msgtype);
 	__swab16s(&l->kuc_msglen);
 }
@@ -2128,7 +2128,7 @@ static int mdc_ioc_hsm_ct_start(struct obd_export *exp,
 				struct lustre_kernelcomm *lk)
 {
 	struct obd_import  *imp = class_exp2cliimp(exp);
-	__u32		    archive = lk->lk_data;
+	u32		    archive = lk->lk_data;
 	int		    rc = 0;
 
 	if (lk->lk_group != KUC_GRP_HSM) {
@@ -2264,7 +2264,7 @@ static int mdc_set_info_async(const struct lu_env *env,
 }
 
 static int mdc_get_info(const struct lu_env *env, struct obd_export *exp,
-			__u32 keylen, void *key, __u32 *vallen, void *val)
+			u32 keylen, void *key, u32 *vallen, void *val)
 {
 	int rc = -EINVAL;
 
diff --git a/drivers/staging/lustre/lustre/mgc/mgc_request.c b/drivers/staging/lustre/lustre/mgc/mgc_request.c
index ca74c75..dc80081 100644
--- a/drivers/staging/lustre/lustre/mgc/mgc_request.c
+++ b/drivers/staging/lustre/lustre/mgc/mgc_request.c
@@ -53,7 +53,7 @@
 static int mgc_name2resid(char *name, int len, struct ldlm_res_id *res_id,
 			  int type)
 {
-	__u64 resname = 0;
+	u64 resname = 0;
 
 	if (len > sizeof(resname)) {
 		CERROR("name too long: %s\n", name);
@@ -883,10 +883,10 @@ static int mgc_set_mgs_param(struct obd_export *exp,
 }
 
 /* Take a config lock so we can get cancel notifications */
-static int mgc_enqueue(struct obd_export *exp, __u32 type,
-		       union ldlm_policy_data *policy, __u32 mode,
-		       __u64 *flags, void *bl_cb, void *cp_cb, void *gl_cb,
-		       void *data, __u32 lvb_len, void *lvb_swabber,
+static int mgc_enqueue(struct obd_export *exp, u32 type,
+		       union ldlm_policy_data *policy, u32 mode,
+		       u64 *flags, void *bl_cb, void *cp_cb, void *gl_cb,
+		       void *data, u32 lvb_len, void *lvb_swabber,
 		       struct lustre_handle *lockh)
 {
 	struct config_llog_data *cld = data;
@@ -1055,7 +1055,7 @@ static int mgc_set_info_async(const struct lu_env *env, struct obd_export *exp,
 }
 
 static int mgc_get_info(const struct lu_env *env, struct obd_export *exp,
-			__u32 keylen, void *key, __u32 *vallen, void *val)
+			u32 keylen, void *key, u32 *vallen, void *val)
 {
 	int rc = -EINVAL;
 
@@ -1120,7 +1120,7 @@ enum {
 
 static int mgc_apply_recover_logs(struct obd_device *mgc,
 				  struct config_llog_data *cld,
-				  __u64 max_version,
+				  u64 max_version,
 				  void *data, int datalen, bool mne_swab)
 {
 	struct config_llog_instance *cfg = &cld->cld_cfg;
@@ -1597,7 +1597,7 @@ static bool mgc_import_in_recovery(struct obd_import *imp)
 int mgc_process_log(struct obd_device *mgc, struct config_llog_data *cld)
 {
 	struct lustre_handle lockh = { 0 };
-	__u64 flags = LDLM_FL_NO_LRU;
+	u64 flags = LDLM_FL_NO_LRU;
 	bool retry = false;
 	int rc = 0, rcl;
 
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
index 39b7ab1..4f9dbc4 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
+++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
@@ -85,7 +85,7 @@ struct echo_lock {
 	struct cl_lock_slice   el_cl;
 	struct list_head	     el_chain;
 	struct echo_object    *el_object;
-	__u64		  el_cookie;
+	u64		  el_cookie;
 	atomic_t	   el_refcount;
 };
 
@@ -912,7 +912,7 @@ static int cl_echo_object_put(struct echo_object *eco)
 
 static int __cl_echo_enqueue(struct lu_env *env, struct echo_object *eco,
 			     u64 start, u64 end, int mode,
-			     __u64 *cookie, __u32 enqflags)
+			     u64 *cookie, u32 enqflags)
 {
 	struct cl_io *io;
 	struct cl_lock *lck;
@@ -954,7 +954,7 @@ static int __cl_echo_enqueue(struct lu_env *env, struct echo_object *eco,
 }
 
 static int __cl_echo_cancel(struct lu_env *env, struct echo_device *ed,
-			    __u64 cookie)
+			    u64 cookie)
 {
 	struct echo_client_obd *ec = ed->ed_ec;
 	struct echo_lock       *ecl = NULL;
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_internal.h b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
index 42faa16..ac7a209 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_internal.h
+++ b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
@@ -34,7 +34,7 @@
 
 /* The persistent object (i.e. actually stores stuff!) */
 #define ECHO_PERSISTENT_OBJID    1ULL
-#define ECHO_PERSISTENT_SIZE     ((__u64)(1 << 20))
+#define ECHO_PERSISTENT_SIZE     ((u64)(1 << 20))
 
 /* block size to use for data verification */
 #define OBD_ECHO_BLOCK_SIZE	(4 << 10)
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 7e80a07..bef422c 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -880,7 +880,7 @@ int osc_extent_finish(const struct lu_env *env, struct osc_extent *ext,
 	int nr_pages = ext->oe_nr_pages;
 	int lost_grant = 0;
 	int blocksize = cli->cl_import->imp_obd->obd_osfs.os_bsize ? : 4096;
-	__u64 last_off = 0;
+	u64 last_off = 0;
 	int last_count = -1;
 
 	OSC_EXTENT_DUMP(D_CACHE, ext, "extent finished.\n");
@@ -991,7 +991,7 @@ static int osc_extent_truncate(struct osc_extent *ext, pgoff_t trunc_index,
 	struct osc_async_page *tmp;
 	int pages_in_chunk = 0;
 	int ppc_bits = cli->cl_chunkbits - PAGE_SHIFT;
-	__u64 trunc_chunk = trunc_index >> ppc_bits;
+	u64 trunc_chunk = trunc_index >> ppc_bits;
 	int grants = 0;
 	int nr_pages = 0;
 	int rc = 0;
@@ -1799,7 +1799,7 @@ static int osc_list_maint(struct client_obd *cli, struct osc_object *osc)
  * sync so that the app can get a sync error and break the cycle of queueing
  * pages for which writeback will fail.
  */
-static void osc_process_ar(struct osc_async_rc *ar, __u64 xid,
+static void osc_process_ar(struct osc_async_rc *ar, u64 xid,
 			   int rc)
 {
 	if (rc) {
@@ -1824,7 +1824,7 @@ static void osc_ap_completion(const struct lu_env *env, struct client_obd *cli,
 {
 	struct osc_object *osc = oap->oap_obj;
 	struct lov_oinfo *loi = osc->oo_oinfo;
-	__u64 xid = 0;
+	u64 xid = 0;
 
 	if (oap->oap_request) {
 		xid = ptlrpc_req_xid(oap->oap_request);
@@ -3127,7 +3127,7 @@ static bool check_and_discard_cb(const struct lu_env *env, struct cl_io *io,
 		tmp = osc_dlmlock_at_pgoff(env, osc, index,
 					   OSC_DAP_FL_TEST_LOCK);
 		if (tmp) {
-			__u64 end = tmp->l_policy_data.l_extent.end;
+			u64 end = tmp->l_policy_data.l_extent.end;
 			/* Cache the first-non-overlapped index so as to skip
 			 * all pages within [index, oti_fn_index). This is safe
 			 * because if tmp lock is canceled, it will discard
diff --git a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
index c0f58f4..c89c894 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
@@ -255,7 +255,7 @@ struct osc_lock {
 	/* underlying DLM lock */
 	struct ldlm_lock		*ols_dlmlock;
 	/* DLM flags with which osc_lock::ols_lock was enqueued */
-	__u64				ols_flags;
+	u64				ols_flags;
 	/* osc_lock::ols_lock handle */
 	struct lustre_handle		ols_handle;
 	struct ldlm_enqueue_info	ols_einfo;
diff --git a/drivers/staging/lustre/lustre/osc/osc_internal.h b/drivers/staging/lustre/lustre/osc/osc_internal.h
index c61ef89..4033365 100644
--- a/drivers/staging/lustre/lustre/osc/osc_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_internal.h
@@ -91,7 +91,7 @@ static inline void osc_wake_cache_waiters(struct client_obd *cli)
 	wake_up(&cli->cl_cache_waiters);
 }
 
-int osc_shrink_grant_to_target(struct client_obd *cli, __u64 target_bytes);
+int osc_shrink_grant_to_target(struct client_obd *cli, u64 target_bytes);
 void osc_update_next_shrink(struct client_obd *cli);
 
 /*
@@ -103,7 +103,7 @@ typedef int (*osc_enqueue_upcall_f)(void *cookie, struct lustre_handle *lockh,
 				    int rc);
 
 int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
-		     __u64 *flags, union ldlm_policy_data *policy,
+		     u64 *flags, union ldlm_policy_data *policy,
 		     struct ost_lvb *lvb, int kms_valid,
 		     osc_enqueue_upcall_f upcall,
 		     void *cookie, struct ldlm_enqueue_info *einfo,
@@ -111,7 +111,7 @@ int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 
 int osc_match_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 		   enum ldlm_type type, union ldlm_policy_data *policy,
-		   enum ldlm_mode mode, __u64 *flags, void *data,
+		   enum ldlm_mode mode, u64 *flags, void *data,
 		   struct lustre_handle *lockh, int unref);
 
 int osc_setattr_async(struct obd_export *exp, struct obdo *oa,
diff --git a/drivers/staging/lustre/lustre/osc/osc_io.c b/drivers/staging/lustre/lustre/osc/osc_io.c
index 0a7bfe2..cf5b3cc 100644
--- a/drivers/staging/lustre/lustre/osc/osc_io.c
+++ b/drivers/staging/lustre/lustre/osc/osc_io.c
@@ -240,7 +240,7 @@ static void osc_page_touch_at(const struct lu_env *env,
 	struct lov_oinfo *loi = cl2osc(obj)->oo_oinfo;
 	struct cl_attr *attr = &osc_env_info(env)->oti_attr;
 	int valid;
-	__u64 kms;
+	u64 kms;
 
 	/* offset within stripe */
 	kms = cl_offset(obj, idx) + to;
@@ -454,7 +454,7 @@ static bool trunc_check_cb(const struct lu_env *env, struct cl_io *io,
 {
 	struct cl_page *page = ops->ops_cl.cpl_page;
 	struct osc_async_page *oap;
-	__u64 start = *(__u64 *)cbdata;
+	u64 start = *(u64 *)cbdata;
 
 	oap = &ops->ops_oap;
 	if (oap->oap_cmd & OBD_BRW_WRITE &&
@@ -470,7 +470,7 @@ static bool trunc_check_cb(const struct lu_env *env, struct cl_io *io,
 }
 
 static void osc_trunc_check(const struct lu_env *env, struct cl_io *io,
-			    struct osc_io *oio, __u64 size)
+			    struct osc_io *oio, u64 size)
 {
 	struct cl_object *clob;
 	int partial;
@@ -498,7 +498,7 @@ static int osc_io_setattr_start(const struct lu_env *env,
 	struct cl_attr *attr = &osc_env_info(env)->oti_attr;
 	struct obdo *oa = &oio->oi_oa;
 	struct osc_async_cbargs *cbargs = &oio->oi_cbarg;
-	__u64 size = io->u.ci_setattr.sa_attr.lvb_size;
+	u64 size = io->u.ci_setattr.sa_attr.lvb_size;
 	unsigned int ia_avalid = io->u.ci_setattr.sa_avalid;
 	enum op_xvalid ia_xvalid = io->u.ci_setattr.sa_xvalid;
 	int result = 0;
@@ -615,7 +615,7 @@ static void osc_io_setattr_end(const struct lu_env *env,
 	}
 
 	if (cl_io_is_trunc(io)) {
-		__u64 size = io->u.ci_setattr.sa_attr.lvb_size;
+		u64 size = io->u.ci_setattr.sa_attr.lvb_size;
 
 		osc_trunc_check(env, io, oio, size);
 		osc_cache_truncate_end(env, oio->oi_trunc);
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index 1781243..06d813e 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -153,9 +153,9 @@ static void osc_lock_build_policy(const struct lu_env *env,
 	policy->l_extent.gid = d->cld_gid;
 }
 
-static __u64 osc_enq2ldlm_flags(__u32 enqflags)
+static u64 osc_enq2ldlm_flags(u32 enqflags)
 {
-	__u64 result = 0;
+	u64 result = 0;
 
 	LASSERT((enqflags & ~CEF_MASK) == 0);
 
@@ -200,7 +200,7 @@ static void osc_lock_lvb_update(const struct lu_env *env,
 
 	cl_object_attr_lock(obj);
 	if (dlmlock) {
-		__u64 size;
+		u64 size;
 
 		check_res_locked(dlmlock->l_resource);
 		LASSERT(lvb == dlmlock->l_lvb_data);
@@ -450,7 +450,7 @@ static int __osc_dlm_blocking_ast(const struct lu_env *env,
 	if (obj) {
 		struct ldlm_extent *extent = &dlmlock->l_policy_data.l_extent;
 		struct cl_attr *attr = &osc_env_info(env)->oti_attr;
-		__u64 old_kms;
+		u64 old_kms;
 
 		/* Destroy pages covered by the extent of the DLM lock */
 		result = osc_lock_flush(cl2osc(obj),
@@ -1146,7 +1146,7 @@ int osc_lock_init(const struct lu_env *env,
 		  const struct cl_io *io)
 {
 	struct osc_lock *oscl;
-	__u32 enqflags = lock->cll_descr.cld_enq_flags;
+	u32 enqflags = lock->cll_descr.cld_enq_flags;
 
 	oscl = kmem_cache_zalloc(osc_lock_kmem, GFP_NOFS);
 	if (!oscl)
@@ -1200,7 +1200,7 @@ struct ldlm_lock *osc_dlmlock_at_pgoff(const struct lu_env *env,
 	struct lustre_handle lockh;
 	struct ldlm_lock *lock = NULL;
 	enum ldlm_mode mode;
-	__u64 flags;
+	u64 flags;
 
 	ostid_build_res_name(&obj->oo_oinfo->loi_oi, resname);
 	osc_index2policy(policy, osc2cl(obj), index, index);
diff --git a/drivers/staging/lustre/lustre/osc/osc_object.c b/drivers/staging/lustre/lustre/osc/osc_object.c
index e9ecb77..1097380 100644
--- a/drivers/staging/lustre/lustre/osc/osc_object.c
+++ b/drivers/staging/lustre/lustre/osc/osc_object.c
@@ -173,7 +173,7 @@ static int osc_attr_update(const struct lu_env *env, struct cl_object *obj,
 		lvb->lvb_blocks = attr->cat_blocks;
 	if (valid & CAT_KMS) {
 		CDEBUG(D_CACHE, "set kms from %llu to %llu\n",
-		       oinfo->loi_kms, (__u64)attr->cat_kms);
+		       oinfo->loi_kms, (u64)attr->cat_kms);
 		loi_kms_set(oinfo, attr->cat_kms);
 	}
 	return 0;
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index 2599caa..e92c8ac 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -101,7 +101,7 @@ struct osc_enqueue_args {
 	struct obd_export	*oa_exp;
 	enum ldlm_type		oa_type;
 	enum ldlm_mode		oa_mode;
-	__u64			*oa_flags;
+	u64			*oa_flags;
 	osc_enqueue_upcall_f	oa_upcall;
 	void			*oa_cookie;
 	struct ost_lvb		*oa_lvb;
@@ -535,7 +535,7 @@ int osc_sync_base(struct osc_object *obj, struct obdo *oa,
  */
 static int osc_resource_get_unused(struct obd_export *exp, struct obdo *oa,
 				   struct list_head *cancels,
-				   enum ldlm_mode mode, __u64 lock_flags)
+				   enum ldlm_mode mode, u64 lock_flags)
 {
 	struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
 	struct ldlm_res_id res_id;
@@ -783,7 +783,7 @@ static void osc_shrink_grant_local(struct client_obd *cli, struct obdo *oa)
  */
 static int osc_shrink_grant(struct client_obd *cli)
 {
-	__u64 target_bytes = (cli->cl_max_rpcs_in_flight + 1) *
+	u64 target_bytes = (cli->cl_max_rpcs_in_flight + 1) *
 			     (cli->cl_max_pages_per_rpc << PAGE_SHIFT);
 
 	spin_lock(&cli->cl_loi_list_lock);
@@ -794,7 +794,7 @@ static int osc_shrink_grant(struct client_obd *cli)
 	return osc_shrink_grant_to_target(cli, target_bytes);
 }
 
-int osc_shrink_grant_to_target(struct client_obd *cli, __u64 target_bytes)
+int osc_shrink_grant_to_target(struct client_obd *cli, u64 target_bytes)
 {
 	int rc = 0;
 	struct ost_body	*body;
@@ -1000,7 +1000,7 @@ static int check_write_rcs(struct ptlrpc_request *req,
 			   u32 page_count, struct brw_page **pga)
 {
 	int i;
-	__u32 *remote_rcs;
+	u32 *remote_rcs;
 
 	remote_rcs = req_capsule_server_sized_get(&req->rq_pill, &RMF_RCS,
 						  sizeof(*remote_rcs) *
@@ -1055,7 +1055,7 @@ static u32 osc_checksum_bulk(int nob, u32 pg_count,
 			     struct brw_page **pga, int opc,
 			     enum cksum_type cksum_type)
 {
-	__u32 cksum;
+	u32 cksum;
 	int i = 0;
 	struct ahash_request *hdesc;
 	unsigned int bufsize;
@@ -1285,7 +1285,7 @@ static int osc_brw_prep_request(int cmd, struct client_obd *cli,
 		oa->o_cksum = body->oa.o_cksum;
 		/* 1 RC per niobuf */
 		req_capsule_set_size(pill, &RMF_RCS, RCL_SERVER,
-				     sizeof(__u32) * niocount);
+				     sizeof(u32) * niocount);
 	} else {
 		if (cli->cl_checksum &&
 		    !sptlrpc_flavor_has_bulk(&req->rq_flvr)) {
@@ -1395,7 +1395,7 @@ static int check_write_checksum(struct obdo *oa,
 				u32 client_cksum, u32 server_cksum,
 				struct osc_brw_async_args *aa)
 {
-	__u32 new_cksum;
+	u32 new_cksum;
 	char *msg;
 	enum cksum_type cksum_type;
 
@@ -1452,7 +1452,7 @@ static int osc_brw_fini_request(struct ptlrpc_request *req, int rc)
 			&req->rq_import->imp_connection->c_peer;
 	struct client_obd *cli = aa->aa_cli;
 	struct ost_body *body;
-	__u32 client_cksum = 0;
+	u32 client_cksum = 0;
 
 	if (rc < 0 && rc != -EDQUOT) {
 		DEBUG_REQ(D_INFO, req, "Failed request with rc = %d\n", rc);
@@ -1534,7 +1534,7 @@ static int osc_brw_fini_request(struct ptlrpc_request *req, int rc)
 
 	if (body->oa.o_valid & OBD_MD_FLCKSUM) {
 		static int cksum_counter;
-		__u32 server_cksum = body->oa.o_cksum;
+		u32 server_cksum = body->oa.o_cksum;
 		char *via = "";
 		char *router = "";
 		enum cksum_type cksum_type;
@@ -2050,7 +2050,7 @@ static int osc_set_lock_data(struct ldlm_lock *lock, void *data)
 static int osc_enqueue_fini(struct ptlrpc_request *req,
 			    osc_enqueue_upcall_f upcall, void *cookie,
 			    struct lustre_handle *lockh, enum ldlm_mode mode,
-			    __u64 *flags, int agl, int errcode)
+			    u64 *flags, int agl, int errcode)
 {
 	bool intent = *flags & LDLM_FL_HAS_INTENT;
 	int rc;
@@ -2090,8 +2090,8 @@ static int osc_enqueue_interpret(const struct lu_env *env,
 	struct lustre_handle *lockh = &aa->oa_lockh;
 	enum ldlm_mode mode = aa->oa_mode;
 	struct ost_lvb *lvb = aa->oa_lvb;
-	__u32 lvb_len = sizeof(*lvb);
-	__u64 flags = 0;
+	u32 lvb_len = sizeof(*lvb);
+	u64 flags = 0;
 
 	/* ldlm_cli_enqueue is holding a reference on the lock, so it must
 	 * be valid.
@@ -2143,7 +2143,7 @@ static int osc_enqueue_interpret(const struct lu_env *env,
  * release locks just after they are obtained.
  */
 int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
-		     __u64 *flags, union ldlm_policy_data *policy,
+		     u64 *flags, union ldlm_policy_data *policy,
 		     struct ost_lvb *lvb, int kms_valid,
 		     osc_enqueue_upcall_f upcall, void *cookie,
 		     struct ldlm_enqueue_info *einfo,
@@ -2153,7 +2153,7 @@ int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 	struct lustre_handle lockh = { 0 };
 	struct ptlrpc_request *req = NULL;
 	int intent = *flags & LDLM_FL_HAS_INTENT;
-	__u64 match_flags = *flags;
+	u64 match_flags = *flags;
 	enum ldlm_mode mode;
 	int rc;
 
@@ -2292,11 +2292,11 @@ int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 
 int osc_match_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 		   enum ldlm_type type, union ldlm_policy_data *policy,
-		   enum ldlm_mode mode, __u64 *flags, void *data,
+		   enum ldlm_mode mode, u64 *flags, void *data,
 		   struct lustre_handle *lockh, int unref)
 {
 	struct obd_device *obd = exp->exp_obd;
-	__u64 lflags = *flags;
+	u64 lflags = *flags;
 	enum ldlm_mode rc;
 
 	if (OBD_FAIL_CHECK(OBD_FAIL_OSC_MATCH))
@@ -2371,7 +2371,7 @@ static int osc_statfs_interpret(const struct lu_env *env,
 }
 
 static int osc_statfs_async(struct obd_export *exp,
-			    struct obd_info *oinfo, __u64 max_age,
+			    struct obd_info *oinfo, u64 max_age,
 			    struct ptlrpc_request_set *rqset)
 {
 	struct obd_device *obd = class_exp2obd(exp);
@@ -2415,7 +2415,7 @@ static int osc_statfs_async(struct obd_export *exp,
 }
 
 static int osc_statfs(const struct lu_env *env, struct obd_export *exp,
-		      struct obd_statfs *osfs, __u64 max_age, __u32 flags)
+		      struct obd_statfs *osfs, u64 max_age, u32 flags)
 {
 	struct obd_device *obd = class_exp2obd(exp);
 	struct obd_statfs *msfs;
-- 
1.8.3.1

* [lustre-devel] [PATCH 09/26] lustre: cleanup white spaces in fid and fld layer
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (7 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 08/26] lustre: convert remaining code to kernel types James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 10/26] ldlm: cleanup white spaces James Simmons
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The lustre code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
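
For readers unfamiliar with the target style, here is a minimal sketch of
the declaration convention these hunks converge on. The names below are
invented for illustration and are not taken from the Lustre tree:
column-padded local declarations give way to a single space between the
type and the variable name.

/*
 * Before the cleanup, declarations were padded into a column:
 *
 *	struct foo_cache_entry	*flde;
 *	int			 rc;
 *
 * After the cleanup, one space separates type and name:
 */
struct foo_cache_entry {
	int fce_id;
};

static int foo_cache_lookup(struct foo_cache_entry *table, int nr, int id)
{
	struct foo_cache_entry *flde;
	int rc = -1;
	int i;

	for (i = 0; i < nr; i++) {
		flde = &table[i];
		if (flde->fce_id == id) {
			rc = i;
			break;
		}
	}
	return rc;
}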

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/fid/fid_request.c  | 10 +++++-----
 drivers/staging/lustre/lustre/fid/lproc_fid.c    |  2 +-
 drivers/staging/lustre/lustre/fld/fld_cache.c    | 14 +++++++-------
 drivers/staging/lustre/lustre/fld/fld_internal.h | 24 ++++++++++++------------
 drivers/staging/lustre/lustre/fld/fld_request.c  |  8 ++++----
 5 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/staging/lustre/lustre/fid/fid_request.c b/drivers/staging/lustre/lustre/fid/fid_request.c
index 3f79f22..45dd275 100644
--- a/drivers/staging/lustre/lustre/fid/fid_request.c
+++ b/drivers/staging/lustre/lustre/fid/fid_request.c
@@ -55,12 +55,12 @@ static int seq_client_rpc(struct lu_client_seq *seq,
 			  struct lu_seq_range *output, u32 opc,
 			  const char *opcname)
 {
-	struct obd_export     *exp = seq->lcs_exp;
+	struct obd_export *exp = seq->lcs_exp;
 	struct ptlrpc_request *req;
-	struct lu_seq_range   *out, *in;
-	u32                 *op;
-	unsigned int           debug_mask;
-	int                    rc;
+	struct lu_seq_range *out, *in;
+	u32 *op;
+	unsigned int debug_mask;
+	int rc;
 
 	LASSERT(exp && !IS_ERR(exp));
 	req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp), &RQF_SEQ_QUERY,
diff --git a/drivers/staging/lustre/lustre/fid/lproc_fid.c b/drivers/staging/lustre/lustre/fid/lproc_fid.c
index d583778..d7d23b4 100644
--- a/drivers/staging/lustre/lustre/fid/lproc_fid.c
+++ b/drivers/staging/lustre/lustre/fid/lproc_fid.c
@@ -143,7 +143,7 @@
 			     size_t count, loff_t *off)
 {
 	struct lu_client_seq *seq;
-	u64  max;
+	u64 max;
 	int rc, val;
 
 	seq = ((struct seq_file *)file->private_data)->private;
diff --git a/drivers/staging/lustre/lustre/fld/fld_cache.c b/drivers/staging/lustre/lustre/fld/fld_cache.c
index 749d33b..b4baa53 100644
--- a/drivers/staging/lustre/lustre/fld/fld_cache.c
+++ b/drivers/staging/lustre/lustre/fld/fld_cache.c
@@ -255,8 +255,8 @@ static void fld_cache_punch_hole(struct fld_cache *cache,
 				 struct fld_cache_entry *f_new)
 {
 	const struct lu_seq_range *range = &f_new->fce_range;
-	const u64 new_start  = range->lsr_start;
-	const u64 new_end  = range->lsr_end;
+	const u64 new_start = range->lsr_start;
+	const u64 new_end = range->lsr_end;
 	struct fld_cache_entry *fldt;
 
 	fldt = kzalloc(sizeof(*fldt), GFP_ATOMIC);
@@ -294,8 +294,8 @@ static void fld_cache_overlap_handle(struct fld_cache *cache,
 				     struct fld_cache_entry *f_new)
 {
 	const struct lu_seq_range *range = &f_new->fce_range;
-	const u64 new_start  = range->lsr_start;
-	const u64 new_end  = range->lsr_end;
+	const u64 new_start = range->lsr_start;
+	const u64 new_end = range->lsr_end;
 	const u32 mdt = range->lsr_index;
 
 	/* this is overlap case, these case are checking overlapping with
@@ -381,8 +381,8 @@ static int fld_cache_insert_nolock(struct fld_cache *cache,
 	struct fld_cache_entry *n;
 	struct list_head *head;
 	struct list_head *prev = NULL;
-	const u64 new_start  = f_new->fce_range.lsr_start;
-	const u64 new_end  = f_new->fce_range.lsr_end;
+	const u64 new_start = f_new->fce_range.lsr_start;
+	const u64 new_end = f_new->fce_range.lsr_end;
 	u32 new_flags  = f_new->fce_range.lsr_flags;
 
 	/*
@@ -425,7 +425,7 @@ static int fld_cache_insert_nolock(struct fld_cache *cache,
 int fld_cache_insert(struct fld_cache *cache,
 		     const struct lu_seq_range *range)
 {
-	struct fld_cache_entry	*flde;
+	struct fld_cache_entry *flde;
 	int rc;
 
 	flde = fld_cache_entry_create(range);
diff --git a/drivers/staging/lustre/lustre/fld/fld_internal.h b/drivers/staging/lustre/lustre/fld/fld_internal.h
index 66a0fb6..76666a4 100644
--- a/drivers/staging/lustre/lustre/fld/fld_internal.h
+++ b/drivers/staging/lustre/lustre/fld/fld_internal.h
@@ -75,10 +75,10 @@ struct lu_fld_hash {
 };
 
 struct fld_cache_entry {
-	struct list_head	       fce_lru;
-	struct list_head	       fce_list;
+	struct list_head	fce_lru;
+	struct list_head	fce_list;
 	/** fld cache entries are sorted on range->lsr_start field. */
-	struct lu_seq_range      fce_range;
+	struct lu_seq_range	fce_range;
 };
 
 struct fld_cache {
@@ -86,29 +86,29 @@ struct fld_cache {
 	 * Cache guard, protects fci_hash mostly because others immutable after
 	 * init is finished.
 	 */
-	rwlock_t		 fci_lock;
+	rwlock_t		fci_lock;
 
 	/** Cache shrink threshold */
-	int		      fci_threshold;
+	int			fci_threshold;
 
 	/** Preferred number of cached entries */
-	int		      fci_cache_size;
+	int			fci_cache_size;
 
 	/** Current number of cached entries. Protected by \a fci_lock */
-	int		      fci_cache_count;
+	int			fci_cache_count;
 
 	/** LRU list fld entries. */
-	struct list_head	       fci_lru;
+	struct list_head	fci_lru;
 
 	/** sorted fld entries. */
-	struct list_head	       fci_entries_head;
+	struct list_head	fci_entries_head;
 
 	/** Cache statistics. */
-	struct fld_stats	 fci_stat;
+	struct fld_stats	fci_stat;
 
 	/** Cache name used for debug and messages. */
-	char		     fci_name[LUSTRE_MDT_MAXNAMELEN];
-	unsigned int		 fci_no_shrink:1;
+	char			fci_name[LUSTRE_MDT_MAXNAMELEN];
+	unsigned int		fci_no_shrink:1;
 };
 
 enum {
diff --git a/drivers/staging/lustre/lustre/fld/fld_request.c b/drivers/staging/lustre/lustre/fld/fld_request.c
index 8a915b9..248fffa 100644
--- a/drivers/staging/lustre/lustre/fld/fld_request.c
+++ b/drivers/staging/lustre/lustre/fld/fld_request.c
@@ -307,10 +307,10 @@ int fld_client_rpc(struct obd_export *exp,
 		   struct ptlrpc_request **reqp)
 {
 	struct ptlrpc_request *req = NULL;
-	struct lu_seq_range   *prange;
-	u32		 *op;
-	int		    rc = 0;
-	struct obd_import     *imp;
+	struct lu_seq_range *prange;
+	u32 *op;
+	int rc = 0;
+	struct obd_import *imp;
 
 	LASSERT(exp);
 
-- 
1.8.3.1

* [lustre-devel] [PATCH 10/26] ldlm: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (8 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 09/26] lustre: cleanup white spaces in fid and fld layer James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 11/26] llite: " James Simmons
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The ldlm code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
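
As a rough sketch of the structure-field side of this cleanup, consider the
hypothetical struct below (it is not an actual ldlm structure): field names
are lined up in a single tab-indented column instead of being padded with
uneven runs of spaces.

/*
 * Before: a mix of spaces and tabs, for example
 *
 *	atomic_t	    ebp_num_threads;
 *	int		     ebp_min_threads;
 *
 * After: one consistent tab column for the field names.
 */
struct example_bl_pool {
	unsigned long		ebp_flags;
	unsigned int		ebp_num_threads;
	unsigned int		ebp_busy_threads;
	int			ebp_min_threads;
	int			ebp_max_threads;
	char			ebp_name[16];
};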

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c    |  4 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lib.c      | 22 +++---
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     | 36 +++++-----
 drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c    | 28 ++++----
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     |  5 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_request.c  | 28 ++++----
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c | 78 +++++++++++-----------
 7 files changed, 99 insertions(+), 102 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index baa5b3a..4fc380d2 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -314,8 +314,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 int
 ldlm_flock_completion_ast(struct ldlm_lock *lock, u64 flags, void *data)
 {
-	struct file_lock		*getlk = lock->l_ast_data;
-	int				rc = 0;
+	struct file_lock *getlk = lock->l_ast_data;
+	int rc = 0;
 
 	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT2, 4);
 	if (OBD_FAIL_PRECHECK(OBD_FAIL_LDLM_CP_CB_WAIT3)) {
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
index 5a59c15..aef83ff 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
@@ -512,11 +512,11 @@ int client_connect_import(const struct lu_env *env,
 			  struct obd_device *obd, struct obd_uuid *cluuid,
 			  struct obd_connect_data *data, void *localdata)
 {
-	struct client_obd       *cli    = &obd->u.cli;
-	struct obd_import       *imp    = cli->cl_import;
+	struct client_obd *cli = &obd->u.cli;
+	struct obd_import *imp = cli->cl_import;
 	struct obd_connect_data *ocd;
-	struct lustre_handle    conn    = { 0 };
-	int		     rc;
+	struct lustre_handle conn = { 0 };
+	int rc;
 
 	*exp = NULL;
 	down_write(&cli->cl_sem);
@@ -703,9 +703,9 @@ int target_pack_pool_reply(struct ptlrpc_request *req)
 void target_send_reply(struct ptlrpc_request *req, int rc, int fail_id)
 {
 	struct ptlrpc_service_part *svcpt;
-	int			netrc;
+	int netrc;
 	struct ptlrpc_reply_state *rs;
-	struct obd_export	 *exp;
+	struct obd_export *exp;
 
 	if (req->rq_no_reply)
 		return;
@@ -736,11 +736,11 @@ void target_send_reply(struct ptlrpc_request *req, int rc, int fail_id)
 
 	/* disable reply scheduling while I'm setting up */
 	rs->rs_scheduled = 1;
-	rs->rs_on_net    = 1;
-	rs->rs_xid       = req->rq_xid;
-	rs->rs_transno   = req->rq_transno;
-	rs->rs_export    = exp;
-	rs->rs_opc       = lustre_msg_get_opc(req->rq_reqmsg);
+	rs->rs_on_net = 1;
+	rs->rs_xid = req->rq_xid;
+	rs->rs_transno = req->rq_transno;
+	rs->rs_export = exp;
+	rs->rs_opc = lustre_msg_get_opc(req->rq_reqmsg);
 
 	spin_lock(&exp->exp_uncommitted_replies_lock);
 	CDEBUG(D_NET, "rs transno = %llu, last committed = %llu\n",
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index e726e76..cea0e22 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -682,7 +682,7 @@ void ldlm_lock_addref_internal_nolock(struct ldlm_lock *lock,
 int ldlm_lock_addref_try(const struct lustre_handle *lockh, enum ldlm_mode mode)
 {
 	struct ldlm_lock *lock;
-	int	       result;
+	int result;
 
 	result = -EAGAIN;
 	lock = ldlm_handle2lock(lockh);
@@ -1344,7 +1344,7 @@ enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, u64 flags,
 			}
 		}
 	}
- out2:
+out2:
 	if (rc) {
 		LDLM_DEBUG(lock, "matched (%llu %llu)",
 			   (type == LDLM_PLAIN || type == LDLM_IBITS) ?
@@ -1568,8 +1568,6 @@ struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
 	return ERR_PTR(rc);
 }
 
-
-
 /**
  * Enqueue (request) a lock.
  * On the client this is called from ldlm_cli_enqueue_fini
@@ -1629,9 +1627,9 @@ enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
 ldlm_work_bl_ast_lock(struct ptlrpc_request_set *rqset, void *opaq)
 {
 	struct ldlm_cb_set_arg *arg = opaq;
-	struct ldlm_lock_desc   d;
-	int		     rc;
-	struct ldlm_lock       *lock;
+	struct ldlm_lock_desc d;
+	int rc;
+	struct ldlm_lock *lock;
 
 	if (list_empty(arg->list))
 		return -ENOENT;
@@ -1664,9 +1662,9 @@ enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
 static int
 ldlm_work_cp_ast_lock(struct ptlrpc_request_set *rqset, void *opaq)
 {
-	struct ldlm_cb_set_arg  *arg = opaq;
-	int		      rc = 0;
-	struct ldlm_lock	*lock;
+	struct ldlm_cb_set_arg *arg = opaq;
+	int rc = 0;
+	struct ldlm_lock *lock;
 	ldlm_completion_callback completion_callback;
 
 	if (list_empty(arg->list))
@@ -1711,9 +1709,9 @@ enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
 ldlm_work_revoke_ast_lock(struct ptlrpc_request_set *rqset, void *opaq)
 {
 	struct ldlm_cb_set_arg *arg = opaq;
-	struct ldlm_lock_desc   desc;
-	int		     rc;
-	struct ldlm_lock       *lock;
+	struct ldlm_lock_desc desc;
+	int rc;
+	struct ldlm_lock *lock;
 
 	if (list_empty(arg->list))
 		return -ENOENT;
@@ -1737,10 +1735,10 @@ enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
  */
 static int ldlm_work_gl_ast_lock(struct ptlrpc_request_set *rqset, void *opaq)
 {
-	struct ldlm_cb_set_arg		*arg = opaq;
-	struct ldlm_glimpse_work	*gl_work;
-	struct ldlm_lock		*lock;
-	int				 rc = 0;
+	struct ldlm_cb_set_arg *arg = opaq;
+	struct ldlm_glimpse_work *gl_work;
+	struct ldlm_lock *lock;
+	int rc = 0;
 
 	if (list_empty(arg->list))
 		return -ENOENT;
@@ -1776,8 +1774,8 @@ int ldlm_run_ast_work(struct ldlm_namespace *ns, struct list_head *rpc_list,
 		      enum ldlm_desc_ast_t ast_type)
 {
 	struct ldlm_cb_set_arg *arg;
-	set_producer_func       work_ast_lock;
-	int		     rc;
+	set_producer_func work_ast_lock;
+	int rc;
 
 	if (list_empty(rpc_list))
 		return 0;
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
index e766f798..bae67ac 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
@@ -77,32 +77,32 @@ struct ldlm_bl_pool {
 	 * as a priority. It is used for LDLM_FL_DISCARD_DATA requests.
 	 * see bug 13843
 	 */
-	struct list_head	      blp_prio_list;
+	struct list_head	blp_prio_list;
 
 	/*
 	 * blp_list is used for all other callbacks which are likely
 	 * to take longer to process.
 	 */
-	struct list_head	      blp_list;
+	struct list_head	blp_list;
 
-	wait_queue_head_t	     blp_waitq;
+	wait_queue_head_t	blp_waitq;
 	struct completion	blp_comp;
-	atomic_t	    blp_num_threads;
-	atomic_t	    blp_busy_threads;
-	int		     blp_min_threads;
-	int		     blp_max_threads;
+	atomic_t		blp_num_threads;
+	atomic_t		blp_busy_threads;
+	int			blp_min_threads;
+	int			blp_max_threads;
 };
 
 struct ldlm_bl_work_item {
-	struct list_head	      blwi_entry;
+	struct list_head	blwi_entry;
 	struct ldlm_namespace  *blwi_ns;
 	struct ldlm_lock_desc   blwi_ld;
 	struct ldlm_lock       *blwi_lock;
-	struct list_head	      blwi_head;
-	int		     blwi_count;
+	struct list_head	blwi_head;
+	int			blwi_count;
 	struct completion	blwi_comp;
-	enum ldlm_cancel_flags  blwi_flags;
-	int		     blwi_mem_pressure;
+	enum ldlm_cancel_flags	blwi_flags;
+	int			blwi_mem_pressure;
 };
 
 /**
@@ -928,8 +928,8 @@ static ssize_t cancel_unused_locks_before_replay_store(struct kobject *kobj,
 
 static int ldlm_setup(void)
 {
-	static struct ptlrpc_service_conf	conf;
-	struct ldlm_bl_pool			*blp = NULL;
+	static struct ptlrpc_service_conf conf;
+	struct ldlm_bl_pool *blp = NULL;
 	int rc = 0;
 	int i;
 
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
index e94d8a3..5b23767f 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
@@ -465,8 +465,7 @@ static ssize_t grant_speed_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_pool *pl = container_of(kobj, struct ldlm_pool,
 					    pl_kobj);
-
-	int	       grant_speed;
+	int grant_speed;
 
 	spin_lock(&pl->pl_lock);
 	/* serialize with ldlm_pool_recalc */
@@ -902,7 +901,7 @@ static void ldlm_pools_recalc(struct work_struct *ws)
 	 * Recalc at least ldlm_namespace_nr_read(client) namespaces.
 	 */
 	for (nr = ldlm_namespace_nr_read(client); nr > 0; nr--) {
-		int     skip;
+		int skip;
 		/*
 		 * Lock the list, get first @ns in the list, getref, move it
 		 * to the tail, unlock and call pool recalc. This way we avoid
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
index a7fe8c6..b819ade 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
@@ -565,9 +565,9 @@ int ldlm_prep_elc_req(struct obd_export *exp, struct ptlrpc_request *req,
 		      int version, int opc, int canceloff,
 		      struct list_head *cancels, int count)
 {
-	struct ldlm_namespace   *ns = exp->exp_obd->obd_namespace;
-	struct req_capsule      *pill = &req->rq_pill;
-	struct ldlm_request     *dlm = NULL;
+	struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
+	struct req_capsule *pill = &req->rq_pill;
+	struct ldlm_request *dlm = NULL;
 	int flags, avail, to_free, pack = 0;
 	LIST_HEAD(head);
 	int rc;
@@ -675,11 +675,11 @@ int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp,
 		     struct lustre_handle *lockh, int async)
 {
 	struct ldlm_namespace *ns;
-	struct ldlm_lock      *lock;
-	struct ldlm_request   *body;
-	int		    is_replay = *flags & LDLM_FL_REPLAY;
-	int		    req_passed_in = 1;
-	int		    rc, err;
+	struct ldlm_lock *lock;
+	struct ldlm_request *body;
+	int is_replay = *flags & LDLM_FL_REPLAY;
+	int req_passed_in = 1;
+	int rc, err;
 	struct ptlrpc_request *req;
 
 	ns = exp->exp_obd->obd_namespace;
@@ -1718,8 +1718,8 @@ static int ldlm_cli_hash_cancel_unused(struct cfs_hash *hs,
 				       struct cfs_hash_bd *bd,
 				       struct hlist_node *hnode, void *arg)
 {
-	struct ldlm_resource	   *res = cfs_hash_object(hs, hnode);
-	struct ldlm_cli_cancel_arg     *lc = arg;
+	struct ldlm_resource *res = cfs_hash_object(hs, hnode);
+	struct ldlm_cli_cancel_arg *lc = arg;
 
 	ldlm_cli_cancel_unused_resource(ldlm_res_to_ns(res), &res->lr_name,
 					NULL, LCK_MINMODE,
@@ -1878,9 +1878,9 @@ static int replay_lock_interpret(const struct lu_env *env,
 				 struct ptlrpc_request *req,
 				 struct ldlm_async_args *aa, int rc)
 {
-	struct ldlm_lock     *lock;
-	struct ldlm_reply    *reply;
-	struct obd_export    *exp;
+	struct ldlm_lock *lock;
+	struct ldlm_reply *reply;
+	struct obd_export *exp;
 
 	atomic_dec(&req->rq_import->imp_replay_inflight);
 	if (rc != ELDLM_OK)
@@ -1920,7 +1920,7 @@ static int replay_one_lock(struct obd_import *imp, struct ldlm_lock *lock)
 {
 	struct ptlrpc_request *req;
 	struct ldlm_async_args *aa;
-	struct ldlm_request   *body;
+	struct ldlm_request *body;
 	int flags;
 
 	/* Bug 11974: Do not replay a lock which is actively being canceled */
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
index e0b9918..85c5047 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
@@ -132,9 +132,9 @@ static ssize_t resource_count_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_namespace *ns = container_of(kobj, struct ldlm_namespace,
 						 ns_kobj);
-	u64		  res = 0;
-	struct cfs_hash_bd	  bd;
-	int		    i;
+	u64 res = 0;
+	struct cfs_hash_bd bd;
+	int i;
 
 	/* result is not strictly consistent */
 	cfs_hash_for_each_bucket(ns->ns_rs_hash, &bd, i)
@@ -148,7 +148,7 @@ static ssize_t lock_count_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct ldlm_namespace *ns = container_of(kobj, struct ldlm_namespace,
 						 ns_kobj);
-	u64		  locks;
+	u64 locks;
 
 	locks = lprocfs_stats_collector(ns->ns_stats, LDLM_NSS_LOCKS,
 					LPROCFS_FIELDS_FLAGS_SUM);
@@ -407,9 +407,9 @@ struct ldlm_resource *ldlm_resource_getref(struct ldlm_resource *res)
 static unsigned int ldlm_res_hop_hash(struct cfs_hash *hs,
 				      const void *key, unsigned int mask)
 {
-	const struct ldlm_res_id     *id  = key;
-	unsigned int		val = 0;
-	unsigned int		i;
+	const struct ldlm_res_id *id  = key;
+	unsigned int val = 0;
+	unsigned int i;
 
 	for (i = 0; i < RES_NAME_SIZE; i++)
 		val += id->name[i];
@@ -420,9 +420,9 @@ static unsigned int ldlm_res_hop_fid_hash(struct cfs_hash *hs,
 					  const void *key, unsigned int mask)
 {
 	const struct ldlm_res_id *id = key;
-	struct lu_fid       fid;
-	u32	       hash;
-	u32	       val;
+	struct lu_fid fid;
+	u32 hash;
+	u32 val;
 
 	fid.f_seq = id->name[LUSTRE_RES_ID_SEQ_OFF];
 	fid.f_oid = (u32)id->name[LUSTRE_RES_ID_VER_OID_OFF];
@@ -448,7 +448,7 @@ static unsigned int ldlm_res_hop_fid_hash(struct cfs_hash *hs,
 
 static void *ldlm_res_hop_key(struct hlist_node *hnode)
 {
-	struct ldlm_resource   *res;
+	struct ldlm_resource *res;
 
 	res = hlist_entry(hnode, struct ldlm_resource, lr_hash);
 	return &res->lr_name;
@@ -456,7 +456,7 @@ static void *ldlm_res_hop_key(struct hlist_node *hnode)
 
 static int ldlm_res_hop_keycmp(const void *key, struct hlist_node *hnode)
 {
-	struct ldlm_resource   *res;
+	struct ldlm_resource *res;
 
 	res = hlist_entry(hnode, struct ldlm_resource, lr_hash);
 	return ldlm_res_eq((const struct ldlm_res_id *)key,
@@ -506,13 +506,13 @@ static void ldlm_res_hop_put(struct cfs_hash *hs, struct hlist_node *hnode)
 };
 
 struct ldlm_ns_hash_def {
-	enum ldlm_ns_type nsd_type;
+	enum ldlm_ns_type	nsd_type;
 	/** hash bucket bits */
-	unsigned int	nsd_bkt_bits;
+	unsigned int		nsd_bkt_bits;
 	/** hash bits */
-	unsigned int	nsd_all_bits;
+	unsigned int		nsd_all_bits;
 	/** hash operations */
-	struct cfs_hash_ops *nsd_hops;
+	struct cfs_hash_ops	*nsd_hops;
 };
 
 static struct ldlm_ns_hash_def ldlm_ns_hash_defs[] = {
@@ -578,10 +578,10 @@ struct ldlm_namespace *ldlm_namespace_new(struct obd_device *obd, char *name,
 {
 	struct ldlm_namespace *ns = NULL;
 	struct ldlm_ns_bucket *nsb;
-	struct ldlm_ns_hash_def    *nsd;
-	struct cfs_hash_bd	  bd;
-	int		    idx;
-	int		    rc;
+	struct ldlm_ns_hash_def *nsd;
+	struct cfs_hash_bd bd;
+	int idx;
+	int rc;
 
 	LASSERT(obd);
 
@@ -625,10 +625,10 @@ struct ldlm_namespace *ldlm_namespace_new(struct obd_device *obd, char *name,
 		nsb->nsb_namespace = ns;
 	}
 
-	ns->ns_obd      = obd;
+	ns->ns_obd = obd;
 	ns->ns_appetite = apt;
-	ns->ns_client   = client;
-	ns->ns_name     = kstrdup(name, GFP_KERNEL);
+	ns->ns_client = client;
+	ns->ns_name = kstrdup(name, GFP_KERNEL);
 	if (!ns->ns_name)
 		goto out_hash;
 
@@ -638,13 +638,13 @@ struct ldlm_namespace *ldlm_namespace_new(struct obd_device *obd, char *name,
 	atomic_set(&ns->ns_bref, 0);
 	init_waitqueue_head(&ns->ns_waitq);
 
-	ns->ns_max_parallel_ast   = LDLM_DEFAULT_PARALLEL_AST_LIMIT;
-	ns->ns_nr_unused	  = 0;
-	ns->ns_max_unused	 = LDLM_DEFAULT_LRU_SIZE;
-	ns->ns_max_age	    = LDLM_DEFAULT_MAX_ALIVE;
+	ns->ns_max_parallel_ast = LDLM_DEFAULT_PARALLEL_AST_LIMIT;
+	ns->ns_nr_unused = 0;
+	ns->ns_max_unused = LDLM_DEFAULT_LRU_SIZE;
+	ns->ns_max_age = LDLM_DEFAULT_MAX_ALIVE;
 	ns->ns_orig_connect_flags = 0;
-	ns->ns_connect_flags      = 0;
-	ns->ns_stopping	   = 0;
+	ns->ns_connect_flags = 0;
+	ns->ns_stopping = 0;
 
 	rc = ldlm_namespace_sysfs_register(ns);
 	if (rc != 0) {
@@ -775,7 +775,7 @@ static int ldlm_resource_clean(struct cfs_hash *hs, struct cfs_hash_bd *bd,
 static int ldlm_resource_complain(struct cfs_hash *hs, struct cfs_hash_bd *bd,
 				  struct hlist_node *hnode, void *arg)
 {
-	struct ldlm_resource  *res = cfs_hash_object(hs, hnode);
+	struct ldlm_resource *res = cfs_hash_object(hs, hnode);
 
 	lock_res(res);
 	CERROR("%s: namespace resource " DLDLMRES
@@ -1045,11 +1045,11 @@ struct ldlm_resource *
 		  const struct ldlm_res_id *name, enum ldlm_type type,
 		  int create)
 {
-	struct hlist_node     *hnode;
+	struct hlist_node *hnode;
 	struct ldlm_resource *res = NULL;
-	struct cfs_hash_bd	 bd;
-	u64		 version;
-	int		      ns_refcount = 0;
+	struct cfs_hash_bd bd;
+	u64 version;
+	int ns_refcount = 0;
 	int rc;
 
 	LASSERT(!parent);
@@ -1075,9 +1075,9 @@ struct ldlm_resource *
 	if (!res)
 		return ERR_PTR(-ENOMEM);
 
-	res->lr_ns_bucket  = cfs_hash_bd_extra_get(ns->ns_rs_hash, &bd);
-	res->lr_name       = *name;
-	res->lr_type       = type;
+	res->lr_ns_bucket = cfs_hash_bd_extra_get(ns->ns_rs_hash, &bd);
+	res->lr_name = *name;
+	res->lr_type = type;
 
 	cfs_hash_bd_lock(ns->ns_rs_hash, &bd, 1);
 	hnode = (version == cfs_hash_bd_version_get(&bd)) ?  NULL :
@@ -1179,7 +1179,7 @@ static void __ldlm_resource_putref_final(struct cfs_hash_bd *bd,
 void ldlm_resource_putref(struct ldlm_resource *res)
 {
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
-	struct cfs_hash_bd   bd;
+	struct cfs_hash_bd bd;
 
 	LASSERT_ATOMIC_GT_LT(&res->lr_refcount, 0, LI_POISON);
 	CDEBUG(D_INFO, "putref res: %p count: %d\n",
@@ -1253,7 +1253,7 @@ static int ldlm_res_hash_dump(struct cfs_hash *hs, struct cfs_hash_bd *bd,
 			      struct hlist_node *hnode, void *arg)
 {
 	struct ldlm_resource *res = cfs_hash_object(hs, hnode);
-	int    level = (int)(unsigned long)arg;
+	int level = (int)(unsigned long)arg;
 
 	lock_res(res);
 	ldlm_resource_dump(level, res);
-- 
1.8.3.1

* [lustre-devel] [PATCH 11/26] llite: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (9 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 10/26] ldlm: cleanup white spaces James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 12/26] lmv: " James Simmons
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The llite code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
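
The llite changes also reflow the file_operations and inode_operations
initializer tables. The short, self-contained program below illustrates
that layout with invented names (it is not the real ll_file_operations
table): every designated initializer is aligned to the same tab column.

#include <stdio.h>

struct example_ops {
	int	(*open)(const char *name);
	int	(*release)(int fd);
};

static int example_open(const char *name)
{
	printf("open %s\n", name);
	return 0;
}

static int example_release(int fd)
{
	printf("release %d\n", fd);
	return 0;
}

/* Initializers lined up on one tab column, as in the cleaned-up tables. */
static const struct example_ops example_operations = {
	.open		= example_open,
	.release	= example_release,
};

int main(void)
{
	example_operations.open("demo");
	return example_operations.release(0);
}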

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/llite/dir.c          | 114 ++++++-------
 drivers/staging/lustre/lustre/llite/file.c         | 180 ++++++++++-----------
 drivers/staging/lustre/lustre/llite/glimpse.c      |  18 +--
 drivers/staging/lustre/lustre/llite/lcommon_cl.c   |  24 +--
 drivers/staging/lustre/lustre/llite/llite_lib.c    |  52 +++---
 drivers/staging/lustre/lustre/llite/llite_mmap.c   |  52 +++---
 drivers/staging/lustre/lustre/llite/llite_nfs.c    |  42 ++---
 drivers/staging/lustre/lustre/llite/lproc_llite.c  |  80 ++++-----
 drivers/staging/lustre/lustre/llite/namei.c        |  49 +++---
 drivers/staging/lustre/lustre/llite/rw.c           |   2 +-
 drivers/staging/lustre/lustre/llite/rw26.c         |  57 ++++---
 drivers/staging/lustre/lustre/llite/statahead.c    | 133 +++++++--------
 drivers/staging/lustre/lustre/llite/super25.c      |  16 +-
 drivers/staging/lustre/lustre/llite/vvp_dev.c      | 104 ++++++------
 drivers/staging/lustre/lustre/llite/vvp_internal.h |  56 +++----
 drivers/staging/lustre/lustre/llite/vvp_io.c       | 169 ++++++++++---------
 drivers/staging/lustre/lustre/llite/vvp_object.c   |  30 ++--
 drivers/staging/lustre/lustre/llite/vvp_page.c     |  88 +++++-----
 drivers/staging/lustre/lustre/llite/xattr.c        |  48 +++---
 19 files changed, 658 insertions(+), 656 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/dir.c b/drivers/staging/lustre/lustre/llite/dir.c
index 4520344..fd1af4a 100644
--- a/drivers/staging/lustre/lustre/llite/dir.c
+++ b/drivers/staging/lustre/lustre/llite/dir.c
@@ -200,13 +200,13 @@ static u16 ll_dirent_type_get(struct lu_dirent *ent)
 int ll_dir_read(struct inode *inode, u64 *ppos, struct md_op_data *op_data,
 		struct dir_context *ctx)
 {
-	struct ll_sb_info    *sbi	= ll_i2sbi(inode);
-	u64		   pos		= *ppos;
+	struct ll_sb_info *sbi = ll_i2sbi(inode);
+	u64 pos = *ppos;
 	bool is_api32 = ll_need_32bit_api(sbi);
-	int		   is_hash64 = sbi->ll_flags & LL_SBI_64BIT_HASH;
-	struct page	  *page;
-	bool		   done = false;
-	int		   rc = 0;
+	int is_hash64 = sbi->ll_flags & LL_SBI_64BIT_HASH;
+	struct page *page;
+	bool done = false;
+	int rc = 0;
 
 	page = ll_get_dir_page(inode, op_data, pos);
 
@@ -225,11 +225,11 @@ int ll_dir_read(struct inode *inode, u64 *ppos, struct md_op_data *op_data,
 		dp = page_address(page);
 		for (ent = lu_dirent_start(dp); ent && !done;
 		     ent = lu_dirent_next(ent)) {
-			u16	  type;
-			int	    namelen;
-			struct lu_fid  fid;
-			u64	  lhash;
-			u64	  ino;
+			u16 type;
+			int namelen;
+			struct lu_fid fid;
+			u64 lhash;
+			u64 ino;
 
 			hash = le64_to_cpu(ent->lde_hash);
 			if (hash < pos)
@@ -291,14 +291,14 @@ int ll_dir_read(struct inode *inode, u64 *ppos, struct md_op_data *op_data,
 
 static int ll_readdir(struct file *filp, struct dir_context *ctx)
 {
-	struct inode		*inode	= file_inode(filp);
-	struct ll_file_data	*lfd	= LUSTRE_FPRIVATE(filp);
-	struct ll_sb_info	*sbi	= ll_i2sbi(inode);
+	struct inode *inode = file_inode(filp);
+	struct ll_file_data *lfd = LUSTRE_FPRIVATE(filp);
+	struct ll_sb_info *sbi	= ll_i2sbi(inode);
 	u64 pos = lfd ? lfd->lfd_pos : 0;
-	int			hash64	= sbi->ll_flags & LL_SBI_64BIT_HASH;
+	int hash64 = sbi->ll_flags & LL_SBI_64BIT_HASH;
 	bool api32 = ll_need_32bit_api(sbi);
 	struct md_op_data *op_data;
-	int			rc;
+	int rc;
 
 	CDEBUG(D_VFSTRACE,
 	       "VFS Op:inode=" DFID "(%p) pos/size %lu/%llu 32bit_api %d\n",
@@ -626,7 +626,7 @@ int ll_dir_getstripe(struct inode *inode, void **plmm, int *plmm_size,
 		     struct ptlrpc_request **request, u64 valid)
 {
 	struct ll_sb_info *sbi = ll_i2sbi(inode);
-	struct mdt_body   *body;
+	struct mdt_body *body;
 	struct lov_mds_md *lmm = NULL;
 	struct ptlrpc_request *req = NULL;
 	int rc, lmmsize;
@@ -744,8 +744,8 @@ int ll_get_mdt_idx(struct inode *inode)
  */
 static int ll_ioc_copy_start(struct super_block *sb, struct hsm_copy *copy)
 {
-	struct ll_sb_info		*sbi = ll_s2sbi(sb);
-	struct hsm_progress_kernel	 hpk;
+	struct ll_sb_info *sbi = ll_s2sbi(sb);
+	struct hsm_progress_kernel hpk;
 	int rc2, rc = 0;
 
 	/* Forge a hsm_progress based on data from copy. */
@@ -759,8 +759,8 @@ static int ll_ioc_copy_start(struct super_block *sb, struct hsm_copy *copy)
 
 	/* For archive request, we need to read the current file version. */
 	if (copy->hc_hai.hai_action == HSMA_ARCHIVE) {
-		struct inode	*inode;
-		u64		 data_version = 0;
+		struct inode *inode;
+		u64 data_version = 0;
 
 		/* Get inode for this fid */
 		inode = search_inode_for_lustre(sb, &copy->hc_hai.hai_fid);
@@ -819,8 +819,8 @@ static int ll_ioc_copy_start(struct super_block *sb, struct hsm_copy *copy)
  */
 static int ll_ioc_copy_end(struct super_block *sb, struct hsm_copy *copy)
 {
-	struct ll_sb_info		*sbi = ll_s2sbi(sb);
-	struct hsm_progress_kernel	 hpk;
+	struct ll_sb_info *sbi = ll_s2sbi(sb);
+	struct hsm_progress_kernel hpk;
 	int rc2, rc = 0;
 
 	/* If you modify the logic here, also check llapi_hsm_copy_end(). */
@@ -844,8 +844,8 @@ static int ll_ioc_copy_end(struct super_block *sb, struct hsm_copy *copy)
 	if (((copy->hc_hai.hai_action == HSMA_ARCHIVE) ||
 	     (copy->hc_hai.hai_action == HSMA_RESTORE)) &&
 	    (copy->hc_errval == 0)) {
-		struct inode	*inode;
-		u64		 data_version = 0;
+		struct inode *inode;
+		u64 data_version = 0;
 
 		/* Get lsm for this fid */
 		inode = search_inode_for_lustre(sb, &copy->hc_hai.hai_fid);
@@ -1160,13 +1160,13 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	}
 	case LL_IOC_LMV_SETSTRIPE: {
 		struct lmv_user_md  *lum;
-		char		*buf = NULL;
-		char		*filename;
-		int		 namelen = 0;
-		int		 lumlen = 0;
+		char *buf = NULL;
+		char *filename;
+		int namelen = 0;
+		int lumlen = 0;
 		umode_t mode;
-		int		 len;
-		int		 rc;
+		int len;
+		int rc;
 
 		rc = obd_ioctl_getdata(&buf, &len, (void __user *)arg);
 		if (rc)
@@ -1428,21 +1428,21 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 			struct lov_user_mds_data __user *lmdp;
 			lstat_t st = { 0 };
 
-			st.st_dev     = inode->i_sb->s_dev;
-			st.st_mode    = body->mbo_mode;
-			st.st_nlink   = body->mbo_nlink;
-			st.st_uid     = body->mbo_uid;
-			st.st_gid     = body->mbo_gid;
-			st.st_rdev    = body->mbo_rdev;
-			st.st_size    = body->mbo_size;
+			st.st_dev = inode->i_sb->s_dev;
+			st.st_mode = body->mbo_mode;
+			st.st_nlink = body->mbo_nlink;
+			st.st_uid = body->mbo_uid;
+			st.st_gid = body->mbo_gid;
+			st.st_rdev = body->mbo_rdev;
+			st.st_size = body->mbo_size;
 			st.st_blksize = PAGE_SIZE;
-			st.st_blocks  = body->mbo_blocks;
-			st.st_atime   = body->mbo_atime;
-			st.st_mtime   = body->mbo_mtime;
-			st.st_ctime   = body->mbo_ctime;
-			st.st_ino     = cl_fid_build_ino(&body->mbo_fid1,
-							 sbi->ll_flags &
-							 LL_SBI_32BIT_API);
+			st.st_blocks = body->mbo_blocks;
+			st.st_atime = body->mbo_atime;
+			st.st_mtime = body->mbo_mtime;
+			st.st_ctime = body->mbo_ctime;
+			st.st_ino = cl_fid_build_ino(&body->mbo_fid1,
+						     sbi->ll_flags &
+						     LL_SBI_32BIT_API);
 
 			lmdp = (struct lov_user_mds_data __user *)arg;
 			if (copy_to_user(&lmdp->lmd_st, &st, sizeof(st))) {
@@ -1538,7 +1538,7 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	}
 	case LL_IOC_HSM_REQUEST: {
 		struct hsm_user_request	*hur;
-		ssize_t			 totalsize;
+		ssize_t	totalsize;
 
 		hur = memdup_user((void __user *)arg, sizeof(*hur));
 		if (IS_ERR(hur))
@@ -1592,8 +1592,8 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 		return rc;
 	}
 	case LL_IOC_HSM_PROGRESS: {
-		struct hsm_progress_kernel	hpk;
-		struct hsm_progress		hp;
+		struct hsm_progress_kernel hpk;
+		struct hsm_progress hp;
 
 		if (copy_from_user(&hp, (void __user *)arg, sizeof(hp)))
 			return -EFAULT;
@@ -1622,7 +1622,7 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 
 	case LL_IOC_HSM_COPY_START: {
 		struct hsm_copy	*copy;
-		int		 rc;
+		int rc;
 
 		copy = memdup_user((char __user *)arg, sizeof(*copy));
 		if (IS_ERR(copy))
@@ -1637,7 +1637,7 @@ static long ll_dir_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	}
 	case LL_IOC_HSM_COPY_END: {
 		struct hsm_copy	*copy;
-		int		 rc;
+		int rc;
 
 		copy = memdup_user((char __user *)arg, sizeof(*copy));
 		if (IS_ERR(copy))
@@ -1756,11 +1756,11 @@ static int ll_dir_release(struct inode *inode, struct file *file)
 }
 
 const struct file_operations ll_dir_operations = {
-	.llseek   = ll_dir_seek,
-	.open     = ll_dir_open,
-	.release  = ll_dir_release,
-	.read     = generic_read_dir,
-	.iterate_shared  = ll_readdir,
-	.unlocked_ioctl   = ll_dir_ioctl,
-	.fsync    = ll_fsync,
+	.llseek			= ll_dir_seek,
+	.open			= ll_dir_open,
+	.release		= ll_dir_release,
+	.read			= generic_read_dir,
+	.iterate_shared		= ll_readdir,
+	.unlocked_ioctl		= ll_dir_ioctl,
+	.fsync			= ll_fsync,
 };
diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index f71e273..6afaa90 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -1119,7 +1119,7 @@ static void ll_io_init(struct cl_io *io, const struct file *file, int write)
 				      file->f_flags & O_DIRECT ||
 				      IS_SYNC(inode);
 	}
-	io->ci_obj     = ll_i2info(inode)->lli_clob;
+	io->ci_obj = ll_i2info(inode)->lli_clob;
 	io->ci_lockreq = CILR_MAYBE;
 	if (ll_file_nolock(file)) {
 		io->ci_lockreq = CILR_NEVER;
@@ -1137,10 +1137,10 @@ static void ll_io_init(struct cl_io *io, const struct file *file, int write)
 		   loff_t *ppos, size_t count)
 {
 	struct ll_inode_info *lli = ll_i2info(file_inode(file));
-	struct ll_file_data  *fd  = LUSTRE_FPRIVATE(file);
+	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
 	struct vvp_io *vio = vvp_env_io(env);
 	struct range_lock range;
-	struct cl_io	 *io;
+	struct cl_io *io;
 	ssize_t result = 0;
 	int rc = 0;
 
@@ -1311,9 +1311,9 @@ static void ll_io_init(struct cl_io *io, const struct file *file, int write)
 
 static ssize_t ll_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
-	struct lu_env      *env;
+	struct lu_env *env;
 	struct vvp_io_args *args;
-	ssize_t	     result;
+	ssize_t result;
 	u16 refcheck;
 	ssize_t rc2;
 
@@ -1346,9 +1346,9 @@ static ssize_t ll_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
  */
 static ssize_t ll_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
-	struct lu_env      *env;
+	struct lu_env *env;
 	struct vvp_io_args *args;
-	ssize_t	     result;
+	ssize_t result;
 	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
@@ -1393,7 +1393,7 @@ int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename,
 			     struct ptlrpc_request **request)
 {
 	struct ll_sb_info *sbi = ll_i2sbi(inode);
-	struct mdt_body  *body;
+	struct mdt_body *body;
 	struct lov_mds_md *lmm = NULL;
 	struct ptlrpc_request *req = NULL;
 	struct md_op_data *op_data;
@@ -1439,7 +1439,7 @@ int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename,
 
 	/*
 	 * This is coming from the MDS, so is probably in
-	 * little endian.  We convert it to host endian before
+	 * little endian. We convert it to host endian before
 	 * passing it to userspace.
 	 */
 	if (cpu_to_le32(LOV_MAGIC) != LOV_MAGIC) {
@@ -1483,11 +1483,11 @@ int ll_lov_getstripe_ea_info(struct inode *inode, const char *filename,
 static int ll_lov_setea(struct inode *inode, struct file *file,
 			void __user *arg)
 {
-	u64			 flags = MDS_OPEN_HAS_OBJS | FMODE_WRITE;
-	struct lov_user_md	*lump;
-	int			 lum_size = sizeof(struct lov_user_md) +
-					    sizeof(struct lov_user_ost_data);
-	int			 rc;
+	u64 flags = MDS_OPEN_HAS_OBJS | FMODE_WRITE;
+	struct lov_user_md *lump;
+	int lum_size = sizeof(struct lov_user_md) +
+		       sizeof(struct lov_user_ost_data);
+	int rc;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -1562,11 +1562,11 @@ static int ll_lov_setstripe(struct inode *inode, struct file *file,
 static int
 ll_get_grouplock(struct inode *inode, struct file *file, unsigned long arg)
 {
-	struct ll_inode_info   *lli = ll_i2info(inode);
-	struct ll_file_data    *fd = LUSTRE_FPRIVATE(file);
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
 	struct cl_object *obj = lli->lli_clob;
-	struct ll_grouplock    grouplock;
-	int		     rc;
+	struct ll_grouplock grouplock;
+	int rc;
 
 	if (arg == 0) {
 		CWARN("group id for group lock must not be 0\n");
@@ -1635,9 +1635,9 @@ static int ll_lov_setstripe(struct inode *inode, struct file *file,
 static int ll_put_grouplock(struct inode *inode, struct file *file,
 			    unsigned long arg)
 {
-	struct ll_inode_info   *lli = ll_i2info(inode);
-	struct ll_file_data    *fd = LUSTRE_FPRIVATE(file);
-	struct ll_grouplock    grouplock;
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
+	struct ll_grouplock grouplock;
 
 	spin_lock(&lli->lli_lock);
 	if (!(fd->fd_flags & LL_FILE_GROUP_LOCKED)) {
@@ -1931,12 +1931,12 @@ struct ll_swap_stack {
 static int ll_swap_layouts(struct file *file1, struct file *file2,
 			   struct lustre_swap_layouts *lsl)
 {
-	struct mdc_swap_layouts	 msl;
-	struct md_op_data	*op_data;
-	u32			 gid;
-	u64			 dv;
-	struct ll_swap_stack	*llss = NULL;
-	int			 rc;
+	struct mdc_swap_layouts msl;
+	struct md_op_data *op_data;
+	u32 gid;
+	u64 dv;
+	struct ll_swap_stack *llss = NULL;
+	int rc;
 
 	llss = kzalloc(sizeof(*llss), GFP_KERNEL);
 	if (!llss)
@@ -2041,8 +2041,8 @@ static int ll_swap_layouts(struct file *file1, struct file *file2,
 
 int ll_hsm_state_set(struct inode *inode, struct hsm_state_set *hss)
 {
-	struct md_op_data	*op_data;
-	int			 rc;
+	struct md_op_data *op_data;
+	int rc;
 
 	/* Detect out-of range masks */
 	if ((hss->hss_setmask | hss->hss_clearmask) & ~HSM_FLAGS_MASK)
@@ -2076,9 +2076,9 @@ int ll_hsm_state_set(struct inode *inode, struct hsm_state_set *hss)
 static int ll_hsm_import(struct inode *inode, struct file *file,
 			 struct hsm_user_import *hui)
 {
-	struct hsm_state_set	*hss = NULL;
-	struct iattr		*attr = NULL;
-	int			 rc;
+	struct hsm_state_set *hss = NULL;
+	struct iattr *attr = NULL;
+	int rc;
 
 	if (!S_ISREG(inode->i_mode))
 		return -EINVAL;
@@ -2303,9 +2303,9 @@ int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 static long
 ll_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
-	struct inode		*inode = file_inode(file);
-	struct ll_file_data	*fd = LUSTRE_FPRIVATE(file);
-	int			 flags, rc;
+	struct inode *inode = file_inode(file);
+	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
+	int flags, rc;
 
 	CDEBUG(D_VFSTRACE, "VFS Op:inode=" DFID "(%p),cmd=%x\n",
 	       PFID(ll_inode2fid(inode)), inode, cmd);
@@ -2434,7 +2434,7 @@ int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 		return ll_fid2path(inode, (void __user *)arg);
 	case LL_IOC_DATA_VERSION: {
 		struct ioc_data_version	idv;
-		int			rc;
+		int rc;
 
 		if (copy_from_user(&idv, (char __user *)arg, sizeof(idv)))
 			return -EFAULT;
@@ -2464,9 +2464,9 @@ int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 	case OBD_IOC_GETMDNAME:
 		return ll_get_obd_name(inode, cmd, arg);
 	case LL_IOC_HSM_STATE_GET: {
-		struct md_op_data	*op_data;
-		struct hsm_user_state	*hus;
-		int			 rc;
+		struct md_op_data *op_data;
+		struct hsm_user_state *hus;
+		int rc;
 
 		hus = kzalloc(sizeof(*hus), GFP_KERNEL);
 		if (!hus)
@@ -2490,8 +2490,8 @@ int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 		return rc;
 	}
 	case LL_IOC_HSM_STATE_SET: {
-		struct hsm_state_set	*hss;
-		int			 rc;
+		struct hsm_state_set *hss;
+		int rc;
 
 		hss = memdup_user((char __user *)arg, sizeof(*hss));
 		if (IS_ERR(hss))
@@ -2503,9 +2503,9 @@ int ll_ioctl_fssetxattr(struct inode *inode, unsigned int cmd,
 		return rc;
 	}
 	case LL_IOC_HSM_ACTION: {
-		struct md_op_data		*op_data;
-		struct hsm_current_action	*hca;
-		int				 rc;
+		struct md_op_data *op_data;
+		struct hsm_current_action *hca;
+		int rc;
 
 		hca = kzalloc(sizeof(*hca), GFP_KERNEL);
 		if (!hca)
@@ -3564,56 +3564,56 @@ int ll_inode_permission(struct inode *inode, int mask)
 
 /* -o localflock - only provides locally consistent flock locks */
 const struct file_operations ll_file_operations = {
-	.read_iter = ll_file_read_iter,
-	.write_iter = ll_file_write_iter,
-	.unlocked_ioctl = ll_file_ioctl,
-	.open	   = ll_file_open,
-	.release	= ll_file_release,
-	.mmap	   = ll_file_mmap,
-	.llseek	 = ll_file_seek,
-	.splice_read    = generic_file_splice_read,
-	.fsync	  = ll_fsync,
-	.flush	  = ll_flush
+	.read_iter		= ll_file_read_iter,
+	.write_iter		= ll_file_write_iter,
+	.unlocked_ioctl		= ll_file_ioctl,
+	.open			= ll_file_open,
+	.release		= ll_file_release,
+	.mmap			= ll_file_mmap,
+	.llseek			= ll_file_seek,
+	.splice_read		= generic_file_splice_read,
+	.fsync			= ll_fsync,
+	.flush			= ll_flush
 };
 
 const struct file_operations ll_file_operations_flock = {
-	.read_iter    = ll_file_read_iter,
-	.write_iter   = ll_file_write_iter,
-	.unlocked_ioctl = ll_file_ioctl,
-	.open	   = ll_file_open,
-	.release	= ll_file_release,
-	.mmap	   = ll_file_mmap,
-	.llseek	 = ll_file_seek,
-	.splice_read    = generic_file_splice_read,
-	.fsync	  = ll_fsync,
-	.flush	  = ll_flush,
-	.flock	  = ll_file_flock,
-	.lock	   = ll_file_flock
+	.read_iter		= ll_file_read_iter,
+	.write_iter		= ll_file_write_iter,
+	.unlocked_ioctl		= ll_file_ioctl,
+	.open			= ll_file_open,
+	.release		= ll_file_release,
+	.mmap			= ll_file_mmap,
+	.llseek			= ll_file_seek,
+	.splice_read		= generic_file_splice_read,
+	.fsync			= ll_fsync,
+	.flush			= ll_flush,
+	.flock			= ll_file_flock,
+	.lock			= ll_file_flock
 };
 
 /* These are for -o noflock - to return ENOSYS on flock calls */
 const struct file_operations ll_file_operations_noflock = {
-	.read_iter    = ll_file_read_iter,
-	.write_iter   = ll_file_write_iter,
-	.unlocked_ioctl = ll_file_ioctl,
-	.open	   = ll_file_open,
-	.release	= ll_file_release,
-	.mmap	   = ll_file_mmap,
-	.llseek	 = ll_file_seek,
-	.splice_read    = generic_file_splice_read,
-	.fsync	  = ll_fsync,
-	.flush	  = ll_flush,
-	.flock	  = ll_file_noflock,
-	.lock	   = ll_file_noflock
+	.read_iter		= ll_file_read_iter,
+	.write_iter		= ll_file_write_iter,
+	.unlocked_ioctl		= ll_file_ioctl,
+	.open			= ll_file_open,
+	.release		= ll_file_release,
+	.mmap			= ll_file_mmap,
+	.llseek			= ll_file_seek,
+	.splice_read		= generic_file_splice_read,
+	.fsync			= ll_fsync,
+	.flush			= ll_flush,
+	.flock			= ll_file_noflock,
+	.lock			= ll_file_noflock
 };
 
 const struct inode_operations ll_file_inode_operations = {
-	.setattr	= ll_setattr,
-	.getattr	= ll_getattr,
-	.permission	= ll_inode_permission,
-	.listxattr	= ll_listxattr,
-	.fiemap		= ll_fiemap,
-	.get_acl	= ll_get_acl,
+	.setattr		= ll_setattr,
+	.getattr		= ll_getattr,
+	.permission		= ll_inode_permission,
+	.listxattr		= ll_listxattr,
+	.fiemap			= ll_fiemap,
+	.get_acl		= ll_get_acl,
 };
 
 int ll_layout_conf(struct inode *inode, const struct cl_object_conf *conf)
@@ -3746,7 +3746,7 @@ static int ll_layout_lock_set(struct lustre_handle *lockh, enum ldlm_mode mode,
 			      struct inode *inode)
 {
 	struct ll_inode_info *lli = ll_i2info(inode);
-	struct ll_sb_info    *sbi = ll_i2sbi(inode);
+	struct ll_sb_info *sbi = ll_i2sbi(inode);
 	struct ldlm_lock *lock;
 	struct cl_object_conf conf;
 	int rc = 0;
@@ -3834,10 +3834,10 @@ static int ll_layout_lock_set(struct lustre_handle *lockh, enum ldlm_mode mode,
  */
 static int ll_layout_intent(struct inode *inode, struct layout_intent *intent)
 {
-	struct ll_inode_info  *lli = ll_i2info(inode);
-	struct ll_sb_info     *sbi = ll_i2sbi(inode);
-	struct md_op_data     *op_data;
-	struct lookup_intent   it;
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct ll_sb_info *sbi = ll_i2sbi(inode);
+	struct md_op_data *op_data;
+	struct lookup_intent it;
 	struct ptlrpc_request *req;
 	int rc;
 
@@ -3966,7 +3966,7 @@ int ll_layout_write_intent(struct inode *inode, u64 start, u64 end)
 int ll_layout_restore(struct inode *inode, loff_t offset, u64 length)
 {
 	struct hsm_user_request	*hur;
-	int			 len, rc;
+	int len, rc;
 
 	len = sizeof(struct hsm_user_request) +
 	      sizeof(struct hsm_user_item);
diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index d8712a3..27c233d 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -65,7 +65,7 @@ blkcnt_t dirty_cnt(struct inode *inode)
 {
 	blkcnt_t cnt = 0;
 	struct vvp_object *vob = cl_inode2vvp(inode);
-	void	      *results[1];
+	void *results[1];
 
 	if (inode->i_mapping)
 		cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->i_pages,
@@ -80,7 +80,7 @@ blkcnt_t dirty_cnt(struct inode *inode)
 int cl_glimpse_lock(const struct lu_env *env, struct cl_io *io,
 		    struct inode *inode, struct cl_object *clob, int agl)
 {
-	const struct lu_fid  *fid   = lu_object_fid(&clob->co_lu);
+	const struct lu_fid *fid = lu_object_fid(&clob->co_lu);
 	struct cl_lock *lock = vvp_env_lock(env);
 	struct cl_lock_descr *descr = &lock->cll_descr;
 	int result = 0;
@@ -140,10 +140,10 @@ int cl_glimpse_lock(const struct lu_env *env, struct cl_io *io,
 static int cl_io_get(struct inode *inode, struct lu_env **envout,
 		     struct cl_io **ioout, u16 *refcheck)
 {
-	struct lu_env	  *env;
-	struct cl_io	   *io;
-	struct ll_inode_info	*lli = ll_i2info(inode);
-	struct cl_object       *clob = lli->lli_clob;
+	struct lu_env *env;
+	struct cl_io *io;
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct cl_object *clob = lli->lli_clob;
 	int result;
 
 	if (S_ISREG(inode->i_mode)) {
@@ -175,9 +175,9 @@ int __cl_glimpse_size(struct inode *inode, int agl)
 	 * cl_glimpse_size(), which doesn't make sense: glimpse locks are not
 	 * blocking anyway.
 	 */
-	struct lu_env	  *env = NULL;
-	struct cl_io	   *io  = NULL;
-	int		     result;
+	struct lu_env *env = NULL;
+	struct cl_io *io  = NULL;
+	int result;
 	u16 refcheck;
 
 	result = cl_io_get(inode, &env, &io, &refcheck);
diff --git a/drivers/staging/lustre/lustre/llite/lcommon_cl.c b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
index ade3b12..afcaa5e 100644
--- a/drivers/staging/lustre/lustre/llite/lcommon_cl.c
+++ b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
@@ -83,8 +83,8 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr,
 		   enum op_xvalid xvalid, unsigned int attr_flags)
 {
 	struct lu_env *env;
-	struct cl_io  *io;
-	int	    result;
+	struct cl_io *io;
+	int result;
 	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
@@ -137,11 +137,11 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr,
  */
 int cl_file_inode_init(struct inode *inode, struct lustre_md *md)
 {
-	struct lu_env	*env;
+	struct lu_env *env;
 	struct ll_inode_info *lli;
-	struct cl_object     *clob;
-	struct lu_site       *site;
-	struct lu_fid	*fid;
+	struct cl_object *clob;
+	struct lu_site *site;
+	struct lu_fid *fid;
 	struct cl_object_conf conf = {
 		.coc_inode = inode,
 		.u = {
@@ -159,8 +159,8 @@ int cl_file_inode_init(struct inode *inode, struct lustre_md *md)
 		return PTR_ERR(env);
 
 	site = ll_i2sbi(inode)->ll_site;
-	lli  = ll_i2info(inode);
-	fid  = &lli->lli_fid;
+	lli = ll_i2info(inode);
+	fid = &lli->lli_fid;
 	LASSERT(fid_is_sane(fid));
 
 	if (!lli->lli_clob) {
@@ -207,7 +207,7 @@ int cl_file_inode_init(struct inode *inode, struct lustre_md *md)
 static void cl_object_put_last(struct lu_env *env, struct cl_object *obj)
 {
 	struct lu_object_header *header = obj->co_lu.lo_header;
-	wait_queue_entry_t	   waiter;
+	wait_queue_entry_t waiter;
 
 	if (unlikely(atomic_read(&header->loh_ref) != 1)) {
 		struct lu_site *site = obj->co_lu.lo_dev->ld_site;
@@ -234,9 +234,9 @@ static void cl_object_put_last(struct lu_env *env, struct cl_object *obj)
 
 void cl_inode_fini(struct inode *inode)
 {
-	struct lu_env	   *env;
-	struct ll_inode_info    *lli  = ll_i2info(inode);
-	struct cl_object	*clob = lli->lli_clob;
+	struct lu_env *env;
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct cl_object *clob = lli->lli_clob;
 	u16 refcheck;
 	int emergency;
 
diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index 88b08dd..8e09fdd7 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -185,25 +185,25 @@ static int client_common_fill_super(struct super_block *sb, char *md, char *dt)
 	data->ocd_grant_blkbits = PAGE_SHIFT;
 
 	/* indicate the features supported by this client */
-	data->ocd_connect_flags = OBD_CONNECT_IBITS    | OBD_CONNECT_NODEVOH  |
-				  OBD_CONNECT_ATTRFID  |
-				  OBD_CONNECT_VERSION  | OBD_CONNECT_BRW_SIZE |
-				  OBD_CONNECT_CANCELSET | OBD_CONNECT_FID     |
-				  OBD_CONNECT_AT       | OBD_CONNECT_LOV_V3   |
-				  OBD_CONNECT_VBR	| OBD_CONNECT_FULL20  |
+	data->ocd_connect_flags = OBD_CONNECT_IBITS	| OBD_CONNECT_NODEVOH  |
+				  OBD_CONNECT_ATTRFID	|
+				  OBD_CONNECT_VERSION	| OBD_CONNECT_BRW_SIZE |
+				  OBD_CONNECT_CANCELSET | OBD_CONNECT_FID      |
+				  OBD_CONNECT_AT	| OBD_CONNECT_LOV_V3   |
+				  OBD_CONNECT_VBR	| OBD_CONNECT_FULL20   |
 				  OBD_CONNECT_64BITHASH |
 				  OBD_CONNECT_EINPROGRESS |
-				  OBD_CONNECT_JOBSTATS | OBD_CONNECT_LVB_TYPE |
-				  OBD_CONNECT_LAYOUTLOCK |
-				  OBD_CONNECT_PINGLESS |
-				  OBD_CONNECT_MAX_EASIZE |
-				  OBD_CONNECT_FLOCK_DEAD |
+				  OBD_CONNECT_JOBSTATS	| OBD_CONNECT_LVB_TYPE |
+				  OBD_CONNECT_LAYOUTLOCK  |
+				  OBD_CONNECT_PINGLESS	|
+				  OBD_CONNECT_MAX_EASIZE  |
+				  OBD_CONNECT_FLOCK_DEAD  |
 				  OBD_CONNECT_DISP_STRIPE | OBD_CONNECT_LFSCK |
 				  OBD_CONNECT_OPEN_BY_FID |
-				  OBD_CONNECT_DIR_STRIPE |
-				  OBD_CONNECT_BULK_MBITS |
-				  OBD_CONNECT_SUBTREE |
-				  OBD_CONNECT_FLAGS2 | OBD_CONNECT_MULTIMODRPCS;
+				  OBD_CONNECT_DIR_STRIPE  |
+				  OBD_CONNECT_BULK_MBITS  |
+				  OBD_CONNECT_SUBTREE	  |
+				  OBD_CONNECT_FLAGS2	  | OBD_CONNECT_MULTIMODRPCS;
 
 	data->ocd_connect_flags2 = 0;
 
@@ -382,9 +382,9 @@ static int client_common_fill_super(struct super_block *sb, char *md, char *dt)
 				  OBD_CONNECT_VBR	| OBD_CONNECT_FULL20   |
 				  OBD_CONNECT_64BITHASH | OBD_CONNECT_MAXBYTES |
 				  OBD_CONNECT_EINPROGRESS |
-				  OBD_CONNECT_JOBSTATS | OBD_CONNECT_LVB_TYPE |
-				  OBD_CONNECT_LAYOUTLOCK |
-				  OBD_CONNECT_PINGLESS | OBD_CONNECT_LFSCK |
+				  OBD_CONNECT_JOBSTATS	| OBD_CONNECT_LVB_TYPE |
+				  OBD_CONNECT_LAYOUTLOCK  |
+				  OBD_CONNECT_PINGLESS	| OBD_CONNECT_LFSCK |
 				  OBD_CONNECT_BULK_MBITS;
 
 	data->ocd_connect_flags2 = 0;
@@ -913,13 +913,13 @@ int ll_fill_super(struct super_block *sb)
 	struct lustre_profile *lprof = NULL;
 	struct lustre_sb_info *lsi = s2lsi(sb);
 	struct ll_sb_info *sbi;
-	char  *dt = NULL, *md = NULL;
-	char  *profilenm = get_profile_name(sb);
+	char *dt = NULL, *md = NULL;
+	char *profilenm = get_profile_name(sb);
 	struct config_llog_instance *cfg;
 	char name[MAX_STRING_SIZE];
 	char *ptr;
 	int len;
-	int    err;
+	int err;
 	static atomic_t ll_bdi_num = ATOMIC_INIT(0);
 
 	CDEBUG(D_VFSTRACE, "VFS Op: sb %p\n", sb);
@@ -2073,7 +2073,7 @@ int ll_iocontrol(struct inode *inode, struct file *file,
 
 int ll_flush_ctx(struct inode *inode)
 {
-	struct ll_sb_info  *sbi = ll_i2sbi(inode);
+	struct ll_sb_info *sbi = ll_i2sbi(inode);
 
 	CDEBUG(D_SEC, "flush context for user %d\n",
 	       from_kuid(&init_user_ns, current_uid()));
@@ -2186,10 +2186,10 @@ int ll_remount_fs(struct super_block *sb, int *flags, char *data)
  */
 void ll_open_cleanup(struct super_block *sb, struct ptlrpc_request *open_req)
 {
-	struct mdt_body			*body;
-	struct md_op_data		*op_data;
-	struct ptlrpc_request		*close_req = NULL;
-	struct obd_export		*exp	   = ll_s2sbi(sb)->ll_md_exp;
+	struct mdt_body	*body;
+	struct md_op_data *op_data;
+	struct ptlrpc_request *close_req = NULL;
+	struct obd_export *exp = ll_s2sbi(sb)->ll_md_exp;
 
 	body = req_capsule_server_get(&open_req->rq_pill, &RMF_MDT_BODY);
 	op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c
index c6e9f10..f5aaaf7 100644
--- a/drivers/staging/lustre/lustre/llite/llite_mmap.c
+++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -90,11 +90,11 @@ struct vm_area_struct *our_vma(struct mm_struct *mm, unsigned long addr,
 ll_fault_io_init(struct lu_env *env, struct vm_area_struct *vma,
 		 pgoff_t index, unsigned long *ra_flags)
 {
-	struct file	       *file = vma->vm_file;
-	struct inode	       *inode = file_inode(file);
-	struct cl_io	       *io;
-	struct cl_fault_io     *fio;
-	int			rc;
+	struct file *file = vma->vm_file;
+	struct inode *inode = file_inode(file);
+	struct cl_io *io;
+	struct cl_fault_io *fio;
+	int rc;
 
 	if (ll_file_nolock(file))
 		return ERR_PTR(-EOPNOTSUPP);
@@ -105,7 +105,7 @@ struct vm_area_struct *our_vma(struct mm_struct *mm, unsigned long addr,
 	LASSERT(io->ci_obj);
 
 	fio = &io->u.ci_fault;
-	fio->ft_index      = index;
+	fio->ft_index = index;
 	fio->ft_executable = vma->vm_flags & VM_EXEC;
 
 	/*
@@ -146,14 +146,14 @@ struct vm_area_struct *our_vma(struct mm_struct *mm, unsigned long addr,
 static int __ll_page_mkwrite(struct vm_area_struct *vma, struct page *vmpage,
 			     bool *retry)
 {
-	struct lu_env	   *env;
-	struct cl_io	    *io;
-	struct vvp_io	   *vio;
-	int		      result;
+	struct lu_env *env;
+	struct cl_io *io;
+	struct vvp_io *vio;
+	int result;
 	u16 refcheck;
-	sigset_t	     old, new;
-	struct inode	     *inode;
-	struct ll_inode_info     *lli;
+	sigset_t old, new;
+	struct inode *inode;
+	struct ll_inode_info *lli;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
@@ -173,7 +173,7 @@ static int __ll_page_mkwrite(struct vm_area_struct *vma, struct page *vmpage,
 	io->u.ci_fault.ft_writable = 1;
 
 	vio = vvp_env_io(env);
-	vio->u.fault.ft_vma    = vma;
+	vio->u.fault.ft_vma = vma;
 	vio->u.fault.ft_vmpage = vmpage;
 
 	siginitsetinv(&new, sigmask(SIGKILL) | sigmask(SIGTERM));
@@ -263,13 +263,13 @@ static inline vm_fault_t to_fault_error(int result)
  */
 static vm_fault_t __ll_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
-	struct lu_env	   *env;
-	struct cl_io	    *io;
-	struct vvp_io	   *vio = NULL;
-	struct page	     *vmpage;
-	unsigned long	    ra_flags;
-	int		      result = 0;
-	vm_fault_t		fault_ret = 0;
+	struct lu_env *env;
+	struct cl_io *io;
+	struct vvp_io *vio = NULL;
+	struct page *vmpage;
+	unsigned long ra_flags;
+	int result = 0;
+	vm_fault_t fault_ret = 0;
 	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
@@ -307,8 +307,8 @@ static vm_fault_t __ll_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	result = io->ci_result;
 	if (result == 0) {
 		vio = vvp_env_io(env);
-		vio->u.fault.ft_vma       = vma;
-		vio->u.fault.ft_vmpage    = NULL;
+		vio->u.fault.ft_vma = vma;
+		vio->u.fault.ft_vmpage = NULL;
 		vio->u.fault.ft_vmf = vmf;
 		vio->u.fault.ft_flags = 0;
 		vio->u.fault.ft_flags_valid = false;
@@ -444,7 +444,7 @@ static vm_fault_t ll_page_mkwrite(struct vm_fault *vmf)
  */
 static void ll_vm_open(struct vm_area_struct *vma)
 {
-	struct inode *inode    = file_inode(vma->vm_file);
+	struct inode *inode = file_inode(vma->vm_file);
 	struct vvp_object *vob = cl_inode2vvp(inode);
 
 	LASSERT(atomic_read(&vob->vob_mmap_cnt) >= 0);
@@ -456,8 +456,8 @@ static void ll_vm_open(struct vm_area_struct *vma)
  */
 static void ll_vm_close(struct vm_area_struct *vma)
 {
-	struct inode      *inode = file_inode(vma->vm_file);
-	struct vvp_object *vob   = cl_inode2vvp(inode);
+	struct inode *inode = file_inode(vma->vm_file);
+	struct vvp_object *vob = cl_inode2vvp(inode);
 
 	atomic_dec(&vob->vob_mmap_cnt);
 	LASSERT(atomic_read(&vob->vob_mmap_cnt) >= 0);
diff --git a/drivers/staging/lustre/lustre/llite/llite_nfs.c b/drivers/staging/lustre/lustre/llite/llite_nfs.c
index 7c5c9b8..3f34073 100644
--- a/drivers/staging/lustre/lustre/llite/llite_nfs.c
+++ b/drivers/staging/lustre/lustre/llite/llite_nfs.c
@@ -76,14 +76,14 @@ void get_uuid2fsid(const char *name, int len, __kernel_fsid_t *fsid)
 struct inode *search_inode_for_lustre(struct super_block *sb,
 				      const struct lu_fid *fid)
 {
-	struct ll_sb_info     *sbi = ll_s2sbi(sb);
+	struct ll_sb_info *sbi = ll_s2sbi(sb);
 	struct ptlrpc_request *req = NULL;
-	struct inode	  *inode = NULL;
-	int		   eadatalen = 0;
-	unsigned long	      hash = cl_fid_build_ino(fid,
-						      ll_need_32bit_api(sbi));
-	struct  md_op_data    *op_data;
-	int		   rc;
+	struct inode *inode = NULL;
+	int eadatalen = 0;
+	unsigned long hash = cl_fid_build_ino(fid,
+					      ll_need_32bit_api(sbi));
+	struct md_op_data *op_data;
+	int rc;
 
 	CDEBUG(D_INFO, "searching inode for:(%lu," DFID ")\n", hash, PFID(fid));
 
@@ -123,15 +123,15 @@ struct inode *search_inode_for_lustre(struct super_block *sb,
 }
 
 struct lustre_nfs_fid {
-	struct lu_fid   lnf_child;
-	struct lu_fid   lnf_parent;
+	struct lu_fid	lnf_child;
+	struct lu_fid	lnf_parent;
 };
 
 static struct dentry *
 ll_iget_for_nfs(struct super_block *sb,
 		struct lu_fid *fid, struct lu_fid *parent)
 {
-	struct inode  *inode;
+	struct inode *inode;
 	struct dentry *result;
 
 	if (!fid_is_sane(fid))
@@ -179,7 +179,7 @@ struct lustre_nfs_fid {
 
 /**
  * \a connectable - is nfsd will connect himself or this should be done
- *		  at lustre
+ *		    at lustre
  *
  * The return value is file handle type:
  * 1 -- contains child file handle;
@@ -297,12 +297,12 @@ static struct dentry *ll_fh_to_parent(struct super_block *sb, struct fid *fid,
 int ll_dir_get_parent_fid(struct inode *dir, struct lu_fid *parent_fid)
 {
 	struct ptlrpc_request *req = NULL;
-	struct ll_sb_info     *sbi;
-	struct mdt_body       *body;
+	struct ll_sb_info *sbi;
+	struct mdt_body *body;
 	static const char dotdot[] = "..";
-	struct md_op_data     *op_data;
-	int		   rc;
-	int		      lmmsize;
+	struct md_op_data *op_data;
+	int rc;
+	int lmmsize;
 
 	LASSERT(dir && S_ISDIR(dir->i_mode));
 
@@ -361,9 +361,9 @@ static struct dentry *ll_get_parent(struct dentry *dchild)
 }
 
 const struct export_operations lustre_export_operations = {
-	.get_parent = ll_get_parent,
-	.encode_fh  = ll_encode_fh,
-	.get_name   = ll_get_name,
-	.fh_to_dentry = ll_fh_to_dentry,
-	.fh_to_parent = ll_fh_to_parent,
+	.get_parent	= ll_get_parent,
+	.encode_fh	= ll_encode_fh,
+	.get_name	= ll_get_name,
+	.fh_to_dentry	= ll_fh_to_dentry,
+	.fh_to_parent	= ll_fh_to_parent,
 };
diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c
index 672de81..001bed9 100644
--- a/drivers/staging/lustre/lustre/llite/lproc_llite.c
+++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c
@@ -452,8 +452,8 @@ static ssize_t max_read_ahead_whole_mb_store(struct kobject *kobj,
 
 static int ll_max_cached_mb_seq_show(struct seq_file *m, void *v)
 {
-	struct super_block     *sb    = m->private;
-	struct ll_sb_info      *sbi   = ll_s2sbi(sb);
+	struct super_block *sb = m->private;
+	struct ll_sb_info *sbi = ll_s2sbi(sb);
 	struct cl_client_cache *cache = sbi->ll_cache;
 	int shift = 20 - PAGE_SHIFT;
 	long max_cached_mb;
@@ -1050,8 +1050,8 @@ static ssize_t fast_read_store(struct kobject *kobj,
 
 static int ll_unstable_stats_seq_show(struct seq_file *m, void *v)
 {
-	struct super_block     *sb    = m->private;
-	struct ll_sb_info      *sbi   = ll_s2sbi(sb);
+	struct super_block *sb = m->private;
+	struct ll_sb_info *sbi = ll_s2sbi(sb);
 	struct cl_client_cache *cache = sbi->ll_cache;
 	long pages;
 	int mb;
@@ -1232,9 +1232,9 @@ static void llite_kobj_release(struct kobject *kobj)
 };
 
 static const struct llite_file_opcode {
-	u32       opcode;
-	u32       type;
-	const char *opname;
+	u32		opcode;
+	u32		type;
+	const char	*opname;
 } llite_opcode_table[LPROC_LL_FILE_OPCODES] = {
 	/* file operation */
 	{ LPROC_LL_DIRTY_HITS,     LPROCFS_TYPE_REGS, "dirty_pages_hits" },
@@ -1247,31 +1247,31 @@ static void llite_kobj_release(struct kobject *kobj)
 				   "brw_read" },
 	{ LPROC_LL_BRW_WRITE,      LPROCFS_CNTR_AVGMINMAX | LPROCFS_TYPE_PAGES,
 				   "brw_write" },
-	{ LPROC_LL_IOCTL,	  LPROCFS_TYPE_REGS, "ioctl" },
+	{ LPROC_LL_IOCTL,	   LPROCFS_TYPE_REGS, "ioctl" },
 	{ LPROC_LL_OPEN,	   LPROCFS_TYPE_REGS, "open" },
-	{ LPROC_LL_RELEASE,	LPROCFS_TYPE_REGS, "close" },
-	{ LPROC_LL_MAP,	    LPROCFS_TYPE_REGS, "mmap" },
-	{ LPROC_LL_FAULT,		LPROCFS_TYPE_REGS, "page_fault" },
-	{ LPROC_LL_MKWRITE,		LPROCFS_TYPE_REGS, "page_mkwrite" },
-	{ LPROC_LL_LLSEEK,	 LPROCFS_TYPE_REGS, "seek" },
-	{ LPROC_LL_FSYNC,	  LPROCFS_TYPE_REGS, "fsync" },
-	{ LPROC_LL_READDIR,	LPROCFS_TYPE_REGS, "readdir" },
+	{ LPROC_LL_RELEASE,	   LPROCFS_TYPE_REGS, "close" },
+	{ LPROC_LL_MAP,		   LPROCFS_TYPE_REGS, "mmap" },
+	{ LPROC_LL_FAULT,	   LPROCFS_TYPE_REGS, "page_fault" },
+	{ LPROC_LL_MKWRITE,	   LPROCFS_TYPE_REGS, "page_mkwrite" },
+	{ LPROC_LL_LLSEEK,	   LPROCFS_TYPE_REGS, "seek" },
+	{ LPROC_LL_FSYNC,	   LPROCFS_TYPE_REGS, "fsync" },
+	{ LPROC_LL_READDIR,	   LPROCFS_TYPE_REGS, "readdir" },
 	/* inode operation */
-	{ LPROC_LL_SETATTR,	LPROCFS_TYPE_REGS, "setattr" },
-	{ LPROC_LL_TRUNC,	  LPROCFS_TYPE_REGS, "truncate" },
-	{ LPROC_LL_FLOCK,	  LPROCFS_TYPE_REGS, "flock" },
-	{ LPROC_LL_GETATTR,	LPROCFS_TYPE_REGS, "getattr" },
+	{ LPROC_LL_SETATTR,	   LPROCFS_TYPE_REGS, "setattr" },
+	{ LPROC_LL_TRUNC,	   LPROCFS_TYPE_REGS, "truncate" },
+	{ LPROC_LL_FLOCK,	   LPROCFS_TYPE_REGS, "flock" },
+	{ LPROC_LL_GETATTR,	   LPROCFS_TYPE_REGS, "getattr" },
 	/* dir inode operation */
-	{ LPROC_LL_CREATE,	 LPROCFS_TYPE_REGS, "create" },
+	{ LPROC_LL_CREATE,	   LPROCFS_TYPE_REGS, "create" },
 	{ LPROC_LL_LINK,	   LPROCFS_TYPE_REGS, "link" },
-	{ LPROC_LL_UNLINK,	 LPROCFS_TYPE_REGS, "unlink" },
-	{ LPROC_LL_SYMLINK,	LPROCFS_TYPE_REGS, "symlink" },
-	{ LPROC_LL_MKDIR,	  LPROCFS_TYPE_REGS, "mkdir" },
-	{ LPROC_LL_RMDIR,	  LPROCFS_TYPE_REGS, "rmdir" },
-	{ LPROC_LL_MKNOD,	  LPROCFS_TYPE_REGS, "mknod" },
-	{ LPROC_LL_RENAME,	 LPROCFS_TYPE_REGS, "rename" },
+	{ LPROC_LL_UNLINK,	   LPROCFS_TYPE_REGS, "unlink" },
+	{ LPROC_LL_SYMLINK,	   LPROCFS_TYPE_REGS, "symlink" },
+	{ LPROC_LL_MKDIR,	   LPROCFS_TYPE_REGS, "mkdir" },
+	{ LPROC_LL_RMDIR,	   LPROCFS_TYPE_REGS, "rmdir" },
+	{ LPROC_LL_MKNOD,	   LPROCFS_TYPE_REGS, "mknod" },
+	{ LPROC_LL_RENAME,	   LPROCFS_TYPE_REGS, "rename" },
 	/* special inode operation */
-	{ LPROC_LL_STAFS,	  LPROCFS_TYPE_REGS, "statfs" },
+	{ LPROC_LL_STAFS,	   LPROCFS_TYPE_REGS, "statfs" },
 	{ LPROC_LL_ALLOC_INODE,    LPROCFS_TYPE_REGS, "alloc_inode" },
 	{ LPROC_LL_SETXATTR,       LPROCFS_TYPE_REGS, "setxattr" },
 	{ LPROC_LL_GETXATTR,       LPROCFS_TYPE_REGS, "getxattr" },
@@ -1301,19 +1301,19 @@ void ll_stats_ops_tally(struct ll_sb_info *sbi, int op, int count)
 EXPORT_SYMBOL(ll_stats_ops_tally);
 
 static const char *ra_stat_string[] = {
-	[RA_STAT_HIT] = "hits",
-	[RA_STAT_MISS] = "misses",
-	[RA_STAT_DISTANT_READPAGE] = "readpage not consecutive",
-	[RA_STAT_MISS_IN_WINDOW] = "miss inside window",
-	[RA_STAT_FAILED_GRAB_PAGE] = "failed grab_cache_page",
-	[RA_STAT_FAILED_MATCH] = "failed lock match",
-	[RA_STAT_DISCARDED] = "read but discarded",
-	[RA_STAT_ZERO_LEN] = "zero length file",
-	[RA_STAT_ZERO_WINDOW] = "zero size window",
-	[RA_STAT_EOF] = "read-ahead to EOF",
-	[RA_STAT_MAX_IN_FLIGHT] = "hit max r-a issue",
-	[RA_STAT_WRONG_GRAB_PAGE] = "wrong page from grab_cache_page",
-	[RA_STAT_FAILED_REACH_END] = "failed to reach end"
+	[RA_STAT_HIT]			= "hits",
+	[RA_STAT_MISS]			= "misses",
+	[RA_STAT_DISTANT_READPAGE]	= "readpage not consecutive",
+	[RA_STAT_MISS_IN_WINDOW]	= "miss inside window",
+	[RA_STAT_FAILED_GRAB_PAGE]	= "failed grab_cache_page",
+	[RA_STAT_FAILED_MATCH]		= "failed lock match",
+	[RA_STAT_DISCARDED]		= "read but discarded",
+	[RA_STAT_ZERO_LEN]		= "zero length file",
+	[RA_STAT_ZERO_WINDOW]		= "zero size window",
+	[RA_STAT_EOF]			= "read-ahead to EOF",
+	[RA_STAT_MAX_IN_FLIGHT]		= "hit max r-a issue",
+	[RA_STAT_WRONG_GRAB_PAGE]	= "wrong page from grab_cache_page",
+	[RA_STAT_FAILED_REACH_END]	= "failed to reach end"
 };
 
 int ll_debugfs_register_super(struct super_block *sb, const char *name)
diff --git a/drivers/staging/lustre/lustre/llite/namei.c b/drivers/staging/lustre/lustre/llite/namei.c
index 8bdf947..a87c8a2 100644
--- a/drivers/staging/lustre/lustre/llite/namei.c
+++ b/drivers/staging/lustre/lustre/llite/namei.c
@@ -53,7 +53,7 @@ static int ll_create_it(struct inode *dir, struct dentry *dentry,
 static int ll_test_inode(struct inode *inode, void *opaque)
 {
 	struct ll_inode_info *lli = ll_i2info(inode);
-	struct lustre_md     *md = opaque;
+	struct lustre_md *md = opaque;
 
 	if (unlikely(!(md->body->mbo_valid & OBD_MD_FLID))) {
 		CERROR("MDS body missing FID\n");
@@ -102,7 +102,7 @@ static int ll_set_inode(struct inode *inode, void *opaque)
 struct inode *ll_iget(struct super_block *sb, ino_t hash,
 		      struct lustre_md *md)
 {
-	struct inode	 *inode;
+	struct inode *inode;
 	int rc = 0;
 
 	LASSERT(hash != 0);
@@ -499,8 +499,9 @@ static int ll_lookup_it_finish(struct ptlrpc_request *request,
 		 */
 		/* Check that parent has UPDATE lock. */
 		struct lookup_intent parent_it = {
-					.it_op = IT_GETATTR,
-					.it_lock_handle = 0 };
+			.it_op = IT_GETATTR,
+			.it_lock_handle = 0
+		};
 		struct lu_fid fid = ll_i2info(parent)->lli_fid;
 
 		/* If it is striped directory, get the real stripe parent */
@@ -1255,28 +1256,28 @@ static int ll_rename(struct inode *src, struct dentry *src_dchild,
 }
 
 const struct inode_operations ll_dir_inode_operations = {
-	.mknod	      = ll_mknod,
-	.atomic_open	    = ll_atomic_open,
-	.lookup	     = ll_lookup_nd,
-	.create	     = ll_create_nd,
+	.mknod			= ll_mknod,
+	.atomic_open		= ll_atomic_open,
+	.lookup			= ll_lookup_nd,
+	.create			= ll_create_nd,
 	/* We need all these non-raw things for NFSD, to not patch it. */
-	.unlink	     = ll_unlink,
-	.mkdir	      = ll_mkdir,
-	.rmdir	      = ll_rmdir,
-	.symlink	    = ll_symlink,
-	.link	       = ll_link,
-	.rename		= ll_rename,
-	.setattr	    = ll_setattr,
-	.getattr	    = ll_getattr,
-	.permission	 = ll_inode_permission,
-	.listxattr	  = ll_listxattr,
-	.get_acl	    = ll_get_acl,
+	.unlink			= ll_unlink,
+	.mkdir			= ll_mkdir,
+	.rmdir			= ll_rmdir,
+	.symlink		= ll_symlink,
+	.link			= ll_link,
+	.rename			= ll_rename,
+	.setattr		= ll_setattr,
+	.getattr		= ll_getattr,
+	.permission		= ll_inode_permission,
+	.listxattr		= ll_listxattr,
+	.get_acl		= ll_get_acl,
 };
 
 const struct inode_operations ll_special_inode_operations = {
-	.setattr	= ll_setattr,
-	.getattr	= ll_getattr,
-	.permission     = ll_inode_permission,
-	.listxattr      = ll_listxattr,
-	.get_acl	    = ll_get_acl,
+	.setattr		= ll_setattr,
+	.getattr		= ll_getattr,
+	.permission		= ll_inode_permission,
+	.listxattr		= ll_listxattr,
+	.get_acl		= ll_get_acl,
 };
diff --git a/drivers/staging/lustre/lustre/llite/rw.c b/drivers/staging/lustre/lustre/llite/rw.c
index e207d7c..af983ee 100644
--- a/drivers/staging/lustre/lustre/llite/rw.c
+++ b/drivers/staging/lustre/lustre/llite/rw.c
@@ -309,7 +309,7 @@ static inline int stride_io_mode(struct ll_readahead_state *ras)
 static int ria_page_count(struct ra_io_arg *ria)
 {
 	u64 length = ria->ria_end >= ria->ria_start ?
-		       ria->ria_end - ria->ria_start + 1 : 0;
+		     ria->ria_end - ria->ria_start + 1 : 0;
 
 	return stride_pg_count(ria->ria_stoff, ria->ria_length,
 			       ria->ria_pages, ria->ria_start,
diff --git a/drivers/staging/lustre/lustre/llite/rw26.c b/drivers/staging/lustre/lustre/llite/rw26.c
index 9843c9e..e4ce3b6 100644
--- a/drivers/staging/lustre/lustre/llite/rw26.c
+++ b/drivers/staging/lustre/lustre/llite/rw26.c
@@ -67,9 +67,9 @@
 static void ll_invalidatepage(struct page *vmpage, unsigned int offset,
 			      unsigned int length)
 {
-	struct inode     *inode;
-	struct lu_env    *env;
-	struct cl_page   *page;
+	struct inode *inode;
+	struct lu_env *env;
+	struct cl_page *page;
 	struct cl_object *obj;
 
 	LASSERT(PageLocked(vmpage));
@@ -101,9 +101,9 @@ static void ll_invalidatepage(struct page *vmpage, unsigned int offset,
 
 static int ll_releasepage(struct page *vmpage, gfp_t gfp_mask)
 {
-	struct lu_env     *env;
-	struct cl_object  *obj;
-	struct cl_page    *page;
+	struct lu_env *env;
+	struct cl_object *obj;
+	struct cl_page *page;
 	struct address_space *mapping;
 	int result = 0;
 
@@ -177,9 +177,9 @@ static ssize_t ll_direct_IO_seg(const struct lu_env *env, struct cl_io *io,
 				loff_t file_offset, struct page **pages,
 				int page_count)
 {
-	struct cl_page    *clp;
-	struct cl_2queue  *queue;
-	struct cl_object  *obj = io->ci_obj;
+	struct cl_page *clp;
+	struct cl_2queue *queue;
+	struct cl_object *obj = io->ci_obj;
 	int i;
 	ssize_t rc = 0;
 	size_t page_size = cl_page_size(obj);
@@ -214,8 +214,8 @@ static ssize_t ll_direct_IO_seg(const struct lu_env *env, struct cl_io *io,
 			struct page *vmpage = cl_page_vmpage(clp);
 			struct page *src_page;
 			struct page *dst_page;
-			void       *src;
-			void       *dst;
+			void *src;
+			void *dst;
 
 			src_page = (rw == WRITE) ? pages[i] : vmpage;
 			dst_page = (rw == WRITE) ? vmpage : pages[i];
@@ -386,11 +386,11 @@ static ssize_t ll_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 static int ll_prepare_partial_page(const struct lu_env *env, struct cl_io *io,
 				   struct cl_page *pg)
 {
-	struct cl_attr *attr   = vvp_env_thread_attr(env);
-	struct cl_object *obj  = io->ci_obj;
-	struct vvp_page *vpg   = cl_object_page_slice(obj, pg);
-	loff_t          offset = cl_offset(obj, vvp_index(vpg));
-	int             result;
+	struct cl_attr *attr = vvp_env_thread_attr(env);
+	struct cl_object *obj = io->ci_obj;
+	struct vvp_page *vpg = cl_object_page_slice(obj, pg);
+	loff_t offset = cl_offset(obj, vvp_index(vpg));
+	int result;
 
 	cl_object_attr_lock(obj);
 	result = cl_object_attr_get(env, obj, attr);
@@ -421,7 +421,7 @@ static int ll_write_begin(struct file *file, struct address_space *mapping,
 {
 	struct ll_cl_context *lcc;
 	const struct lu_env *env = NULL;
-	struct cl_io   *io;
+	struct cl_io *io;
 	struct cl_page *page = NULL;
 	struct cl_object *clob = ll_i2info(mapping->host)->lli_clob;
 	pgoff_t index = pos >> PAGE_SHIFT;
@@ -594,8 +594,7 @@ static int ll_write_end(struct file *file, struct address_space *mapping,
 #ifdef CONFIG_MIGRATION
 static int ll_migratepage(struct address_space *mapping,
 			  struct page *newpage, struct page *page,
-			  enum migrate_mode mode
-		)
+			  enum migrate_mode mode)
 {
 	/* Always fail page migration until we have a proper implementation */
 	return -EIO;
@@ -603,16 +602,16 @@ static int ll_migratepage(struct address_space *mapping,
 #endif
 
 const struct address_space_operations ll_aops = {
-	.readpage	= ll_readpage,
-	.direct_IO      = ll_direct_IO,
-	.writepage      = ll_writepage,
-	.writepages     = ll_writepages,
-	.set_page_dirty = __set_page_dirty_nobuffers,
-	.write_begin    = ll_write_begin,
-	.write_end      = ll_write_end,
-	.invalidatepage = ll_invalidatepage,
-	.releasepage    = (void *)ll_releasepage,
+	.readpage		= ll_readpage,
+	.direct_IO		= ll_direct_IO,
+	.writepage		= ll_writepage,
+	.writepages		= ll_writepages,
+	.set_page_dirty		= __set_page_dirty_nobuffers,
+	.write_begin		= ll_write_begin,
+	.write_end		= ll_write_end,
+	.invalidatepage		= ll_invalidatepage,
+	.releasepage		= (void *)ll_releasepage,
 #ifdef CONFIG_MIGRATION
-	.migratepage    = ll_migratepage,
+	.migratepage		= ll_migratepage,
 #endif
 };
diff --git a/drivers/staging/lustre/lustre/llite/statahead.c b/drivers/staging/lustre/lustre/llite/statahead.c
index 6f5c7ab..0c305ba 100644
--- a/drivers/staging/lustre/lustre/llite/statahead.c
+++ b/drivers/staging/lustre/lustre/llite/statahead.c
@@ -61,25 +61,25 @@ enum se_stat {
  */
 struct sa_entry {
 	/* link into sai_interim_entries or sai_entries */
-	struct list_head	      se_list;
+	struct list_head	se_list;
 	/* link into sai hash table locally */
-	struct list_head	      se_hash;
+	struct list_head	se_hash;
 	/* entry index in the sai */
-	u64		   se_index;
+	u64			se_index;
 	/* low layer ldlm lock handle */
-	u64		   se_handle;
+	u64			se_handle;
 	/* entry status */
 	enum se_stat		se_state;
 	/* entry size, contains name */
-	int		     se_size;
+	int			se_size;
 	/* pointer to async getattr enqueue info */
-	struct md_enqueue_info *se_minfo;
+	struct md_enqueue_info	*se_minfo;
 	/* pointer to the async getattr request */
-	struct ptlrpc_request  *se_req;
+	struct ptlrpc_request	*se_req;
 	/* pointer to the target inode */
-	struct inode	   *se_inode;
+	struct inode		*se_inode;
 	/* entry name */
-	struct qstr	     se_qstr;
+	struct qstr		se_qstr;
 	/* entry fid */
 	struct lu_fid		se_fid;
 };
@@ -175,9 +175,9 @@ static inline int is_omitted_entry(struct ll_statahead_info *sai, u64 index)
 	 const char *name, int len, const struct lu_fid *fid)
 {
 	struct ll_inode_info *lli;
-	struct sa_entry   *entry;
-	int		   entry_size;
-	char		 *dname;
+	struct sa_entry *entry;
+	int entry_size;
+	char *dname;
 
 	entry_size = sizeof(struct sa_entry) + (len & ~3) + 4;
 	entry = kzalloc(entry_size, GFP_NOFS);
@@ -368,9 +368,9 @@ static void sa_free(struct ll_statahead_info *sai, struct sa_entry *entry)
 static void ll_agl_add(struct ll_statahead_info *sai,
 		       struct inode *inode, int index)
 {
-	struct ll_inode_info *child  = ll_i2info(inode);
+	struct ll_inode_info *child = ll_i2info(inode);
 	struct ll_inode_info *parent = ll_i2info(sai->sai_dentry->d_inode);
-	int		   added  = 0;
+	int added = 0;
 
 	spin_lock(&child->lli_agl_lock);
 	if (child->lli_agl_index == 0) {
@@ -398,7 +398,7 @@ static struct ll_statahead_info *ll_sai_alloc(struct dentry *dentry)
 {
 	struct ll_inode_info *lli = ll_i2info(dentry->d_inode);
 	struct ll_statahead_info *sai;
-	int		       i;
+	int i;
 
 	sai = kzalloc(sizeof(*sai), GFP_NOFS);
 	if (!sai)
@@ -491,9 +491,9 @@ static void ll_sai_put(struct ll_statahead_info *sai)
 /* Do NOT forget to drop inode refcount when into sai_agls. */
 static void ll_agl_trigger(struct inode *inode, struct ll_statahead_info *sai)
 {
-	struct ll_inode_info *lli   = ll_i2info(inode);
-	u64		 index = lli->lli_agl_index;
-	int		   rc;
+	struct ll_inode_info *lli = ll_i2info(inode);
+	u64 index = lli->lli_agl_index;
+	int rc;
 
 	LASSERT(list_empty(&lli->lli_agl_list));
 
@@ -569,12 +569,12 @@ static void sa_instantiate(struct ll_statahead_info *sai,
 			   struct sa_entry *entry)
 {
 	struct inode *dir = sai->sai_dentry->d_inode;
-	struct inode	   *child;
+	struct inode *child;
 	struct md_enqueue_info *minfo;
-	struct lookup_intent   *it;
-	struct ptlrpc_request  *req;
+	struct lookup_intent *it;
+	struct ptlrpc_request *req;
 	struct mdt_body	*body;
-	int		     rc    = 0;
+	int rc = 0;
 
 	LASSERT(entry->se_handle != 0);
 
@@ -660,9 +660,9 @@ static void sa_handle_callback(struct ll_statahead_info *sai)
 static int ll_statahead_interpret(struct ptlrpc_request *req,
 				  struct md_enqueue_info *minfo, int rc)
 {
-	struct lookup_intent     *it  = &minfo->mi_it;
-	struct inode	     *dir = minfo->mi_dir;
-	struct ll_inode_info     *lli = ll_i2info(dir);
+	struct lookup_intent *it = &minfo->mi_it;
+	struct inode *dir = minfo->mi_dir;
+	struct ll_inode_info *lli = ll_i2info(dir);
 	struct ll_statahead_info *sai = lli->lli_sai;
 	struct sa_entry *entry = (struct sa_entry *)minfo->mi_cbdata;
 	u64 handle = 0;
@@ -738,9 +738,9 @@ static void sa_fini_data(struct md_enqueue_info *minfo)
 static struct md_enqueue_info *
 sa_prep_data(struct inode *dir, struct inode *child, struct sa_entry *entry)
 {
-	struct md_enqueue_info   *minfo;
+	struct md_enqueue_info *minfo;
 	struct ldlm_enqueue_info *einfo;
-	struct md_op_data	*op_data;
+	struct md_op_data *op_data;
 
 	minfo = kzalloc(sizeof(*minfo), GFP_NOFS);
 	if (!minfo)
@@ -762,11 +762,11 @@ static void sa_fini_data(struct md_enqueue_info *minfo)
 	minfo->mi_cbdata = entry;
 
 	einfo = &minfo->mi_einfo;
-	einfo->ei_type   = LDLM_IBITS;
-	einfo->ei_mode   = it_to_lock_mode(&minfo->mi_it);
-	einfo->ei_cb_bl  = ll_md_blocking_ast;
-	einfo->ei_cb_cp  = ldlm_completion_ast;
-	einfo->ei_cb_gl  = NULL;
+	einfo->ei_type = LDLM_IBITS;
+	einfo->ei_mode = it_to_lock_mode(&minfo->mi_it);
+	einfo->ei_cb_bl = ll_md_blocking_ast;
+	einfo->ei_cb_cp = ldlm_completion_ast;
+	einfo->ei_cb_gl = NULL;
 	einfo->ei_cbdata = NULL;
 
 	return minfo;
@@ -775,8 +775,8 @@ static void sa_fini_data(struct md_enqueue_info *minfo)
 /* async stat for file not found in dcache */
 static int sa_lookup(struct inode *dir, struct sa_entry *entry)
 {
-	struct md_enqueue_info   *minfo;
-	int		       rc;
+	struct md_enqueue_info *minfo;
+	int rc;
 
 	minfo = sa_prep_data(dir, NULL, entry);
 	if (IS_ERR(minfo))
@@ -799,10 +799,12 @@ static int sa_lookup(struct inode *dir, struct sa_entry *entry)
 static int sa_revalidate(struct inode *dir, struct sa_entry *entry,
 			 struct dentry *dentry)
 {
-	struct inode	     *inode = d_inode(dentry);
-	struct lookup_intent      it = { .it_op = IT_GETATTR,
-					 .it_lock_handle = 0 };
-	struct md_enqueue_info   *minfo;
+	struct inode *inode = d_inode(dentry);
+	struct lookup_intent it = {
+		.it_op = IT_GETATTR,
+		.it_lock_handle = 0
+	};
+	struct md_enqueue_info *minfo;
 	int rc;
 
 	if (unlikely(!inode))
@@ -841,12 +843,12 @@ static int sa_revalidate(struct inode *dir, struct sa_entry *entry,
 static void sa_statahead(struct dentry *parent, const char *name, int len,
 			 const struct lu_fid *fid)
 {
-	struct inode	     *dir    = d_inode(parent);
-	struct ll_inode_info     *lli    = ll_i2info(dir);
-	struct ll_statahead_info *sai    = lli->lli_sai;
-	struct dentry	    *dentry = NULL;
+	struct inode *dir = d_inode(parent);
+	struct ll_inode_info *lli = ll_i2info(dir);
+	struct ll_statahead_info *sai = lli->lli_sai;
+	struct dentry *dentry = NULL;
 	struct sa_entry *entry;
-	int		       rc;
+	int rc;
 
 	entry = sa_alloc(parent, sai, sai->sai_index, name, len, fid);
 	if (IS_ERR(entry))
@@ -875,10 +877,10 @@ static void sa_statahead(struct dentry *parent, const char *name, int len,
 /* async glimpse (agl) thread main function */
 static int ll_agl_thread(void *arg)
 {
-	struct dentry	    *parent = arg;
-	struct inode	     *dir    = d_inode(parent);
-	struct ll_inode_info     *plli   = ll_i2info(dir);
-	struct ll_inode_info     *clli;
+	struct dentry *parent = arg;
+	struct inode *dir = d_inode(parent);
+	struct ll_inode_info *plli = ll_i2info(dir);
+	struct ll_inode_info *clli;
 	/* We already own this reference, so it is safe to take it without a lock. */
 	struct ll_statahead_info *sai = plli->lli_sai;
 
@@ -929,7 +931,7 @@ static int ll_agl_thread(void *arg)
 /* start agl thread */
 static void ll_start_agl(struct dentry *parent, struct ll_statahead_info *sai)
 {
-	struct ll_inode_info  *plli;
+	struct ll_inode_info *plli;
 	struct task_struct *task;
 
 	CDEBUG(D_READA, "start agl thread: sai %p, parent %pd\n",
@@ -957,15 +959,15 @@ static void ll_start_agl(struct dentry *parent, struct ll_statahead_info *sai)
 /* statahead thread main function */
 static int ll_statahead_thread(void *arg)
 {
-	struct dentry	    *parent = arg;
-	struct inode	     *dir    = d_inode(parent);
-	struct ll_inode_info     *lli   = ll_i2info(dir);
-	struct ll_sb_info	*sbi    = ll_i2sbi(dir);
+	struct dentry *parent = arg;
+	struct inode *dir = d_inode(parent);
+	struct ll_inode_info *lli = ll_i2info(dir);
+	struct ll_sb_info *sbi = ll_i2sbi(dir);
 	struct ll_statahead_info *sai = lli->lli_sai;
-	struct page	      *page = NULL;
-	u64		     pos    = 0;
-	int		       first  = 0;
-	int		       rc     = 0;
+	struct page *page = NULL;
+	u64 pos = 0;
+	int first = 0;
+	int rc = 0;
 	struct md_op_data *op_data;
 
 	CDEBUG(D_READA, "statahead thread starting: sai %p, parent %pd\n",
@@ -980,7 +982,7 @@ static int ll_statahead_thread(void *arg)
 
 	while (pos != MDS_DIR_END_OFF && sai->sai_task) {
 		struct lu_dirpage *dp;
-		struct lu_dirent  *ent;
+		struct lu_dirent *ent;
 
 		sai->sai_in_readpage = 1;
 		page = ll_get_dir_page(dir, op_data, pos);
@@ -1225,11 +1227,11 @@ enum {
 /* file is first dirent under @dir */
 static int is_first_dirent(struct inode *dir, struct dentry *dentry)
 {
-	const struct qstr  *target = &dentry->d_name;
+	const struct qstr *target = &dentry->d_name;
 	struct md_op_data *op_data;
-	struct page	  *page;
-	u64		 pos    = 0;
-	int		   dot_de;
+	struct page *page;
+	u64 pos = 0;
+	int dot_de;
 	int rc = LS_NOT_FIRST_DE;
 
 	op_data = ll_prep_md_op_data(NULL, dir, dir, NULL, 0, 0,
@@ -1243,7 +1245,7 @@ static int is_first_dirent(struct inode *dir, struct dentry *dentry)
 
 	while (1) {
 		struct lu_dirpage *dp;
-		struct lu_dirent  *ent;
+		struct lu_dirent *ent;
 
 		if (IS_ERR(page)) {
 			struct ll_inode_info *lli = ll_i2info(dir);
@@ -1423,8 +1425,10 @@ static int revalidate_statahead_dentry(struct inode *dir,
 	if (smp_load_acquire(&entry->se_state) == SA_ENTRY_SUCC &&
 	    entry->se_inode) {
 		struct inode *inode = entry->se_inode;
-		struct lookup_intent it = { .it_op = IT_GETATTR,
-					    .it_lock_handle = entry->se_handle };
+		struct lookup_intent it = {
+			.it_op = IT_GETATTR,
+			.it_lock_handle = entry->se_handle
+		};
 		u64 bits;
 
 		rc = md_revalidate_lock(ll_i2mdexp(dir), &it,
@@ -1517,7 +1521,6 @@ static int start_statahead_thread(struct inode *dir, struct dentry *dentry)
 		goto out;
 	}
 
-
 	sai = ll_sai_alloc(parent);
 	if (!sai) {
 		rc = -ENOMEM;
diff --git a/drivers/staging/lustre/lustre/llite/super25.c b/drivers/staging/lustre/lustre/llite/super25.c
index 3ad0b11..c2b1668 100644
--- a/drivers/staging/lustre/lustre/llite/super25.c
+++ b/drivers/staging/lustre/lustre/llite/super25.c
@@ -73,14 +73,14 @@ static void ll_destroy_inode(struct inode *inode)
 
 /* exported operations */
 struct super_operations lustre_super_operations = {
-	.alloc_inode   = ll_alloc_inode,
-	.destroy_inode = ll_destroy_inode,
-	.evict_inode   = ll_delete_inode,
-	.put_super     = ll_put_super,
-	.statfs	= ll_statfs,
-	.umount_begin  = ll_umount_begin,
-	.remount_fs    = ll_remount_fs,
-	.show_options  = ll_show_options,
+	.alloc_inode		= ll_alloc_inode,
+	.destroy_inode		= ll_destroy_inode,
+	.evict_inode		= ll_delete_inode,
+	.put_super		= ll_put_super,
+	.statfs			= ll_statfs,
+	.umount_begin		= ll_umount_begin,
+	.remount_fs		= ll_remount_fs,
+	.show_options		= ll_show_options,
 };
 
 /** This is the entry point for the mount call into Lustre.
diff --git a/drivers/staging/lustre/lustre/llite/vvp_dev.c b/drivers/staging/lustre/lustre/llite/vvp_dev.c
index 4e55599..c10ca6e 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_dev.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_dev.c
@@ -111,9 +111,9 @@ static void ll_thread_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key ll_thread_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = ll_thread_key_init,
-	.lct_fini = ll_thread_key_fini
+	.lct_tags	= LCT_CL_THREAD,
+	.lct_init	= ll_thread_key_init,
+	.lct_fini	= ll_thread_key_fini
 };
 
 static void *vvp_session_key_init(const struct lu_context *ctx,
@@ -136,9 +136,9 @@ static void vvp_session_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key vvp_session_key = {
-	.lct_tags = LCT_SESSION,
-	.lct_init = vvp_session_key_init,
-	.lct_fini = vvp_session_key_fini
+	.lct_tags	= LCT_SESSION,
+	.lct_init	= vvp_session_key_init,
+	.lct_fini	= vvp_session_key_fini
 };
 
 static void *vvp_thread_key_init(const struct lu_context *ctx,
@@ -161,24 +161,24 @@ static void vvp_thread_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key vvp_thread_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = vvp_thread_key_init,
-	.lct_fini = vvp_thread_key_fini
+	.lct_tags	= LCT_CL_THREAD,
+	.lct_init	= vvp_thread_key_init,
+	.lct_fini	= vvp_thread_key_fini
 };
 
 /* type constructor/destructor: vvp_type_{init,fini,start,stop}(). */
 LU_TYPE_INIT_FINI(vvp, &vvp_thread_key, &ll_thread_key, &vvp_session_key);
 
 static const struct lu_device_operations vvp_lu_ops = {
-	.ldo_object_alloc      = vvp_object_alloc
+	.ldo_object_alloc	= vvp_object_alloc
 };
 
 static struct lu_device *vvp_device_free(const struct lu_env *env,
 					 struct lu_device *d)
 {
-	struct vvp_device *vdv  = lu2vvp_dev(d);
-	struct cl_site    *site = lu2cl_site(d->ld_site);
-	struct lu_device  *next = cl2lu_dev(vdv->vdv_next);
+	struct vvp_device *vdv = lu2vvp_dev(d);
+	struct cl_site *site = lu2cl_site(d->ld_site);
+	struct lu_device *next = cl2lu_dev(vdv->vdv_next);
 
 	if (d->ld_site) {
 		cl_site_fini(site);
@@ -194,8 +194,8 @@ static struct lu_device *vvp_device_alloc(const struct lu_env *env,
 					  struct lustre_cfg *cfg)
 {
 	struct vvp_device *vdv;
-	struct lu_device  *lud;
-	struct cl_site    *site;
+	struct lu_device *lud;
+	struct cl_site *site;
 	int rc;
 
 	vdv = kzalloc(sizeof(*vdv), GFP_NOFS);
@@ -229,7 +229,7 @@ static struct lu_device *vvp_device_alloc(const struct lu_env *env,
 static int vvp_device_init(const struct lu_env *env, struct lu_device *d,
 			   const char *name, struct lu_device *next)
 {
-	struct vvp_device  *vdv;
+	struct vvp_device *vdv;
 	int rc;
 
 	vdv = lu2vvp_dev(d);
@@ -254,23 +254,23 @@ static struct lu_device *vvp_device_fini(const struct lu_env *env,
 }
 
 static const struct lu_device_type_operations vvp_device_type_ops = {
-	.ldto_init = vvp_type_init,
-	.ldto_fini = vvp_type_fini,
+	.ldto_init		= vvp_type_init,
+	.ldto_fini		= vvp_type_fini,
 
-	.ldto_start = vvp_type_start,
-	.ldto_stop  = vvp_type_stop,
+	.ldto_start		= vvp_type_start,
+	.ldto_stop		= vvp_type_stop,
 
-	.ldto_device_alloc = vvp_device_alloc,
+	.ldto_device_alloc	= vvp_device_alloc,
 	.ldto_device_free	= vvp_device_free,
 	.ldto_device_init	= vvp_device_init,
 	.ldto_device_fini	= vvp_device_fini,
 };
 
 struct lu_device_type vvp_device_type = {
-	.ldt_tags     = LU_DEVICE_CL,
-	.ldt_name     = LUSTRE_VVP_NAME,
-	.ldt_ops      = &vvp_device_type_ops,
-	.ldt_ctx_tags = LCT_CL_THREAD
+	.ldt_tags		= LU_DEVICE_CL,
+	.ldt_name		= LUSTRE_VVP_NAME,
+	.ldt_ops		= &vvp_device_type_ops,
+	.ldt_ctx_tags		= LCT_CL_THREAD
 };
 
 /**
@@ -312,8 +312,8 @@ void vvp_global_fini(void)
 int cl_sb_init(struct super_block *sb)
 {
 	struct ll_sb_info *sbi;
-	struct cl_device  *cl;
-	struct lu_env     *env;
+	struct cl_device *cl;
+	struct lu_env *env;
 	int rc = 0;
 	u16 refcheck;
 
@@ -336,10 +336,10 @@ int cl_sb_init(struct super_block *sb)
 int cl_sb_fini(struct super_block *sb)
 {
 	struct ll_sb_info *sbi;
-	struct lu_env     *env;
-	struct cl_device  *cld;
+	struct lu_env *env;
+	struct cl_device *cld;
 	u16 refcheck;
-	int		result;
+	int result;
 
 	sbi = ll_s2sbi(sb);
 	env = cl_env_get(&refcheck);
@@ -378,20 +378,20 @@ struct vvp_pgcache_id {
 struct vvp_seq_private {
 	struct ll_sb_info	*vsp_sbi;
 	struct lu_env		*vsp_env;
-	u16			vsp_refcheck;
+	u16			 vsp_refcheck;
 	struct cl_object	*vsp_clob;
-	struct vvp_pgcache_id	vsp_id;
+	struct vvp_pgcache_id	 vsp_id;
 	/*
 	 * prev_pos is the 'pos' of the last object returned
 	 * by ->start of ->next.
 	 */
-	loff_t			vsp_prev_pos;
+	loff_t			 vsp_prev_pos;
 };
 
 static int vvp_pgcache_obj_get(struct cfs_hash *hs, struct cfs_hash_bd *bd,
 			       struct hlist_node *hnode, void *data)
 {
-	struct vvp_pgcache_id   *id  = data;
+	struct vvp_pgcache_id *id = data;
 	struct lu_object_header *hdr = cfs_hash_object(hs, hnode);
 
 	if (lu_object_is_dying(hdr))
@@ -411,7 +411,7 @@ static struct cl_object *vvp_pgcache_obj(const struct lu_env *env,
 {
 	LASSERT(lu_device_is_cl(dev));
 
-	id->vpi_obj    = NULL;
+	id->vpi_obj = NULL;
 	id->vpi_curdep = id->vpi_depth;
 
 	cfs_hash_hlist_for_each(dev->ld_site->ls_obj_hash, id->vpi_bucket,
@@ -464,19 +464,19 @@ static struct page *vvp_pgcache_current(struct vvp_seq_private *priv)
 	}
 }
 
-#define seq_page_flag(seq, page, flag, has_flags) do {		  \
-	if (test_bit(PG_##flag, &(page)->flags)) {		  \
+#define seq_page_flag(seq, page, flag, has_flags) do {			\
+	if (test_bit(PG_##flag, &(page)->flags)) {			\
 		seq_printf(seq, "%s"#flag, has_flags ? "|" : "");       \
-		has_flags = 1;					  \
-	}							       \
+		has_flags = 1;						\
+	}								\
 } while (0)
 
 static void vvp_pgcache_page_show(const struct lu_env *env,
 				  struct seq_file *seq, struct cl_page *page)
 {
 	struct vvp_page *vpg;
-	struct page      *vmpage;
-	int	      has_flags;
+	struct page *vmpage;
+	int has_flags;
 
 	vpg = cl2vvp_page(cl_page_at(page, &vvp_device_type));
 	vmpage = vpg->vpg_page;
@@ -502,8 +502,8 @@ static void vvp_pgcache_page_show(const struct lu_env *env,
 static int vvp_pgcache_show(struct seq_file *f, void *v)
 {
 	struct vvp_seq_private *priv = f->private;
-	struct page		*vmpage = v;
-	struct cl_page		*page;
+	struct page *vmpage = v;
+	struct cl_page *page;
 
 	seq_printf(f, "%8lx@" DFID ": ", vmpage->index,
 		   PFID(lu_object_fid(&priv->vsp_clob->co_lu)));
@@ -575,10 +575,10 @@ static void vvp_pgcache_stop(struct seq_file *f, void *v)
 }
 
 static const struct seq_operations vvp_pgcache_ops = {
-	.start = vvp_pgcache_start,
-	.next  = vvp_pgcache_next,
-	.stop  = vvp_pgcache_stop,
-	.show  = vvp_pgcache_show
+	.start	= vvp_pgcache_start,
+	.next	= vvp_pgcache_next,
+	.stop	= vvp_pgcache_stop,
+	.show	= vvp_pgcache_show
 };
 
 static int vvp_dump_pgcache_seq_open(struct inode *inode, struct file *filp)
@@ -617,9 +617,9 @@ static int vvp_dump_pgcache_seq_release(struct inode *inode, struct file *file)
 }
 
 const struct file_operations vvp_dump_pgcache_file_ops = {
-	.owner   = THIS_MODULE,
-	.open    = vvp_dump_pgcache_seq_open,
-	.read    = seq_read,
-	.llseek	 = seq_lseek,
-	.release = vvp_dump_pgcache_seq_release,
+	.owner		= THIS_MODULE,
+	.open		= vvp_dump_pgcache_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= vvp_dump_pgcache_seq_release,
 };
diff --git a/drivers/staging/lustre/lustre/llite/vvp_internal.h b/drivers/staging/lustre/lustre/llite/vvp_internal.h
index e8712d8..102d143 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_internal.h
+++ b/drivers/staging/lustre/lustre/llite/vvp_internal.h
@@ -53,12 +53,12 @@
  */
 struct vvp_io {
 	/** super class */
-	struct cl_io_slice     vui_cl;
-	struct cl_io_lock_link vui_link;
+	struct cl_io_slice	vui_cl;
+	struct cl_io_lock_link	vui_link;
 	/**
 	 * I/O vector information to or from which read/write is going.
 	 */
-	struct iov_iter *vui_iter;
+	struct iov_iter		*vui_iter;
 	/**
 	 * Total size for the left IO.
 	 */
@@ -70,30 +70,30 @@ struct vvp_io {
 			 * Inode modification time that is checked across DLM
 			 * lock request.
 			 */
-			time64_t	    ft_mtime;
-			struct vm_area_struct *ft_vma;
+			time64_t		ft_mtime;
+			struct vm_area_struct	*ft_vma;
 			/**
 			 *  locked page returned from vvp_io
 			 */
-			struct page	    *ft_vmpage;
+			struct page		*ft_vmpage;
 			/**
 			 * kernel fault info
 			 */
-			struct vm_fault *ft_vmf;
+			struct vm_fault		*ft_vmf;
 			/**
 			 * fault API used bitflags for return code.
 			 */
-			unsigned int    ft_flags;
+			unsigned int		ft_flags;
 			/**
 			 * check that flags are from filemap_fault
 			 */
-			bool		ft_flags_valid;
+			bool			ft_flags_valid;
 		} fault;
 		struct {
-			struct cl_page_list vui_queue;
-			unsigned long vui_written;
-			int vui_from;
-			int vui_to;
+			struct cl_page_list	vui_queue;
+			unsigned long		vui_written;
+			int			vui_from;
+			int			vui_to;
 		} write;
 	} u;
 
@@ -108,10 +108,10 @@ struct vvp_io {
 	struct kiocb		*vui_iocb;
 
 	/* Readahead state. */
-	pgoff_t	vui_ra_start;
-	pgoff_t	vui_ra_count;
+	pgoff_t			vui_ra_start;
+	pgoff_t			vui_ra_count;
 	/* Set when vui_ra_{start,count} have been initialized. */
-	bool		vui_ra_valid;
+	bool			vui_ra_valid;
 };
 
 extern struct lu_device_type vvp_device_type;
@@ -131,7 +131,7 @@ struct vvp_thread_info {
 
 static inline struct vvp_thread_info *vvp_env_info(const struct lu_env *env)
 {
-	struct vvp_thread_info      *vti;
+	struct vvp_thread_info *vti;
 
 	vti = lu_context_key_get(&env->le_ctx, &vvp_thread_key);
 	LASSERT(vti);
@@ -166,7 +166,7 @@ static inline struct cl_io *vvp_env_thread_io(const struct lu_env *env)
 }
 
 struct vvp_session {
-	struct vvp_io cs_ios;
+	struct vvp_io	cs_ios;
 };
 
 static inline struct vvp_session *vvp_env_session(const struct lu_env *env)
@@ -189,8 +189,8 @@ static inline struct vvp_io *vvp_env_io(const struct lu_env *env)
  */
 struct vvp_object {
 	struct cl_object_header vob_header;
-	struct cl_object        vob_cl;
-	struct inode           *vob_inode;
+	struct cl_object	vob_cl;
+	struct inode	       *vob_inode;
 
 	/**
 	 * Number of transient pages.  This is no longer protected by i_sem,
@@ -223,12 +223,12 @@ struct vvp_object {
  * VVP-private page state.
  */
 struct vvp_page {
-	struct cl_page_slice vpg_cl;
-	unsigned int	vpg_defer_uptodate:1,
-			vpg_ra_updated:1,
-			vpg_ra_used:1;
+	struct cl_page_slice	vpg_cl;
+	unsigned int		vpg_defer_uptodate:1,
+				vpg_ra_updated:1,
+				vpg_ra_used:1;
 	/** VM page */
-	struct page	  *vpg_page;
+	struct page		*vpg_page;
 };
 
 static inline struct vvp_page *cl2vvp_page(const struct cl_page_slice *slice)
@@ -242,12 +242,12 @@ static inline pgoff_t vvp_index(struct vvp_page *vvp)
 }
 
 struct vvp_device {
-	struct cl_device    vdv_cl;
-	struct cl_device   *vdv_next;
+	struct cl_device	vdv_cl;
+	struct cl_device	*vdv_next;
 };
 
 struct vvp_lock {
-	struct cl_lock_slice vlk_cl;
+	struct cl_lock_slice	vlk_cl;
 };
 
 void *ccc_key_init(const struct lu_context *ctx,
diff --git a/drivers/staging/lustre/lustre/llite/vvp_io.c b/drivers/staging/lustre/lustre/llite/vvp_io.c
index 26a7897..593b10c 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_io.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_io.c
@@ -63,8 +63,8 @@ static struct vvp_io *cl2vvp_io(const struct lu_env *env,
 static bool can_populate_pages(const struct lu_env *env, struct cl_io *io,
 			       struct inode *inode)
 {
-	struct ll_inode_info	*lli = ll_i2info(inode);
-	struct vvp_io		*vio = vvp_env_io(env);
+	struct ll_inode_info *lli = ll_i2info(inode);
+	struct vvp_io *vio = vvp_env_io(env);
 	bool rc = true;
 
 	switch (io->ci_type) {
@@ -120,9 +120,9 @@ static int vvp_prep_size(const struct lu_env *env, struct cl_object *obj,
 			 struct cl_io *io, loff_t start, size_t count,
 			 int *exceed)
 {
-	struct cl_attr *attr  = vvp_env_thread_attr(env);
-	struct inode   *inode = vvp_object_inode(obj);
-	loff_t	  pos   = start + count - 1;
+	struct cl_attr *attr = vvp_env_thread_attr(env);
+	struct inode *inode = vvp_object_inode(obj);
+	loff_t pos = start + count - 1;
 	loff_t kms;
 	int result;
 
@@ -205,11 +205,11 @@ static int vvp_prep_size(const struct lu_env *env, struct cl_object *obj,
 
 static int vvp_io_one_lock_index(const struct lu_env *env, struct cl_io *io,
 				 u32 enqflags, enum cl_lock_mode mode,
-			  pgoff_t start, pgoff_t end)
+				 pgoff_t start, pgoff_t end)
 {
-	struct vvp_io          *vio   = vvp_env_io(env);
-	struct cl_lock_descr   *descr = &vio->vui_link.cill_descr;
-	struct cl_object       *obj   = io->ci_obj;
+	struct vvp_io *vio = vvp_env_io(env);
+	struct cl_lock_descr *descr = &vio->vui_link.cill_descr;
+	struct cl_object *obj = io->ci_obj;
 
 	CLOBINVRNT(env, obj, vvp_object_invariant(obj));
 
@@ -222,11 +222,11 @@ static int vvp_io_one_lock_index(const struct lu_env *env, struct cl_io *io,
 		descr->cld_gid  = vio->vui_fd->fd_grouplock.lg_gid;
 		enqflags |= CEF_LOCK_MATCH;
 	} else {
-		descr->cld_mode  = mode;
+		descr->cld_mode = mode;
 	}
-	descr->cld_obj   = obj;
+	descr->cld_obj = obj;
 	descr->cld_start = start;
-	descr->cld_end   = end;
+	descr->cld_end = end;
 	descr->cld_enq_flags = enqflags;
 
 	cl_io_lock_add(env, io, &vio->vui_link);
@@ -267,8 +267,8 @@ static void vvp_io_write_iter_fini(const struct lu_env *env,
 static int vvp_io_fault_iter_init(const struct lu_env *env,
 				  const struct cl_io_slice *ios)
 {
-	struct vvp_io *vio   = cl2vvp_io(env, ios);
-	struct inode  *inode = vvp_object_inode(ios->cis_obj);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
+	struct inode *inode = vvp_object_inode(ios->cis_obj);
 
 	LASSERT(inode == file_inode(vio->vui_fd->fd_file));
 	vio->u.fault.ft_mtime = inode->i_mtime.tv_sec;
@@ -277,9 +277,9 @@ static int vvp_io_fault_iter_init(const struct lu_env *env,
 
 static void vvp_io_fini(const struct lu_env *env, const struct cl_io_slice *ios)
 {
-	struct cl_io     *io  = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	struct cl_object *obj = io->ci_obj;
-	struct vvp_io    *vio = cl2vvp_io(env, ios);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
 	struct inode *inode = vvp_object_inode(obj);
 	int rc;
 
@@ -376,7 +376,7 @@ static void vvp_io_fini(const struct lu_env *env, const struct cl_io_slice *ios)
 static void vvp_io_fault_fini(const struct lu_env *env,
 			      const struct cl_io_slice *ios)
 {
-	struct cl_io   *io   = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	struct cl_page *page = io->u.ci_fault.ft_page;
 
 	CLOBINVRNT(env, io->ci_obj, vvp_object_invariant(io->ci_obj));
@@ -405,13 +405,13 @@ static int vvp_mmap_locks(const struct lu_env *env,
 			  struct vvp_io *vio, struct cl_io *io)
 {
 	struct vvp_thread_info *cti = vvp_env_info(env);
-	struct mm_struct       *mm = current->mm;
+	struct mm_struct *mm = current->mm;
 	struct vm_area_struct  *vma;
-	struct cl_lock_descr   *descr = &cti->vti_descr;
+	struct cl_lock_descr *descr = &cti->vti_descr;
 	union ldlm_policy_data policy;
-	unsigned long	   addr;
-	ssize_t		 count;
-	int		 result = 0;
+	unsigned long addr;
+	ssize_t count;
+	int result = 0;
 	struct iov_iter i;
 	struct iovec iov;
 
@@ -492,7 +492,7 @@ static void vvp_io_advance(const struct lu_env *env,
 			   size_t nob)
 {
 	struct cl_object *obj = ios->cis_io->ci_obj;
-	struct vvp_io	 *vio = cl2vvp_io(env, ios);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
 
 	CLOBINVRNT(env, obj, vvp_object_invariant(obj));
 
@@ -533,7 +533,7 @@ static int vvp_io_rw_lock(const struct lu_env *env, struct cl_io *io,
 static int vvp_io_read_lock(const struct lu_env *env,
 			    const struct cl_io_slice *ios)
 {
-	struct cl_io	 *io = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	struct cl_io_rw_common *rd = &io->u.ci_rd.rd;
 	int result;
 
@@ -546,7 +546,7 @@ static int vvp_io_read_lock(const struct lu_env *env,
 static int vvp_io_fault_lock(const struct lu_env *env,
 			     const struct cl_io_slice *ios)
 {
-	struct cl_io *io   = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	struct vvp_io *vio = cl2vvp_io(env, ios);
 	/*
 	 * XXX LDLM_FL_CBPENDING
@@ -567,10 +567,10 @@ static int vvp_io_write_lock(const struct lu_env *env,
 
 	if (io->u.ci_wr.wr_append) {
 		start = 0;
-		end   = OBD_OBJECT_EOF;
+		end = OBD_OBJECT_EOF;
 	} else {
 		start = io->u.ci_wr.wr.crw_pos;
-		end   = start + io->u.ci_wr.wr.crw_count - 1;
+		end = start + io->u.ci_wr.wr.crw_count - 1;
 	}
 	return vvp_io_rw_lock(env, io, CLM_WRITE, start, end);
 }
@@ -589,7 +589,7 @@ static int vvp_io_setattr_iter_init(const struct lu_env *env,
 static int vvp_io_setattr_lock(const struct lu_env *env,
 			       const struct cl_io_slice *ios)
 {
-	struct cl_io  *io  = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	u64 new_size;
 	u32 enqflags = 0;
 
@@ -619,7 +619,7 @@ static int vvp_io_setattr_lock(const struct lu_env *env,
 
 static int vvp_do_vmtruncate(struct inode *inode, size_t size)
 {
-	int     result;
+	int result;
 	/*
 	 * Only ll_inode_size_lock is taken at this level.
 	 */
@@ -637,9 +637,9 @@ static int vvp_do_vmtruncate(struct inode *inode, size_t size)
 static int vvp_io_setattr_time(const struct lu_env *env,
 			       const struct cl_io_slice *ios)
 {
-	struct cl_io       *io    = ios->cis_io;
-	struct cl_object   *obj   = io->ci_obj;
-	struct cl_attr     *attr  = vvp_env_thread_attr(env);
+	struct cl_io *io = ios->cis_io;
+	struct cl_object *obj = io->ci_obj;
+	struct cl_attr *attr = vvp_env_thread_attr(env);
 	int result;
 	unsigned valid = CAT_CTIME;
 
@@ -662,8 +662,8 @@ static int vvp_io_setattr_time(const struct lu_env *env,
 static int vvp_io_setattr_start(const struct lu_env *env,
 				const struct cl_io_slice *ios)
 {
-	struct cl_io	*io    = ios->cis_io;
-	struct inode	*inode = vvp_object_inode(io->ci_obj);
+	struct cl_io *io = ios->cis_io;
+	struct inode *inode = vvp_object_inode(io->ci_obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
 
 	if (cl_io_is_trunc(io)) {
@@ -683,7 +683,7 @@ static int vvp_io_setattr_start(const struct lu_env *env,
 static void vvp_io_setattr_end(const struct lu_env *env,
 			       const struct cl_io_slice *ios)
 {
-	struct cl_io *io    = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	struct inode *inode = vvp_object_inode(io->ci_obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
 
@@ -716,18 +716,17 @@ static void vvp_io_setattr_fini(const struct lu_env *env,
 static int vvp_io_read_start(const struct lu_env *env,
 			     const struct cl_io_slice *ios)
 {
-	struct vvp_io     *vio   = cl2vvp_io(env, ios);
-	struct cl_io      *io    = ios->cis_io;
-	struct cl_object  *obj   = io->ci_obj;
-	struct inode      *inode = vvp_object_inode(obj);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
+	struct cl_io *io = ios->cis_io;
+	struct cl_object *obj = io->ci_obj;
+	struct inode *inode = vvp_object_inode(obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
-	struct file       *file  = vio->vui_fd->fd_file;
-
-	int     result;
-	loff_t  pos = io->u.ci_rd.rd.crw_pos;
-	long    cnt = io->u.ci_rd.rd.crw_count;
-	long    tot = vio->vui_tot_count;
-	int     exceed = 0;
+	struct file *file = vio->vui_fd->fd_file;
+	int result;
+	loff_t pos = io->u.ci_rd.rd.crw_pos;
+	long cnt = io->u.ci_rd.rd.crw_count;
+	long tot = vio->vui_tot_count;
+	int exceed = 0;
 
 	CLOBINVRNT(env, obj, vvp_object_invariant(obj));
 
@@ -956,10 +955,10 @@ int vvp_io_write_commit(const struct lu_env *env, struct cl_io *io)
 static int vvp_io_write_start(const struct lu_env *env,
 			      const struct cl_io_slice *ios)
 {
-	struct vvp_io      *vio   = cl2vvp_io(env, ios);
-	struct cl_io       *io    = ios->cis_io;
-	struct cl_object   *obj   = io->ci_obj;
-	struct inode       *inode = vvp_object_inode(obj);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
+	struct cl_io *io = ios->cis_io;
+	struct cl_object *obj = io->ci_obj;
+	struct inode *inode = vvp_object_inode(obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
 	bool lock_inode = !inode_is_locked(inode) && !IS_NOSEC(inode);
 	loff_t pos = io->u.ci_wr.wr.crw_pos;
@@ -1111,19 +1110,19 @@ static void mkwrite_commit_callback(const struct lu_env *env, struct cl_io *io,
 static int vvp_io_fault_start(const struct lu_env *env,
 			      const struct cl_io_slice *ios)
 {
-	struct vvp_io       *vio     = cl2vvp_io(env, ios);
-	struct cl_io	*io      = ios->cis_io;
-	struct cl_object    *obj     = io->ci_obj;
-	struct inode        *inode   = vvp_object_inode(obj);
+	struct vvp_io *vio = cl2vvp_io(env, ios);
+	struct cl_io *io = ios->cis_io;
+	struct cl_object *obj = io->ci_obj;
+	struct inode *inode = vvp_object_inode(obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
-	struct cl_fault_io  *fio     = &io->u.ci_fault;
-	struct vvp_fault_io *cfio    = &vio->u.fault;
-	loff_t	       offset;
-	int		  result  = 0;
-	struct page	  *vmpage  = NULL;
-	struct cl_page      *page;
-	loff_t	       size;
-	pgoff_t		     last_index;
+	struct cl_fault_io *fio = &io->u.ci_fault;
+	struct vvp_fault_io *cfio = &vio->u.fault;
+	loff_t offset;
+	int result = 0;
+	struct page *vmpage = NULL;
+	struct cl_page *page;
+	loff_t size;
+	pgoff_t last_index;
 
 	down_read(&lli->lli_trunc_sem);
 
@@ -1318,40 +1317,40 @@ static int vvp_io_read_ahead(const struct lu_env *env,
 	.op = {
 		[CIT_READ] = {
 			.cio_fini	= vvp_io_fini,
-			.cio_lock      = vvp_io_read_lock,
-			.cio_start     = vvp_io_read_start,
+			.cio_lock	= vvp_io_read_lock,
+			.cio_start	= vvp_io_read_start,
 			.cio_end	= vvp_io_rw_end,
 			.cio_advance	= vvp_io_advance,
 		},
 		[CIT_WRITE] = {
-			.cio_fini      = vvp_io_fini,
-			.cio_iter_init = vvp_io_write_iter_init,
-			.cio_iter_fini = vvp_io_write_iter_fini,
-			.cio_lock      = vvp_io_write_lock,
-			.cio_start     = vvp_io_write_start,
+			.cio_fini	= vvp_io_fini,
+			.cio_iter_init	= vvp_io_write_iter_init,
+			.cio_iter_fini	= vvp_io_write_iter_fini,
+			.cio_lock	= vvp_io_write_lock,
+			.cio_start	= vvp_io_write_start,
 			.cio_end	= vvp_io_rw_end,
-			.cio_advance   = vvp_io_advance,
+			.cio_advance	= vvp_io_advance,
 		},
 		[CIT_SETATTR] = {
-			.cio_fini       = vvp_io_setattr_fini,
-			.cio_iter_init  = vvp_io_setattr_iter_init,
-			.cio_lock       = vvp_io_setattr_lock,
-			.cio_start      = vvp_io_setattr_start,
+			.cio_fini	= vvp_io_setattr_fini,
+			.cio_iter_init	= vvp_io_setattr_iter_init,
+			.cio_lock	= vvp_io_setattr_lock,
+			.cio_start	= vvp_io_setattr_start,
 			.cio_end	= vvp_io_setattr_end
 		},
 		[CIT_FAULT] = {
-			.cio_fini      = vvp_io_fault_fini,
-			.cio_iter_init = vvp_io_fault_iter_init,
-			.cio_lock      = vvp_io_fault_lock,
-			.cio_start     = vvp_io_fault_start,
-			.cio_end       = vvp_io_fault_end,
+			.cio_fini	= vvp_io_fault_fini,
+			.cio_iter_init	= vvp_io_fault_iter_init,
+			.cio_lock	= vvp_io_fault_lock,
+			.cio_start	= vvp_io_fault_start,
+			.cio_end	= vvp_io_fault_end,
 		},
 		[CIT_FSYNC] = {
-			.cio_start  = vvp_io_fsync_start,
-			.cio_fini   = vvp_io_fini
+			.cio_start	= vvp_io_fsync_start,
+			.cio_fini	= vvp_io_fini
 		},
 		[CIT_MISC] = {
-			.cio_fini   = vvp_io_fini
+			.cio_fini	= vvp_io_fini
 		},
 		[CIT_LADVISE] = {
 			.cio_fini	= vvp_io_fini
@@ -1363,9 +1362,9 @@ static int vvp_io_read_ahead(const struct lu_env *env,
 int vvp_io_init(const struct lu_env *env, struct cl_object *obj,
 		struct cl_io *io)
 {
-	struct vvp_io      *vio   = vvp_env_io(env);
-	struct inode       *inode = vvp_object_inode(obj);
-	int		 result;
+	struct vvp_io *vio = vvp_env_io(env);
+	struct inode *inode = vvp_object_inode(obj);
+	int result;
 
 	CLOBINVRNT(env, obj, vvp_object_invariant(obj));
 
diff --git a/drivers/staging/lustre/lustre/llite/vvp_object.c b/drivers/staging/lustre/lustre/llite/vvp_object.c
index 86e077b..1637972 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_object.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_object.c
@@ -50,7 +50,7 @@
 
 int vvp_object_invariant(const struct cl_object *obj)
 {
-	struct inode *inode  = vvp_object_inode(obj);
+	struct inode *inode = vvp_object_inode(obj);
 	struct ll_inode_info *lli = ll_i2info(inode);
 
 	return (S_ISREG(inode->i_mode) || inode->i_mode == 0) &&
@@ -60,8 +60,8 @@ int vvp_object_invariant(const struct cl_object *obj)
 static int vvp_object_print(const struct lu_env *env, void *cookie,
 			    lu_printer_t p, const struct lu_object *o)
 {
-	struct vvp_object    *obj   = lu2vvp(o);
-	struct inode	 *inode = obj->vob_inode;
+	struct vvp_object *obj = lu2vvp(o);
+	struct inode *inode = obj->vob_inode;
 	struct ll_inode_info *lli;
 
 	(*p)(env, cookie, "(%d %d) inode: %p ",
@@ -209,13 +209,13 @@ static void vvp_req_attr_set(const struct lu_env *env, struct cl_object *obj,
 }
 
 static const struct cl_object_operations vvp_ops = {
-	.coo_page_init = vvp_page_init,
-	.coo_lock_init = vvp_lock_init,
-	.coo_io_init   = vvp_io_init,
-	.coo_attr_get  = vvp_attr_get,
-	.coo_attr_update = vvp_attr_update,
-	.coo_conf_set  = vvp_conf_set,
-	.coo_prune     = vvp_prune,
+	.coo_page_init		= vvp_page_init,
+	.coo_lock_init		= vvp_lock_init,
+	.coo_io_init		= vvp_io_init,
+	.coo_attr_get		= vvp_attr_get,
+	.coo_attr_update	= vvp_attr_update,
+	.coo_conf_set		= vvp_conf_set,
+	.coo_prune		= vvp_prune,
 	.coo_glimpse		= vvp_object_glimpse,
 	.coo_req_attr_set	= vvp_req_attr_set
 };
@@ -235,8 +235,8 @@ static int vvp_object_init(const struct lu_env *env, struct lu_object *obj,
 {
 	struct vvp_device *dev = lu2vvp_dev(obj->lo_dev);
 	struct vvp_object *vob = lu2vvp(obj);
-	struct lu_object  *below;
-	struct lu_device  *under;
+	struct lu_object *below;
+	struct lu_device *under;
 	int result;
 
 	under = &dev->vdv_next->cd_lu_dev;
@@ -272,8 +272,8 @@ static void vvp_object_free(const struct lu_env *env, struct lu_object *obj)
 struct vvp_object *cl_inode2vvp(struct inode *inode)
 {
 	struct ll_inode_info *lli = ll_i2info(inode);
-	struct cl_object     *obj = lli->lli_clob;
-	struct lu_object     *lu;
+	struct cl_object *obj = lli->lli_clob;
+	struct lu_object *lu;
 
 	lu = lu_object_locate(obj->co_lu.lo_header, &vvp_device_type);
 	LASSERT(lu);
@@ -285,7 +285,7 @@ struct lu_object *vvp_object_alloc(const struct lu_env *env,
 				   struct lu_device *dev)
 {
 	struct vvp_object *vob;
-	struct lu_object  *obj;
+	struct lu_object *obj;
 
 	vob = kmem_cache_zalloc(vvp_object_kmem, GFP_NOFS);
 	if (vob) {
diff --git a/drivers/staging/lustre/lustre/llite/vvp_page.c b/drivers/staging/lustre/lustre/llite/vvp_page.c
index dcc4d8f..77bf923 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_page.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_page.c
@@ -65,8 +65,8 @@ static void vvp_page_fini_common(struct vvp_page *vpg)
 static void vvp_page_fini(const struct lu_env *env,
 			  struct cl_page_slice *slice)
 {
-	struct vvp_page *vpg     = cl2vvp_page(slice);
-	struct page     *vmpage  = vpg->vpg_page;
+	struct vvp_page *vpg = cl2vvp_page(slice);
+	struct page *vmpage = vpg->vpg_page;
 
 	/*
 	 * vmpage->private was already cleared when page was moved into
@@ -80,8 +80,8 @@ static int vvp_page_own(const struct lu_env *env,
 			const struct cl_page_slice *slice, struct cl_io *io,
 			int nonblock)
 {
-	struct vvp_page *vpg    = cl2vvp_page(slice);
-	struct page     *vmpage = vpg->vpg_page;
+	struct vvp_page *vpg = cl2vvp_page(slice);
+	struct page *vmpage = vpg->vpg_page;
 
 	LASSERT(vmpage);
 	if (nonblock) {
@@ -138,8 +138,8 @@ static void vvp_page_discard(const struct lu_env *env,
 			     const struct cl_page_slice *slice,
 			     struct cl_io *unused)
 {
-	struct page	   *vmpage  = cl2vm_page(slice);
-	struct vvp_page      *vpg     = cl2vvp_page(slice);
+	struct page *vmpage = cl2vm_page(slice);
+	struct vvp_page *vpg = cl2vvp_page(slice);
 
 	LASSERT(vmpage);
 	LASSERT(PageLocked(vmpage));
@@ -153,10 +153,10 @@ static void vvp_page_discard(const struct lu_env *env,
 static void vvp_page_delete(const struct lu_env *env,
 			    const struct cl_page_slice *slice)
 {
-	struct page       *vmpage = cl2vm_page(slice);
-	struct inode     *inode  = vmpage->mapping->host;
-	struct cl_object *obj    = slice->cpl_obj;
-	struct cl_page   *page   = slice->cpl_page;
+	struct page *vmpage = cl2vm_page(slice);
+	struct inode *inode = vmpage->mapping->host;
+	struct cl_object *obj = slice->cpl_obj;
+	struct cl_page *page = slice->cpl_page;
 	int refc;
 
 	LASSERT(PageLocked(vmpage));
@@ -252,10 +252,10 @@ static void vvp_page_completion_read(const struct lu_env *env,
 				     const struct cl_page_slice *slice,
 				     int ioret)
 {
-	struct vvp_page *vpg    = cl2vvp_page(slice);
-	struct page     *vmpage = vpg->vpg_page;
-	struct cl_page  *page   = slice->cpl_page;
-	struct inode    *inode  = vvp_object_inode(page->cp_obj);
+	struct vvp_page *vpg = cl2vvp_page(slice);
+	struct page *vmpage = vpg->vpg_page;
+	struct cl_page *page = slice->cpl_page;
+	struct inode *inode = vvp_object_inode(page->cp_obj);
 
 	LASSERT(PageLocked(vmpage));
 	CL_PAGE_HEADER(D_PAGE, env, page, "completing READ with %d\n", ioret);
@@ -278,9 +278,9 @@ static void vvp_page_completion_write(const struct lu_env *env,
 				      const struct cl_page_slice *slice,
 				      int ioret)
 {
-	struct vvp_page *vpg     = cl2vvp_page(slice);
-	struct cl_page  *pg     = slice->cpl_page;
-	struct page     *vmpage = vpg->vpg_page;
+	struct vvp_page *vpg = cl2vvp_page(slice);
+	struct cl_page *pg = slice->cpl_page;
+	struct page *vmpage = vpg->vpg_page;
 
 	CL_PAGE_HEADER(D_PAGE, env, pg, "completing WRITE with %d\n", ioret);
 
@@ -345,7 +345,7 @@ static int vvp_page_print(const struct lu_env *env,
 			  void *cookie, lu_printer_t printer)
 {
 	struct vvp_page *vpg = cl2vvp_page(slice);
-	struct page     *vmpage = vpg->vpg_page;
+	struct page *vmpage = vpg->vpg_page;
 
 	(*printer)(env, cookie, LUSTRE_VVP_NAME "-page@%p(%d:%d) vm@%p ",
 		   vpg, vpg->vpg_defer_uptodate, vpg->vpg_ra_used, vmpage);
@@ -374,26 +374,26 @@ static int vvp_page_fail(const struct lu_env *env,
 }
 
 static const struct cl_page_operations vvp_page_ops = {
-	.cpo_own	   = vvp_page_own,
-	.cpo_assume	= vvp_page_assume,
-	.cpo_unassume      = vvp_page_unassume,
-	.cpo_disown	= vvp_page_disown,
-	.cpo_discard       = vvp_page_discard,
-	.cpo_delete	= vvp_page_delete,
-	.cpo_export	= vvp_page_export,
-	.cpo_is_vmlocked   = vvp_page_is_vmlocked,
-	.cpo_fini	  = vvp_page_fini,
-	.cpo_print	 = vvp_page_print,
+	.cpo_own		= vvp_page_own,
+	.cpo_assume		= vvp_page_assume,
+	.cpo_unassume		= vvp_page_unassume,
+	.cpo_disown		= vvp_page_disown,
+	.cpo_discard		= vvp_page_discard,
+	.cpo_delete		= vvp_page_delete,
+	.cpo_export		= vvp_page_export,
+	.cpo_is_vmlocked	= vvp_page_is_vmlocked,
+	.cpo_fini		= vvp_page_fini,
+	.cpo_print		= vvp_page_print,
 	.io = {
 		[CRT_READ] = {
 			.cpo_prep	= vvp_page_prep_read,
-			.cpo_completion  = vvp_page_completion_read,
+			.cpo_completion	= vvp_page_completion_read,
 			.cpo_make_ready = vvp_page_fail,
 		},
 		[CRT_WRITE] = {
 			.cpo_prep	= vvp_page_prep_write,
-			.cpo_completion  = vvp_page_completion_write,
-			.cpo_make_ready  = vvp_page_make_ready,
+			.cpo_completion = vvp_page_completion_write,
+			.cpo_make_ready = vvp_page_make_ready,
 		},
 	},
 };
@@ -446,8 +446,8 @@ static void vvp_transient_page_discard(const struct lu_env *env,
 static int vvp_transient_page_is_vmlocked(const struct lu_env *env,
 					  const struct cl_page_slice *slice)
 {
-	struct inode    *inode = vvp_object_inode(slice->cpl_obj);
-	int	locked;
+	struct inode *inode = vvp_object_inode(slice->cpl_obj);
+	int locked;
 
 	locked = !inode_trylock(inode);
 	if (!locked)
@@ -474,22 +474,22 @@ static void vvp_transient_page_fini(const struct lu_env *env,
 }
 
 static const struct cl_page_operations vvp_transient_page_ops = {
-	.cpo_own	   = vvp_transient_page_own,
-	.cpo_assume	= vvp_transient_page_assume,
-	.cpo_unassume      = vvp_transient_page_unassume,
-	.cpo_disown	= vvp_transient_page_disown,
-	.cpo_discard       = vvp_transient_page_discard,
-	.cpo_fini	  = vvp_transient_page_fini,
-	.cpo_is_vmlocked   = vvp_transient_page_is_vmlocked,
-	.cpo_print	 = vvp_page_print,
+	.cpo_own		= vvp_transient_page_own,
+	.cpo_assume		= vvp_transient_page_assume,
+	.cpo_unassume		= vvp_transient_page_unassume,
+	.cpo_disown		= vvp_transient_page_disown,
+	.cpo_discard		= vvp_transient_page_discard,
+	.cpo_fini		= vvp_transient_page_fini,
+	.cpo_is_vmlocked	= vvp_transient_page_is_vmlocked,
+	.cpo_print		= vvp_page_print,
 	.io = {
 		[CRT_READ] = {
 			.cpo_prep	= vvp_transient_page_prep,
-			.cpo_completion  = vvp_transient_page_completion,
+			.cpo_completion = vvp_transient_page_completion,
 		},
 		[CRT_WRITE] = {
 			.cpo_prep	= vvp_transient_page_prep,
-			.cpo_completion  = vvp_transient_page_completion,
+			.cpo_completion	= vvp_transient_page_completion,
 		}
 	}
 };
@@ -498,7 +498,7 @@ int vvp_page_init(const struct lu_env *env, struct cl_object *obj,
 		  struct cl_page *page, pgoff_t index)
 {
 	struct vvp_page *vpg = cl_object_page_slice(obj, page);
-	struct page     *vmpage = page->cp_vmpage;
+	struct page *vmpage = page->cp_vmpage;
 
 	CLOBINVRNT(env, obj, vvp_object_invariant(obj));
 
diff --git a/drivers/staging/lustre/lustre/llite/xattr.c b/drivers/staging/lustre/lustre/llite/xattr.c
index 0670ed3..22f178a 100644
--- a/drivers/staging/lustre/lustre/llite/xattr.c
+++ b/drivers/staging/lustre/lustre/llite/xattr.c
@@ -631,45 +631,45 @@ ssize_t ll_listxattr(struct dentry *dentry, char *buffer, size_t size)
 }
 
 static const struct xattr_handler ll_user_xattr_handler = {
-	.prefix = XATTR_USER_PREFIX,
-	.flags = XATTR_USER_T,
-	.get = ll_xattr_get_common,
-	.set = ll_xattr_set_common,
+	.prefix		= XATTR_USER_PREFIX,
+	.flags		= XATTR_USER_T,
+	.get		= ll_xattr_get_common,
+	.set		= ll_xattr_set_common,
 };
 
 static const struct xattr_handler ll_trusted_xattr_handler = {
-	.prefix = XATTR_TRUSTED_PREFIX,
-	.flags = XATTR_TRUSTED_T,
-	.get = ll_xattr_get,
-	.set = ll_xattr_set,
+	.prefix		= XATTR_TRUSTED_PREFIX,
+	.flags		= XATTR_TRUSTED_T,
+	.get		= ll_xattr_get,
+	.set		= ll_xattr_set,
 };
 
 static const struct xattr_handler ll_security_xattr_handler = {
-	.prefix = XATTR_SECURITY_PREFIX,
-	.flags = XATTR_SECURITY_T,
-	.get = ll_xattr_get_common,
-	.set = ll_xattr_set_common,
+	.prefix		= XATTR_SECURITY_PREFIX,
+	.flags		= XATTR_SECURITY_T,
+	.get		= ll_xattr_get_common,
+	.set		= ll_xattr_set_common,
 };
 
 static const struct xattr_handler ll_acl_access_xattr_handler = {
-	.name = XATTR_NAME_POSIX_ACL_ACCESS,
-	.flags = XATTR_ACL_ACCESS_T,
-	.get = ll_xattr_get_common,
-	.set = ll_xattr_set_common,
+	.name		= XATTR_NAME_POSIX_ACL_ACCESS,
+	.flags		= XATTR_ACL_ACCESS_T,
+	.get		= ll_xattr_get_common,
+	.set		= ll_xattr_set_common,
 };
 
 static const struct xattr_handler ll_acl_default_xattr_handler = {
-	.name = XATTR_NAME_POSIX_ACL_DEFAULT,
-	.flags = XATTR_ACL_DEFAULT_T,
-	.get = ll_xattr_get_common,
-	.set = ll_xattr_set_common,
+	.name		= XATTR_NAME_POSIX_ACL_DEFAULT,
+	.flags		= XATTR_ACL_DEFAULT_T,
+	.get		= ll_xattr_get_common,
+	.set		= ll_xattr_set_common,
 };
 
 static const struct xattr_handler ll_lustre_xattr_handler = {
-	.prefix = XATTR_LUSTRE_PREFIX,
-	.flags = XATTR_LUSTRE_T,
-	.get = ll_xattr_get,
-	.set = ll_xattr_set,
+	.prefix		= XATTR_LUSTRE_PREFIX,
+	.flags		= XATTR_LUSTRE_T,
+	.get		= ll_xattr_get,
+	.set		= ll_xattr_set,
 };
 
 const struct xattr_handler *ll_xattr_handlers[] = {
-- 
1.8.3.1


* [lustre-devel] [PATCH 12/26] lmv: cleanup white spaces
From: James Simmons @ 2019-01-31 17:19 UTC
  To: lustre-devel

The lmv code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
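
To make the convention concrete, here is a minimal, hypothetical sketch
(none of these names appear in the patch) of the style the hunks below
move toward: local declarations use a single space before the variable
name instead of padded columns, and ops/initializer tables keep one
tab-aligned column of '=' signs.

#include <stdio.h>

struct demo_ops {
	int	(*do_setup)(void);
	int	(*do_cleanup)(void);
};

static int demo_setup(void)
{
	return 0;
}

static int demo_cleanup(void)
{
	return 0;
}

/* Designated initializers with the '=' signs tab-aligned. */
static const struct demo_ops demo_ops = {
	.do_setup	= demo_setup,
	.do_cleanup	= demo_cleanup,
};

int main(void)
{
	int rc;		/* no column padding between type and name */

	rc = demo_ops.do_setup();
	printf("setup rc = %d\n", rc);
	return demo_ops.do_cleanup();
}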

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/lmv/lmv_intent.c |  38 +--
 drivers/staging/lustre/lustre/lmv/lmv_obd.c    | 334 ++++++++++++-------------
 drivers/staging/lustre/lustre/lmv/lproc_lmv.c  |  24 +-
 3 files changed, 198 insertions(+), 198 deletions(-)

diff --git a/drivers/staging/lustre/lustre/lmv/lmv_intent.c b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
index bc364b6..8892426 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_intent.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
@@ -54,15 +54,15 @@ static int lmv_intent_remote(struct obd_export *exp, struct lookup_intent *it,
 			     ldlm_blocking_callback cb_blocking,
 			     u64 extra_lock_flags)
 {
-	struct obd_device	*obd = exp->exp_obd;
-	struct lmv_obd		*lmv = &obd->u.lmv;
-	struct ptlrpc_request	*req = NULL;
-	struct lustre_handle	plock;
-	struct md_op_data	*op_data;
-	struct lmv_tgt_desc	*tgt;
-	struct mdt_body		*body;
-	int			pmode;
-	int			rc = 0;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct ptlrpc_request *req = NULL;
+	struct lustre_handle plock;
+	struct md_op_data *op_data;
+	struct lmv_tgt_desc *tgt;
+	struct mdt_body	*body;
+	int pmode;
+	int rc = 0;
 
 	body = req_capsule_server_get(&(*reqp)->rq_pill, &RMF_MDT_BODY);
 	if (!body)
@@ -264,11 +264,11 @@ static int lmv_intent_open(struct obd_export *exp, struct md_op_data *op_data,
 			   ldlm_blocking_callback cb_blocking,
 			   u64 extra_lock_flags)
 {
-	struct obd_device	*obd = exp->exp_obd;
-	struct lmv_obd		*lmv = &obd->u.lmv;
-	struct lmv_tgt_desc	*tgt;
-	struct mdt_body		*body;
-	int			rc;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
+	struct mdt_body	*body;
+	int rc;
 
 	if (it->it_flags & MDS_OPEN_BY_FID) {
 		LASSERT(fid_is_sane(&op_data->op_fid2));
@@ -356,11 +356,11 @@ static int lmv_intent_lookup(struct obd_export *exp,
 			     u64 extra_lock_flags)
 {
 	struct lmv_stripe_md *lsm = op_data->op_mea1;
-	struct obd_device      *obd = exp->exp_obd;
-	struct lmv_obd	 *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc    *tgt = NULL;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt = NULL;
 	struct mdt_body	*body;
-	int		     rc = 0;
+	int rc = 0;
 
 	/*
 	 * If it returns ERR_PTR(-EBADFD) then it is an unknown hash type
@@ -477,7 +477,7 @@ int lmv_intent_lock(struct obd_export *exp, struct md_op_data *op_data,
 		    ldlm_blocking_callback cb_blocking,
 		    u64 extra_lock_flags)
 {
-	int		rc;
+	int rc;
 
 	LASSERT(fid_is_sane(&op_data->op_fid1));
 
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_obd.c b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
index 65ae944..1c7379b 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_obd.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
@@ -80,9 +80,9 @@ static int lmv_set_mdc_active(struct lmv_obd *lmv, const struct obd_uuid *uuid,
 			      int activate)
 {
 	struct lmv_tgt_desc *tgt = NULL;
-	struct obd_device      *obd;
-	u32		     i;
-	int		     rc = 0;
+	struct obd_device *obd;
+	u32 i;
+	int rc = 0;
 
 	CDEBUG(D_INFO, "Searching in lmv %p for uuid %s (activate=%d)\n",
 	       lmv, uuid->uuid, activate);
@@ -126,7 +126,7 @@ static int lmv_set_mdc_active(struct lmv_obd *lmv, const struct obd_uuid *uuid,
 	       activate ? "" : "in");
 	lmv_activate_target(lmv, tgt, activate);
 
- out_lmv_lock:
+out_lmv_lock:
 	spin_unlock(&lmv->lmv_lock);
 	return rc;
 }
@@ -143,9 +143,9 @@ static int lmv_notify(struct obd_device *obd, struct obd_device *watched,
 		      enum obd_notify_event ev)
 {
 	struct obd_connect_data *conn_data;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct obd_uuid	 *uuid;
-	int		      rc = 0;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct obd_uuid	*uuid;
+	int rc = 0;
 
 	if (strcmp(watched->obd_type->typ_name, LUSTRE_MDC_NAME)) {
 		CERROR("unexpected notification of %s %s!\n",
@@ -192,10 +192,10 @@ static int lmv_connect(const struct lu_env *env,
 		       struct obd_uuid *cluuid, struct obd_connect_data *data,
 		       void *localdata)
 {
-	struct lmv_obd	*lmv = &obd->u.lmv;
-	struct lustre_handle  conn = { 0 };
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lustre_handle conn = { 0 };
 	struct obd_export *exp;
-	int		    rc = 0;
+	int rc = 0;
 
 	rc = class_connect(&conn, obd, cluuid);
 	if (rc) {
@@ -234,11 +234,11 @@ static int lmv_connect(const struct lu_env *env,
 
 static int lmv_init_ea_size(struct obd_export *exp, u32 easize, u32 def_easize)
 {
-	struct obd_device   *obd = exp->exp_obd;
-	struct lmv_obd      *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	u32 i;
-	int		  rc = 0;
-	int		  change = 0;
+	int rc = 0;
+	int change = 0;
 
 	if (lmv->max_easize < easize) {
 		lmv->max_easize = easize;
@@ -277,13 +277,13 @@ static int lmv_init_ea_size(struct obd_export *exp, u32 easize, u32 def_easize)
 
 static int lmv_connect_mdc(struct obd_device *obd, struct lmv_tgt_desc *tgt)
 {
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct obd_uuid	 *cluuid = &lmv->cluuid;
-	struct obd_uuid	  lmv_mdc_uuid = { "LMV_MDC_UUID" };
-	struct obd_device       *mdc_obd;
-	struct obd_export       *mdc_exp;
-	struct lu_fld_target     target;
-	int		      rc;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct obd_uuid *cluuid = &lmv->cluuid;
+	struct obd_uuid lmv_mdc_uuid = { "LMV_MDC_UUID" };
+	struct obd_device *mdc_obd;
+	struct obd_export *mdc_exp;
+	struct lu_fld_target target;
+	int rc;
 
 	mdc_obd = class_find_client_obd(&tgt->ltd_uuid, LUSTRE_MDC_NAME,
 					&obd->obd_uuid);
@@ -371,11 +371,11 @@ static void lmv_del_target(struct lmv_obd *lmv, int index)
 static int lmv_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
 			  u32 index, int gen)
 {
-	struct lmv_obd      *lmv = &obd->u.lmv;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct obd_device *mdc_obd;
 	struct lmv_tgt_desc *tgt;
 	int orig_tgt_count = 0;
-	int		  rc = 0;
+	int rc = 0;
 
 	CDEBUG(D_CONFIG, "Target uuid: %s. index %d\n", uuidp->uuid, index);
 
@@ -470,11 +470,11 @@ static int lmv_add_target(struct obd_device *obd, struct obd_uuid *uuidp,
 
 static int lmv_check_connect(struct obd_device *obd)
 {
-	struct lmv_obd       *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc  *tgt;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 	u32 i;
-	int		   rc;
-	int		   easize;
+	int rc;
+	int easize;
 
 	if (lmv->connected)
 		return 0;
@@ -518,7 +518,7 @@ static int lmv_check_connect(struct obd_device *obd)
 	mutex_unlock(&lmv->lmv_init_mutex);
 	return 0;
 
- out_disc:
+out_disc:
 	while (i-- > 0) {
 		int rc2;
 
@@ -542,9 +542,9 @@ static int lmv_check_connect(struct obd_device *obd)
 
 static int lmv_disconnect_mdc(struct obd_device *obd, struct lmv_tgt_desc *tgt)
 {
-	struct lmv_obd	 *lmv = &obd->u.lmv;
-	struct obd_device      *mdc_obd;
-	int		     rc;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct obd_device *mdc_obd;
+	int rc;
 
 	mdc_obd = class_exp2obd(tgt->ltd_exp);
 
@@ -582,9 +582,9 @@ static int lmv_disconnect_mdc(struct obd_device *obd, struct lmv_tgt_desc *tgt)
 
 static int lmv_disconnect(struct obd_export *exp)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
-	struct lmv_obd	*lmv = &obd->u.lmv;
-	int		    rc;
+	struct obd_device *obd = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obd->u.lmv;
+	int rc;
 	u32 i;
 
 	if (!lmv->tgts)
@@ -615,14 +615,14 @@ static int lmv_disconnect(struct obd_export *exp)
 static int lmv_fid2path(struct obd_export *exp, int len, void *karg,
 			void __user *uarg)
 {
-	struct obd_device	*obddev = class_exp2obd(exp);
-	struct lmv_obd		*lmv = &obddev->u.lmv;
+	struct obd_device *obddev = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obddev->u.lmv;
 	struct getinfo_fid2path *gf = karg;
-	struct lmv_tgt_desc     *tgt;
+	struct lmv_tgt_desc *tgt;
 	struct getinfo_fid2path *remote_gf = NULL;
 	struct lu_fid root_fid;
-	int			remote_gf_size = 0;
-	int			rc;
+	int remote_gf_size = 0;
+	int rc;
 
 	tgt = lmv_find_target(lmv, &gf->gf_fid);
 	if (IS_ERR(tgt))
@@ -711,7 +711,7 @@ static int lmv_hsm_req_count(struct lmv_obd *lmv,
 			     const struct lmv_tgt_desc *tgt_mds)
 {
 	u32 i, nr = 0;
-	struct lmv_tgt_desc    *curr_tgt;
+	struct lmv_tgt_desc *curr_tgt;
 
 	/* count how many requests must be sent to the given target */
 	for (i = 0; i < hur->hur_request.hr_itemcount; i++) {
@@ -729,8 +729,8 @@ static int lmv_hsm_req_build(struct lmv_obd *lmv,
 			     const struct lmv_tgt_desc *tgt_mds,
 			     struct hsm_user_request *hur_out)
 {
-	int			i, nr_out;
-	struct lmv_tgt_desc    *curr_tgt;
+	int i, nr_out;
+	struct lmv_tgt_desc *curr_tgt;
 
 	/* build the hsm_user_request for the given target */
 	hur_out->hur_request = hur_in->hur_request;
@@ -857,12 +857,12 @@ static int lmv_hsm_ct_register(struct lmv_obd *lmv, unsigned int cmd, int len,
 static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 			 int len, void *karg, void __user *uarg)
 {
-	struct obd_device    *obddev = class_exp2obd(exp);
-	struct lmv_obd       *lmv = &obddev->u.lmv;
+	struct obd_device *obddev = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obddev->u.lmv;
 	struct lmv_tgt_desc *tgt = NULL;
 	u32 i = 0;
-	int		   rc = 0;
-	int		   set = 0;
+	int rc = 0;
+	int set = 0;
 	u32 count = lmv->desc.ld_tgt_count;
 
 	if (count == 0)
@@ -872,7 +872,7 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 	case IOC_OBD_STATFS: {
 		struct obd_ioctl_data *data = karg;
 		struct obd_device *mdc_obd;
-		struct obd_statfs stat_buf = {0};
+		struct obd_statfs stat_buf = { 0 };
 		u32 index;
 
 		memcpy(&index, data->ioc_inlbuf2, sizeof(u32));
@@ -981,7 +981,7 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 	case LL_IOC_HSM_STATE_GET:
 	case LL_IOC_HSM_STATE_SET:
 	case LL_IOC_HSM_ACTION: {
-		struct md_op_data	*op_data = karg;
+		struct md_op_data *op_data = karg;
 
 		tgt = lmv_find_target(lmv, &op_data->op_fid1);
 		if (IS_ERR(tgt))
@@ -1059,8 +1059,8 @@ static int lmv_iocontrol(unsigned int cmd, struct obd_export *exp,
 		break;
 	}
 	case LL_IOC_LOV_SWAP_LAYOUTS: {
-		struct md_op_data	*op_data = karg;
-		struct lmv_tgt_desc	*tgt1, *tgt2;
+		struct md_op_data *op_data = karg;
+		struct lmv_tgt_desc *tgt1, *tgt2;
 
 		tgt1 = lmv_find_target(lmv, &op_data->op_fid1);
 		if (IS_ERR(tgt1))
@@ -1162,8 +1162,8 @@ static int lmv_placement_policy(struct obd_device *obd,
 
 int __lmv_fid_alloc(struct lmv_obd *lmv, struct lu_fid *fid, u32 mds)
 {
-	struct lmv_tgt_desc	*tgt;
-	int			 rc;
+	struct lmv_tgt_desc *tgt;
+	int rc;
 
 	tgt = lmv_get_target(lmv, mds, NULL);
 	if (IS_ERR(tgt))
@@ -1197,10 +1197,10 @@ int __lmv_fid_alloc(struct lmv_obd *lmv, struct lu_fid *fid, u32 mds)
 int lmv_fid_alloc(const struct lu_env *env, struct obd_export *exp,
 		  struct lu_fid *fid, struct md_op_data *op_data)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
-	struct lmv_obd	*lmv = &obd->u.lmv;
-	u32		       mds = 0;
-	int		    rc;
+	struct obd_device *obd = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obd->u.lmv;
+	u32 mds = 0;
+	int rc;
 
 	LASSERT(op_data);
 	LASSERT(fid);
@@ -1223,9 +1223,9 @@ int lmv_fid_alloc(const struct lu_env *env, struct obd_export *exp,
 
 static int lmv_setup(struct obd_device *obd, struct lustre_cfg *lcfg)
 {
-	struct lmv_obd	     *lmv = &obd->u.lmv;
-	struct lmv_desc	    *desc;
-	int			 rc;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_desc *desc;
+	int rc;
 
 	if (LUSTRE_CFG_BUFLEN(lcfg, 1) < 1) {
 		CERROR("LMV setup requires a descriptor\n");
@@ -1270,7 +1270,7 @@ static int lmv_setup(struct obd_device *obd, struct lustre_cfg *lcfg)
 
 static int lmv_cleanup(struct obd_device *obd)
 {
-	struct lmv_obd   *lmv = &obd->u.lmv;
+	struct lmv_obd *lmv = &obd->u.lmv;
 
 	fld_client_fini(&lmv->lmv_fld);
 	if (lmv->tgts) {
@@ -1289,11 +1289,11 @@ static int lmv_cleanup(struct obd_device *obd)
 
 static int lmv_process_config(struct obd_device *obd, u32 len, void *buf)
 {
-	struct lustre_cfg	*lcfg = buf;
-	struct obd_uuid		obd_uuid;
-	int			gen;
-	u32			index;
-	int			rc;
+	struct lustre_cfg *lcfg = buf;
+	struct obd_uuid	obd_uuid;
+	int gen;
+	u32 index;
+	int rc;
 
 	switch (lcfg->lcfg_command) {
 	case LCFG_ADD_MDC:
@@ -1329,10 +1329,10 @@ static int lmv_process_config(struct obd_device *obd, u32 len, void *buf)
 static int lmv_statfs(const struct lu_env *env, struct obd_export *exp,
 		      struct obd_statfs *osfs, u64 max_age, u32 flags)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
-	struct lmv_obd	*lmv = &obd->u.lmv;
-	struct obd_statfs     *temp;
-	int		    rc = 0;
+	struct obd_device *obd = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct obd_statfs *temp;
+	int rc = 0;
 	u32 i;
 
 	temp = kzalloc(sizeof(*temp), GFP_NOFS);
@@ -1379,8 +1379,8 @@ static int lmv_statfs(const struct lu_env *env, struct obd_export *exp,
 static int lmv_get_root(struct obd_export *exp, const char *fileset,
 			struct lu_fid *fid)
 {
-	struct obd_device    *obd = exp->exp_obd;
-	struct lmv_obd       *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 
 	return md_get_root(lmv->tgts[0]->ltd_exp, fileset, fid);
 }
@@ -1389,9 +1389,9 @@ static int lmv_getxattr(struct obd_export *exp, const struct lu_fid *fid,
 			u64 obd_md_valid, const char *name, size_t buf_size,
 			struct ptlrpc_request **req)
 {
-	struct obd_device      *obd = exp->exp_obd;
-	struct lmv_obd	 *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc    *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, fid);
 	if (IS_ERR(tgt))
@@ -1407,9 +1407,9 @@ static int lmv_setxattr(struct obd_export *exp, const struct lu_fid *fid,
 			unsigned int xattr_flags, u32 suppgid,
 			struct ptlrpc_request **req)
 {
-	struct obd_device      *obd = exp->exp_obd;
-	struct lmv_obd	 *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc    *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, fid);
 	if (IS_ERR(tgt))
@@ -1422,9 +1422,9 @@ static int lmv_setxattr(struct obd_export *exp, const struct lu_fid *fid,
 static int lmv_getattr(struct obd_export *exp, struct md_op_data *op_data,
 		       struct ptlrpc_request **request)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, &op_data->op_fid1);
 	if (IS_ERR(tgt))
@@ -1440,8 +1440,8 @@ static int lmv_getattr(struct obd_export *exp, struct md_op_data *op_data,
 
 static int lmv_null_inode(struct obd_export *exp, const struct lu_fid *fid)
 {
-	struct obd_device   *obd = exp->exp_obd;
-	struct lmv_obd      *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	u32 i;
 
 	CDEBUG(D_INODE, "CBDATA for " DFID "\n", PFID(fid));
@@ -1463,9 +1463,9 @@ static int lmv_null_inode(struct obd_export *exp, const struct lu_fid *fid)
 static int lmv_close(struct obd_export *exp, struct md_op_data *op_data,
 		     struct md_open_data *mod, struct ptlrpc_request **request)
 {
-	struct obd_device     *obd = exp->exp_obd;
-	struct lmv_obd	*lmv = &obd->u.lmv;
-	struct lmv_tgt_desc   *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, &op_data->op_fid1);
 	if (IS_ERR(tgt))
@@ -1587,10 +1587,10 @@ static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 		      uid_t uid, gid_t gid, kernel_cap_t cap_effective,
 		      u64 rdev, struct ptlrpc_request **request)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
-	int		      rc;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
+	int rc;
 
 	if (!lmv->desc.ld_active_tgt_count)
 		return -EIO;
@@ -1641,9 +1641,9 @@ static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 	    const union ldlm_policy_data *policy, struct md_op_data *op_data,
 	    struct lustre_handle *lockh, u64 extra_lock_flags)
 {
-	struct obd_device	*obd = exp->exp_obd;
-	struct lmv_obd	   *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc      *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	CDEBUG(D_INODE, "ENQUEUE on " DFID "\n", PFID(&op_data->op_fid1));
 
@@ -1662,12 +1662,12 @@ static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 lmv_getattr_name(struct obd_export *exp, struct md_op_data *op_data,
 		 struct ptlrpc_request **preq)
 {
-	struct ptlrpc_request   *req = NULL;
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
-	struct mdt_body	 *body;
-	int		      rc;
+	struct ptlrpc_request *req = NULL;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
+	struct mdt_body	*body;
+	int rc;
 
 	tgt = lmv_locate_mds(lmv, op_data, &op_data->op_fid1);
 	if (IS_ERR(tgt))
@@ -1707,7 +1707,7 @@ static int lmv_create(struct obd_export *exp, struct md_op_data *op_data,
 	return rc;
 }
 
-#define md_op_data_fid(op_data, fl)		     \
+#define md_op_data_fid(op_data, fl)			\
 	(fl == MF_MDC_CANCEL_FID1 ? &op_data->op_fid1 : \
 	 fl == MF_MDC_CANCEL_FID2 ? &op_data->op_fid2 : \
 	 fl == MF_MDC_CANCEL_FID3 ? &op_data->op_fid3 : \
@@ -1718,11 +1718,11 @@ static int lmv_early_cancel(struct obd_export *exp, struct lmv_tgt_desc *tgt,
 			    struct md_op_data *op_data, int op_tgt,
 			    enum ldlm_mode mode, int bits, int flag)
 {
-	struct lu_fid	  *fid = md_op_data_fid(op_data, flag);
-	struct obd_device      *obd = exp->exp_obd;
-	struct lmv_obd	 *lmv = &obd->u.lmv;
+	struct lu_fid *fid = md_op_data_fid(op_data, flag);
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	union ldlm_policy_data policy = { { 0 } };
-	int		     rc = 0;
+	int rc = 0;
 
 	if (!fid_is_sane(fid))
 		return 0;
@@ -1756,10 +1756,10 @@ static int lmv_early_cancel(struct obd_export *exp, struct lmv_tgt_desc *tgt,
 static int lmv_link(struct obd_export *exp, struct md_op_data *op_data,
 		    struct ptlrpc_request **request)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
-	int		      rc;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
+	int rc;
 
 	LASSERT(op_data->op_namelen != 0);
 
@@ -1803,13 +1803,13 @@ static int lmv_rename(struct obd_export *exp, struct md_op_data *op_data,
 		      const char *new, size_t newlen,
 		      struct ptlrpc_request **request)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct obd_export *target_exp;
-	struct lmv_tgt_desc     *src_tgt;
+	struct lmv_tgt_desc *src_tgt;
 	struct lmv_tgt_desc *tgt_tgt;
 	struct mdt_body *body;
-	int			rc;
+	int rc;
 
 	LASSERT(oldlen != 0);
 
@@ -1968,9 +1968,9 @@ static int lmv_rename(struct obd_export *exp, struct md_op_data *op_data,
 static int lmv_setattr(struct obd_export *exp, struct md_op_data *op_data,
 		       void *ea, size_t ealen, struct ptlrpc_request **request)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	CDEBUG(D_INODE, "SETATTR for " DFID ", valid 0x%x/0x%x\n",
 	       PFID(&op_data->op_fid1), op_data->op_attr.ia_valid,
@@ -1987,9 +1987,9 @@ static int lmv_setattr(struct obd_export *exp, struct md_op_data *op_data,
 static int lmv_fsync(struct obd_export *exp, const struct lu_fid *fid,
 		     struct ptlrpc_request **request)
 {
-	struct obd_device	 *obd = exp->exp_obd;
-	struct lmv_obd	    *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc       *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, fid);
 	if (IS_ERR(tgt))
@@ -2341,13 +2341,13 @@ static int lmv_unlink(struct obd_export *exp, struct md_op_data *op_data,
 		      struct ptlrpc_request **request)
 {
 	struct lmv_stripe_md *lsm = op_data->op_mea1;
-	struct obd_device    *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct lmv_tgt_desc *parent_tgt = NULL;
-	struct lmv_tgt_desc     *tgt = NULL;
-	struct mdt_body		*body;
+	struct lmv_tgt_desc *tgt = NULL;
+	struct mdt_body	*body;
 	int stripe_index = 0;
-	int		     rc;
+	int rc;
 
 retry_unlink:
 	/* For striped dir, we need to locate the parent as well */
@@ -2519,9 +2519,9 @@ static int lmv_precleanup(struct obd_device *obd)
 static int lmv_get_info(const struct lu_env *env, struct obd_export *exp,
 			u32 keylen, void *key, u32 *vallen, void *val)
 {
-	struct obd_device       *obd;
-	struct lmv_obd	  *lmv;
-	int		      rc = 0;
+	struct obd_device *obd;
+	struct lmv_obd *lmv;
+	int rc = 0;
 
 	obd = class_exp2obd(exp);
 	if (!obd) {
@@ -2590,9 +2590,9 @@ static int lmv_set_info_async(const struct lu_env *env, struct obd_export *exp,
 			      u32 keylen, void *key, u32 vallen,
 			      void *val, struct ptlrpc_request_set *set)
 {
-	struct lmv_tgt_desc    *tgt;
-	struct obd_device      *obd;
-	struct lmv_obd	 *lmv;
+	struct lmv_tgt_desc *tgt;
+	struct obd_device *obd;
+	struct lmv_obd *lmv;
 	int rc = 0;
 
 	obd = class_exp2obd(exp);
@@ -2756,10 +2756,10 @@ static int lmv_cancel_unused(struct obd_export *exp, const struct lu_fid *fid,
 			     enum ldlm_mode mode, enum ldlm_cancel_flags flags,
 			     void *opaque)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	int		      rc = 0;
-	int		      err;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	int rc = 0;
+	int err;
 	u32 i;
 
 	LASSERT(fid);
@@ -2782,7 +2782,7 @@ static int lmv_set_lock_data(struct obd_export *exp,
 			     const struct lustre_handle *lockh,
 			     void *data, u64 *bits)
 {
-	struct lmv_obd	  *lmv = &exp->exp_obd->u.lmv;
+	struct lmv_obd *lmv = &exp->exp_obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
 
 	if (!tgt || !tgt->ltd_exp)
@@ -2798,9 +2798,9 @@ static enum ldlm_mode lmv_lock_match(struct obd_export *exp, u64 flags,
 				     enum ldlm_mode mode,
 				     struct lustre_handle *lockh)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	enum ldlm_mode	      rc;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	enum ldlm_mode rc;
 	int tgt;
 	u32 i;
 
@@ -2840,7 +2840,7 @@ static int lmv_get_lustre_md(struct obd_export *exp,
 			     struct obd_export *md_exp,
 			     struct lustre_md *md)
 {
-	struct lmv_obd	  *lmv = &exp->exp_obd->u.lmv;
+	struct lmv_obd *lmv = &exp->exp_obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
 
 	if (!tgt || !tgt->ltd_exp)
@@ -2850,8 +2850,8 @@ static int lmv_get_lustre_md(struct obd_export *exp,
 
 static int lmv_free_lustre_md(struct obd_export *exp, struct lustre_md *md)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
 
 	if (md->lmv) {
@@ -2867,9 +2867,9 @@ static int lmv_set_open_replay_data(struct obd_export *exp,
 				    struct obd_client_handle *och,
 				    struct lookup_intent *it)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, &och->och_fid);
 	if (IS_ERR(tgt))
@@ -2881,9 +2881,9 @@ static int lmv_set_open_replay_data(struct obd_export *exp,
 static int lmv_clear_open_replay_data(struct obd_export *exp,
 				      struct obd_client_handle *och)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, &och->och_fid);
 	if (IS_ERR(tgt))
@@ -2895,9 +2895,9 @@ static int lmv_clear_open_replay_data(struct obd_export *exp,
 static int lmv_intent_getattr_async(struct obd_export *exp,
 				    struct md_enqueue_info *minfo)
 {
-	struct md_op_data       *op_data = &minfo->mi_data;
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
+	struct md_op_data *op_data = &minfo->mi_data;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct lmv_tgt_desc *ptgt = NULL;
 	struct lmv_tgt_desc *ctgt = NULL;
 
@@ -2927,9 +2927,9 @@ static int lmv_intent_getattr_async(struct obd_export *exp,
 static int lmv_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
 			       struct lu_fid *fid, u64 *bits)
 {
-	struct obd_device       *obd = exp->exp_obd;
-	struct lmv_obd	  *lmv = &obd->u.lmv;
-	struct lmv_tgt_desc     *tgt;
+	struct obd_device *obd = exp->exp_obd;
+	struct lmv_obd *lmv = &obd->u.lmv;
+	struct lmv_tgt_desc *tgt;
 
 	tgt = lmv_find_target(lmv, fid);
 	if (IS_ERR(tgt))
@@ -2963,8 +2963,8 @@ static int lmv_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
 static int lmv_quotactl(struct obd_device *unused, struct obd_export *exp,
 			struct obd_quotactl *oqctl)
 {
-	struct obd_device   *obd = class_exp2obd(exp);
-	struct lmv_obd      *lmv = &obd->u.lmv;
+	struct obd_device *obd = class_exp2obd(exp);
+	struct lmv_obd *lmv = &obd->u.lmv;
 	struct lmv_tgt_desc *tgt = lmv->tgts[0];
 	int rc = 0;
 	u64 curspace = 0, curinodes = 0;
@@ -3045,20 +3045,20 @@ static int lmv_merge_attr(struct obd_export *exp,
 }
 
 static struct obd_ops lmv_obd_ops = {
-	.owner		= THIS_MODULE,
-	.setup		= lmv_setup,
-	.cleanup	= lmv_cleanup,
-	.precleanup	= lmv_precleanup,
-	.process_config	= lmv_process_config,
-	.connect	= lmv_connect,
-	.disconnect	= lmv_disconnect,
-	.statfs		= lmv_statfs,
-	.get_info	= lmv_get_info,
-	.set_info_async	= lmv_set_info_async,
-	.notify		= lmv_notify,
-	.get_uuid	= lmv_get_uuid,
-	.iocontrol	= lmv_iocontrol,
-	.quotactl	= lmv_quotactl
+	.owner			= THIS_MODULE,
+	.setup			= lmv_setup,
+	.cleanup		= lmv_cleanup,
+	.precleanup		= lmv_precleanup,
+	.process_config		= lmv_process_config,
+	.connect		= lmv_connect,
+	.disconnect		= lmv_disconnect,
+	.statfs			= lmv_statfs,
+	.get_info		= lmv_get_info,
+	.set_info_async		= lmv_set_info_async,
+	.notify			= lmv_notify,
+	.get_uuid		= lmv_get_uuid,
+	.iocontrol		= lmv_iocontrol,
+	.quotactl		= lmv_quotactl
 };
 
 static struct md_ops lmv_md_ops = {
diff --git a/drivers/staging/lustre/lustre/lmv/lproc_lmv.c b/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
index 4e30026..e40473c 100644
--- a/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
+++ b/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
@@ -78,8 +78,8 @@ static ssize_t desc_uuid_show(struct kobject *kobj, struct attribute *attr,
 
 static void *lmv_tgt_seq_start(struct seq_file *p, loff_t *pos)
 {
-	struct obd_device       *dev = p->private;
-	struct lmv_obd	  *lmv = &dev->u.lmv;
+	struct obd_device *dev = p->private;
+	struct lmv_obd *lmv = &dev->u.lmv;
 
 	while (*pos < lmv->tgts_size) {
 		if (lmv->tgts[*pos])
@@ -96,8 +96,8 @@ static void lmv_tgt_seq_stop(struct seq_file *p, void *v)
 
 static void *lmv_tgt_seq_next(struct seq_file *p, void *v, loff_t *pos)
 {
-	struct obd_device       *dev = p->private;
-	struct lmv_obd	  *lmv = &dev->u.lmv;
+	struct obd_device *dev = p->private;
+	struct lmv_obd *lmv = &dev->u.lmv;
 
 	++*pos;
 	while (*pos < lmv->tgts_size) {
@@ -106,12 +106,12 @@ static void *lmv_tgt_seq_next(struct seq_file *p, void *v, loff_t *pos)
 		++*pos;
 	}
 
-	return  NULL;
+	return NULL;
 }
 
 static int lmv_tgt_seq_show(struct seq_file *p, void *v)
 {
-	struct lmv_tgt_desc     *tgt = v;
+	struct lmv_tgt_desc *tgt = v;
 
 	if (!tgt)
 		return 0;
@@ -123,16 +123,16 @@ static int lmv_tgt_seq_show(struct seq_file *p, void *v)
 }
 
 static const struct seq_operations lmv_tgt_sops = {
-	.start		 = lmv_tgt_seq_start,
-	.stop		  = lmv_tgt_seq_stop,
-	.next		  = lmv_tgt_seq_next,
-	.show		  = lmv_tgt_seq_show,
+	.start		= lmv_tgt_seq_start,
+	.stop		= lmv_tgt_seq_stop,
+	.next		= lmv_tgt_seq_next,
+	.show		= lmv_tgt_seq_show,
 };
 
 static int lmv_target_seq_open(struct inode *inode, struct file *file)
 {
-	struct seq_file	 *seq;
-	int		     rc;
+	struct seq_file *seq;
+	int rc;
 
 	rc = seq_open(file, &lmv_tgt_sops);
 	if (rc)
-- 
1.8.3.1


* [lustre-devel] [PATCH 13/26] lov: cleanup white spaces
From: James Simmons @ 2019-01-31 17:19 UTC
  To: lustre-devel

The lov code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
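
This patch also reflows multi-line macros (see the lov_do_div64() hunks
below) so their continuation backslashes sit in a single column. A tiny
hypothetical sketch of that layout, not taken from the patch:

#include <stdio.h>

/* Statement-expression macro with the trailing backslashes aligned. */
#define demo_clamp(val, lo, hi) ({				\
	typeof(val) __val = (val);				\
	typeof(lo) __lo = (lo);					\
	typeof(hi) __hi = (hi);					\
	__val < __lo ? __lo : (__val > __hi ? __hi : __val);	\
})

int main(void)
{
	printf("%d\n", demo_clamp(12, 0, 10));	/* prints 10 */
	return 0;
}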

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lustre/lov/lov_cl_internal.h    |  50 +++----
 drivers/staging/lustre/lustre/lov/lov_dev.c        |  62 ++++-----
 drivers/staging/lustre/lustre/lov/lov_internal.h   |  56 ++++----
 drivers/staging/lustre/lustre/lov/lov_io.c         | 136 +++++++++---------
 drivers/staging/lustre/lustre/lov/lov_lock.c       |  34 ++---
 drivers/staging/lustre/lustre/lov/lov_obd.c        |  21 ++-
 drivers/staging/lustre/lustre/lov/lov_object.c     | 154 ++++++++++-----------
 drivers/staging/lustre/lustre/lov/lov_offset.c     |   6 +-
 drivers/staging/lustre/lustre/lov/lov_pack.c       |   4 +-
 drivers/staging/lustre/lustre/lov/lov_page.c       |  18 +--
 drivers/staging/lustre/lustre/lov/lov_pool.c       |  20 +--
 drivers/staging/lustre/lustre/lov/lov_request.c    |  12 +-
 drivers/staging/lustre/lustre/lov/lovsub_dev.c     |  32 ++---
 drivers/staging/lustre/lustre/lov/lovsub_lock.c    |   4 +-
 drivers/staging/lustre/lustre/lov/lovsub_object.c  |  29 ++--
 drivers/staging/lustre/lustre/lov/lovsub_page.c    |   2 +-
 drivers/staging/lustre/lustre/lov/lproc_lov.c      |  18 +--
 17 files changed, 328 insertions(+), 330 deletions(-)

diff --git a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
index d83b8de..22ef7b2 100644
--- a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
@@ -95,12 +95,12 @@ struct lov_device {
 	/*
 	 * XXX Locking of lov-private data is missing.
 	 */
-	struct cl_device	  ld_cl;
-	struct lov_obd	   *ld_lov;
+	struct cl_device	ld_cl;
+	struct lov_obd	       *ld_lov;
 	/** size of lov_device::ld_target[] array */
-	u32		     ld_target_nr;
-	struct lovsub_device    **ld_target;
-	u32		     ld_flags;
+	u32			ld_target_nr;
+	struct lovsub_device  **ld_target;
+	u32			ld_flags;
 };
 
 /**
@@ -180,7 +180,7 @@ struct lov_layout_raid0 {
  * function corresponding to the current layout type.
  */
 struct lov_object {
-	struct cl_object       lo_cl;
+	struct cl_object	lo_cl;
 	/**
 	 * Serializes object operations with transitions between layout types.
 	 *
@@ -203,15 +203,15 @@ struct lov_object {
 	 * How many IOs are on going on this object. Layout can be changed
 	 * only if there is no active IO.
 	 */
-	atomic_t	       lo_active_ios;
+	atomic_t		lo_active_ios;
 	/**
 	 * Waitq - wait for no one else is using lo_lsm
 	 */
-	wait_queue_head_t	       lo_waitq;
+	wait_queue_head_t	lo_waitq;
 	/**
 	 * Layout metadata. NULL if empty layout.
 	 */
-	struct lov_stripe_md  *lo_lsm;
+	struct lov_stripe_md	*lo_lsm;
 
 	union lov_layout_state {
 		struct lov_layout_state_empty {
@@ -259,9 +259,9 @@ struct lov_lock_sub {
  * lov-specific lock state.
  */
 struct lov_lock {
-	struct cl_lock_slice   lls_cl;
+	struct cl_lock_slice	lls_cl;
 	/** Number of sub-locks in this lock */
-	int		    lls_nr;
+	int			lls_nr;
 	/** sublock array */
 	struct lov_lock_sub     lls_sub[0];
 };
@@ -277,43 +277,43 @@ struct lov_page {
  */
 
 struct lovsub_device {
-	struct cl_device   acid_cl;
-	struct cl_device  *acid_next;
+	struct cl_device	acid_cl;
+	struct cl_device       *acid_next;
 };
 
 struct lovsub_object {
 	struct cl_object_header lso_header;
 	struct cl_object	lso_cl;
 	struct lov_object      *lso_super;
-	int		     lso_index;
+	int			lso_index;
 };
 
 /**
  * Lock state at lovsub layer.
  */
 struct lovsub_lock {
-	struct cl_lock_slice  lss_cl;
+	struct cl_lock_slice	lss_cl;
 };
 
 /**
  * Describe the environment settings for sublocks.
  */
 struct lov_sublock_env {
-	const struct lu_env *lse_env;
-	struct cl_io	*lse_io;
+	const struct lu_env	*lse_env;
+	struct cl_io		*lse_io;
 };
 
 struct lovsub_page {
-	struct cl_page_slice lsb_cl;
+	struct cl_page_slice	lsb_cl;
 };
 
 struct lov_thread_info {
 	struct cl_object_conf   lti_stripe_conf;
-	struct lu_fid	   lti_fid;
-	struct ost_lvb	  lti_lvb;
+	struct lu_fid		lti_fid;
+	struct ost_lvb		lti_lvb;
 	struct cl_2queue	lti_cl2q;
 	struct cl_page_list     lti_plist;
-	wait_queue_entry_t	  lti_waiter;
+	wait_queue_entry_t	lti_waiter;
 };
 
 /**
@@ -354,12 +354,12 @@ struct lov_io_sub {
  */
 struct lov_io {
 	/** super-class */
-	struct cl_io_slice lis_cl;
+	struct cl_io_slice	lis_cl;
 	/**
 	 * Pointer to the object slice. This is a duplicate of
 	 * lov_io::lis_cl::cis_object.
 	 */
-	struct lov_object *lis_object;
+	struct lov_object	*lis_object;
 	/**
 	 * Original end-of-io position for this IO, set by the upper layer as
 	 * cl_io::u::ci_rw::pos + cl_io::u::ci_rw::count. lov remembers this,
@@ -401,8 +401,8 @@ struct lov_io {
 };
 
 struct lov_session {
-	struct lov_io	  ls_io;
-	struct lov_sublock_env ls_subenv;
+	struct lov_io		ls_io;
+	struct lov_sublock_env	ls_subenv;
 };
 
 extern struct lu_device_type lov_device_type;
diff --git a/drivers/staging/lustre/lustre/lov/lov_dev.c b/drivers/staging/lustre/lustre/lov/lov_dev.c
index 67d30fb..a55b3f9 100644
--- a/drivers/staging/lustre/lustre/lov/lov_dev.c
+++ b/drivers/staging/lustre/lustre/lov/lov_dev.c
@@ -113,9 +113,9 @@ static void lov_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key lov_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = lov_key_init,
-	.lct_fini = lov_key_fini
+	.lct_tags	= LCT_CL_THREAD,
+	.lct_init	= lov_key_init,
+	.lct_fini	= lov_key_fini
 };
 
 static void *lov_session_key_init(const struct lu_context *ctx,
@@ -138,9 +138,9 @@ static void lov_session_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key lov_session_key = {
-	.lct_tags = LCT_SESSION,
-	.lct_init = lov_session_key_init,
-	.lct_fini = lov_session_key_fini
+	.lct_tags	= LCT_SESSION,
+	.lct_init	= lov_session_key_init,
+	.lct_fini	= lov_session_key_fini
 };
 
 /* type constructor/destructor: lov_type_{init,fini,start,stop}() */
@@ -181,8 +181,8 @@ static int lov_device_init(const struct lu_env *env, struct lu_device *d,
 
 	lov_foreach_target(ld, i) {
 		struct lovsub_device *lsd;
-		struct cl_device     *cl;
-		struct lov_tgt_desc  *desc;
+		struct cl_device *cl;
+		struct lov_tgt_desc *desc;
 
 		desc = ld->ld_lov->lov_tgts[i];
 		if (!desc)
@@ -230,7 +230,7 @@ static void lov_cl_del_target(const struct lu_env *env, struct lu_device *dev,
 
 static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 {
-	int   result;
+	int result;
 	u32 tgt_size;
 	u32 sub_size;
 
@@ -238,8 +238,8 @@ static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 	tgt_size = dev->ld_lov->lov_tgt_size;
 	sub_size = dev->ld_target_nr;
 	if (sub_size < tgt_size) {
-		struct lovsub_device    **newd;
-		const size_t	      sz   = sizeof(newd[0]);
+		struct lovsub_device **newd;
+		const size_t sz = sizeof(newd[0]);
 
 		newd = kcalloc(tgt_size, sz, GFP_NOFS);
 		if (newd) {
@@ -247,7 +247,7 @@ static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 				memcpy(newd, dev->ld_target, sub_size * sz);
 				kfree(dev->ld_target);
 			}
-			dev->ld_target    = newd;
+			dev->ld_target = newd;
 			dev->ld_target_nr = tgt_size;
 		} else {
 			result = -ENOMEM;
@@ -259,11 +259,11 @@ static int lov_expand_targets(const struct lu_env *env, struct lov_device *dev)
 static int lov_cl_add_target(const struct lu_env *env, struct lu_device *dev,
 			     u32 index)
 {
-	struct obd_device    *obd = dev->ld_obd;
-	struct lov_device    *ld  = lu2lov_dev(dev);
-	struct lov_tgt_desc  *tgt;
+	struct obd_device *obd = dev->ld_obd;
+	struct lov_device *ld  = lu2lov_dev(dev);
+	struct lov_tgt_desc *tgt;
 	struct lovsub_device *lsd;
-	struct cl_device     *cl;
+	struct cl_device *cl;
 	int rc;
 
 	lov_tgts_getref(obd);
@@ -330,8 +330,8 @@ static int lov_process_config(const struct lu_env *env,
 }
 
 static const struct lu_device_operations lov_lu_ops = {
-	.ldo_object_alloc      = lov_object_alloc,
-	.ldo_process_config    = lov_process_config,
+	.ldo_object_alloc	= lov_object_alloc,
+	.ldo_process_config	= lov_process_config,
 };
 
 static struct lu_device *lov_device_alloc(const struct lu_env *env,
@@ -349,7 +349,7 @@ static struct lu_device *lov_device_alloc(const struct lu_env *env,
 
 	cl_device_init(&ld->ld_cl, t);
 	d = lov2lu_dev(ld);
-	d->ld_ops	= &lov_lu_ops;
+	d->ld_ops = &lov_lu_ops;
 
 	/* setup the LOV OBD */
 	obd = class_name2obd(lustre_cfg_string(cfg, 0));
@@ -365,24 +365,24 @@ static struct lu_device *lov_device_alloc(const struct lu_env *env,
 }
 
 static const struct lu_device_type_operations lov_device_type_ops = {
-	.ldto_init = lov_type_init,
-	.ldto_fini = lov_type_fini,
+	.ldto_init		= lov_type_init,
+	.ldto_fini		= lov_type_fini,
 
-	.ldto_start = lov_type_start,
-	.ldto_stop  = lov_type_stop,
+	.ldto_start		= lov_type_start,
+	.ldto_stop		= lov_type_stop,
 
-	.ldto_device_alloc = lov_device_alloc,
-	.ldto_device_free  = lov_device_free,
+	.ldto_device_alloc	= lov_device_alloc,
+	.ldto_device_free	= lov_device_free,
 
-	.ldto_device_init    = lov_device_init,
-	.ldto_device_fini    = lov_device_fini
+	.ldto_device_init	= lov_device_init,
+	.ldto_device_fini	= lov_device_fini
 };
 
 struct lu_device_type lov_device_type = {
-	.ldt_tags     = LU_DEVICE_CL,
-	.ldt_name     = LUSTRE_LOV_NAME,
-	.ldt_ops      = &lov_device_type_ops,
-	.ldt_ctx_tags = LCT_CL_THREAD
+	.ldt_tags		= LU_DEVICE_CL,
+	.ldt_name		= LUSTRE_LOV_NAME,
+	.ldt_ops		= &lov_device_type_ops,
+	.ldt_ctx_tags		= LCT_CL_THREAD
 };
 
 /** @} lov */
diff --git a/drivers/staging/lustre/lustre/lov/lov_internal.h b/drivers/staging/lustre/lustre/lov/lov_internal.h
index 9708f1b..f69f2d6 100644
--- a/drivers/staging/lustre/lustre/lov/lov_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_internal.h
@@ -169,29 +169,29 @@ struct lsm_operations {
  * already a 32-bit value the compiler handles this directly.
  */
 #if BITS_PER_LONG == 64
-# define lov_do_div64(n, base) ({					\
-	u64 __base = (base);					\
-	u64 __rem;							\
-	__rem = ((u64)(n)) % __base;				\
-	(n) = ((u64)(n)) / __base;					\
-	__rem;								\
+# define lov_do_div64(n, base) ({		\
+	u64 __base = (base);			\
+	u64 __rem;				\
+	__rem = ((u64)(n)) % __base;		\
+	(n) = ((u64)(n)) / __base;		\
+	__rem;					\
 })
 #elif BITS_PER_LONG == 32
-# define lov_do_div64(n, base) ({					\
-	u64 __rem;							\
+# define lov_do_div64(n, base) ({					      \
+	u64 __rem;							      \
 	if ((sizeof(base) > 4) && (((base) & 0xffffffff00000000ULL) != 0)) {  \
 		int __remainder;					      \
 		LASSERTF(!((base) & (LOV_MIN_STRIPE_SIZE - 1)), "64 bit lov " \
-			 "division %llu / %llu\n", (n), (u64)(base));    \
-		__remainder = (n) & (LOV_MIN_STRIPE_SIZE - 1);		\
-		(n) >>= LOV_MIN_STRIPE_BITS;				\
-		__rem = do_div(n, (base) >> LOV_MIN_STRIPE_BITS);	\
-		__rem <<= LOV_MIN_STRIPE_BITS;				\
-		__rem += __remainder;					\
-	} else {							\
-		__rem = do_div(n, base);				\
-	}								\
-	__rem;								\
+			 "division %llu / %llu\n", (n), (u64)(base));	      \
+		__remainder = (n) & (LOV_MIN_STRIPE_SIZE - 1);		      \
+		(n) >>= LOV_MIN_STRIPE_BITS;				      \
+		__rem = do_div(n, (base) >> LOV_MIN_STRIPE_BITS);	      \
+		__rem <<= LOV_MIN_STRIPE_BITS;				      \
+		__rem += __remainder;					      \
+	} else {							      \
+		__rem = do_div(n, base);				      \
+	}								      \
+	__rem;								      \
 })
 #endif
 
@@ -215,21 +215,21 @@ struct pool_desc {
 void lov_pool_hash_destroy(struct rhashtable *tbl);
 
 struct lov_request {
-	struct obd_info	  rq_oi;
-	struct lov_request_set  *rq_rqset;
+	struct obd_info		rq_oi;
+	struct lov_request_set *rq_rqset;
 
-	struct list_head	       rq_link;
+	struct list_head	rq_link;
 
-	int		      rq_idx;	/* index in lov->tgts array */
+	int			rq_idx;	/* index in lov->tgts array */
 };
 
 struct lov_request_set {
-	struct obd_info			*set_oi;
-	struct obd_device		*set_obd;
-	int				set_count;
-	atomic_t			set_completes;
-	atomic_t			set_success;
-	struct list_head			set_list;
+	struct obd_info		*set_oi;
+	struct obd_device	*set_obd;
+	int			set_count;
+	atomic_t		set_completes;
+	atomic_t		set_success;
+	struct list_head	set_list;
 };
 
 extern struct kmem_cache *lov_oinfo_slab;
diff --git a/drivers/staging/lustre/lustre/lov/lov_io.c b/drivers/staging/lustre/lustre/lov/lov_io.c
index 47bb618..de43f47 100644
--- a/drivers/staging/lustre/lustre/lov/lov_io.c
+++ b/drivers/staging/lustre/lustre/lov/lov_io.c
@@ -91,9 +91,9 @@ static int lov_io_sub_init(const struct lu_env *env, struct lov_io *lio,
 			   struct lov_io_sub *sub)
 {
 	struct lov_object *lov = lio->lis_object;
-	struct cl_io      *sub_io;
-	struct cl_object  *sub_obj;
-	struct cl_io      *io  = lio->lis_cl.cis_io;
+	struct cl_io *sub_io;
+	struct cl_object *sub_obj;
+	struct cl_io *io = lio->lis_cl.cis_io;
 	int index = lov_comp_entry(sub->sub_subio_index);
 	int stripe = lov_comp_stripe(sub->sub_subio_index);
 	int rc = 0;
@@ -377,11 +377,11 @@ static u64 lov_offset_mod(u64 val, int delta)
 static int lov_io_iter_init(const struct lu_env *env,
 			    const struct cl_io_slice *ios)
 {
-	struct lov_io	*lio = cl2lov_io(env, ios);
+	struct lov_io *lio = cl2lov_io(env, ios);
 	struct lov_stripe_md *lsm = lio->lis_object->lo_lsm;
 	struct cl_io *io = ios->cis_io;
 	struct lov_layout_entry *le;
-	struct lov_io_sub    *sub;
+	struct lov_io_sub *sub;
 	struct lu_extent ext;
 	int rc = 0;
 	int index;
@@ -461,9 +461,9 @@ static int lov_io_iter_init(const struct lu_env *env,
 static int lov_io_rw_iter_init(const struct lu_env *env,
 			       const struct cl_io_slice *ios)
 {
-	struct lov_io	*lio = cl2lov_io(env, ios);
+	struct lov_io *lio = cl2lov_io(env, ios);
 	struct lov_stripe_md *lsm = lio->lis_object->lo_lsm;
-	struct cl_io	 *io  = ios->cis_io;
+	struct cl_io *io = ios->cis_io;
 	u64 start = io->u.ci_rw.crw_pos;
 	struct lov_stripe_md_entry *lse;
 	int index;
@@ -872,8 +872,8 @@ static int lov_io_fault_start(const struct lu_env *env,
 			      const struct cl_io_slice *ios)
 {
 	struct cl_fault_io *fio;
-	struct lov_io      *lio;
-	struct lov_io_sub  *sub;
+	struct lov_io *lio;
+	struct lov_io_sub *sub;
 
 	fio = &ios->cis_io->u.ci_fault;
 	lio = cl2lov_io(env, ios);
@@ -906,31 +906,31 @@ static void lov_io_fsync_end(const struct lu_env *env,
 static const struct cl_io_operations lov_io_ops = {
 	.op = {
 		[CIT_READ] = {
-			.cio_fini      = lov_io_fini,
-			.cio_iter_init = lov_io_rw_iter_init,
-			.cio_iter_fini = lov_io_iter_fini,
-			.cio_lock      = lov_io_lock,
-			.cio_unlock    = lov_io_unlock,
-			.cio_start     = lov_io_start,
-			.cio_end       = lov_io_end
+			.cio_fini	= lov_io_fini,
+			.cio_iter_init	= lov_io_rw_iter_init,
+			.cio_iter_fini	= lov_io_iter_fini,
+			.cio_lock	= lov_io_lock,
+			.cio_unlock	= lov_io_unlock,
+			.cio_start	= lov_io_start,
+			.cio_end	= lov_io_end
 		},
 		[CIT_WRITE] = {
-			.cio_fini      = lov_io_fini,
-			.cio_iter_init = lov_io_rw_iter_init,
-			.cio_iter_fini = lov_io_iter_fini,
-			.cio_lock      = lov_io_lock,
-			.cio_unlock    = lov_io_unlock,
-			.cio_start     = lov_io_start,
-			.cio_end       = lov_io_end
+			.cio_fini	= lov_io_fini,
+			.cio_iter_init	= lov_io_rw_iter_init,
+			.cio_iter_fini	= lov_io_iter_fini,
+			.cio_lock	= lov_io_lock,
+			.cio_unlock	= lov_io_unlock,
+			.cio_start	= lov_io_start,
+			.cio_end	= lov_io_end
 		},
 		[CIT_SETATTR] = {
-			.cio_fini      = lov_io_fini,
-			.cio_iter_init = lov_io_setattr_iter_init,
-			.cio_iter_fini = lov_io_iter_fini,
-			.cio_lock      = lov_io_lock,
-			.cio_unlock    = lov_io_unlock,
-			.cio_start     = lov_io_start,
-			.cio_end       = lov_io_end
+			.cio_fini	= lov_io_fini,
+			.cio_iter_init	= lov_io_setattr_iter_init,
+			.cio_iter_fini	= lov_io_iter_fini,
+			.cio_lock	= lov_io_lock,
+			.cio_unlock	= lov_io_unlock,
+			.cio_start	= lov_io_start,
+			.cio_end	= lov_io_end
 		},
 		[CIT_DATA_VERSION] = {
 			.cio_fini	= lov_io_fini,
@@ -942,22 +942,22 @@ static void lov_io_fsync_end(const struct lu_env *env,
 			.cio_end	= lov_io_data_version_end,
 		},
 		[CIT_FAULT] = {
-			.cio_fini      = lov_io_fini,
-			.cio_iter_init = lov_io_iter_init,
-			.cio_iter_fini = lov_io_iter_fini,
-			.cio_lock      = lov_io_lock,
-			.cio_unlock    = lov_io_unlock,
-			.cio_start     = lov_io_fault_start,
-			.cio_end       = lov_io_end
+			.cio_fini	= lov_io_fini,
+			.cio_iter_init	= lov_io_iter_init,
+			.cio_iter_fini	= lov_io_iter_fini,
+			.cio_lock	= lov_io_lock,
+			.cio_unlock	= lov_io_unlock,
+			.cio_start	= lov_io_fault_start,
+			.cio_end	= lov_io_end
 		},
 		[CIT_FSYNC] = {
-			.cio_fini      = lov_io_fini,
-			.cio_iter_init = lov_io_iter_init,
-			.cio_iter_fini = lov_io_iter_fini,
-			.cio_lock      = lov_io_lock,
-			.cio_unlock    = lov_io_unlock,
-			.cio_start     = lov_io_start,
-			.cio_end       = lov_io_fsync_end
+			.cio_fini	= lov_io_fini,
+			.cio_iter_init	= lov_io_iter_init,
+			.cio_iter_fini	= lov_io_iter_fini,
+			.cio_lock	= lov_io_lock,
+			.cio_unlock	= lov_io_unlock,
+			.cio_start	= lov_io_start,
+			.cio_end	= lov_io_fsync_end
 		},
 		[CIT_LADVISE] = {
 			.cio_fini	= lov_io_fini,
@@ -969,12 +969,12 @@ static void lov_io_fsync_end(const struct lu_env *env,
 			.cio_end	= lov_io_end
 		},
 		[CIT_MISC] = {
-			.cio_fini   = lov_io_fini
+			.cio_fini	= lov_io_fini
 		}
 	},
 	.cio_read_ahead			= lov_io_read_ahead,
-	.cio_submit                    = lov_io_submit,
-	.cio_commit_async              = lov_io_commit_async,
+	.cio_submit			= lov_io_submit,
+	.cio_commit_async		= lov_io_commit_async,
 };
 
 /*****************************************************************************
@@ -1013,48 +1013,48 @@ static void lov_empty_impossible(const struct lu_env *env,
 static const struct cl_io_operations lov_empty_io_ops = {
 	.op = {
 		[CIT_READ] = {
-			.cio_fini       = lov_empty_io_fini,
+			.cio_fini	= lov_empty_io_fini,
 		},
 		[CIT_WRITE] = {
-			.cio_fini      = lov_empty_io_fini,
-			.cio_iter_init = LOV_EMPTY_IMPOSSIBLE,
-			.cio_lock      = LOV_EMPTY_IMPOSSIBLE,
-			.cio_start     = LOV_EMPTY_IMPOSSIBLE,
-			.cio_end       = LOV_EMPTY_IMPOSSIBLE
+			.cio_fini	= lov_empty_io_fini,
+			.cio_iter_init	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_lock	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_start	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_end	= LOV_EMPTY_IMPOSSIBLE
 		},
 		[CIT_SETATTR] = {
-			.cio_fini      = lov_empty_io_fini,
-			.cio_iter_init = LOV_EMPTY_IMPOSSIBLE,
-			.cio_lock      = LOV_EMPTY_IMPOSSIBLE,
-			.cio_start     = LOV_EMPTY_IMPOSSIBLE,
-			.cio_end       = LOV_EMPTY_IMPOSSIBLE
+			.cio_fini	= lov_empty_io_fini,
+			.cio_iter_init	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_lock	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_start	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_end	= LOV_EMPTY_IMPOSSIBLE
 		},
 		[CIT_FAULT] = {
-			.cio_fini      = lov_empty_io_fini,
-			.cio_iter_init = LOV_EMPTY_IMPOSSIBLE,
-			.cio_lock      = LOV_EMPTY_IMPOSSIBLE,
-			.cio_start     = LOV_EMPTY_IMPOSSIBLE,
-			.cio_end       = LOV_EMPTY_IMPOSSIBLE
+			.cio_fini	= lov_empty_io_fini,
+			.cio_iter_init	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_lock	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_start	= LOV_EMPTY_IMPOSSIBLE,
+			.cio_end	= LOV_EMPTY_IMPOSSIBLE
 		},
 		[CIT_FSYNC] = {
-			.cio_fini   = lov_empty_io_fini
+			.cio_fini	= lov_empty_io_fini
 		},
 		[CIT_LADVISE] = {
 			.cio_fini	= lov_empty_io_fini
 		},
 		[CIT_MISC] = {
-			.cio_fini   = lov_empty_io_fini
+			.cio_fini	= lov_empty_io_fini
 		}
 	},
 	.cio_submit			= lov_empty_io_submit,
-	.cio_commit_async              = LOV_EMPTY_IMPOSSIBLE
+	.cio_commit_async		= LOV_EMPTY_IMPOSSIBLE
 };
 
 int lov_io_init_composite(const struct lu_env *env, struct cl_object *obj,
 			  struct cl_io *io)
 {
-	struct lov_io       *lio = lov_env_io(env);
-	struct lov_object   *lov = cl2lov(obj);
+	struct lov_io *lio = lov_env_io(env);
+	struct lov_object *lov = cl2lov(obj);
 
 	INIT_LIST_HEAD(&lio->lis_active);
 	io->ci_result = lov_io_slice_init(lio, lov, io);
diff --git a/drivers/staging/lustre/lustre/lov/lov_lock.c b/drivers/staging/lustre/lustre/lov/lov_lock.c
index 9a46424..039c902 100644
--- a/drivers/staging/lustre/lustre/lov/lov_lock.c
+++ b/drivers/staging/lustre/lustre/lov/lov_lock.c
@@ -54,9 +54,9 @@ static struct lov_sublock_env *lov_sublock_env_get(const struct lu_env *env,
 						   struct lov_lock_sub *lls)
 {
 	struct lov_sublock_env *subenv;
-	struct lov_io	  *lio    = lov_env_io(env);
-	struct cl_io	   *io     = lio->lis_cl.cis_io;
-	struct lov_io_sub      *sub;
+	struct lov_io *lio = lov_env_io(env);
+	struct cl_io *io = lio->lis_cl.cis_io;
+	struct lov_io_sub *sub;
 
 	subenv = &lov_env_session(env)->ls_subenv;
 
@@ -71,7 +71,7 @@ static struct lov_sublock_env *lov_sublock_env_get(const struct lu_env *env,
 	 */
 	if (!io || !cl_object_same(io->ci_obj, parent->cll_descr.cld_obj)) {
 		subenv->lse_env = env;
-		subenv->lse_io  = io;
+		subenv->lse_io = io;
 	} else {
 		sub = lov_sub_get(env, lio, lls->sub_index);
 		if (!IS_ERR(sub)) {
@@ -154,7 +154,7 @@ static struct lov_lock *lov_lock_sub_init(const struct lu_env *env,
 	 */
 
 	lovlck = kvzalloc(offsetof(struct lov_lock, lls_sub[nr]),
-				 GFP_NOFS);
+			  GFP_NOFS);
 	if (!lovlck)
 		return ERR_PTR(-ENOMEM);
 
@@ -178,11 +178,11 @@ static struct lov_lock *lov_lock_sub_init(const struct lu_env *env,
 				continue;
 
 			LASSERT(!descr->cld_obj);
-			descr->cld_obj   = lovsub2cl(r0->lo_sub[i]);
+			descr->cld_obj = lovsub2cl(r0->lo_sub[i]);
 			descr->cld_start = cl_index(descr->cld_obj, start);
-			descr->cld_end   = cl_index(descr->cld_obj, end);
-			descr->cld_mode  = lock->cll_descr.cld_mode;
-			descr->cld_gid   = lock->cll_descr.cld_gid;
+			descr->cld_end = cl_index(descr->cld_obj, end);
+			descr->cld_mode = lock->cll_descr.cld_mode;
+			descr->cld_gid = lock->cll_descr.cld_gid;
 			descr->cld_enq_flags = lock->cll_descr.cld_enq_flags;
 
 			lls->sub_index = lov_comp_index(index, i);
@@ -244,7 +244,7 @@ static int lov_lock_enqueue(const struct lu_env *env,
 	int rc = 0;
 
 	for (i = 0; i < lovlck->lls_nr; ++i) {
-		struct lov_lock_sub  *lls = &lovlck->lls_sub[i];
+		struct lov_lock_sub *lls = &lovlck->lls_sub[i];
 		struct lov_sublock_env *subenv;
 
 		subenv = lov_sublock_env_get(env, lock, lls);
@@ -293,7 +293,7 @@ static int lov_lock_print(const struct lu_env *env, void *cookie,
 			  lu_printer_t p, const struct cl_lock_slice *slice)
 {
 	struct lov_lock *lck = cl2lov_lock(slice);
-	int	      i;
+	int i;
 
 	(*p)(env, cookie, "%d\n", lck->lls_nr);
 	for (i = 0; i < lck->lls_nr; ++i) {
@@ -307,10 +307,10 @@ static int lov_lock_print(const struct lu_env *env, void *cookie,
 }
 
 static const struct cl_lock_operations lov_lock_ops = {
-	.clo_fini      = lov_lock_fini,
-	.clo_enqueue   = lov_lock_enqueue,
-	.clo_cancel    = lov_lock_cancel,
-	.clo_print     = lov_lock_print
+	.clo_fini	= lov_lock_fini,
+	.clo_enqueue	= lov_lock_enqueue,
+	.clo_cancel	= lov_lock_cancel,
+	.clo_print	= lov_lock_print
 };
 
 int lov_lock_init_composite(const struct lu_env *env, struct cl_object *obj,
@@ -345,8 +345,8 @@ static int lov_empty_lock_print(const struct lu_env *env, void *cookie,
 
 /* XXX: more methods will be added later. */
 static const struct cl_lock_operations lov_empty_lock_ops = {
-	.clo_fini  = lov_empty_lock_fini,
-	.clo_print = lov_empty_lock_print
+	.clo_fini	= lov_empty_lock_fini,
+	.clo_print	= lov_empty_lock_print
 };
 
 int lov_lock_init_empty(const struct lu_env *env, struct cl_object *obj,
diff --git a/drivers/staging/lustre/lustre/lov/lov_obd.c b/drivers/staging/lustre/lustre/lov/lov_obd.c
index 109dd69..04d0a9e 100644
--- a/drivers/staging/lustre/lustre/lov/lov_obd.c
+++ b/drivers/staging/lustre/lustre/lov/lov_obd.c
@@ -425,7 +425,7 @@ static int lov_set_osc_active(struct obd_device *obd, struct obd_uuid *uuid,
 		CERROR("Unknown event(%d) for uuid %s", ev, uuid->uuid);
 	}
 
- out:
+out:
 	lov_tgts_putref(obd);
 	return index;
 }
@@ -925,7 +925,7 @@ int lov_process_config_base(struct obd_device *obd, struct lustre_cfg *lcfg,
 static int lov_statfs_async(struct obd_export *exp, struct obd_info *oinfo,
 			    u64 max_age, struct ptlrpc_request_set *rqset)
 {
-	struct obd_device      *obd = class_exp2obd(exp);
+	struct obd_device *obd = class_exp2obd(exp);
 	struct lov_request_set *set;
 	struct lov_request *req;
 	struct lov_obd *lov;
@@ -997,7 +997,7 @@ static int lov_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 	case IOC_OBD_STATFS: {
 		struct obd_ioctl_data *data = karg;
 		struct obd_device *osc_obd;
-		struct obd_statfs stat_buf = {0};
+		struct obd_statfs stat_buf = { 0 };
 		u32 index;
 		u32 flags;
 
@@ -1281,11 +1281,11 @@ void lov_stripe_unlock(struct lov_stripe_md *md)
 static int lov_quotactl(struct obd_device *obd, struct obd_export *exp,
 			struct obd_quotactl *oqctl)
 {
-	struct lov_obd      *lov = &obd->u.lov;
+	struct lov_obd *lov = &obd->u.lov;
 	struct lov_tgt_desc *tgt;
-	u64		curspace = 0;
-	u64		bhardlimit = 0;
-	int		  i, rc = 0;
+	u64 curspace = 0;
+	u64 bhardlimit = 0;
+	int i, rc = 0;
 
 	if (oqctl->qc_cmd != Q_GETOQUOTA &&
 	    oqctl->qc_cmd != LUSTRE_Q_SETQUOTA) {
@@ -1336,10 +1336,9 @@ static int lov_quotactl(struct obd_device *obd, struct obd_export *exp,
 }
 
 static struct obd_ops lov_obd_ops = {
-	.owner          = THIS_MODULE,
-	.setup          = lov_setup,
-	.cleanup        = lov_cleanup,
-	/*.process_config       = lov_process_config,*/
+	.owner		= THIS_MODULE,
+	.setup		= lov_setup,
+	.cleanup	= lov_cleanup,
 	.connect        = lov_connect,
 	.disconnect     = lov_disconnect,
 	.statfs         = lov_statfs,
diff --git a/drivers/staging/lustre/lustre/lov/lov_object.c b/drivers/staging/lustre/lustre/lov/lov_object.c
index 72f42fc..397ecc1 100644
--- a/drivers/staging/lustre/lustre/lov/lov_object.c
+++ b/drivers/staging/lustre/lustre/lov/lov_object.c
@@ -133,7 +133,7 @@ static int lov_init_sub(const struct lu_env *env, struct lov_object *lov,
 		return -EIO;
 	}
 
-	hdr    = cl_object_header(lov2cl(lov));
+	hdr = cl_object_header(lov2cl(lov));
 	subhdr = cl_object_header(subobj);
 
 	CDEBUG(D_INODE,
@@ -155,7 +155,7 @@ static int lov_init_sub(const struct lu_env *env, struct lov_object *lov,
 		r0->lo_sub[stripe]->lso_index = idx;
 		result = 0;
 	} else {
-		struct lu_object  *old_obj;
+		struct lu_object *old_obj;
 		struct lov_object *old_lov;
 		unsigned int mask = D_INODE;
 
@@ -392,16 +392,16 @@ static void lov_subobject_kill(const struct lu_env *env, struct lov_object *lov,
 			       struct lov_layout_raid0 *r0,
 			       struct lovsub_object *los, int idx)
 {
-	struct cl_object	*sub;
-	struct lu_site	  *site;
+	struct cl_object *sub;
+	struct lu_site *site;
 	wait_queue_head_t *wq;
-	wait_queue_entry_t	  *waiter;
+	wait_queue_entry_t *waiter;
 
 	LASSERT(r0->lo_sub[idx] == los);
 
-	sub  = lovsub2cl(los);
+	sub = lovsub2cl(los);
 	site = sub->co_lu.lo_dev->ld_site;
-	wq   = lu_site_wq_from_fid(site, &sub->co_lu.lo_header->loh_fid);
+	wq = lu_site_wq_from_fid(site, &sub->co_lu.lo_header->loh_fid);
 
 	cl_object_kill(env, sub);
 	/* release a reference to the sub-object and ... */
@@ -570,8 +570,8 @@ static int lov_print_composite(const struct lu_env *env, void *cookie,
 static int lov_print_released(const struct lu_env *env, void *cookie,
 			      lu_printer_t p, const struct lu_object *o)
 {
-	struct lov_object	*lov = lu2lov(o);
-	struct lov_stripe_md	*lsm = lov->lo_lsm;
+	struct lov_object *lov = lu2lov(o);
+	struct lov_stripe_md *lsm = lov->lo_lsm;
 
 	(*p)(env, cookie,
 	     "released: %s, lsm{%p 0x%08X %d %u}:\n",
@@ -684,24 +684,24 @@ static int lov_attr_get_composite(const struct lu_env *env,
 
 static const struct lov_layout_operations lov_dispatch[] = {
 	[LLT_EMPTY] = {
-		.llo_init      = lov_init_empty,
-		.llo_delete    = lov_delete_empty,
-		.llo_fini      = lov_fini_empty,
-		.llo_print     = lov_print_empty,
-		.llo_page_init = lov_page_init_empty,
-		.llo_lock_init = lov_lock_init_empty,
-		.llo_io_init   = lov_io_init_empty,
-		.llo_getattr   = lov_attr_get_empty
+		.llo_init	= lov_init_empty,
+		.llo_delete	= lov_delete_empty,
+		.llo_fini	= lov_fini_empty,
+		.llo_print	= lov_print_empty,
+		.llo_page_init	= lov_page_init_empty,
+		.llo_lock_init	= lov_lock_init_empty,
+		.llo_io_init	= lov_io_init_empty,
+		.llo_getattr	= lov_attr_get_empty
 	},
 	[LLT_RELEASED] = {
-		.llo_init      = lov_init_released,
-		.llo_delete    = lov_delete_empty,
-		.llo_fini      = lov_fini_released,
-		.llo_print     = lov_print_released,
-		.llo_page_init = lov_page_init_empty,
-		.llo_lock_init = lov_lock_init_empty,
-		.llo_io_init   = lov_io_init_released,
-		.llo_getattr   = lov_attr_get_empty
+		.llo_init	= lov_init_released,
+		.llo_delete	= lov_delete_empty,
+		.llo_fini	= lov_fini_released,
+		.llo_print	= lov_print_released,
+		.llo_page_init	= lov_page_init_empty,
+		.llo_lock_init	= lov_lock_init_empty,
+		.llo_io_init	= lov_io_init_released,
+		.llo_getattr	= lov_attr_get_empty
 	},
 	[LLT_COMP] = {
 		.llo_init	= lov_init_composite,
@@ -718,14 +718,14 @@ static int lov_attr_get_composite(const struct lu_env *env,
 /**
  * Performs a double-dispatch based on the layout type of an object.
  */
-#define LOV_2DISPATCH_NOLOCK(obj, op, ...)			      \
-({								      \
-	struct lov_object		      *__obj = (obj);	  \
-	enum lov_layout_type		    __llt;		  \
-									\
-	__llt = __obj->lo_type;					 \
+#define LOV_2DISPATCH_NOLOCK(obj, op, ...)			\
+({								\
+	struct lov_object		      *__obj = (obj);	\
+	enum lov_layout_type		    __llt;		\
+								\
+	__llt = __obj->lo_type;					\
 	LASSERT(__llt < ARRAY_SIZE(lov_dispatch));		\
-	lov_dispatch[__llt].op(__VA_ARGS__);			    \
+	lov_dispatch[__llt].op(__VA_ARGS__);			\
 })
 
 /**
@@ -763,18 +763,18 @@ static inline void lov_conf_thaw(struct lov_object *lov)
 		up_read(&lov->lo_type_guard);
 }
 
-#define LOV_2DISPATCH_MAYLOCK(obj, op, lock, ...)		       \
-({								      \
-	struct lov_object		      *__obj = (obj);	  \
-	int				     __lock = !!(lock);      \
-	typeof(lov_dispatch[0].op(__VA_ARGS__)) __result;	       \
-									\
-	if (__lock)						     \
-		lov_conf_freeze(__obj);					\
-	__result = LOV_2DISPATCH_NOLOCK(obj, op, __VA_ARGS__);	  \
-	if (__lock)						     \
-		lov_conf_thaw(__obj);					\
-	__result;						       \
+#define LOV_2DISPATCH_MAYLOCK(obj, op, lock, ...)		\
+({								\
+	struct lov_object *__obj = (obj);			\
+	int __lock = !!(lock);					\
+	typeof(lov_dispatch[0].op(__VA_ARGS__)) __result;	\
+								\
+	if (__lock)						\
+		lov_conf_freeze(__obj);				\
+	__result = LOV_2DISPATCH_NOLOCK(obj, op, __VA_ARGS__);	\
+	if (__lock)						\
+		lov_conf_thaw(__obj);				\
+	__result;						\
 })
 
 /**
@@ -783,16 +783,16 @@ static inline void lov_conf_thaw(struct lov_object *lov)
 #define LOV_2DISPATCH(obj, op, ...)		     \
 	LOV_2DISPATCH_MAYLOCK(obj, op, 1, __VA_ARGS__)
 
-#define LOV_2DISPATCH_VOID(obj, op, ...)				\
-do {								    \
-	struct lov_object		      *__obj = (obj);	  \
-	enum lov_layout_type		    __llt;		  \
-									\
-	lov_conf_freeze(__obj);						\
-	__llt = __obj->lo_type;					 \
-	LASSERT(__llt < ARRAY_SIZE(lov_dispatch));	\
-	lov_dispatch[__llt].op(__VA_ARGS__);			    \
-	lov_conf_thaw(__obj);						\
+#define LOV_2DISPATCH_VOID(obj, op, ...)			\
+do {								\
+	struct lov_object *__obj = (obj);			\
+	enum lov_layout_type __llt;				\
+								\
+	lov_conf_freeze(__obj);					\
+	__llt = __obj->lo_type;					\
+	LASSERT(__llt < ARRAY_SIZE(lov_dispatch));		\
+	lov_dispatch[__llt].op(__VA_ARGS__);			\
+	lov_conf_thaw(__obj);					\
 } while (0)
 
 static void lov_conf_lock(struct lov_object *lov)
@@ -901,10 +901,10 @@ static int lov_layout_change(const struct lu_env *unused,
 int lov_object_init(const struct lu_env *env, struct lu_object *obj,
 		    const struct lu_object_conf *conf)
 {
-	struct lov_object	    *lov   = lu2lov(obj);
+	struct lov_object *lov = lu2lov(obj);
 	struct lov_device *dev = lov_object_dev(lov);
-	const struct cl_object_conf  *cconf = lu2cl_conf(conf);
-	union  lov_layout_state      *set   = &lov->u;
+	const struct cl_object_conf *cconf = lu2cl_conf(conf);
+	union  lov_layout_state *set = &lov->u;
 	const struct lov_layout_operations *ops;
 	struct lov_stripe_md *lsm = NULL;
 	int rc;
@@ -938,9 +938,9 @@ int lov_object_init(const struct lu_env *env, struct lu_object *obj,
 static int lov_conf_set(const struct lu_env *env, struct cl_object *obj,
 			const struct cl_object_conf *conf)
 {
-	struct lov_stripe_md	*lsm = NULL;
-	struct lov_object	*lov = cl2lov(obj);
-	int			 result = 0;
+	struct lov_stripe_md *lsm = NULL;
+	struct lov_object *lov = cl2lov(obj);
+	int result = 0;
 
 	if (conf->coc_opc == OBJECT_CONF_SET &&
 	    conf->u.coc_layout.lb_buf) {
@@ -1662,25 +1662,25 @@ static loff_t lov_object_maxbytes(struct cl_object *obj)
 }
 
 static const struct cl_object_operations lov_ops = {
-	.coo_page_init = lov_page_init,
-	.coo_lock_init = lov_lock_init,
-	.coo_io_init   = lov_io_init,
-	.coo_attr_get  = lov_attr_get,
-	.coo_attr_update = lov_attr_update,
-	.coo_conf_set  = lov_conf_set,
-	.coo_getstripe = lov_object_getstripe,
-	.coo_layout_get	 = lov_object_layout_get,
-	.coo_maxbytes	 = lov_object_maxbytes,
-	.coo_fiemap	 = lov_object_fiemap,
+	.coo_page_init		= lov_page_init,
+	.coo_lock_init		= lov_lock_init,
+	.coo_io_init		= lov_io_init,
+	.coo_attr_get		= lov_attr_get,
+	.coo_attr_update	= lov_attr_update,
+	.coo_conf_set		= lov_conf_set,
+	.coo_getstripe		= lov_object_getstripe,
+	.coo_layout_get		= lov_object_layout_get,
+	.coo_maxbytes		= lov_object_maxbytes,
+	.coo_fiemap		= lov_object_fiemap,
 };
 
 static const struct lu_object_operations lov_lu_obj_ops = {
-	.loo_object_init      = lov_object_init,
-	.loo_object_delete    = lov_object_delete,
-	.loo_object_release   = NULL,
-	.loo_object_free      = lov_object_free,
-	.loo_object_print     = lov_object_print,
-	.loo_object_invariant = NULL
+	.loo_object_init	= lov_object_init,
+	.loo_object_delete	= lov_object_delete,
+	.loo_object_release	= NULL,
+	.loo_object_free	= lov_object_free,
+	.loo_object_print	= lov_object_print,
+	.loo_object_invariant	= NULL
 };
 
 struct lu_object *lov_object_alloc(const struct lu_env *env,
@@ -1688,7 +1688,7 @@ struct lu_object *lov_object_alloc(const struct lu_env *env,
 				   struct lu_device *dev)
 {
 	struct lov_object *lov;
-	struct lu_object  *obj;
+	struct lu_object *obj;
 
 	lov = kmem_cache_zalloc(lov_object_kmem, GFP_NOFS);
 	if (lov) {
diff --git a/drivers/staging/lustre/lustre/lov/lov_offset.c b/drivers/staging/lustre/lustre/lov/lov_offset.c
index ab02c34..26f5066 100644
--- a/drivers/staging/lustre/lustre/lov/lov_offset.c
+++ b/drivers/staging/lustre/lustre/lov/lov_offset.c
@@ -135,7 +135,7 @@ pgoff_t lov_stripe_pgoff(struct lov_stripe_md *lsm, int index,
 int lov_stripe_offset(struct lov_stripe_md *lsm, int index, u64 lov_off,
 		      int stripeno, u64 *obdoff)
 {
-	unsigned long ssize  = lsm->lsm_entries[index]->lsme_stripe_size;
+	unsigned long ssize = lsm->lsm_entries[index]->lsme_stripe_size;
 	u64 stripe_off, this_stripe, swidth;
 	int ret = 0;
 
@@ -188,7 +188,7 @@ int lov_stripe_offset(struct lov_stripe_md *lsm, int index, u64 lov_off,
 u64 lov_size_to_stripe(struct lov_stripe_md *lsm, int index, u64 file_size,
 		       int stripeno)
 {
-	unsigned long ssize  = lsm->lsm_entries[index]->lsme_stripe_size;
+	unsigned long ssize = lsm->lsm_entries[index]->lsme_stripe_size;
 	u64 stripe_off, this_stripe, swidth;
 
 	if (file_size == OBD_OBJECT_EOF)
@@ -270,7 +270,7 @@ int lov_stripe_intersects(struct lov_stripe_md *lsm, int index, int stripeno,
 /* compute which stripe number "lov_off" will be written into */
 int lov_stripe_number(struct lov_stripe_md *lsm, int index, u64 lov_off)
 {
-	unsigned long ssize  = lsm->lsm_entries[index]->lsme_stripe_size;
+	unsigned long ssize = lsm->lsm_entries[index]->lsme_stripe_size;
 	u64 stripe_off, swidth;
 
 	swidth = stripe_width(lsm, index);
diff --git a/drivers/staging/lustre/lustre/lov/lov_pack.c b/drivers/staging/lustre/lustre/lov/lov_pack.c
index fde5160..18ce9f9 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pack.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pack.c
@@ -52,7 +52,7 @@
 void lov_dump_lmm_common(int level, void *lmmp)
 {
 	struct lov_mds_md *lmm = lmmp;
-	struct ost_id	oi;
+	struct ost_id oi;
 
 	lmm_oi_le_to_cpu(&oi, &lmm->lmm_oi);
 	CDEBUG(level, "objid " DOSTID ", magic 0x%08x, pattern %#x\n",
@@ -76,7 +76,7 @@ static void lov_dump_lmm_objects(int level, struct lov_ost_data *lod,
 	}
 
 	for (i = 0; i < stripe_count; ++i, ++lod) {
-		struct ost_id	oi;
+		struct ost_id oi;
 
 		ostid_le_to_cpu(&lod->l_ost_oi, &oi);
 		CDEBUG(level, "stripe %u idx %u subobj " DOSTID "\n", i,
diff --git a/drivers/staging/lustre/lustre/lov/lov_page.c b/drivers/staging/lustre/lustre/lov/lov_page.c
index 90e2981..08485a9 100644
--- a/drivers/staging/lustre/lustre/lov/lov_page.c
+++ b/drivers/staging/lustre/lustre/lov/lov_page.c
@@ -62,24 +62,24 @@ static int lov_comp_page_print(const struct lu_env *env,
 }
 
 static const struct cl_page_operations lov_comp_page_ops = {
-	.cpo_print  = lov_comp_page_print
+	.cpo_print	= lov_comp_page_print
 };
 
 int lov_page_init_composite(const struct lu_env *env, struct cl_object *obj,
 			    struct cl_page *page, pgoff_t index)
 {
 	struct lov_object *loo = cl2lov(obj);
-	struct lov_io     *lio = lov_env_io(env);
+	struct lov_io *lio = lov_env_io(env);
 	struct lov_layout_raid0 *r0;
-	struct cl_object  *subobj;
-	struct cl_object  *o;
+	struct cl_object *subobj;
+	struct cl_object *o;
 	struct lov_io_sub *sub;
-	struct lov_page   *lpg = cl_object_page_slice(obj, page);
+	struct lov_page *lpg = cl_object_page_slice(obj, page);
 	u64 offset;
-	u64	    suboff;
-	int		stripe;
+	u64 suboff;
+	int stripe;
 	int entry;
-	int		rc;
+	int rc;
 
 	offset = cl_offset(obj, index);
 	entry = lov_lsm_entry(loo->lo_lsm, offset);
@@ -127,7 +127,7 @@ static int lov_empty_page_print(const struct lu_env *env,
 }
 
 static const struct cl_page_operations lov_empty_page_ops = {
-	.cpo_print = lov_empty_page_print
+	.cpo_print	= lov_empty_page_print
 };
 
 int lov_page_init_empty(const struct lu_env *env, struct cl_object *obj,
diff --git a/drivers/staging/lustre/lustre/lov/lov_pool.c b/drivers/staging/lustre/lustre/lov/lov_pool.c
index 177f5a5..833fac9 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pool.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pool.c
@@ -96,9 +96,9 @@ void lov_pool_putref(struct pool_desc *pool)
  */
 #define POOL_IT_MAGIC 0xB001CEA0
 struct pool_iterator {
-	int magic;
-	struct pool_desc *pool;
-	int idx;	/* from 0 to pool_tgt_size - 1 */
+	int			magic;
+	struct pool_desc	*pool;
+	int			idx;	/* from 0 to pool_tgt_size - 1 */
 };
 
 static void *pool_proc_next(struct seq_file *s, void *v, loff_t *pos)
@@ -204,10 +204,10 @@ static int pool_proc_show(struct seq_file *s, void *v)
 }
 
 static const struct seq_operations pool_proc_ops = {
-	.start	  = pool_proc_start,
-	.next	   = pool_proc_next,
-	.stop	   = pool_proc_stop,
-	.show	   = pool_proc_show,
+	.start		= pool_proc_start,
+	.next		= pool_proc_next,
+	.stop		= pool_proc_stop,
+	.show		= pool_proc_show,
 };
 
 static int pool_proc_open(struct inode *inode, struct file *file)
@@ -224,9 +224,9 @@ static int pool_proc_open(struct inode *inode, struct file *file)
 }
 
 static const struct file_operations pool_proc_operations = {
-	.open	   = pool_proc_open,
-	.read	   = seq_read,
-	.llseek	 = seq_lseek,
+	.open		= pool_proc_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
 	.release	= seq_release,
 };
 
diff --git a/drivers/staging/lustre/lustre/lov/lov_request.c b/drivers/staging/lustre/lustre/lov/lov_request.c
index 45dca36..7f591ba 100644
--- a/drivers/staging/lustre/lustre/lov/lov_request.c
+++ b/drivers/staging/lustre/lustre/lov/lov_request.c
@@ -137,12 +137,12 @@ static int lov_check_and_wait_active(struct lov_obd *lov, int ost_idx)
 }
 
 #define LOV_U64_MAX ((u64)~0ULL)
-#define LOV_SUM_MAX(tot, add)					   \
-	do {							    \
-		if ((tot) + (add) < (tot))			      \
-			(tot) = LOV_U64_MAX;			    \
-		else						    \
-			(tot) += (add);				 \
+#define LOV_SUM_MAX(tot, add)			\
+	do {					\
+		if ((tot) + (add) < (tot))	\
+			(tot) = LOV_U64_MAX;	\
+		else				\
+			(tot) += (add);		\
 	} while (0)
 
 static int lov_fini_statfs(struct obd_device *obd, struct obd_statfs *osfs,
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_dev.c b/drivers/staging/lustre/lustre/lov/lovsub_dev.c
index 7e89a2e..69380fc 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_dev.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_dev.c
@@ -52,7 +52,7 @@
 static int lovsub_device_init(const struct lu_env *env, struct lu_device *d,
 			      const char *name, struct lu_device *next)
 {
-	struct lovsub_device  *lsd = lu2lovsub_dev(d);
+	struct lovsub_device *lsd = lu2lovsub_dev(d);
 	struct lu_device_type *ldt;
 	int rc;
 
@@ -85,8 +85,8 @@ static struct lu_device *lovsub_device_fini(const struct lu_env *env,
 static struct lu_device *lovsub_device_free(const struct lu_env *env,
 					    struct lu_device *d)
 {
-	struct lovsub_device *lsd  = lu2lovsub_dev(d);
-	struct lu_device     *next = cl2lu_dev(lsd->acid_next);
+	struct lovsub_device *lsd = lu2lovsub_dev(d);
+	struct lu_device *next = cl2lu_dev(lsd->acid_next);
 
 	if (atomic_read(&d->ld_ref) && d->ld_site) {
 		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_ERROR, NULL);
@@ -98,16 +98,16 @@ static struct lu_device *lovsub_device_free(const struct lu_env *env,
 }
 
 static const struct lu_device_operations lovsub_lu_ops = {
-	.ldo_object_alloc      = lovsub_object_alloc,
-	.ldo_process_config    = NULL,
-	.ldo_recovery_complete = NULL
+	.ldo_object_alloc		= lovsub_object_alloc,
+	.ldo_process_config		= NULL,
+	.ldo_recovery_complete		= NULL
 };
 
 static struct lu_device *lovsub_device_alloc(const struct lu_env *env,
 					     struct lu_device_type *t,
 					     struct lustre_cfg *cfg)
 {
-	struct lu_device     *d;
+	struct lu_device *d;
 	struct lovsub_device *lsd;
 
 	lsd = kzalloc(sizeof(*lsd), GFP_NOFS);
@@ -117,7 +117,7 @@ static struct lu_device *lovsub_device_alloc(const struct lu_env *env,
 		result = cl_device_init(&lsd->acid_cl, t);
 		if (result == 0) {
 			d = lovsub2lu_dev(lsd);
-			d->ld_ops	 = &lovsub_lu_ops;
+			d->ld_ops = &lovsub_lu_ops;
 		} else {
 			d = ERR_PTR(result);
 		}
@@ -128,20 +128,20 @@ static struct lu_device *lovsub_device_alloc(const struct lu_env *env,
 }
 
 static const struct lu_device_type_operations lovsub_device_type_ops = {
-	.ldto_device_alloc = lovsub_device_alloc,
-	.ldto_device_free  = lovsub_device_free,
+	.ldto_device_alloc	= lovsub_device_alloc,
+	.ldto_device_free	= lovsub_device_free,
 
-	.ldto_device_init    = lovsub_device_init,
-	.ldto_device_fini    = lovsub_device_fini
+	.ldto_device_init	= lovsub_device_init,
+	.ldto_device_fini	= lovsub_device_fini
 };
 
 #define LUSTRE_LOVSUB_NAME	 "lovsub"
 
 struct lu_device_type lovsub_device_type = {
-	.ldt_tags     = LU_DEVICE_CL,
-	.ldt_name     = LUSTRE_LOVSUB_NAME,
-	.ldt_ops      = &lovsub_device_type_ops,
-	.ldt_ctx_tags = LCT_CL_THREAD
+	.ldt_tags		= LU_DEVICE_CL,
+	.ldt_name		= LUSTRE_LOVSUB_NAME,
+	.ldt_ops		= &lovsub_device_type_ops,
+	.ldt_ctx_tags		= LCT_CL_THREAD
 };
 
 /** @} lov */
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_lock.c b/drivers/staging/lustre/lustre/lov/lovsub_lock.c
index ea492be..7b67c92 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_lock.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_lock.c
@@ -52,14 +52,14 @@
 static void lovsub_lock_fini(const struct lu_env *env,
 			     struct cl_lock_slice *slice)
 {
-	struct lovsub_lock   *lsl;
+	struct lovsub_lock *lsl;
 
 	lsl = cl2lovsub_lock(slice);
 	kmem_cache_free(lovsub_lock_kmem, lsl);
 }
 
 static const struct cl_lock_operations lovsub_lock_ops = {
-	.clo_fini    = lovsub_lock_fini,
+	.clo_fini	= lovsub_lock_fini,
 };
 
 int lovsub_lock_init(const struct lu_env *env, struct cl_object *obj,
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_object.c b/drivers/staging/lustre/lustre/lov/lovsub_object.c
index da4b7f1..6ba09f1 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_object.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_object.c
@@ -52,10 +52,9 @@
 int lovsub_object_init(const struct lu_env *env, struct lu_object *obj,
 		       const struct lu_object_conf *conf)
 {
-	struct lovsub_device  *dev   = lu2lovsub_dev(obj->lo_dev);
-	struct lu_object      *below;
-	struct lu_device      *under;
-
+	struct lovsub_device *dev = lu2lovsub_dev(obj->lo_dev);
+	struct lu_object *below;
+	struct lu_device *under;
 	int result;
 
 	under = &dev->acid_next->cd_lu_dev;
@@ -73,7 +72,7 @@ int lovsub_object_init(const struct lu_env *env, struct lu_object *obj,
 static void lovsub_object_free(const struct lu_env *env, struct lu_object *obj)
 {
 	struct lovsub_object *los = lu2lovsub(obj);
-	struct lov_object    *lov = los->lso_super;
+	struct lov_object *lov = los->lso_super;
 
 	/* We can't assume lov was assigned here, because of the shadow
 	 * object handling in lu_object_find.
@@ -146,20 +145,20 @@ static void lovsub_req_attr_set(const struct lu_env *env, struct cl_object *obj,
 }
 
 static const struct cl_object_operations lovsub_ops = {
-	.coo_page_init = lovsub_page_init,
-	.coo_lock_init = lovsub_lock_init,
-	.coo_attr_update = lovsub_attr_update,
+	.coo_page_init		= lovsub_page_init,
+	.coo_lock_init		= lovsub_lock_init,
+	.coo_attr_update	= lovsub_attr_update,
 	.coo_glimpse		= lovsub_object_glimpse,
 	.coo_req_attr_set	= lovsub_req_attr_set
 };
 
 static const struct lu_object_operations lovsub_lu_obj_ops = {
-	.loo_object_init      = lovsub_object_init,
-	.loo_object_delete    = NULL,
-	.loo_object_release   = NULL,
-	.loo_object_free      = lovsub_object_free,
-	.loo_object_print     = lovsub_object_print,
-	.loo_object_invariant = NULL
+	.loo_object_init	= lovsub_object_init,
+	.loo_object_delete	= NULL,
+	.loo_object_release	= NULL,
+	.loo_object_free	= lovsub_object_free,
+	.loo_object_print	= lovsub_object_print,
+	.loo_object_invariant	= NULL
 };
 
 struct lu_object *lovsub_object_alloc(const struct lu_env *env,
@@ -167,7 +166,7 @@ struct lu_object *lovsub_object_alloc(const struct lu_env *env,
 				      struct lu_device *dev)
 {
 	struct lovsub_object *los;
-	struct lu_object     *obj;
+	struct lu_object *obj;
 
 	los = kmem_cache_zalloc(lovsub_object_kmem, GFP_NOFS);
 	if (los) {
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_page.c b/drivers/staging/lustre/lustre/lov/lovsub_page.c
index 915520b..a8aa583 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_page.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_page.c
@@ -53,7 +53,7 @@ static void lovsub_page_fini(const struct lu_env *env,
 }
 
 static const struct cl_page_operations lovsub_page_ops = {
-	.cpo_fini   = lovsub_page_fini
+	.cpo_fini	= lovsub_page_fini
 };
 
 int lovsub_page_init(const struct lu_env *env, struct cl_object *obj,
diff --git a/drivers/staging/lustre/lustre/lov/lproc_lov.c b/drivers/staging/lustre/lustre/lov/lproc_lov.c
index 771c6f8..fc53f23 100644
--- a/drivers/staging/lustre/lustre/lov/lproc_lov.c
+++ b/drivers/staging/lustre/lustre/lov/lproc_lov.c
@@ -239,10 +239,10 @@ static int lov_tgt_seq_show(struct seq_file *p, void *v)
 }
 
 static const struct seq_operations lov_tgt_sops = {
-	.start = lov_tgt_seq_start,
-	.stop = lov_tgt_seq_stop,
-	.next = lov_tgt_seq_next,
-	.show = lov_tgt_seq_show,
+	.start		= lov_tgt_seq_start,
+	.stop		= lov_tgt_seq_stop,
+	.next		= lov_tgt_seq_next,
+	.show		= lov_tgt_seq_show,
 };
 
 static int lov_target_seq_open(struct inode *inode, struct file *file)
@@ -260,11 +260,11 @@ static int lov_target_seq_open(struct inode *inode, struct file *file)
 }
 
 static const struct file_operations lov_debugfs_target_fops = {
-	.owner   = THIS_MODULE,
-	.open    = lov_target_seq_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = lprocfs_seq_release,
+	.owner		= THIS_MODULE,
+	.open		= lov_target_seq_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= lprocfs_seq_release,
 };
 
 static struct attribute *lov_attrs[] = {
-- 
1.8.3.1


* [lustre-devel] [PATCH 14/26] mdc: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (12 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 13/26] lov: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 15/26] mgc: " James Simmons
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The mdc code is very messy and difficult to read. Remove excess
white space and consistently align the fields of data structure
initializers so they are easy on the eyes.
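
For illustration only (this snippet is not part of the patch; the
structure and handlers in it are made up), the alignment rule applied
throughout this series turns ad-hoc space padding in initializers into
a single tab-aligned column:

	struct demo_ops {
		int (*setup)(void);
		int (*cleanup)(void);
	};

	static int demo_setup(void)   { return 0; }
	static int demo_cleanup(void) { return 0; }

	/* before: members padded with spaces to uneven columns */
	static struct demo_ops before_style = {
		.setup   = demo_setup,
		.cleanup = demo_cleanup,
	};

	/* after: members aligned with tabs to one common column */
	static struct demo_ops after_style = {
		.setup		= demo_setup,
		.cleanup	= demo_cleanup,
	};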

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/mdc/mdc_lib.c     | 126 ++++++-------
 drivers/staging/lustre/lustre/mdc/mdc_locks.c   |  80 ++++-----
 drivers/staging/lustre/lustre/mdc/mdc_reint.c   |   2 +-
 drivers/staging/lustre/lustre/mdc/mdc_request.c | 228 ++++++++++++------------
 4 files changed, 218 insertions(+), 218 deletions(-)

diff --git a/drivers/staging/lustre/lustre/mdc/mdc_lib.c b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
index 3dfc863..55d2ea1 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_lib.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
@@ -153,30 +153,30 @@ void mdc_create_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 		     uid_t uid, gid_t gid, kernel_cap_t cap_effective,
 		     u64 rdev)
 {
-	struct mdt_rec_create	*rec;
-	char			*tmp;
-	u64			 flags;
+	struct mdt_rec_create *rec;
+	char *tmp;
+	u64 flags;
 
 	BUILD_BUG_ON(sizeof(struct mdt_rec_reint) != sizeof(struct mdt_rec_create));
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 
-	rec->cr_opcode   = REINT_CREATE;
-	rec->cr_fsuid    = uid;
-	rec->cr_fsgid    = gid;
-	rec->cr_cap      = cap_effective.cap[0];
-	rec->cr_fid1     = op_data->op_fid1;
-	rec->cr_fid2     = op_data->op_fid2;
-	rec->cr_mode     = mode;
-	rec->cr_rdev     = rdev;
-	rec->cr_time     = op_data->op_mod_time;
+	rec->cr_opcode = REINT_CREATE;
+	rec->cr_fsuid = uid;
+	rec->cr_fsgid = gid;
+	rec->cr_cap = cap_effective.cap[0];
+	rec->cr_fid1 = op_data->op_fid1;
+	rec->cr_fid2 = op_data->op_fid2;
+	rec->cr_mode = mode;
+	rec->cr_rdev = rdev;
+	rec->cr_time = op_data->op_mod_time;
 	rec->cr_suppgid1 = op_data->op_suppgids[0];
 	rec->cr_suppgid2 = op_data->op_suppgids[1];
 	flags = 0;
 	if (op_data->op_bias & MDS_CREATE_VOLATILE)
 		flags |= MDS_OPEN_VOLATILE;
 	set_mrc_cr_flags(rec, flags);
-	rec->cr_bias     = op_data->op_bias;
-	rec->cr_umask    = current_umask();
+	rec->cr_bias = op_data->op_bias;
+	rec->cr_umask = current_umask();
 
 	mdc_pack_name(req, &RMF_NAME, op_data->op_name, op_data->op_namelen);
 	if (data) {
@@ -229,21 +229,21 @@ void mdc_open_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 
 	/* XXX do something about time, uid, gid */
-	rec->cr_opcode   = REINT_OPEN;
-	rec->cr_fsuid    = from_kuid(&init_user_ns, current_fsuid());
-	rec->cr_fsgid    = from_kgid(&init_user_ns, current_fsgid());
-	rec->cr_cap      = current_cap().cap[0];
+	rec->cr_opcode = REINT_OPEN;
+	rec->cr_fsuid = from_kuid(&init_user_ns, current_fsuid());
+	rec->cr_fsgid = from_kgid(&init_user_ns, current_fsgid());
+	rec->cr_cap = current_cap().cap[0];
 	rec->cr_fid1 = op_data->op_fid1;
 	rec->cr_fid2 = op_data->op_fid2;
 
-	rec->cr_mode     = mode;
+	rec->cr_mode = mode;
 	cr_flags = mds_pack_open_flags(flags);
-	rec->cr_rdev     = rdev;
-	rec->cr_time     = op_data->op_mod_time;
+	rec->cr_rdev = rdev;
+	rec->cr_time = op_data->op_mod_time;
 	rec->cr_suppgid1 = op_data->op_suppgids[0];
 	rec->cr_suppgid2 = op_data->op_suppgids[1];
-	rec->cr_bias     = op_data->op_bias;
-	rec->cr_umask    = current_umask();
+	rec->cr_bias = op_data->op_bias;
+	rec->cr_umask = current_umask();
 	rec->cr_old_handle = op_data->op_handle;
 
 	if (op_data->op_name) {
@@ -313,24 +313,24 @@ static inline u64 attr_pack(unsigned int ia_valid, enum op_xvalid ia_xvalid)
 static void mdc_setattr_pack_rec(struct mdt_rec_setattr *rec,
 				 struct md_op_data *op_data)
 {
-	rec->sa_opcode  = REINT_SETATTR;
-	rec->sa_fsuid   = from_kuid(&init_user_ns, current_fsuid());
-	rec->sa_fsgid   = from_kgid(&init_user_ns, current_fsgid());
-	rec->sa_cap     = current_cap().cap[0];
+	rec->sa_opcode = REINT_SETATTR;
+	rec->sa_fsuid = from_kuid(&init_user_ns, current_fsuid());
+	rec->sa_fsgid = from_kgid(&init_user_ns, current_fsgid());
+	rec->sa_cap = current_cap().cap[0];
 	rec->sa_suppgid = -1;
 
-	rec->sa_fid    = op_data->op_fid1;
+	rec->sa_fid = op_data->op_fid1;
 	rec->sa_valid  = attr_pack(op_data->op_attr.ia_valid,
 				   op_data->op_xvalid);
-	rec->sa_mode   = op_data->op_attr.ia_mode;
-	rec->sa_uid    = from_kuid(&init_user_ns, op_data->op_attr.ia_uid);
-	rec->sa_gid    = from_kgid(&init_user_ns, op_data->op_attr.ia_gid);
+	rec->sa_mode = op_data->op_attr.ia_mode;
+	rec->sa_uid = from_kuid(&init_user_ns, op_data->op_attr.ia_uid);
+	rec->sa_gid = from_kgid(&init_user_ns, op_data->op_attr.ia_gid);
 	rec->sa_projid = op_data->op_projid;
-	rec->sa_size   = op_data->op_attr.ia_size;
+	rec->sa_size = op_data->op_attr.ia_size;
 	rec->sa_blocks = op_data->op_attr_blocks;
-	rec->sa_atime  = op_data->op_attr.ia_atime.tv_sec;
-	rec->sa_mtime  = op_data->op_attr.ia_mtime.tv_sec;
-	rec->sa_ctime  = op_data->op_attr.ia_ctime.tv_sec;
+	rec->sa_atime = op_data->op_attr.ia_atime.tv_sec;
+	rec->sa_mtime = op_data->op_attr.ia_mtime.tv_sec;
+	rec->sa_ctime = op_data->op_attr.ia_ctime.tv_sec;
 	rec->sa_attr_flags = op_data->op_attr_flags;
 	if ((op_data->op_attr.ia_valid & ATTR_GID) &&
 	    in_group_p(op_data->op_attr.ia_gid))
@@ -383,18 +383,18 @@ void mdc_unlink_pack(struct ptlrpc_request *req, struct md_op_data *op_data)
 	BUILD_BUG_ON(sizeof(struct mdt_rec_reint) != sizeof(struct mdt_rec_unlink));
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 
-	rec->ul_opcode   = op_data->op_cli_flags & CLI_RM_ENTRY ?
-					REINT_RMENTRY : REINT_UNLINK;
-	rec->ul_fsuid    = op_data->op_fsuid;
-	rec->ul_fsgid    = op_data->op_fsgid;
-	rec->ul_cap      = op_data->op_cap.cap[0];
-	rec->ul_mode     = op_data->op_mode;
+	rec->ul_opcode = op_data->op_cli_flags & CLI_RM_ENTRY ?
+			 REINT_RMENTRY : REINT_UNLINK;
+	rec->ul_fsuid = op_data->op_fsuid;
+	rec->ul_fsgid = op_data->op_fsgid;
+	rec->ul_cap = op_data->op_cap.cap[0];
+	rec->ul_mode = op_data->op_mode;
 	rec->ul_suppgid1 = op_data->op_suppgids[0];
 	rec->ul_suppgid2 = -1;
-	rec->ul_fid1     = op_data->op_fid1;
-	rec->ul_fid2     = op_data->op_fid2;
-	rec->ul_time     = op_data->op_mod_time;
-	rec->ul_bias     = op_data->op_bias;
+	rec->ul_fid1 = op_data->op_fid1;
+	rec->ul_fid2 = op_data->op_fid2;
+	rec->ul_time = op_data->op_mod_time;
+	rec->ul_bias = op_data->op_bias;
 
 	mdc_pack_name(req, &RMF_NAME, op_data->op_name, op_data->op_namelen);
 }
@@ -406,16 +406,16 @@ void mdc_link_pack(struct ptlrpc_request *req, struct md_op_data *op_data)
 	BUILD_BUG_ON(sizeof(struct mdt_rec_reint) != sizeof(struct mdt_rec_link));
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 
-	rec->lk_opcode   = REINT_LINK;
-	rec->lk_fsuid    = op_data->op_fsuid; /* current->fsuid; */
-	rec->lk_fsgid    = op_data->op_fsgid; /* current->fsgid; */
-	rec->lk_cap      = op_data->op_cap.cap[0]; /* current->cap_effective; */
+	rec->lk_opcode = REINT_LINK;
+	rec->lk_fsuid = op_data->op_fsuid; /* current->fsuid; */
+	rec->lk_fsgid = op_data->op_fsgid; /* current->fsgid; */
+	rec->lk_cap = op_data->op_cap.cap[0]; /* current->cap_effective; */
 	rec->lk_suppgid1 = op_data->op_suppgids[0];
 	rec->lk_suppgid2 = op_data->op_suppgids[1];
-	rec->lk_fid1     = op_data->op_fid1;
-	rec->lk_fid2     = op_data->op_fid2;
-	rec->lk_time     = op_data->op_mod_time;
-	rec->lk_bias     = op_data->op_bias;
+	rec->lk_fid1 = op_data->op_fid1;
+	rec->lk_fid2 = op_data->op_fid2;
+	rec->lk_time = op_data->op_mod_time;
+	rec->lk_bias = op_data->op_bias;
 
 	mdc_pack_name(req, &RMF_NAME, op_data->op_name, op_data->op_namelen);
 }
@@ -455,18 +455,18 @@ void mdc_rename_pack(struct ptlrpc_request *req, struct md_op_data *op_data,
 	rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 
 	/* XXX do something about time, uid, gid */
-	rec->rn_opcode	 = op_data->op_cli_flags & CLI_MIGRATE ?
-				REINT_MIGRATE : REINT_RENAME;
-	rec->rn_fsuid    = op_data->op_fsuid;
-	rec->rn_fsgid    = op_data->op_fsgid;
-	rec->rn_cap      = op_data->op_cap.cap[0];
+	rec->rn_opcode = op_data->op_cli_flags & CLI_MIGRATE ?
+			 REINT_MIGRATE : REINT_RENAME;
+	rec->rn_fsuid = op_data->op_fsuid;
+	rec->rn_fsgid = op_data->op_fsgid;
+	rec->rn_cap = op_data->op_cap.cap[0];
 	rec->rn_suppgid1 = op_data->op_suppgids[0];
 	rec->rn_suppgid2 = op_data->op_suppgids[1];
-	rec->rn_fid1     = op_data->op_fid1;
-	rec->rn_fid2     = op_data->op_fid2;
-	rec->rn_time     = op_data->op_mod_time;
-	rec->rn_mode     = op_data->op_mode;
-	rec->rn_bias     = op_data->op_bias;
+	rec->rn_fid1 = op_data->op_fid1;
+	rec->rn_fid2 = op_data->op_fid2;
+	rec->rn_time = op_data->op_mod_time;
+	rec->rn_mode = op_data->op_mode;
+	rec->rn_bias = op_data->op_bias;
 
 	mdc_pack_name(req, &RMF_NAME, old, oldlen);
 
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_locks.c b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
index e16dce6..430c422 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_locks.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
@@ -203,7 +203,7 @@ static inline void mdc_clear_replay_flag(struct ptlrpc_request *req, int rc)
 }
 
 /* Save a large LOV EA into the request buffer so that it is available
- * for replay.  We don't do this in the initial request because the
+ * for replay. We don't do this in the initial request because the
  * original request doesn't need this buffer (at most it sends just the
  * lov_mds_md) and it is a waste of RAM/bandwidth to send the empty
  * buffer and may also be difficult to allocate and save a very large
@@ -247,14 +247,14 @@ static int mdc_save_lovea(struct ptlrpc_request *req,
 		     struct md_op_data *op_data)
 {
 	struct ptlrpc_request *req;
-	struct obd_device     *obddev = class_exp2obd(exp);
-	struct ldlm_intent    *lit;
+	struct obd_device *obddev = class_exp2obd(exp);
+	struct ldlm_intent *lit;
 	const void *lmm = op_data->op_data;
 	u32 lmmsize = op_data->op_data_size;
 	LIST_HEAD(cancels);
-	int		    count = 0;
-	int		    mode;
-	int		    rc;
+	int count = 0;
+	int mode;
+	int rc;
 
 	it->it_create_mode = (it->it_create_mode & ~S_IFMT) | S_IFREG;
 
@@ -344,8 +344,8 @@ static int mdc_save_lovea(struct ptlrpc_request *req,
 			 struct md_op_data *op_data)
 {
 	u32 ea_vals_buf_size = GA_DEFAULT_EA_VAL_LEN * GA_DEFAULT_EA_NUM;
-	struct ptlrpc_request	*req;
-	struct ldlm_intent	*lit;
+	struct ptlrpc_request *req;
+	struct ldlm_intent *lit;
 	int rc, count = 0;
 	LIST_HEAD(cancels);
 
@@ -403,9 +403,9 @@ static struct ptlrpc_request *mdc_intent_unlink_pack(struct obd_export *exp,
 						     struct md_op_data *op_data)
 {
 	struct ptlrpc_request *req;
-	struct obd_device     *obddev = class_exp2obd(exp);
-	struct ldlm_intent    *lit;
-	int		    rc;
+	struct obd_device *obddev = class_exp2obd(exp);
+	struct ldlm_intent *lit;
+	int rc;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
 				   &RQF_LDLM_INTENT_UNLINK);
@@ -439,12 +439,12 @@ static struct ptlrpc_request *mdc_intent_getattr_pack(struct obd_export *exp,
 						     struct md_op_data *op_data)
 {
 	struct ptlrpc_request *req;
-	struct obd_device     *obddev = class_exp2obd(exp);
-	u64		       valid = OBD_MD_FLGETATTR | OBD_MD_FLEASIZE |
-				       OBD_MD_FLMODEASIZE | OBD_MD_FLDIREA |
-				       OBD_MD_MEA | OBD_MD_FLACL;
-	struct ldlm_intent    *lit;
-	int		    rc;
+	struct obd_device *obddev = class_exp2obd(exp);
+	u64 valid = OBD_MD_FLGETATTR | OBD_MD_FLEASIZE |
+		    OBD_MD_FLMODEASIZE | OBD_MD_FLDIREA |
+		    OBD_MD_MEA | OBD_MD_FLACL;
+	struct ldlm_intent *lit;
+	int rc;
 	u32 easize;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
@@ -484,10 +484,10 @@ static struct ptlrpc_request *mdc_intent_layout_pack(struct obd_export *exp,
 						     struct lookup_intent *it,
 						     struct md_op_data *op_data)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
+	struct obd_device *obd = class_exp2obd(exp);
 	struct ptlrpc_request *req;
-	struct ldlm_intent    *lit;
-	struct layout_intent  *layout;
+	struct ldlm_intent *lit;
+	struct layout_intent *layout;
 	int rc;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
@@ -546,11 +546,11 @@ static int mdc_finish_enqueue(struct obd_export *exp,
 			      struct lustre_handle *lockh,
 			      int rc)
 {
-	struct req_capsule  *pill = &req->rq_pill;
+	struct req_capsule *pill = &req->rq_pill;
 	struct ldlm_request *lockreq;
-	struct ldlm_reply   *lockrep;
-	struct ldlm_lock    *lock;
-	void		*lvb_data = NULL;
+	struct ldlm_reply *lockrep;
+	struct ldlm_lock *lock;
+	void *lvb_data = NULL;
 	u32 lvb_len = 0;
 
 	LASSERT(rc >= 0);
@@ -985,7 +985,7 @@ static int mdc_finish_intent_lock(struct obd_export *exp,
 
 matching_lock:
 	/* If we already have a matching lock, then cancel the new
-	 * one.  We have to set the data here instead of in
+	 * one. We have to set the data here instead of in
 	 * mdc_enqueue, because we need to use the child's inode as
 	 * the l_ast_data to match, and that's not available until
 	 * intent_finish has performed the iget().)
@@ -1185,16 +1185,16 @@ static int mdc_intent_getattr_async_interpret(const struct lu_env *env,
 					      void *args, int rc)
 {
 	struct mdc_getattr_args  *ga = args;
-	struct obd_export	*exp = ga->ga_exp;
-	struct md_enqueue_info   *minfo = ga->ga_minfo;
+	struct obd_export *exp = ga->ga_exp;
+	struct md_enqueue_info *minfo = ga->ga_minfo;
 	struct ldlm_enqueue_info *einfo = &minfo->mi_einfo;
-	struct lookup_intent     *it;
-	struct lustre_handle     *lockh;
-	struct obd_device	*obddev;
-	struct ldlm_reply	 *lockrep;
-	u64		     flags = LDLM_FL_HAS_INTENT;
+	struct lookup_intent *it;
+	struct lustre_handle *lockh;
+	struct obd_device *obddev;
+	struct ldlm_reply *lockrep;
+	u64 flags = LDLM_FL_HAS_INTENT;
 
-	it    = &minfo->mi_it;
+	it = &minfo->mi_it;
 	lockh = &minfo->mi_lockh;
 
 	obddev = class_exp2obd(exp);
@@ -1230,17 +1230,17 @@ static int mdc_intent_getattr_async_interpret(const struct lu_env *env,
 int mdc_intent_getattr_async(struct obd_export *exp,
 			     struct md_enqueue_info *minfo)
 {
-	struct md_op_data       *op_data = &minfo->mi_data;
-	struct lookup_intent    *it = &minfo->mi_it;
-	struct ptlrpc_request   *req;
+	struct md_op_data *op_data = &minfo->mi_data;
+	struct lookup_intent *it = &minfo->mi_it;
+	struct ptlrpc_request *req;
 	struct mdc_getattr_args *ga;
-	struct obd_device       *obddev = class_exp2obd(exp);
-	struct ldlm_res_id       res_id;
+	struct obd_device *obddev = class_exp2obd(exp);
+	struct ldlm_res_id res_id;
 	union ldlm_policy_data policy = {
 		.l_inodebits = { MDS_INODELOCK_LOOKUP | MDS_INODELOCK_UPDATE }
 	};
-	int		      rc = 0;
-	u64		    flags = LDLM_FL_HAS_INTENT;
+	int rc = 0;
+	u64 flags = LDLM_FL_HAS_INTENT;
 
 	CDEBUG(D_DLMTRACE,
 	       "name: %.*s in inode " DFID ", intent: %s flags %#Lo\n",
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_reint.c b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
index 765c908..e0e7b00 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_reint.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
@@ -223,7 +223,7 @@ int mdc_create(struct obd_export *exp, struct md_op_data *op_data,
 		req->rq_sent = ktime_get_real_seconds() + resends;
 	}
 	level = LUSTRE_IMP_FULL;
- resend:
+resend:
 	rc = mdc_reint(req, level);
 
 	/* Resend if we were told to. */
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 1aee1c5..3eb89ec 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -33,12 +33,12 @@
 
 #define DEBUG_SUBSYSTEM S_MDC
 
-# include <linux/module.h>
-# include <linux/pagemap.h>
-# include <linux/init.h>
-# include <linux/utsname.h>
-# include <linux/file.h>
-# include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/pagemap.h>
+#include <linux/init.h>
+#include <linux/utsname.h>
+#include <linux/file.h>
+#include <linux/kthread.h>
 #include <linux/prefetch.h>
 
 #include <lustre_errno.h>
@@ -96,8 +96,8 @@ static int mdc_get_root(struct obd_export *exp, const char *fileset,
 			struct lu_fid *rootfid)
 {
 	struct ptlrpc_request *req;
-	struct mdt_body       *body;
-	int		    rc;
+	struct mdt_body *body;
+	int rc;
 
 	if (fileset && !(exp_connect_flags(exp) & OBD_CONNECT_SUBTREE))
 		return -ENOTSUPP;
@@ -160,9 +160,9 @@ static int mdc_getattr_common(struct obd_export *exp,
 			      struct ptlrpc_request *req)
 {
 	struct req_capsule *pill = &req->rq_pill;
-	struct mdt_body    *body;
-	void	       *eadata;
-	int		 rc;
+	struct mdt_body *body;
+	void *eadata;
+	int rc;
 
 	/* Request message already built. */
 	rc = ptlrpc_queue_wait(req);
@@ -191,7 +191,7 @@ static int mdc_getattr(struct obd_export *exp, struct md_op_data *op_data,
 		       struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
-	int		    rc;
+	int rc;
 
 	/* Single MDS without an LMV case */
 	if (op_data->op_flags & MF_GET_MDT_IDX) {
@@ -230,7 +230,7 @@ static int mdc_getattr_name(struct obd_export *exp, struct md_op_data *op_data,
 			    struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
-	int		    rc;
+	int rc;
 
 	*request = NULL;
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
@@ -281,9 +281,9 @@ static int mdc_xattr_common(struct obd_export *exp,
 			    u32 suppgid, struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
-	int   xattr_namelen = 0;
+	int xattr_namelen = 0;
 	char *tmp;
-	int   rc;
+	int rc;
 
 	*request = NULL;
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp), fmt);
@@ -333,19 +333,19 @@ static int mdc_xattr_common(struct obd_export *exp,
 		struct mdt_rec_setxattr *rec;
 
 		BUILD_BUG_ON(sizeof(struct mdt_rec_setxattr) !=
-			 sizeof(struct mdt_rec_reint));
+			     sizeof(struct mdt_rec_reint));
 		rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
 		rec->sx_opcode = REINT_SETXATTR;
-		rec->sx_fsuid  = from_kuid(&init_user_ns, current_fsuid());
-		rec->sx_fsgid  = from_kgid(&init_user_ns, current_fsgid());
-		rec->sx_cap    = current_cap().cap[0];
+		rec->sx_fsuid = from_kuid(&init_user_ns, current_fsuid());
+		rec->sx_fsgid = from_kgid(&init_user_ns, current_fsgid());
+		rec->sx_cap = current_cap().cap[0];
 		rec->sx_suppgid1 = suppgid;
 		rec->sx_suppgid2 = -1;
-		rec->sx_fid    = *fid;
-		rec->sx_valid  = valid | OBD_MD_FLCTIME;
-		rec->sx_time   = ktime_get_real_seconds();
-		rec->sx_size   = output_size;
-		rec->sx_flags  = flags;
+		rec->sx_fid = *fid;
+		rec->sx_valid = valid | OBD_MD_FLCTIME;
+		rec->sx_time = ktime_get_real_seconds();
+		rec->sx_size = output_size;
+		rec->sx_flags = flags;
 
 	} else {
 		mdc_pack_body(req, fid, valid, output_size, suppgid, flags);
@@ -411,11 +411,11 @@ static int mdc_getxattr(struct obd_export *exp, const struct lu_fid *fid,
 #ifdef CONFIG_FS_POSIX_ACL
 static int mdc_unpack_acl(struct ptlrpc_request *req, struct lustre_md *md)
 {
-	struct req_capsule     *pill = &req->rq_pill;
+	struct req_capsule *pill = &req->rq_pill;
 	struct mdt_body	*body = md->body;
-	struct posix_acl       *acl;
-	void		   *buf;
-	int		     rc;
+	struct posix_acl *acl;
+	void *buf;
+	int rc;
 
 	if (!body->mbo_aclsize)
 		return 0;
@@ -643,11 +643,11 @@ int mdc_set_open_replay_data(struct obd_export *exp,
 			     struct obd_client_handle *och,
 			     struct lookup_intent *it)
 {
-	struct md_open_data   *mod;
+	struct md_open_data *mod;
 	struct mdt_rec_create *rec;
-	struct mdt_body       *body;
+	struct mdt_body *body;
 	struct ptlrpc_request *open_req = it->it_request;
-	struct obd_import     *imp = open_req->rq_import;
+	struct obd_import *imp = open_req->rq_import;
 
 	if (!open_req->rq_replay)
 		return 0;
@@ -758,11 +758,11 @@ static int mdc_clear_open_replay_data(struct obd_export *exp,
 static int mdc_close(struct obd_export *exp, struct md_op_data *op_data,
 		     struct md_open_data *mod, struct ptlrpc_request **request)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
+	struct obd_device *obd = class_exp2obd(exp);
 	struct ptlrpc_request *req;
-	struct req_format     *req_fmt;
-	int                    rc;
-	int		       saved_rc = 0;
+	struct req_format *req_fmt;
+	int rc;
+	int saved_rc = 0;
 
 	if (op_data->op_bias & MDS_HSM_RELEASE) {
 		req_fmt = &RQF_MDS_INTENT_CLOSE;
@@ -1056,7 +1056,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, u64 *hash,
  *
  * A lu_dirpage is laid out as follows, where s = ldp_hash_start,
  * e = ldp_hash_end, f = ldp_flags, p = padding, and each "ent" is a
- * struct lu_dirent.  It has size up to LU_PAGE_SIZE. The ldp_hash_end
+ * struct lu_dirent. It has size up to LU_PAGE_SIZE. The ldp_hash_end
  * value is used as a cookie to request the next lu_dirpage in a
  * directory listing that spans multiple pages (two in this example):
  *   ________
@@ -1420,11 +1420,11 @@ static int mdc_statfs(const struct lu_env *env,
 		      struct obd_export *exp, struct obd_statfs *osfs,
 		      u64 max_age, u32 flags)
 {
-	struct obd_device     *obd = class_exp2obd(exp);
+	struct obd_device *obd = class_exp2obd(exp);
 	struct ptlrpc_request *req;
-	struct obd_statfs     *msfs;
-	struct obd_import     *imp = NULL;
-	int		    rc;
+	struct obd_statfs *msfs;
+	struct obd_import *imp = NULL;
+	int rc;
 
 	/*
 	 * Since the request might also come from lprocfs, so we need
@@ -1487,7 +1487,7 @@ static int mdc_ioc_fid2path(struct obd_export *exp, struct getinfo_fid2path *gf)
 
 	/* Key is KEY_FID2PATH + getinfo_fid2path description */
 	keylen = cfs_size_round(sizeof(KEY_FID2PATH)) + sizeof(*gf) +
-		 sizeof(struct lu_fid);
+				sizeof(struct lu_fid);
 	key = kzalloc(keylen, GFP_NOFS);
 	if (!key)
 		return -ENOMEM;
@@ -1533,10 +1533,10 @@ static int mdc_ioc_fid2path(struct obd_export *exp, struct getinfo_fid2path *gf)
 static int mdc_ioc_hsm_progress(struct obd_export *exp,
 				struct hsm_progress_kernel *hpk)
 {
-	struct obd_import		*imp = class_exp2cliimp(exp);
-	struct hsm_progress_kernel	*req_hpk;
-	struct ptlrpc_request		*req;
-	int				 rc;
+	struct obd_import *imp = class_exp2cliimp(exp);
+	struct hsm_progress_kernel *req_hpk;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_PROGRESS,
 					LUSTRE_MDS_VERSION, MDS_HSM_PROGRESS);
@@ -1569,9 +1569,9 @@ static int mdc_ioc_hsm_progress(struct obd_export *exp,
 
 static int mdc_ioc_hsm_ct_register(struct obd_import *imp, u32 archives)
 {
-	u32			*archive_mask;
-	struct ptlrpc_request	*req;
-	int			 rc;
+	u32 *archive_mask;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_CT_REGISTER,
 					LUSTRE_MDS_VERSION,
@@ -1604,10 +1604,10 @@ static int mdc_ioc_hsm_ct_register(struct obd_import *imp, u32 archives)
 static int mdc_ioc_hsm_current_action(struct obd_export *exp,
 				      struct md_op_data *op_data)
 {
-	struct hsm_current_action	*hca = op_data->op_data;
-	struct hsm_current_action	*req_hca;
-	struct ptlrpc_request		*req;
-	int				 rc;
+	struct hsm_current_action *hca = op_data->op_data;
+	struct hsm_current_action *req_hca;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
 				   &RQF_MDS_HSM_ACTION);
@@ -1645,8 +1645,8 @@ static int mdc_ioc_hsm_current_action(struct obd_export *exp,
 
 static int mdc_ioc_hsm_ct_unregister(struct obd_import *imp)
 {
-	struct ptlrpc_request	*req;
-	int			 rc;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_CT_UNREGISTER,
 					LUSTRE_MDS_VERSION,
@@ -1669,10 +1669,10 @@ static int mdc_ioc_hsm_ct_unregister(struct obd_import *imp)
 static int mdc_ioc_hsm_state_get(struct obd_export *exp,
 				 struct md_op_data *op_data)
 {
-	struct hsm_user_state	*hus = op_data->op_data;
-	struct hsm_user_state	*req_hus;
-	struct ptlrpc_request	*req;
-	int			 rc;
+	struct hsm_user_state *hus = op_data->op_data;
+	struct hsm_user_state *req_hus;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
 				   &RQF_MDS_HSM_STATE_GET);
@@ -1710,10 +1710,10 @@ static int mdc_ioc_hsm_state_get(struct obd_export *exp,
 static int mdc_ioc_hsm_state_set(struct obd_export *exp,
 				 struct md_op_data *op_data)
 {
-	struct hsm_state_set	*hss = op_data->op_data;
-	struct hsm_state_set	*req_hss;
-	struct ptlrpc_request	*req;
-	int			 rc;
+	struct hsm_state_set *hss = op_data->op_data;
+	struct hsm_state_set *req_hss;
+	struct ptlrpc_request *req;
+	int rc;
 
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp),
 				   &RQF_MDS_HSM_STATE_SET);
@@ -1750,12 +1750,12 @@ static int mdc_ioc_hsm_state_set(struct obd_export *exp,
 static int mdc_ioc_hsm_request(struct obd_export *exp,
 			       struct hsm_user_request *hur)
 {
-	struct obd_import	*imp = class_exp2cliimp(exp);
-	struct ptlrpc_request	*req;
-	struct hsm_request	*req_hr;
-	struct hsm_user_item	*req_hui;
-	char			*req_opaque;
-	int			 rc;
+	struct obd_import *imp = class_exp2cliimp(exp);
+	struct ptlrpc_request *req;
+	struct hsm_request *req_hr;
+	struct hsm_user_item *req_hui;
+	char *req_opaque;
+	int rc;
 
 	req = ptlrpc_request_alloc(imp, &RQF_MDS_HSM_REQUEST);
 	if (!req) {
@@ -1818,9 +1818,9 @@ static int mdc_ioc_hsm_ct_start(struct obd_export *exp,
 static int mdc_quotactl(struct obd_device *unused, struct obd_export *exp,
 			struct obd_quotactl *oqctl)
 {
-	struct ptlrpc_request   *req;
-	struct obd_quotactl     *oqc;
-	int		      rc;
+	struct ptlrpc_request *req;
+	struct obd_quotactl *oqc;
+	int rc;
 
 	req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
 					&RQF_MDS_QUOTACTL, LUSTRE_MDS_VERSION,
@@ -1860,8 +1860,8 @@ static int mdc_ioc_swap_layouts(struct obd_export *exp,
 				struct md_op_data *op_data)
 {
 	LIST_HEAD(cancels);
-	struct ptlrpc_request	*req;
-	int			 rc, count;
+	struct ptlrpc_request *req;
+	int rc, count;
 	struct mdc_swap_layouts *msl, *payload;
 
 	msl = op_data->op_data;
@@ -1965,7 +1965,7 @@ static int mdc_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 	 * bz20731, LU-592.
 	 */
 	case IOC_OBD_STATFS: {
-		struct obd_statfs stat_buf = {0};
+		struct obd_statfs stat_buf = { 0 };
 
 		if (*((u32 *)data->ioc_inlbuf2) != 0) {
 			rc = -ENODEV;
@@ -2044,10 +2044,10 @@ static int mdc_get_info_rpc(struct obd_export *exp,
 			    u32 keylen, void *key,
 			    int vallen, void *val)
 {
-	struct obd_import      *imp = class_exp2cliimp(exp);
-	struct ptlrpc_request  *req;
-	char		   *tmp;
-	int		     rc = -EINVAL;
+	struct obd_import *imp = class_exp2cliimp(exp);
+	struct ptlrpc_request *req;
+	char *tmp;
+	int rc = -EINVAL;
 
 	req = ptlrpc_request_alloc(imp, &RQF_MDS_GET_INFO);
 	if (!req)
@@ -2104,7 +2104,7 @@ static void lustre_swab_hai(struct hsm_action_item *h)
 
 static void lustre_swab_hal(struct hsm_action_list *h)
 {
-	struct hsm_action_item	*hai;
+	struct hsm_action_item *hai;
 	u32 i;
 
 	__swab32s(&h->hal_version);
@@ -2127,9 +2127,9 @@ static void lustre_swab_kuch(struct kuc_hdr *l)
 static int mdc_ioc_hsm_ct_start(struct obd_export *exp,
 				struct lustre_kernelcomm *lk)
 {
-	struct obd_import  *imp = class_exp2cliimp(exp);
-	u32		    archive = lk->lk_data;
-	int		    rc = 0;
+	struct obd_import *imp = class_exp2cliimp(exp);
+	u32 archive = lk->lk_data;
+	int rc = 0;
 
 	if (lk->lk_group != KUC_GRP_HSM) {
 		CERROR("Bad copytool group %d\n", lk->lk_group);
@@ -2156,8 +2156,8 @@ static int mdc_ioc_hsm_ct_start(struct obd_export *exp,
  */
 static int mdc_hsm_copytool_send(size_t len, void *val)
 {
-	struct kuc_hdr		*lh = (struct kuc_hdr *)val;
-	struct hsm_action_list	*hal = (struct hsm_action_list *)(lh + 1);
+	struct kuc_hdr *lh = (struct kuc_hdr *)val;
+	struct hsm_action_list *hal = (struct hsm_action_list *)(lh + 1);
 
 	if (len < sizeof(*lh) + sizeof(*hal)) {
 		CERROR("Short HSM message %zu < %zu\n", len,
@@ -2189,9 +2189,9 @@ static int mdc_hsm_copytool_send(size_t len, void *val)
  */
 static int mdc_hsm_ct_reregister(void *data, void *cb_arg)
 {
-	struct kkuc_ct_data	*kcd = data;
-	struct obd_import	*imp = (struct obd_import *)cb_arg;
-	int			 rc;
+	struct kkuc_ct_data *kcd = data;
+	struct obd_import *imp = (struct obd_import *)cb_arg;
+	int rc;
 
 	if (!kcd || kcd->kcd_magic != KKUC_CT_DATA_MAGIC)
 		return -EPROTO;
@@ -2213,8 +2213,8 @@ static int mdc_set_info_async(const struct lu_env *env,
 			      u32 vallen, void *val,
 			      struct ptlrpc_request_set *set)
 {
-	struct obd_import	*imp = class_exp2cliimp(exp);
-	int			 rc;
+	struct obd_import *imp = class_exp2cliimp(exp);
+	int rc;
 
 	if (KEY_IS(KEY_READ_ONLY)) {
 		if (vallen != sizeof(int))
@@ -2310,7 +2310,7 @@ static int mdc_fsync(struct obd_export *exp, const struct lu_fid *fid,
 		     struct ptlrpc_request **request)
 {
 	struct ptlrpc_request *req;
-	int		    rc;
+	int rc;
 
 	*request = NULL;
 	req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_SYNC);
@@ -2437,14 +2437,14 @@ static int mdc_resource_inode_free(struct ldlm_resource *res)
 }
 
 static struct ldlm_valblock_ops inode_lvbo = {
-	.lvbo_free = mdc_resource_inode_free,
+	.lvbo_free	= mdc_resource_inode_free,
 };
 
 static int mdc_llog_init(struct obd_device *obd)
 {
-	struct obd_llog_group	*olg = &obd->obd_olg;
-	struct llog_ctxt	*ctxt;
-	int			 rc;
+	struct obd_llog_group *olg = &obd->obd_olg;
+	struct llog_ctxt *ctxt;
+	int rc;
 
 	rc = llog_setup(NULL, obd, olg, LLOG_CHANGELOG_REPL_CTXT, obd,
 			&llog_client_ops);
@@ -2570,25 +2570,25 @@ static int mdc_process_config(struct obd_device *obd, u32 len, void *buf)
 }
 
 static struct obd_ops mdc_obd_ops = {
-	.owner          = THIS_MODULE,
-	.setup          = mdc_setup,
-	.precleanup     = mdc_precleanup,
-	.cleanup        = mdc_cleanup,
-	.add_conn       = client_import_add_conn,
-	.del_conn       = client_import_del_conn,
-	.connect        = client_connect_import,
-	.disconnect     = client_disconnect_export,
-	.iocontrol      = mdc_iocontrol,
-	.set_info_async = mdc_set_info_async,
-	.statfs         = mdc_statfs,
-	.fid_init       = client_fid_init,
-	.fid_fini       = client_fid_fini,
-	.fid_alloc      = mdc_fid_alloc,
-	.import_event   = mdc_import_event,
-	.get_info       = mdc_get_info,
-	.process_config = mdc_process_config,
-	.get_uuid       = mdc_get_uuid,
-	.quotactl       = mdc_quotactl,
+	.owner			= THIS_MODULE,
+	.setup			= mdc_setup,
+	.precleanup		= mdc_precleanup,
+	.cleanup		= mdc_cleanup,
+	.add_conn		= client_import_add_conn,
+	.del_conn		= client_import_del_conn,
+	.connect		= client_connect_import,
+	.disconnect		= client_disconnect_export,
+	.iocontrol		= mdc_iocontrol,
+	.set_info_async		= mdc_set_info_async,
+	.statfs			= mdc_statfs,
+	.fid_init		= client_fid_init,
+	.fid_fini		= client_fid_fini,
+	.fid_alloc		= mdc_fid_alloc,
+	.import_event		= mdc_import_event,
+	.get_info		= mdc_get_info,
+	.process_config		= mdc_process_config,
+	.get_uuid		= mdc_get_uuid,
+	.quotactl		= mdc_quotactl,
 };
 
 static struct md_ops mdc_md_ops = {
@@ -2629,7 +2629,7 @@ static int __init mdc_init(void)
 		return rc;
 
 	return class_register_type(&mdc_obd_ops, &mdc_md_ops,
-				 LUSTRE_MDC_NAME, NULL);
+				   LUSTRE_MDC_NAME, NULL);
 }
 
 static void /*__exit*/ mdc_exit(void)
-- 
1.8.3.1


* [lustre-devel] [PATCH 15/26] mgc: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (13 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 14/26] mdc: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 16/26] obdclass: " James Simmons
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The mgc code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
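
As a minimal sketch of the convention the diff below applies (illustrative
only -- these lines are not lifted verbatim from the patch): local variable
declarations lose the hand-padded name column, while related constants get
a single tab-aligned, zero-padded column.

	/* before: names padded into a column, constants unpadded */
	struct ptlrpc_request   *req;
	int                      rc;
	#define RQ_RUNNING 0x1

	/* after: one space between type and name, constants aligned */
	struct ptlrpc_request *req;
	int rc;
	#define RQ_RUNNING	0x01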

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/mgc/mgc_request.c | 85 +++++++++++++------------
 1 file changed, 43 insertions(+), 42 deletions(-)

diff --git a/drivers/staging/lustre/lustre/mgc/mgc_request.c b/drivers/staging/lustre/lustre/mgc/mgc_request.c
index dc80081..a4dfdc0 100644
--- a/drivers/staging/lustre/lustre/mgc/mgc_request.c
+++ b/drivers/staging/lustre/lustre/mgc/mgc_request.c
@@ -193,7 +193,7 @@ struct config_llog_data *do_config_log_add(struct obd_device *obd,
 					   struct super_block *sb)
 {
 	struct config_llog_data *cld;
-	int		      rc;
+	int rc;
 
 	CDEBUG(D_MGC, "do adding config log %s:%p\n", logname,
 	       cfg ? cfg->cfg_instance : NULL);
@@ -267,8 +267,8 @@ struct config_llog_data *do_config_log_add(struct obd_device *obd,
 		       struct super_block *sb, int type,
 		       struct config_llog_instance *cfg)
 {
-	struct config_llog_instance	lcfg = *cfg;
-	struct config_llog_data		*cld;
+	struct config_llog_instance lcfg = *cfg;
+	struct config_llog_data	*cld;
 
 	lcfg.cfg_instance = sb ? (void *)sb : (void *)obd;
 
@@ -296,9 +296,9 @@ struct config_llog_data *do_config_log_add(struct obd_device *obd,
 	struct config_llog_data *sptlrpc_cld = NULL;
 	struct config_llog_data *params_cld = NULL;
 	struct config_llog_data *recover_cld = NULL;
-	char			seclogname[32];
-	char			*ptr;
-	int			rc;
+	char seclogname[32];
+	char *ptr;
+	int rc;
 
 	CDEBUG(D_MGC, "adding config log %s:%p\n", logname, cfg->cfg_instance);
 
@@ -459,8 +459,8 @@ static int config_log_end(char *logname, struct config_llog_instance *cfg)
 
 int lprocfs_mgc_rd_ir_state(struct seq_file *m, void *data)
 {
-	struct obd_device       *obd = data;
-	struct obd_import       *imp;
+	struct obd_device *obd = data;
+	struct obd_import *imp;
 	struct obd_connect_data *ocd;
 	struct config_llog_data *cld;
 	int rc;
@@ -491,13 +491,14 @@ int lprocfs_mgc_rd_ir_state(struct seq_file *m, void *data)
 }
 
 /* reenqueue any lost locks */
-#define RQ_RUNNING 0x1
-#define RQ_NOW     0x2
-#define RQ_LATER   0x4
-#define RQ_STOP    0x8
-#define RQ_PRECLEANUP  0x10
+#define RQ_RUNNING	0x01
+#define RQ_NOW		0x02
+#define RQ_LATER	0x04
+#define RQ_STOP		0x08
+#define RQ_PRECLEANUP	0x10
+
 static int rq_state;
-static wait_queue_head_t	    rq_waitq;
+static wait_queue_head_t rq_waitq;
 static DECLARE_COMPLETION(rq_exit);
 static DECLARE_COMPLETION(rq_start);
 
@@ -620,7 +621,7 @@ static int mgc_requeue_thread(void *data)
 	return 0;
 }
 
-/* Add a cld to the list to requeue.  Start the requeue thread if needed.
+/* Add a cld to the list to requeue. Start the requeue thread if needed.
  * We are responsible for dropping the config log reference from here on out.
  */
 static void mgc_requeue_add(struct config_llog_data *cld)
@@ -647,8 +648,8 @@ static void mgc_requeue_add(struct config_llog_data *cld)
 
 static int mgc_llog_init(const struct lu_env *env, struct obd_device *obd)
 {
-	struct llog_ctxt	*ctxt;
-	int			 rc;
+	struct llog_ctxt *ctxt;
+	int rc;
 
 	/* setup only remote ctxt, the local disk context is switched per each
 	 * filesystem during mgc_fs_setup()
@@ -941,9 +942,9 @@ static void mgc_notify_active(struct obd_device *unused)
 static int mgc_target_register(struct obd_export *exp,
 			       struct mgs_target_info *mti)
 {
-	struct ptlrpc_request  *req;
+	struct ptlrpc_request *req;
 	struct mgs_target_info *req_mti, *rep_mti;
-	int		     rc;
+	int rc;
 
 	req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
 					&RQF_MGS_TARGET_REG, LUSTRE_MGS_VERSION,
@@ -1009,8 +1010,8 @@ static int mgc_set_info_async(const struct lu_env *env, struct obd_export *exp,
 		return rc;
 	}
 	if (KEY_IS(KEY_MGSSEC)) {
-		struct client_obd     *cli = &exp->exp_obd->u.cli;
-		struct sptlrpc_flavor  flvr;
+		struct client_obd *cli = &exp->exp_obd->u.cli;
+		struct sptlrpc_flavor flvr;
 
 		/*
 		 * empty string means using current flavor, if which haven't
@@ -1040,7 +1041,7 @@ static int mgc_set_info_async(const struct lu_env *env, struct obd_export *exp,
 			cli->cl_flvr_mgc = flvr;
 		} else if (memcmp(&cli->cl_flvr_mgc, &flvr,
 				  sizeof(flvr)) != 0) {
-			char    str[20];
+			char str[20];
 
 			sptlrpc_flavor2name(&cli->cl_flvr_mgc,
 					    str, sizeof(str));
@@ -1125,15 +1126,15 @@ static int mgc_apply_recover_logs(struct obd_device *mgc,
 {
 	struct config_llog_instance *cfg = &cld->cld_cfg;
 	struct mgs_nidtbl_entry *entry;
-	struct lustre_cfg       *lcfg;
-	struct lustre_cfg_bufs   bufs;
-	u64   prev_version = 0;
+	struct lustre_cfg *lcfg;
+	struct lustre_cfg_bufs bufs;
+	u64 prev_version = 0;
 	char *inst;
 	char *buf;
-	int   bufsz;
-	int   pos;
-	int   rc  = 0;
-	int   off = 0;
+	int bufsz;
+	int pos;
+	int rc = 0;
+	int off = 0;
 
 	LASSERT(cfg->cfg_instance);
 	LASSERT(cfg->cfg_sb == cfg->cfg_instance);
@@ -1149,11 +1150,11 @@ static int mgc_apply_recover_logs(struct obd_device *mgc,
 	}
 
 	++pos;
-	buf   = inst + pos;
+	buf = inst + pos;
 	bufsz = PAGE_SIZE - pos;
 
 	while (datalen > 0) {
-		int   entry_len = sizeof(*entry);
+		int entry_len = sizeof(*entry);
 		int is_ost, i;
 		struct obd_device *obd;
 		char *obdname;
@@ -1191,7 +1192,7 @@ static int mgc_apply_recover_logs(struct obd_device *mgc,
 		if (entry->mne_length < entry_len)
 			break;
 
-		off     += entry->mne_length;
+		off += entry->mne_length;
 		datalen -= entry->mne_length;
 		if (datalen < 0)
 			break;
@@ -1323,7 +1324,7 @@ static int mgc_process_recover_log(struct obd_device *obd,
 	struct ptlrpc_request *req = NULL;
 	struct config_llog_instance *cfg = &cld->cld_cfg;
 	struct mgs_config_body *body;
-	struct mgs_config_res  *res;
+	struct mgs_config_res *res;
 	struct ptlrpc_bulk_desc *desc;
 	struct page **pages;
 	int nrpages;
@@ -1380,9 +1381,9 @@ static int mgc_process_recover_log(struct obd_device *obd,
 		goto out;
 	}
 	body->mcb_offset = cfg->cfg_last_idx + 1;
-	body->mcb_type   = cld->cld_type;
-	body->mcb_bits   = PAGE_SHIFT;
-	body->mcb_units  = nrpages;
+	body->mcb_type = cld->cld_type;
+	body->mcb_bits = PAGE_SHIFT;
+	body->mcb_units = nrpages;
 
 	/* allocate bulk transfer descriptor */
 	desc = ptlrpc_prep_bulk_imp(req, nrpages, 1,
@@ -1483,11 +1484,11 @@ static int mgc_process_recover_log(struct obd_device *obd,
 static int mgc_process_cfg_log(struct obd_device *mgc,
 			       struct config_llog_data *cld, int local_only)
 {
-	struct llog_ctxt	*ctxt;
-	struct lustre_sb_info	*lsi = NULL;
-	int			 rc = 0;
-	bool			 sptlrpc_started = false;
-	struct lu_env		*env;
+	struct llog_ctxt *ctxt;
+	struct lustre_sb_info *lsi = NULL;
+	int rc = 0;
+	bool sptlrpc_started = false;
+	struct lu_env *env;
 
 	LASSERT(cld);
 	LASSERT(mutex_is_locked(&cld->cld_lock));
@@ -1570,7 +1571,7 @@ static bool mgc_import_in_recovery(struct obd_import *imp)
  * Get a configuration log from the MGS and process it.
  *
  * This function is called for both clients and servers to process the
- * configuration log from the MGS.  The MGC enqueues a DLM lock on the
+ * configuration log from the MGS. The MGC enqueues a DLM lock on the
  * log from the MGS, and if the lock gets revoked the MGC will be notified
  * by the lock cancellation callback that the config log has changed,
  * and will enqueue another MGS lock on it, and then continue processing
-- 
1.8.3.1


* [lustre-devel] [PATCH 16/26] obdclass: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (14 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 15/26] mgc: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 17/26] obdecho: " James Simmons
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The obdclass code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
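
A brief, illustrative sketch of the initializer style used throughout the
diff below (the ops structure and handler names here are placeholders, not
code from this patch): values in designated initializers move to a common
tab stop instead of ad hoc space padding.

	/* before: values padded with spaces */
	static const struct seq_operations example_sops = {
		.start = example_start,
		.show  = example_show,
	};

	/* after: values at a shared tab stop */
	static const struct seq_operations example_sops = {
		.start		= example_start,
		.show		= example_show,
	};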

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lustre/obdclass/cl_internal.h   |   2 +-
 drivers/staging/lustre/lustre/obdclass/cl_io.c     |  28 ++---
 drivers/staging/lustre/lustre/obdclass/cl_lock.c   |   2 +-
 drivers/staging/lustre/lustre/obdclass/cl_object.c |  40 +++---
 drivers/staging/lustre/lustre/obdclass/cl_page.c   |  62 ++++-----
 drivers/staging/lustre/lustre/obdclass/class_obd.c |   6 +-
 drivers/staging/lustre/lustre/obdclass/genops.c    |  16 +--
 .../staging/lustre/lustre/obdclass/kernelcomm.c    |   8 +-
 drivers/staging/lustre/lustre/obdclass/linkea.c    |   6 +-
 drivers/staging/lustre/lustre/obdclass/llog.c      |  40 +++---
 drivers/staging/lustre/lustre/obdclass/llog_cat.c  |   6 +-
 .../staging/lustre/lustre/obdclass/llog_internal.h |  24 ++--
 drivers/staging/lustre/lustre/obdclass/llog_swab.c |  16 +--
 .../lustre/lustre/obdclass/lprocfs_counters.c      |  16 +--
 .../lustre/lustre/obdclass/lprocfs_status.c        | 140 ++++++++++-----------
 drivers/staging/lustre/lustre/obdclass/lu_object.c |  98 +++++++--------
 .../lustre/lustre/obdclass/lustre_handles.c        |   2 +-
 .../staging/lustre/lustre/obdclass/lustre_peer.c   |   8 +-
 .../staging/lustre/lustre/obdclass/obd_config.c    |  35 +++---
 drivers/staging/lustre/lustre/obdclass/obd_mount.c |  15 ++-
 20 files changed, 283 insertions(+), 287 deletions(-)

diff --git a/drivers/staging/lustre/lustre/obdclass/cl_internal.h b/drivers/staging/lustre/lustre/obdclass/cl_internal.h
index 8770e32..dc6bf10 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_internal.h
+++ b/drivers/staging/lustre/lustre/obdclass/cl_internal.h
@@ -44,7 +44,7 @@ struct cl_thread_info {
 	/**
 	 * Used for submitting a sync I/O.
 	 */
-	struct cl_sync_io    clt_anchor;
+	struct cl_sync_io	clt_anchor;
 };
 
 struct cl_thread_info *cl_env_info(const struct lu_env *env);
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_io.c b/drivers/staging/lustre/lustre/obdclass/cl_io.c
index d3f2455..09fd45d 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_io.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_io.c
@@ -86,7 +86,7 @@ static int cl_io_invariant(const struct cl_io *io)
  */
 void cl_io_fini(const struct lu_env *env, struct cl_io *io)
 {
-	struct cl_io_slice    *slice;
+	struct cl_io_slice *slice;
 
 	LINVRNT(cl_io_type_is_valid(io->ci_type));
 	LINVRNT(cl_io_invariant(io));
@@ -207,8 +207,8 @@ int cl_io_rw_init(const struct lu_env *env, struct cl_io *io,
 			 "io range: %u [%llu, %llu) %u %u\n",
 			 iot, (u64)pos, (u64)pos + count,
 			 io->u.ci_rw.crw_nonblock, io->u.ci_wr.wr_append);
-	io->u.ci_rw.crw_pos    = pos;
-	io->u.ci_rw.crw_count  = count;
+	io->u.ci_rw.crw_pos = pos;
+	io->u.ci_rw.crw_count = count;
 	return cl_io_init(env, io, iot, io->ci_obj);
 }
 EXPORT_SYMBOL(cl_io_rw_init);
@@ -363,9 +363,9 @@ int cl_io_lock(const struct lu_env *env, struct cl_io *io)
  */
 void cl_io_unlock(const struct lu_env *env, struct cl_io *io)
 {
-	struct cl_lockset	*set;
-	struct cl_io_lock_link   *link;
-	struct cl_io_lock_link   *temp;
+	struct cl_lockset *set;
+	struct cl_io_lock_link *link;
+	struct cl_io_lock_link *temp;
 	const struct cl_io_slice *scan;
 
 	LASSERT(cl_io_is_loopable(io));
@@ -460,7 +460,7 @@ static void cl_io_rw_advance(const struct lu_env *env, struct cl_io *io,
 	LINVRNT(cl_io_is_loopable(io));
 	LINVRNT(cl_io_invariant(io));
 
-	io->u.ci_rw.crw_pos   += nob;
+	io->u.ci_rw.crw_pos += nob;
 	io->u.ci_rw.crw_count -= nob;
 
 	/* layers have to be notified. */
@@ -506,8 +506,8 @@ int cl_io_lock_alloc_add(const struct lu_env *env, struct cl_io *io,
 
 	link = kzalloc(sizeof(*link), GFP_NOFS);
 	if (link) {
-		link->cill_descr     = *descr;
-		link->cill_fini      = cl_free_io_lock_link;
+		link->cill_descr = *descr;
+		link->cill_fini = cl_free_io_lock_link;
 		result = cl_io_lock_add(env, io, link);
 		if (result) /* lock match */
 			link->cill_fini(env, link);
@@ -575,7 +575,7 @@ int cl_io_read_ahead(const struct lu_env *env, struct cl_io *io,
 		     pgoff_t start, struct cl_read_ahead *ra)
 {
 	const struct cl_io_slice *scan;
-	int		       result = 0;
+	int result = 0;
 
 	LINVRNT(io->ci_type == CIT_READ || io->ci_type == CIT_FAULT);
 	LINVRNT(cl_io_invariant(io));
@@ -715,7 +715,7 @@ int cl_io_submit_sync(const struct lu_env *env, struct cl_io *io,
  */
 int cl_io_loop(const struct lu_env *env, struct cl_io *io)
 {
-	int result   = 0;
+	int result = 0;
 
 	LINVRNT(cl_io_is_loopable(io));
 
@@ -725,7 +725,7 @@ int cl_io_loop(const struct lu_env *env, struct cl_io *io)
 		io->ci_continue = 0;
 		result = cl_io_iter_init(env, io);
 		if (result == 0) {
-			nob    = io->ci_nob;
+			nob = io->ci_nob;
 			result = cl_io_lock(env, io);
 			if (result == 0) {
 				/*
@@ -774,7 +774,7 @@ void cl_io_slice_add(struct cl_io *io, struct cl_io_slice *slice,
 		list_empty(linkage));
 
 	list_add_tail(linkage, &io->ci_layers);
-	slice->cis_io  = io;
+	slice->cis_io = io;
 	slice->cis_obj = obj;
 	slice->cis_iop = ops;
 }
@@ -879,7 +879,6 @@ void cl_page_list_splice(struct cl_page_list *list, struct cl_page_list *head)
 }
 EXPORT_SYMBOL(cl_page_list_splice);
 
-
 /**
  * Disowns pages in a queue.
  */
@@ -1102,7 +1101,6 @@ int cl_sync_io_wait(const struct lu_env *env, struct cl_sync_io *anchor,
 	while (unlikely(atomic_read(&anchor->csi_barrier) != 0))
 		cpu_relax();
 
-
 	return rc;
 }
 EXPORT_SYMBOL(cl_sync_io_wait);
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_lock.c b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
index 425ca9c..d7bcb8c 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_lock.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
@@ -243,7 +243,7 @@ void cl_lock_descr_print(const struct lu_env *env, void *cookie,
 			 lu_printer_t printer,
 			 const struct cl_lock_descr *descr)
 {
-	const struct lu_fid  *fid;
+	const struct lu_fid *fid;
 
 	fid = lu_object_fid(&descr->cld_obj->co_lu);
 	(*printer)(env, cookie, DDESCR "@" DFID, PDESCR(descr), PFID(fid));
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c
index 5b59a71..05d784a 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c
@@ -483,8 +483,8 @@ void cl_site_fini(struct cl_site *s)
 EXPORT_SYMBOL(cl_site_fini);
 
 static struct cache_stats cl_env_stats = {
-	.cs_name    = "envs",
-	.cs_stats = { ATOMIC_INIT(0), }
+	.cs_name	= "envs",
+	.cs_stats	= { ATOMIC_INIT(0), }
 };
 
 /**
@@ -495,11 +495,11 @@ int cl_site_stats_print(const struct cl_site *site, struct seq_file *m)
 {
 	size_t i;
 	static const char * const pstate[] = {
-		[CPS_CACHED]  = "c",
-		[CPS_OWNED]   = "o",
-		[CPS_PAGEOUT] = "w",
-		[CPS_PAGEIN]  = "r",
-		[CPS_FREEING] = "f"
+		[CPS_CACHED]	= "c",
+		[CPS_OWNED]	= "o",
+		[CPS_PAGEOUT]	= "w",
+		[CPS_PAGEIN]	= "r",
+		[CPS_FREEING]	= "f"
 	};
 /*
        lookup    hit  total   busy create
@@ -553,9 +553,9 @@ int cl_site_stats_print(const struct cl_site *site, struct seq_file *m)
 } *cl_envs = NULL;
 
 struct cl_env {
-	void	     *ce_magic;
-	struct lu_env     ce_lu;
-	struct lu_context ce_ses;
+	void		       *ce_magic;
+	struct lu_env		ce_lu;
+	struct lu_context	ce_ses;
 
 	/*
 	 * Linkage into global list of all client environments. Used for
@@ -565,12 +565,12 @@ struct cl_env {
 	/*
 	 *
 	 */
-	int	       ce_ref;
+	int			ce_ref;
 	/*
 	 * Debugging field: address of the caller who made original
 	 * allocation.
 	 */
-	void	     *ce_debug;
+	void		       *ce_debug;
 };
 
 static void cl_env_inc(enum cache_stats_item item)
@@ -818,10 +818,10 @@ void cl_env_put(struct lu_env *env, u16 *refcheck)
  */
 void cl_lvb2attr(struct cl_attr *attr, const struct ost_lvb *lvb)
 {
-	attr->cat_size   = lvb->lvb_size;
-	attr->cat_mtime  = lvb->lvb_mtime;
-	attr->cat_atime  = lvb->lvb_atime;
-	attr->cat_ctime  = lvb->lvb_ctime;
+	attr->cat_size = lvb->lvb_size;
+	attr->cat_mtime = lvb->lvb_mtime;
+	attr->cat_atime = lvb->lvb_atime;
+	attr->cat_ctime = lvb->lvb_ctime;
 	attr->cat_blocks = lvb->lvb_blocks;
 }
 EXPORT_SYMBOL(cl_lvb2attr);
@@ -936,7 +936,7 @@ struct cl_device *cl_type_setup(const struct lu_env *env, struct lu_site *site,
 				struct lu_device_type *ldt,
 				struct lu_device *next)
 {
-	const char       *typename;
+	const char *typename;
 	struct lu_device *d;
 
 	typename = ldt->ldt_name;
@@ -983,9 +983,9 @@ struct cl_thread_info *cl_env_info(const struct lu_env *env)
 LU_KEY_INIT_FINI(cl, struct cl_thread_info);
 
 static struct lu_context_key cl_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = cl_key_init,
-	.lct_fini = cl_key_fini,
+	.lct_tags	= LCT_CL_THREAD,
+	.lct_init	= cl_key_init,
+	.lct_fini	= cl_key_fini,
 };
 
 static struct lu_kmem_descr cl_object_caches[] = {
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_page.c b/drivers/staging/lustre/lustre/obdclass/cl_page.c
index a560af1..b1b4dc7 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_page.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_page.c
@@ -95,7 +95,7 @@ static void cl_page_get_trust(struct cl_page *page)
 
 static void cl_page_free(const struct lu_env *env, struct cl_page *page)
 {
-	struct cl_object *obj  = page->cp_obj;
+	struct cl_object *obj = page->cp_obj;
 
 	PASSERT(env, page, list_empty(&page->cp_batch));
 	PASSERT(env, page, !page->cp_owner);
@@ -132,7 +132,7 @@ struct cl_page *cl_page_alloc(const struct lu_env *env,
 			      struct page *vmpage,
 			      enum cl_page_type type)
 {
-	struct cl_page	  *page;
+	struct cl_page *page;
 	struct lu_object_header *head;
 
 	page = kzalloc(cl_object_header(o)->coh_page_bufsize, GFP_NOFS);
@@ -185,7 +185,7 @@ struct cl_page *cl_page_find(const struct lu_env *env,
 			     pgoff_t idx, struct page *vmpage,
 			     enum cl_page_type type)
 {
-	struct cl_page	  *page = NULL;
+	struct cl_page *page = NULL;
 	struct cl_object_header *hdr;
 
 	LASSERT(type == CPT_CACHEABLE || type == CPT_TRANSIENT);
@@ -239,39 +239,39 @@ static void __cl_page_state_set(const struct lu_env *env,
 	 */
 	static const int allowed_transitions[CPS_NR][CPS_NR] = {
 		[CPS_CACHED] = {
-			[CPS_CACHED]  = 0,
-			[CPS_OWNED]   = 1, /* io finds existing cached page */
-			[CPS_PAGEIN]  = 0,
-			[CPS_PAGEOUT] = 1, /* write-out from the cache */
-			[CPS_FREEING] = 1, /* eviction on the memory pressure */
+			[CPS_CACHED]	= 0,
+			[CPS_OWNED]	= 1, /* io finds existing cached page */
+			[CPS_PAGEIN]	= 0,
+			[CPS_PAGEOUT]	= 1, /* write-out from the cache */
+			[CPS_FREEING]	= 1, /* eviction on the memory pressure */
 		},
 		[CPS_OWNED] = {
-			[CPS_CACHED]  = 1, /* release to the cache */
-			[CPS_OWNED]   = 0,
-			[CPS_PAGEIN]  = 1, /* start read immediately */
-			[CPS_PAGEOUT] = 1, /* start write immediately */
-			[CPS_FREEING] = 1, /* lock invalidation or truncate */
+			[CPS_CACHED]	= 1, /* release to the cache */
+			[CPS_OWNED]	= 0,
+			[CPS_PAGEIN]	= 1, /* start read immediately */
+			[CPS_PAGEOUT]	= 1, /* start write immediately */
+			[CPS_FREEING]	= 1, /* lock invalidation or truncate */
 		},
 		[CPS_PAGEIN] = {
-			[CPS_CACHED]  = 1, /* io completion */
-			[CPS_OWNED]   = 0,
-			[CPS_PAGEIN]  = 0,
-			[CPS_PAGEOUT] = 0,
-			[CPS_FREEING] = 0,
+			[CPS_CACHED]	= 1, /* io completion */
+			[CPS_OWNED]	= 0,
+			[CPS_PAGEIN]	= 0,
+			[CPS_PAGEOUT]	= 0,
+			[CPS_FREEING]	= 0,
 		},
 		[CPS_PAGEOUT] = {
-			[CPS_CACHED]  = 1, /* io completion */
-			[CPS_OWNED]   = 0,
-			[CPS_PAGEIN]  = 0,
-			[CPS_PAGEOUT] = 0,
-			[CPS_FREEING] = 0,
+			[CPS_CACHED]	= 1, /* io completion */
+			[CPS_OWNED]	= 0,
+			[CPS_PAGEIN]	= 0,
+			[CPS_PAGEOUT]	= 0,
+			[CPS_FREEING]	= 0,
 		},
 		[CPS_FREEING] = {
-			[CPS_CACHED]  = 0,
-			[CPS_OWNED]   = 0,
-			[CPS_PAGEIN]  = 0,
-			[CPS_PAGEOUT] = 0,
-			[CPS_FREEING] = 0,
+			[CPS_CACHED]	= 0,
+			[CPS_OWNED]	= 0,
+			[CPS_PAGEIN]	= 0,
+			[CPS_PAGEOUT]	= 0,
+			[CPS_FREEING]	= 0,
 		}
 	};
 
@@ -976,9 +976,9 @@ void cl_page_slice_add(struct cl_page *page, struct cl_page_slice *slice,
 		       const struct cl_page_operations *ops)
 {
 	list_add_tail(&slice->cpl_linkage, &page->cp_layers);
-	slice->cpl_obj  = obj;
+	slice->cpl_obj = obj;
 	slice->cpl_index = index;
-	slice->cpl_ops  = ops;
+	slice->cpl_ops = ops;
 	slice->cpl_page = page;
 }
 EXPORT_SYMBOL(cl_page_slice_add);
@@ -988,7 +988,7 @@ void cl_page_slice_add(struct cl_page *page, struct cl_page_slice *slice,
  */
 struct cl_client_cache *cl_cache_init(unsigned long lru_page_max)
 {
-	struct cl_client_cache	*cache = NULL;
+	struct cl_client_cache *cache = NULL;
 
 	cache = kzalloc(sizeof(*cache), GFP_KERNEL);
 	if (!cache)
diff --git a/drivers/staging/lustre/lustre/obdclass/class_obd.c b/drivers/staging/lustre/lustre/obdclass/class_obd.c
index e130cf7..b859ab19 100644
--- a/drivers/staging/lustre/lustre/obdclass/class_obd.c
+++ b/drivers/staging/lustre/lustre/obdclass/class_obd.c
@@ -587,9 +587,9 @@ static long obd_class_ioctl(struct file *filp, unsigned int cmd,
 
 /* modules setup */
 static struct miscdevice obd_psdev = {
-	.minor	= MISC_DYNAMIC_MINOR,
-	.name	= OBD_DEV_NAME,
-	.fops	= &obd_psdev_fops,
+	.minor		= MISC_DYNAMIC_MINOR,
+	.name		= OBD_DEV_NAME,
+	.fops		= &obd_psdev_fops,
 };
 
 static int obd_init_checks(void)
diff --git a/drivers/staging/lustre/lustre/obdclass/genops.c b/drivers/staging/lustre/lustre/obdclass/genops.c
index 3d4d6e1..cee144c 100644
--- a/drivers/staging/lustre/lustre/obdclass/genops.c
+++ b/drivers/staging/lustre/lustre/obdclass/genops.c
@@ -622,9 +622,9 @@ struct obd_device *class_devices_in_group(struct obd_uuid *grp_uuid, int *next)
  */
 int class_notify_sptlrpc_conf(const char *fsname, int namelen)
 {
-	struct obd_device  *obd;
-	const char	 *type;
-	int		 i, rc = 0, rc2;
+	struct obd_device *obd;
+	const char *type;
+	int i, rc = 0, rc2;
 
 	LASSERT(namelen > 0);
 
@@ -693,7 +693,7 @@ int obd_init_caches(void)
 		goto out;
 
 	return 0;
- out:
+out:
 	obd_cleanup_caches();
 	return -ENOMEM;
 }
@@ -772,8 +772,8 @@ static void export_handle_addref(void *export)
 }
 
 static struct portals_handle_ops export_handle_ops = {
-	.hop_addref = export_handle_addref,
-	.hop_free   = NULL,
+	.hop_addref	= export_handle_addref,
+	.hop_free	= NULL,
 };
 
 struct obd_export *class_export_get(struct obd_export *exp)
@@ -967,8 +967,8 @@ static void import_handle_addref(void *import)
 }
 
 static struct portals_handle_ops import_handle_ops = {
-	.hop_addref = import_handle_addref,
-	.hop_free   = NULL,
+	.hop_addref	= import_handle_addref,
+	.hop_free	= NULL,
 };
 
 struct obd_import *class_import_get(struct obd_import *import)
diff --git a/drivers/staging/lustre/lustre/obdclass/kernelcomm.c b/drivers/staging/lustre/lustre/obdclass/kernelcomm.c
index 5d81996..925ba52 100644
--- a/drivers/staging/lustre/lustre/obdclass/kernelcomm.c
+++ b/drivers/staging/lustre/lustre/obdclass/kernelcomm.c
@@ -89,10 +89,10 @@ int libcfs_kkuc_msg_put(struct file *filp, void *payload)
  */
 /** A single group registration has a uid and a file pointer */
 struct kkuc_reg {
-	struct list_head kr_chain;
-	int		 kr_uid;
-	struct file	*kr_fp;
-	char		 kr_data[0];
+	struct list_head	kr_chain;
+	int			kr_uid;
+	struct file	       *kr_fp;
+	char			kr_data[0];
 };
 
 static struct list_head kkuc_groups[KUC_GRP_MAX + 1];
diff --git a/drivers/staging/lustre/lustre/obdclass/linkea.c b/drivers/staging/lustre/lustre/obdclass/linkea.c
index 74c99ee..33594bd 100644
--- a/drivers/staging/lustre/lustre/obdclass/linkea.c
+++ b/drivers/staging/lustre/lustre/obdclass/linkea.c
@@ -95,8 +95,8 @@ int linkea_init_with_rec(struct linkea_data *ldata)
 int linkea_entry_pack(struct link_ea_entry *lee, const struct lu_name *lname,
 		      const struct lu_fid *pfid)
 {
-	struct lu_fid   tmpfid;
-	int             reclen;
+	struct lu_fid tmpfid;
+	int reclen;
 
 	tmpfid = *pfid;
 	if (OBD_FAIL_CHECK(OBD_FAIL_LFSCK_LINKEA_CRASH))
@@ -216,7 +216,7 @@ int linkea_links_find(struct linkea_data *ldata, const struct lu_name *lname,
 		      const struct lu_fid  *pfid)
 {
 	struct lu_name tmpname;
-	struct lu_fid  tmpfid;
+	struct lu_fid tmpfid;
 	int count;
 
 	LASSERT(ldata->ld_leh);
diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
index 8644d34..7aa459b 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog.c
@@ -151,8 +151,8 @@ int llog_init_handle(const struct lu_env *env, struct llog_handle *handle,
 {
 	int chunk_size = handle->lgh_ctxt->loc_chunk_size;
 	enum llog_flag fmt = flags & LLOG_F_EXT_MASK;
-	struct llog_log_hdr	*llh;
-	int			 rc;
+	struct llog_log_hdr *llh;
+	int rc;
 
 	LASSERT(!handle->lgh_hdr);
 
@@ -223,16 +223,16 @@ int llog_init_handle(const struct lu_env *env, struct llog_handle *handle,
 
 static int llog_process_thread(void *arg)
 {
-	struct llog_process_info	*lpi = arg;
-	struct llog_handle		*loghandle = lpi->lpi_loghandle;
-	struct llog_log_hdr		*llh = loghandle->lgh_hdr;
-	struct llog_process_cat_data	*cd  = lpi->lpi_catdata;
-	char				*buf;
+	struct llog_process_info *lpi = arg;
+	struct llog_handle *loghandle = lpi->lpi_loghandle;
+	struct llog_log_hdr *llh = loghandle->lgh_hdr;
+	struct llog_process_cat_data *cd  = lpi->lpi_catdata;
+	char *buf;
 	u64 cur_offset, tmp_offset;
 	int chunk_size;
-	int				 rc = 0, index = 1, last_index;
-	int				 saved_index = 0;
-	int				 last_called_index = 0;
+	int rc = 0, index = 1, last_index;
+	int saved_index = 0;
+	int last_called_index = 0;
 
 	if (!llh)
 		return -EINVAL;
@@ -394,9 +394,9 @@ static int llog_process_thread(void *arg)
 
 static int llog_process_thread_daemonize(void *arg)
 {
-	struct llog_process_info	*lpi = arg;
-	struct lu_env			 env;
-	int				 rc;
+	struct llog_process_info *lpi = arg;
+	struct lu_env env;
+	int rc;
 
 	unshare_fs_struct();
 
@@ -419,15 +419,15 @@ int llog_process_or_fork(const struct lu_env *env,
 			 llog_cb_t cb, void *data, void *catdata, bool fork)
 {
 	struct llog_process_info *lpi;
-	int		      rc;
+	int rc;
 
 	lpi = kzalloc(sizeof(*lpi), GFP_KERNEL);
 	if (!lpi)
 		return -ENOMEM;
 	lpi->lpi_loghandle = loghandle;
-	lpi->lpi_cb	= cb;
-	lpi->lpi_cbdata    = data;
-	lpi->lpi_catdata   = catdata;
+	lpi->lpi_cb = cb;
+	lpi->lpi_cbdata = data;
+	lpi->lpi_catdata = catdata;
 
 	if (fork) {
 		struct task_struct *task;
@@ -469,7 +469,7 @@ int llog_open(const struct lu_env *env, struct llog_ctxt *ctxt,
 	      char *name, enum llog_open_param open_param)
 {
 	const struct cred *old_cred = NULL;
-	int	 rc;
+	int rc;
 
 	LASSERT(ctxt);
 	LASSERT(ctxt->loc_logops);
@@ -507,8 +507,8 @@ int llog_open(const struct lu_env *env, struct llog_ctxt *ctxt,
 
 int llog_close(const struct lu_env *env, struct llog_handle *loghandle)
 {
-	struct llog_operations	*lop;
-	int			 rc;
+	struct llog_operations *lop;
+	int rc;
 
 	rc = llog_handle2ops(loghandle, &lop);
 	if (rc)
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_cat.c b/drivers/staging/lustre/lustre/obdclass/llog_cat.c
index 172a368..b61c858 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_cat.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_cat.c
@@ -63,9 +63,9 @@ static int llog_cat_id2handle(const struct lu_env *env,
 			      struct llog_handle **res,
 			      struct llog_logid *logid)
 {
-	struct llog_handle	*loghandle;
+	struct llog_handle *loghandle;
 	enum llog_flag fmt;
-	int			 rc = 0;
+	int rc = 0;
 
 	if (!cathandle)
 		return -EBADF;
@@ -125,7 +125,7 @@ static int llog_cat_id2handle(const struct lu_env *env,
 
 int llog_cat_close(const struct lu_env *env, struct llog_handle *cathandle)
 {
-	struct llog_handle	*loghandle, *n;
+	struct llog_handle *loghandle, *n;
 
 	list_for_each_entry_safe(loghandle, n, &cathandle->u.chd.chd_head,
 				 u.phd.phd_entry) {
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_internal.h b/drivers/staging/lustre/lustre/obdclass/llog_internal.h
index 4991d4e..545119e 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_internal.h
+++ b/drivers/staging/lustre/lustre/obdclass/llog_internal.h
@@ -37,23 +37,23 @@
 #include <lustre_log.h>
 
 struct llog_process_info {
-	struct llog_handle *lpi_loghandle;
-	llog_cb_t	   lpi_cb;
-	void	       *lpi_cbdata;
-	void	       *lpi_catdata;
-	int		 lpi_rc;
+	struct llog_handle     *lpi_loghandle;
+	llog_cb_t		lpi_cb;
+	void		       *lpi_cbdata;
+	void		       *lpi_catdata;
+	int			lpi_rc;
 	struct completion	lpi_completion;
-	const struct lu_env	*lpi_env;
+	const struct lu_env    *lpi_env;
 
 };
 
 struct llog_thread_info {
-	struct lu_attr			 lgi_attr;
-	struct lu_fid			 lgi_fid;
-	struct lu_buf			 lgi_buf;
-	loff_t				 lgi_off;
-	struct llog_rec_hdr		 lgi_lrh;
-	struct llog_rec_tail		 lgi_tail;
+	struct lu_attr		lgi_attr;
+	struct lu_fid		lgi_fid;
+	struct lu_buf		lgi_buf;
+	loff_t			lgi_off;
+	struct llog_rec_hdr	lgi_lrh;
+	struct llog_rec_tail	lgi_tail;
 };
 
 extern struct lu_context_key llog_thread_key;
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_swab.c b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
index fddc1ea..57dadec 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_swab.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
@@ -358,14 +358,14 @@ void lustre_swab_lustre_cfg(struct lustre_cfg *lcfg)
 
 /* used only for compatibility with old on-disk cfg_marker data */
 struct cfg_marker32 {
-	u32   cm_step;
-	u32   cm_flags;
-	u32   cm_vers;
-	u32   padding;
-	u32   cm_createtime;
-	u32   cm_canceltime;
-	char    cm_tgtname[MTI_NAME_MAXLEN];
-	char    cm_comment[MTI_NAME_MAXLEN];
+	u32	cm_step;
+	u32	cm_flags;
+	u32	cm_vers;
+	u32	padding;
+	u32	cm_createtime;
+	u32	cm_canceltime;
+	char	cm_tgtname[MTI_NAME_MAXLEN];
+	char	cm_comment[MTI_NAME_MAXLEN];
 };
 
 #define MTI_NAMELEN32    (MTI_NAME_MAXLEN - \
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
index 77bc66f..c7bf1ee 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
@@ -45,10 +45,10 @@
 
 void lprocfs_counter_add(struct lprocfs_stats *stats, int idx, long amount)
 {
-	struct lprocfs_counter		*percpu_cntr;
-	struct lprocfs_counter_header	*header;
-	int				smp_id;
-	unsigned long			flags = 0;
+	struct lprocfs_counter *percpu_cntr;
+	struct lprocfs_counter_header *header;
+	int smp_id;
+	unsigned long flags = 0;
 
 	if (!stats)
 		return;
@@ -94,10 +94,10 @@ void lprocfs_counter_add(struct lprocfs_stats *stats, int idx, long amount)
 
 void lprocfs_counter_sub(struct lprocfs_stats *stats, int idx, long amount)
 {
-	struct lprocfs_counter		*percpu_cntr;
-	struct lprocfs_counter_header	*header;
-	int				smp_id;
-	unsigned long			flags = 0;
+	struct lprocfs_counter *percpu_cntr;
+	struct lprocfs_counter_header *header;
+	int smp_id;
+	unsigned long flags = 0;
 
 	if (!stats)
 		return;
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
index cc70402..e1ac610 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
@@ -364,7 +364,7 @@ static ssize_t blocksize_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -380,7 +380,7 @@ static ssize_t kbytestotal_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -403,7 +403,7 @@ static ssize_t kbytesfree_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -426,7 +426,7 @@ static ssize_t kbytesavail_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -449,7 +449,7 @@ static ssize_t filestotal_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -465,7 +465,7 @@ static ssize_t filesfree_show(struct kobject *kobj, struct attribute *attr,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct obd_statfs  osfs;
+	struct obd_statfs osfs;
 	int rc = obd_statfs(NULL, obd->obd_self_export, &osfs,
 			    get_jiffies_64() - OBD_STATFS_CACHE_SECONDS * HZ,
 			    OBD_STATFS_NODELAY);
@@ -638,10 +638,10 @@ void lprocfs_stats_unlock(struct lprocfs_stats *stats,
 void lprocfs_stats_collect(struct lprocfs_stats *stats, int idx,
 			   struct lprocfs_counter *cnt)
 {
-	unsigned int			num_entry;
-	struct lprocfs_counter		*percpu_cntr;
-	int				i;
-	unsigned long			flags = 0;
+	unsigned int num_entry;
+	struct lprocfs_counter *percpu_cntr;
+	int i;
+	unsigned long flags = 0;
 
 	memset(cnt, 0, sizeof(*cnt));
 
@@ -740,17 +740,17 @@ static void obd_connect_seq_flags2str(struct seq_file *m, u64 flags,
 
 int lprocfs_rd_import(struct seq_file *m, void *data)
 {
-	char				nidstr[LNET_NIDSTR_SIZE];
-	struct lprocfs_counter		ret;
-	struct lprocfs_counter_header	*header;
-	struct obd_device		*obd	= data;
-	struct obd_import		*imp;
-	struct obd_import_conn		*conn;
+	char nidstr[LNET_NIDSTR_SIZE];
+	struct lprocfs_counter ret;
+	struct lprocfs_counter_header *header;
+	struct obd_device *obd = data;
+	struct obd_import *imp;
+	struct obd_import_conn *conn;
 	struct obd_connect_data *ocd;
-	int				j;
-	int				k;
-	int				rw	= 0;
-	int				rc;
+	int j;
+	int k;
+	int rw = 0;
+	int rc;
 
 	LASSERT(obd);
 	rc = lprocfs_climp_check(obd);
@@ -1101,11 +1101,11 @@ int lprocfs_obd_cleanup(struct obd_device *obd)
 
 int lprocfs_stats_alloc_one(struct lprocfs_stats *stats, unsigned int cpuid)
 {
-	struct lprocfs_counter  *cntr;
-	unsigned int            percpusize;
-	int                     rc = -ENOMEM;
-	unsigned long           flags = 0;
-	int                     i;
+	struct lprocfs_counter *cntr;
+	unsigned int percpusize;
+	int rc = -ENOMEM;
+	unsigned long flags = 0;
+	int i;
 
 	LASSERT(!stats->ls_percpu[cpuid]);
 	LASSERT((stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) == 0);
@@ -1138,10 +1138,10 @@ int lprocfs_stats_alloc_one(struct lprocfs_stats *stats, unsigned int cpuid)
 struct lprocfs_stats *lprocfs_alloc_stats(unsigned int num,
 					  enum lprocfs_stats_flags flags)
 {
-	struct lprocfs_stats	*stats;
-	unsigned int		num_entry;
-	unsigned int		percpusize = 0;
-	int			i;
+	struct lprocfs_stats *stats;
+	unsigned int num_entry;
+	unsigned int percpusize = 0;
+	int i;
 
 	if (num == 0)
 		return NULL;
@@ -1221,9 +1221,9 @@ u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
 			      enum lprocfs_fields_flags field)
 {
 	unsigned int i;
-	unsigned int  num_cpu;
-	unsigned long flags     = 0;
-	u64         ret       = 0;
+	unsigned int num_cpu;
+	unsigned long flags = 0;
+	u64 ret = 0;
 
 	LASSERT(stats);
 
@@ -1243,11 +1243,11 @@ u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
 
 void lprocfs_clear_stats(struct lprocfs_stats *stats)
 {
-	struct lprocfs_counter		*percpu_cntr;
-	int				i;
-	int				j;
-	unsigned int			num_entry;
-	unsigned long			flags = 0;
+	struct lprocfs_counter *percpu_cntr;
+	int i;
+	int j;
+	unsigned int num_entry;
+	unsigned long flags = 0;
 
 	num_entry = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
 
@@ -1256,11 +1256,11 @@ void lprocfs_clear_stats(struct lprocfs_stats *stats)
 			continue;
 		for (j = 0; j < stats->ls_num; j++) {
 			percpu_cntr = lprocfs_stats_counter_get(stats, i, j);
-			percpu_cntr->lc_count		= 0;
-			percpu_cntr->lc_min		= LC_MIN_INIT;
-			percpu_cntr->lc_max		= 0;
-			percpu_cntr->lc_sumsquare	= 0;
-			percpu_cntr->lc_sum		= 0;
+			percpu_cntr->lc_count = 0;
+			percpu_cntr->lc_min = LC_MIN_INIT;
+			percpu_cntr->lc_max = 0;
+			percpu_cntr->lc_sumsquare = 0;
+			percpu_cntr->lc_sum = 0;
 			if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
 				percpu_cntr->lc_sum_irq	= 0;
 		}
@@ -1302,10 +1302,10 @@ static void *lprocfs_stats_seq_next(struct seq_file *p, void *v, loff_t *pos)
 /* seq file export of one lprocfs counter */
 static int lprocfs_stats_seq_show(struct seq_file *p, void *v)
 {
-	struct lprocfs_stats		*stats	= p->private;
-	struct lprocfs_counter_header   *hdr;
-	struct lprocfs_counter           ctr;
-	int                              idx    = *(loff_t *)v;
+	struct lprocfs_stats *stats = p->private;
+	struct lprocfs_counter_header *hdr;
+	struct lprocfs_counter ctr;
+	int idx = *(loff_t *)v;
 
 	if (idx == 0) {
 		struct timespec64 now;
@@ -1337,10 +1337,10 @@ static int lprocfs_stats_seq_show(struct seq_file *p, void *v)
 }
 
 static const struct seq_operations lprocfs_stats_seq_sops = {
-	.start	= lprocfs_stats_seq_start,
-	.stop	= lprocfs_stats_seq_stop,
-	.next	= lprocfs_stats_seq_next,
-	.show	= lprocfs_stats_seq_show,
+	.start		= lprocfs_stats_seq_start,
+	.stop		= lprocfs_stats_seq_stop,
+	.next		= lprocfs_stats_seq_next,
+	.show		= lprocfs_stats_seq_show,
 };
 
 static int lprocfs_stats_seq_open(struct inode *inode, struct file *file)
@@ -1359,12 +1359,12 @@ static int lprocfs_stats_seq_open(struct inode *inode, struct file *file)
 }
 
 const struct file_operations lprocfs_stats_seq_fops = {
-	.owner   = THIS_MODULE,
-	.open    = lprocfs_stats_seq_open,
-	.read    = seq_read,
-	.write   = lprocfs_stats_seq_write,
-	.llseek  = seq_lseek,
-	.release = lprocfs_seq_release,
+	.owner		= THIS_MODULE,
+	.open		= lprocfs_stats_seq_open,
+	.read		= seq_read,
+	.write		= lprocfs_stats_seq_write,
+	.llseek		= seq_lseek,
+	.release	= lprocfs_seq_release,
 };
 EXPORT_SYMBOL_GPL(lprocfs_stats_seq_fops);
 
@@ -1372,30 +1372,30 @@ void lprocfs_counter_init(struct lprocfs_stats *stats, int index,
 			  unsigned int conf, const char *name,
 			  const char *units)
 {
-	struct lprocfs_counter_header	*header;
-	struct lprocfs_counter		*percpu_cntr;
-	unsigned long			flags = 0;
-	unsigned int			i;
-	unsigned int			num_cpu;
+	struct lprocfs_counter_header *header;
+	struct lprocfs_counter *percpu_cntr;
+	unsigned long flags = 0;
+	unsigned int i;
+	unsigned int num_cpu;
 
 	header = &stats->ls_cnt_header[index];
 	LASSERTF(header, "Failed to allocate stats header:[%d]%s/%s\n",
 		 index, name, units);
 
 	header->lc_config = conf;
-	header->lc_name   = name;
-	header->lc_units  = units;
+	header->lc_name = name;
+	header->lc_units = units;
 
 	num_cpu = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
 	for (i = 0; i < num_cpu; ++i) {
 		if (!stats->ls_percpu[i])
 			continue;
 		percpu_cntr = lprocfs_stats_counter_get(stats, i, index);
-		percpu_cntr->lc_count		= 0;
-		percpu_cntr->lc_min		= LC_MIN_INIT;
-		percpu_cntr->lc_max		= 0;
-		percpu_cntr->lc_sumsquare	= 0;
-		percpu_cntr->lc_sum		= 0;
+		percpu_cntr->lc_count = 0;
+		percpu_cntr->lc_min = LC_MIN_INIT;
+		percpu_cntr->lc_max = 0;
+		percpu_cntr->lc_sumsquare = 0;
+		percpu_cntr->lc_sum = 0;
 		if ((stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE) != 0)
 			percpu_cntr->lc_sum_irq	= 0;
 	}
@@ -1843,8 +1843,8 @@ ssize_t lustre_attr_store(struct kobject *kobj, struct attribute *attr,
 EXPORT_SYMBOL_GPL(lustre_attr_store);
 
 const struct sysfs_ops lustre_sysfs_ops = {
-	.show  = lustre_attr_show,
-	.store = lustre_attr_store,
+	.show		= lustre_attr_show,
+	.store		= lustre_attr_store,
 };
 EXPORT_SYMBOL_GPL(lustre_sysfs_ops);
 
diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
index a132d87..3bd4874 100644
--- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
@@ -87,15 +87,15 @@ enum {
 #define LU_CACHE_NR_LDISKFS_LIMIT	LU_CACHE_NR_UNLIMITED
 #define LU_CACHE_NR_ZFS_LIMIT		256
 
-#define LU_SITE_BITS_MIN	12
-#define LU_SITE_BITS_MAX	24
-#define LU_SITE_BITS_MAX_CL	19
+#define LU_SITE_BITS_MIN		12
+#define LU_SITE_BITS_MAX		24
+#define LU_SITE_BITS_MAX_CL		19
 /**
  * total 256 buckets, we don't want too many buckets because:
  * - consume too much memory
  * - avoid unbalanced LRU list
  */
-#define LU_SITE_BKT_BITS	8
+#define LU_SITE_BKT_BITS		8
 
 static unsigned int lu_cache_percent = LU_CACHE_PERCENT_DEFAULT;
 module_param(lu_cache_percent, int, 0644);
@@ -129,10 +129,10 @@ void lu_object_put(const struct lu_env *env, struct lu_object *o)
 {
 	struct lu_site_bkt_data *bkt;
 	struct lu_object_header *top;
-	struct lu_site	  *site;
-	struct lu_object	*orig;
-	struct cfs_hash_bd	    bd;
-	const struct lu_fid     *fid;
+	struct lu_site *site;
+	struct lu_object *orig;
+	struct cfs_hash_bd bd;
+	const struct lu_fid *fid;
 
 	top  = o->lo_header;
 	site = o->lo_dev->ld_site;
@@ -319,14 +319,14 @@ static struct lu_object *lu_object_alloc(const struct lu_env *env,
 static void lu_object_free(const struct lu_env *env, struct lu_object *o)
 {
 	wait_queue_head_t *wq;
-	struct lu_site	  *site;
-	struct lu_object	*scan;
-	struct list_head	      *layers;
-	struct list_head	       splice;
+	struct lu_site *site;
+	struct lu_object *scan;
+	struct list_head *layers;
+	struct list_head splice;
 
-	site   = o->lo_dev->ld_site;
+	site = o->lo_dev->ld_site;
 	layers = &o->lo_header->loh_layers;
-	wq     = lu_site_wq_from_fid(site, &o->lo_header->loh_fid);
+	wq = lu_site_wq_from_fid(site, &o->lo_header->loh_fid);
 	/*
 	 * First call ->loo_object_delete() method to release all resources.
 	 */
@@ -369,13 +369,13 @@ int lu_site_purge_objects(const struct lu_env *env, struct lu_site *s,
 	struct lu_object_header *h;
 	struct lu_object_header *temp;
 	struct lu_site_bkt_data *bkt;
-	struct cfs_hash_bd	    bd;
-	struct cfs_hash_bd	    bd2;
-	struct list_head	       dispose;
-	int		      did_sth;
+	struct cfs_hash_bd bd;
+	struct cfs_hash_bd bd2;
+	struct list_head dispose;
+	int did_sth;
 	unsigned int start = 0;
-	int		      count;
-	int		      bnr;
+	int count;
+	int bnr;
 	unsigned int i;
 
 	if (OBD_FAIL_CHECK(OBD_FAIL_OBD_NO_LRU))
@@ -389,7 +389,7 @@ int lu_site_purge_objects(const struct lu_env *env, struct lu_site *s,
 	if (nr != ~0)
 		start = s->ls_purge_start;
 	bnr = (nr == ~0) ? -1 : nr / (int)CFS_HASH_NBKT(s->ls_obj_hash) + 1;
- again:
+again:
 	/*
 	 * It doesn't make any sense to make purge threads parallel, that can
 	 * only bring troubles to us. See LU-5331.
@@ -496,10 +496,10 @@ struct lu_cdebug_data {
  * lu_global_init().
  */
 static struct lu_context_key lu_global_key = {
-	.lct_tags = LCT_MD_THREAD | LCT_DT_THREAD |
-		    LCT_MG_THREAD | LCT_CL_THREAD | LCT_LOCAL,
-	.lct_init = lu_global_key_init,
-	.lct_fini = lu_global_key_fini
+	.lct_tags	= LCT_MD_THREAD | LCT_DT_THREAD |
+			  LCT_MG_THREAD | LCT_CL_THREAD | LCT_LOCAL,
+	.lct_init	= lu_global_key_init,
+	.lct_fini	= lu_global_key_fini
 };
 
 /**
@@ -509,7 +509,7 @@ int lu_cdebug_printer(const struct lu_env *env,
 		      void *cookie, const char *format, ...)
 {
 	struct libcfs_debug_msg_data *msgdata = cookie;
-	struct lu_cdebug_data	*key;
+	struct lu_cdebug_data *key;
 	int used;
 	int complete;
 	va_list args;
@@ -594,7 +594,7 @@ static struct lu_object *htable_lookup(struct lu_site *s,
 {
 	struct lu_site_bkt_data *bkt;
 	struct lu_object_header *h;
-	struct hlist_node	*hnode;
+	struct hlist_node *hnode;
 	u64 ver = cfs_hash_bd_version_get(bd);
 
 	if (*version == ver)
@@ -670,12 +670,12 @@ struct lu_object *lu_object_find_at(const struct lu_env *env,
 				    const struct lu_fid *f,
 				    const struct lu_object_conf *conf)
 {
-	struct lu_object      *o;
-	struct lu_object      *shadow;
-	struct lu_site	*s;
-	struct cfs_hash	    *hs;
-	struct cfs_hash_bd	  bd;
-	u64		  version = 0;
+	struct lu_object *o;
+	struct lu_object *shadow;
+	struct lu_site *s;
+	struct cfs_hash	*hs;
+	struct cfs_hash_bd bd;
+	u64 version = 0;
 
 	/*
 	 * This uses standard index maintenance protocol:
@@ -795,9 +795,9 @@ void lu_device_type_fini(struct lu_device_type *ldt)
 static struct lu_env lu_shrink_env;
 
 struct lu_site_print_arg {
-	struct lu_env   *lsp_env;
-	void	    *lsp_cookie;
-	lu_printer_t     lsp_printer;
+	struct lu_env		*lsp_env;
+	void			*lsp_cookie;
+	lu_printer_t		 lsp_printer;
 };
 
 static int
@@ -805,7 +805,7 @@ struct lu_site_print_arg {
 		  struct hlist_node *hnode, void *data)
 {
 	struct lu_site_print_arg *arg = (struct lu_site_print_arg *)data;
-	struct lu_object_header  *h;
+	struct lu_object_header *h;
 
 	h = hlist_entry(hnode, struct lu_object_header, loh_hash);
 	if (!list_empty(&h->loh_layers)) {
@@ -828,9 +828,9 @@ void lu_site_print(const struct lu_env *env, struct lu_site *s, void *cookie,
 		   lu_printer_t printer)
 {
 	struct lu_site_print_arg arg = {
-		.lsp_env     = (struct lu_env *)env,
-		.lsp_cookie  = cookie,
-		.lsp_printer = printer,
+		.lsp_env	= (struct lu_env *)env,
+		.lsp_cookie	= cookie,
+		.lsp_printer	= printer,
 	};
 
 	cfs_hash_for_each(s->ls_obj_hash, lu_site_obj_print, &arg);
@@ -883,8 +883,8 @@ static unsigned long lu_htable_order(struct lu_device *top)
 static unsigned int lu_obj_hop_hash(struct cfs_hash *hs,
 				    const void *key, unsigned int mask)
 {
-	struct lu_fid  *fid = (struct lu_fid *)key;
-	u32	   hash;
+	struct lu_fid *fid = (struct lu_fid *)key;
+	u32 hash;
 
 	hash = fid_flatten32(fid);
 	hash += (hash >> 4) + (hash << 12); /* mixing oid and seq */
@@ -1247,7 +1247,7 @@ struct lu_object *lu_object_locate(struct lu_object_header *h,
  */
 void lu_stack_fini(const struct lu_env *env, struct lu_device *top)
 {
-	struct lu_site   *site = top->ld_site;
+	struct lu_site *site = top->ld_site;
 	struct lu_device *scan;
 	struct lu_device *next;
 
@@ -1263,7 +1263,7 @@ void lu_stack_fini(const struct lu_env *env, struct lu_device *top)
 
 	for (scan = top; scan; scan = next) {
 		const struct lu_device_type *ldt = scan->ld_type;
-		struct obd_type	     *type;
+		struct obd_type *type;
 
 		next = ldt->ldt_ops->ldto_device_free(env, scan);
 		type = ldt->ldt_obd_type;
@@ -1595,7 +1595,7 @@ static int keys_init(struct lu_context *ctx)
  */
 int lu_context_init(struct lu_context *ctx, u32 tags)
 {
-	int	rc;
+	int rc;
 
 	memset(ctx, 0, sizeof(*ctx));
 	ctx->lc_state = LCS_INITIALIZED;
@@ -1761,7 +1761,7 @@ static void lu_site_stats_get(const struct lu_site *s,
 	stats->lss_busy += cfs_hash_size_get(hs) -
 		percpu_counter_sum_positive(&s2->ls_lru_len_counter);
 	cfs_hash_for_each_bucket(hs, &bd, i) {
-		struct hlist_head	*hhead;
+		struct hlist_head *hhead;
 
 		cfs_hash_bd_lock(hs, &bd, 1);
 		stats->lss_total += cfs_hash_bd_count_get(&bd);
@@ -1860,9 +1860,9 @@ static unsigned long lu_cache_shrink_scan(struct shrinker *sk,
  * Debugging printer function using printk().
  */
 static struct shrinker lu_site_shrinker = {
-	.count_objects	= lu_cache_shrink_count,
-	.scan_objects	= lu_cache_shrink_scan,
-	.seeks 		= DEFAULT_SEEKS,
+	.count_objects		= lu_cache_shrink_count,
+	.scan_objects		= lu_cache_shrink_scan,
+	.seeks			= DEFAULT_SEEKS,
 };
 
 /**
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
index b296877..0674afb 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
@@ -47,7 +47,7 @@
 static spinlock_t handle_base_lock;
 
 static struct handle_bucket {
-	spinlock_t	lock;
+	spinlock_t		lock;
 	struct list_head	head;
 } *handle_hash;
 
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
index 8e7f3a8..0c3e0ca 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
@@ -44,10 +44,10 @@
 #define NIDS_MAX	32
 
 struct uuid_nid_data {
-	struct list_head       un_list;
-	struct obd_uuid  un_uuid;
-	int	      un_nid_count;
-	lnet_nid_t       un_nids[NIDS_MAX];
+	struct list_head	un_list;
+	struct obd_uuid		un_uuid;
+	int			un_nid_count;
+	lnet_nid_t		un_nids[NIDS_MAX];
 };
 
 /* FIXME: This should probably become more elegant than a global linked list */
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_config.c b/drivers/staging/lustre/lustre/obdclass/obd_config.c
index 887afda..0cdadea4 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_config.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_config.c
@@ -189,8 +189,8 @@ static int class_parse_value(char *buf, int opc, void *value, char **endh,
 			     int quiet)
 {
 	char *endp;
-	char  tmp;
-	int   rc = 0;
+	char tmp;
+	int rc = 0;
 
 	if (!buf)
 		return 1;
@@ -249,7 +249,7 @@ char *lustre_cfg_string(struct lustre_cfg *lcfg, u32 index)
 
 	/*
 	 * make sure it's NULL terminated, even if this kills a char
-	 * of data.  Try to use the padding first though.
+	 * of data. Try to use the padding first though.
 	 */
 	if (s[lcfg->lcfg_buflens[index] - 1] != '\0') {
 		size_t last = ALIGN(lcfg->lcfg_buflens[index], 8) - 1;
@@ -388,7 +388,6 @@ static int class_setup(struct obd_device *obd, struct lustre_cfg *lcfg)
 
 	/* create an uuid-export lustre hash */
 	err = rhashtable_init(&obd->obd_uuid_hash, &uuid_hash_params);
-
 	if (err)
 		goto err_hash;
 
@@ -1407,7 +1406,7 @@ int class_config_llog_handler(const struct lu_env *env,
 		}
 
 		lustre_cfg_init(lcfg_new, lcfg->lcfg_command, &bufs);
-		lcfg_new->lcfg_num   = lcfg->lcfg_num;
+		lcfg_new->lcfg_num = lcfg->lcfg_num;
 		lcfg_new->lcfg_flags = lcfg->lcfg_flags;
 
 		/* XXX Hack to try to remain binary compatible with
@@ -1454,9 +1453,9 @@ int class_config_parse_llog(const struct lu_env *env, struct llog_ctxt *ctxt,
 	struct llog_process_cat_data cd = {
 		.lpcd_first_idx = 0,
 	};
-	struct llog_handle		*llh;
-	llog_cb_t			 callback;
-	int				 rc;
+	struct llog_handle *llh;
+	llog_cb_t callback;
+	int rc;
 
 	CDEBUG(D_INFO, "looking up llog %s\n", name);
 	rc = llog_open(env, ctxt, &llh, NULL, name, LLOG_OPEN_EXISTS);
@@ -1499,10 +1498,10 @@ int class_config_parse_llog(const struct lu_env *env, struct llog_ctxt *ctxt,
 static int class_config_parse_rec(struct llog_rec_hdr *rec, char *buf,
 				  int size)
 {
-	struct lustre_cfg	*lcfg = (struct lustre_cfg *)(rec + 1);
-	char			*ptr = buf;
-	char			*end = buf + size;
-	int			 rc = 0;
+	struct lustre_cfg *lcfg = (struct lustre_cfg *)(rec + 1);
+	char *ptr = buf;
+	char *end = buf + size;
+	int rc = 0;
 
 	LASSERT(rec->lrh_type == OBD_CFG_REC);
 	rc = lustre_cfg_sanity_check(lcfg, rec->lrh_len);
@@ -1549,8 +1548,8 @@ int class_config_dump_handler(const struct lu_env *env,
 			      struct llog_handle *handle,
 			      struct llog_rec_hdr *rec, void *data)
 {
-	char	*outstr;
-	int	 rc = 0;
+	char *outstr;
+	int rc = 0;
 
 	outstr = kzalloc(256, GFP_NOFS);
 	if (!outstr)
@@ -1573,10 +1572,10 @@ int class_config_dump_handler(const struct lu_env *env,
  */
 int class_manual_cleanup(struct obd_device *obd)
 {
-	char		    flags[3] = "";
-	struct lustre_cfg      *lcfg;
-	struct lustre_cfg_bufs  bufs;
-	int		     rc;
+	char flags[3] = "";
+	struct lustre_cfg *lcfg;
+	struct lustre_cfg_bufs bufs;
+	int rc;
 
 	if (!obd) {
 		CERROR("empty cleanup\n");
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_mount.c b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
index eab3216..33aa790 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_mount.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
@@ -150,7 +150,7 @@ static int do_lcfg(char *cfgname, lnet_nid_t nid, int cmd,
 		   char *s1, char *s2, char *s3, char *s4)
 {
 	struct lustre_cfg_bufs bufs;
-	struct lustre_cfg     *lcfg = NULL;
+	struct lustre_cfg *lcfg = NULL;
 	int rc;
 
 	CDEBUG(D_TRACE, "lcfg %s %#x %s %s %s %s\n", cfgname,
@@ -801,8 +801,8 @@ static int lmd_make_exclusion(struct lustre_mount_data *lmd, const char *ptr)
 
 static int lmd_parse_mgssec(struct lustre_mount_data *lmd, char *ptr)
 {
-	char   *tail;
-	int     length;
+	char *tail;
+	int length;
 
 	kfree(lmd->lmd_mgssec);
 	lmd->lmd_mgssec = NULL;
@@ -845,8 +845,8 @@ static int lmd_parse_network(struct lustre_mount_data *lmd, char *ptr)
 
 static int lmd_parse_string(char **handle, char *ptr)
 {
-	char   *tail;
-	int     length;
+	char *tail;
+	int length;
 
 	if (!handle || !ptr)
 		return -EINVAL;
@@ -876,8 +876,8 @@ static int lmd_parse_mgs(struct lustre_mount_data *lmd, char **ptr)
 	lnet_nid_t nid;
 	char *tail = *ptr;
 	char *mgsnid;
-	int   length;
-	int   oldlen = 0;
+	int length;
+	int oldlen = 0;
 
 	/* Find end of nidlist */
 	while (class_parse_nid_quiet(tail, &nid, &tail) == 0)
@@ -1252,4 +1252,3 @@ int lmd_parse(char *options, struct lustre_mount_data *lmd)
 	return -EINVAL;
 }
 EXPORT_SYMBOL(lmd_parse);
-
-- 
1.8.3.1


* [lustre-devel] [PATCH 17/26] obdecho: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (15 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 16/26] obdclass: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 18/26] osc: " James Simmons
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The obdecho code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
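As an illustration of the target layout, here is a minimal, hypothetical
struct (not taken from this patch) written in the tab-aligned style used
below: member names sit on a common tab column and the '*' of a pointer
member stays attached to the name.

struct example_object {
	struct example_object	       *eo_parent;
	unsigned long			eo_flags;
	int				eo_deleted;
};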

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lustre/obdecho/echo_client.c    | 280 ++++++++++-----------
 .../staging/lustre/lustre/obdecho/echo_internal.h  |   4 +-
 2 files changed, 142 insertions(+), 142 deletions(-)

diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
index 4f9dbc4..1ebd985 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
+++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
@@ -52,41 +52,41 @@
  */
 
 struct echo_device {
-	struct cl_device	ed_cl;
-	struct echo_client_obd *ed_ec;
+	struct cl_device		ed_cl;
+	struct echo_client_obd	       *ed_ec;
 
-	struct cl_site	  ed_site_myself;
-	struct lu_site		*ed_site;
-	struct lu_device       *ed_next;
+	struct cl_site			ed_site_myself;
+	struct lu_site		       *ed_site;
+	struct lu_device	       *ed_next;
 };
 
 struct echo_object {
-	struct cl_object	eo_cl;
-	struct cl_object_header eo_hdr;
-
-	struct echo_device     *eo_dev;
-	struct list_head	      eo_obj_chain;
-	struct lov_oinfo       *eo_oinfo;
-	atomic_t	    eo_npages;
-	int		     eo_deleted;
+	struct cl_object		eo_cl;
+	struct cl_object_header		eo_hdr;
+
+	struct echo_device	       *eo_dev;
+	struct list_head		eo_obj_chain;
+	struct lov_oinfo	       *eo_oinfo;
+	atomic_t			eo_npages;
+	int				eo_deleted;
 };
 
 struct echo_object_conf {
-	struct cl_object_conf  eoc_cl;
-	struct lov_oinfo      **eoc_oinfo;
+	struct cl_object_conf		eoc_cl;
+	struct lov_oinfo	      **eoc_oinfo;
 };
 
 struct echo_page {
-	struct cl_page_slice   ep_cl;
-	unsigned long		ep_lock;
+	struct cl_page_slice		ep_cl;
+	unsigned long			ep_lock;
 };
 
 struct echo_lock {
-	struct cl_lock_slice   el_cl;
-	struct list_head	     el_chain;
-	struct echo_object    *el_object;
-	u64		  el_cookie;
-	atomic_t	   el_refcount;
+	struct cl_lock_slice		el_cl;
+	struct list_head		el_chain;
+	struct echo_object	       *el_object;
+	u64				el_cookie;
+	atomic_t			el_refcount;
 };
 
 static int echo_client_setup(const struct lu_env *env,
@@ -159,19 +159,19 @@ static int cl_echo_object_brw(struct echo_object *eco, int rw, u64 offset,
 			      struct page **pages, int npages, int async);
 
 struct echo_thread_info {
-	struct echo_object_conf eti_conf;
-	struct lustre_md	eti_md;
-
-	struct cl_2queue	eti_queue;
-	struct cl_io	    eti_io;
-	struct cl_lock          eti_lock;
-	struct lu_fid	   eti_fid;
-	struct lu_fid		eti_fid2;
+	struct echo_object_conf		eti_conf;
+	struct lustre_md		eti_md;
+
+	struct cl_2queue		eti_queue;
+	struct cl_io			eti_io;
+	struct cl_lock			eti_lock;
+	struct lu_fid			eti_fid;
+	struct lu_fid			eti_fid2;
 };
 
 /* No session used right now */
 struct echo_session_info {
-	unsigned long dummy;
+	unsigned long			dummy;
 };
 
 static struct kmem_cache *echo_lock_kmem;
@@ -288,20 +288,20 @@ static int echo_page_print(const struct lu_env *env,
 }
 
 static const struct cl_page_operations echo_page_ops = {
-	.cpo_own	   = echo_page_own,
-	.cpo_disown	= echo_page_disown,
-	.cpo_discard       = echo_page_discard,
-	.cpo_fini	  = echo_page_fini,
-	.cpo_print	 = echo_page_print,
-	.cpo_is_vmlocked   = echo_page_is_vmlocked,
+	.cpo_own		= echo_page_own,
+	.cpo_disown		= echo_page_disown,
+	.cpo_discard		= echo_page_discard,
+	.cpo_fini		= echo_page_fini,
+	.cpo_print		= echo_page_print,
+	.cpo_is_vmlocked	= echo_page_is_vmlocked,
 	.io = {
 		[CRT_READ] = {
 			.cpo_prep	= echo_page_prep,
-			.cpo_completion  = echo_page_completion,
+			.cpo_completion	= echo_page_completion,
 		},
 		[CRT_WRITE] = {
 			.cpo_prep	= echo_page_prep,
-			.cpo_completion  = echo_page_completion,
+			.cpo_completion	= echo_page_completion,
 		}
 	}
 };
@@ -324,7 +324,7 @@ static void echo_lock_fini(const struct lu_env *env,
 }
 
 static const struct cl_lock_operations echo_lock_ops = {
-	.clo_fini      = echo_lock_fini,
+	.clo_fini		= echo_lock_fini,
 };
 
 /** @} echo_lock */
@@ -383,10 +383,10 @@ static int echo_conf_set(const struct lu_env *env, struct cl_object *obj,
 }
 
 static const struct cl_object_operations echo_cl_obj_ops = {
-	.coo_page_init = echo_page_init,
-	.coo_lock_init = echo_lock_init,
-	.coo_io_init   = echo_io_init,
-	.coo_conf_set  = echo_conf_set
+	.coo_page_init		= echo_page_init,
+	.coo_lock_init		= echo_lock_init,
+	.coo_io_init		= echo_io_init,
+	.coo_conf_set		= echo_conf_set
 };
 
 /** @} echo_cl_ops */
@@ -400,15 +400,15 @@ static int echo_conf_set(const struct lu_env *env, struct cl_object *obj,
 static int echo_object_init(const struct lu_env *env, struct lu_object *obj,
 			    const struct lu_object_conf *conf)
 {
-	struct echo_device *ed	 = cl2echo_dev(lu2cl_dev(obj->lo_dev));
-	struct echo_client_obd *ec     = ed->ed_ec;
+	struct echo_device *ed = cl2echo_dev(lu2cl_dev(obj->lo_dev));
+	struct echo_client_obd *ec = ed->ed_ec;
 	struct echo_object *eco	= cl2echo_obj(lu2cl(obj));
 	const struct cl_object_conf *cconf;
 	struct echo_object_conf *econf;
 
 	if (ed->ed_next) {
-		struct lu_object  *below;
-		struct lu_device  *under;
+		struct lu_object *below;
+		struct lu_device *under;
 
 		under = ed->ed_next;
 		below = under->ld_ops->ldo_object_alloc(env, obj->lo_header,
@@ -442,7 +442,7 @@ static int echo_object_init(const struct lu_env *env, struct lu_object *obj,
 
 static void echo_object_free(const struct lu_env *env, struct lu_object *obj)
 {
-	struct echo_object *eco    = cl2echo_obj(lu2cl(obj));
+	struct echo_object *eco = cl2echo_obj(lu2cl(obj));
 	struct echo_client_obd *ec = eco->eo_dev->ed_ec;
 
 	LASSERT(atomic_read(&eco->eo_npages) == 0);
@@ -467,12 +467,12 @@ static int echo_object_print(const struct lu_env *env, void *cookie,
 }
 
 static const struct lu_object_operations echo_lu_obj_ops = {
-	.loo_object_init      = echo_object_init,
-	.loo_object_delete    = NULL,
-	.loo_object_release   = NULL,
-	.loo_object_free      = echo_object_free,
-	.loo_object_print     = echo_object_print,
-	.loo_object_invariant = NULL
+	.loo_object_init	= echo_object_init,
+	.loo_object_delete	= NULL,
+	.loo_object_release	= NULL,
+	.loo_object_free	= echo_object_free,
+	.loo_object_print	= echo_object_print,
+	.loo_object_invariant	= NULL
 };
 
 /** @} echo_lu_ops */
@@ -504,13 +504,13 @@ static struct lu_object *echo_object_alloc(const struct lu_env *env,
 		lu_object_add_top(&hdr->coh_lu, obj);
 
 		eco->eo_cl.co_ops = &echo_cl_obj_ops;
-		obj->lo_ops       = &echo_lu_obj_ops;
+		obj->lo_ops = &echo_lu_obj_ops;
 	}
 	return obj;
 }
 
 static const struct lu_device_operations echo_device_lu_ops = {
-	.ldo_object_alloc   = echo_object_alloc,
+	.ldo_object_alloc	= echo_object_alloc,
 };
 
 /** @} echo_lu_dev_ops */
@@ -571,9 +571,9 @@ static void echo_thread_key_fini(const struct lu_context *ctx,
 }
 
 static struct lu_context_key echo_thread_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = echo_thread_key_init,
-	.lct_fini = echo_thread_key_fini,
+	.lct_tags		= LCT_CL_THREAD,
+	.lct_init		= echo_thread_key_init,
+	.lct_fini		= echo_thread_key_fini,
 };
 
 static void *echo_session_key_init(const struct lu_context *ctx,
@@ -596,9 +596,9 @@ static void echo_session_key_fini(const struct lu_context *ctx,
 }
 
 static struct lu_context_key echo_session_key = {
-	.lct_tags = LCT_SESSION,
-	.lct_init = echo_session_key_init,
-	.lct_fini = echo_session_key_fini,
+	.lct_tags		= LCT_SESSION,
+	.lct_init		= echo_session_key_init,
+	.lct_fini		= echo_session_key_fini,
 };
 
 LU_TYPE_INIT_FINI(echo, &echo_thread_key, &echo_session_key);
@@ -607,11 +607,11 @@ static struct lu_device *echo_device_alloc(const struct lu_env *env,
 					   struct lu_device_type *t,
 					   struct lustre_cfg *cfg)
 {
-	struct lu_device   *next;
+	struct lu_device *next;
 	struct echo_device *ed;
-	struct cl_device   *cd;
-	struct obd_device  *obd = NULL; /* to keep compiler happy */
-	struct obd_device  *tgt;
+	struct cl_device *cd;
+	struct obd_device *obd = NULL; /* to keep compiler happy */
+	struct obd_device *tgt;
 	const char *tgt_type_name;
 	int rc, err;
 
@@ -729,10 +729,10 @@ static void echo_lock_release(const struct lu_env *env,
 static struct lu_device *echo_device_free(const struct lu_env *env,
 					  struct lu_device *d)
 {
-	struct echo_device     *ed   = cl2echo_dev(lu2cl_dev(d));
-	struct echo_client_obd *ec   = ed->ed_ec;
-	struct echo_object     *eco;
-	struct lu_device       *next = ed->ed_next;
+	struct echo_device *ed = cl2echo_dev(lu2cl_dev(d));
+	struct echo_client_obd *ec = ed->ed_ec;
+	struct echo_object *eco;
+	struct lu_device *next = ed->ed_next;
 
 	CDEBUG(D_INFO, "echo device:%p is going to be freed, next = %p\n",
 	       ed, next);
@@ -786,23 +786,23 @@ static struct lu_device *echo_device_free(const struct lu_env *env,
 }
 
 static const struct lu_device_type_operations echo_device_type_ops = {
-	.ldto_init = echo_type_init,
-	.ldto_fini = echo_type_fini,
+	.ldto_init		= echo_type_init,
+	.ldto_fini		= echo_type_fini,
 
-	.ldto_start = echo_type_start,
-	.ldto_stop  = echo_type_stop,
+	.ldto_start		= echo_type_start,
+	.ldto_stop		= echo_type_stop,
 
-	.ldto_device_alloc = echo_device_alloc,
-	.ldto_device_free  = echo_device_free,
-	.ldto_device_init  = echo_device_init,
-	.ldto_device_fini  = echo_device_fini
+	.ldto_device_alloc	= echo_device_alloc,
+	.ldto_device_free	= echo_device_free,
+	.ldto_device_init	= echo_device_init,
+	.ldto_device_fini	= echo_device_fini
 };
 
 static struct lu_device_type echo_device_type = {
-	.ldt_tags     = LU_DEVICE_CL,
-	.ldt_name     = LUSTRE_ECHO_CLIENT_NAME,
-	.ldt_ops      = &echo_device_type_ops,
-	.ldt_ctx_tags = LCT_CL_THREAD,
+	.ldt_tags		= LU_DEVICE_CL,
+	.ldt_name		= LUSTRE_ECHO_CLIENT_NAME,
+	.ldt_ops		= &echo_device_type_ops,
+	.ldt_ctx_tags		= LCT_CL_THREAD,
 };
 
 /** @} echo_init */
@@ -823,7 +823,7 @@ static struct lu_device *echo_device_free(const struct lu_env *env,
 	struct echo_object_conf *conf;
 	struct lov_oinfo *oinfo = NULL;
 	struct echo_object *eco;
-	struct cl_object   *obj;
+	struct cl_object *obj;
 	struct lu_fid *fid;
 	u16 refcheck;
 	int rc;
@@ -858,7 +858,7 @@ static struct lu_device *echo_device_free(const struct lu_env *env,
 	 */
 	conf->eoc_oinfo = &oinfo;
 
-	fid  = &info->eti_fid;
+	fid = &info->eti_fid;
 	rc = ostid_to_fid(fid, (struct ost_id *)oi, 0);
 	if (rc != 0) {
 		eco = ERR_PTR(rc);
@@ -928,10 +928,10 @@ static int __cl_echo_enqueue(struct lu_env *env, struct echo_object *eco,
 
 	memset(lck, 0, sizeof(*lck));
 	descr = &lck->cll_descr;
-	descr->cld_obj   = obj;
+	descr->cld_obj = obj;
 	descr->cld_start = cl_index(obj, start);
-	descr->cld_end   = cl_index(obj, end);
-	descr->cld_mode  = mode == LCK_PW ? CLM_WRITE : CLM_READ;
+	descr->cld_end = cl_index(obj, end);
+	descr->cld_mode = mode == LCK_PW ? CLM_WRITE : CLM_READ;
 	descr->cld_enq_flags = enqflags;
 	io->ci_obj = obj;
 
@@ -957,7 +957,7 @@ static int __cl_echo_cancel(struct lu_env *env, struct echo_device *ed,
 			    u64 cookie)
 {
 	struct echo_client_obd *ec = ed->ed_ec;
-	struct echo_lock       *ecl = NULL;
+	struct echo_lock *ecl = NULL;
 	int found = 0, still_used = 0;
 
 	spin_lock(&ec->ec_lock);
@@ -997,14 +997,14 @@ static void echo_commit_callback(const struct lu_env *env, struct cl_io *io,
 static int cl_echo_object_brw(struct echo_object *eco, int rw, u64 offset,
 			      struct page **pages, int npages, int async)
 {
-	struct lu_env	   *env;
+	struct lu_env *env;
 	struct echo_thread_info *info;
-	struct cl_object	*obj = echo_obj2cl(eco);
-	struct echo_device      *ed  = eco->eo_dev;
-	struct cl_2queue	*queue;
-	struct cl_io	    *io;
-	struct cl_page	  *clp;
-	struct lustre_handle    lh = { 0 };
+	struct cl_object *obj = echo_obj2cl(eco);
+	struct echo_device *ed  = eco->eo_dev;
+	struct cl_2queue *queue;
+	struct cl_io *io;
+	struct cl_page *clp;
+	struct lustre_handle lh = { 0 };
 	size_t page_size = cl_page_size(obj);
 	u16 refcheck;
 	int rc;
@@ -1016,9 +1016,9 @@ static int cl_echo_object_brw(struct echo_object *eco, int rw, u64 offset,
 	if (IS_ERR(env))
 		return PTR_ERR(env);
 
-	info    = echo_env_info(env);
-	io      = &info->eti_io;
-	queue   = &info->eti_queue;
+	info = echo_env_info(env);
+	io = &info->eti_io;
+	queue = &info->eti_queue;
 
 	cl_2queue_init(queue);
 
@@ -1097,10 +1097,10 @@ static int cl_echo_object_brw(struct echo_object *eco, int rw, u64 offset,
 static int echo_create_object(const struct lu_env *env, struct echo_device *ed,
 			      struct obdo *oa)
 {
-	struct echo_object     *eco;
+	struct echo_object *eco;
 	struct echo_client_obd *ec = ed->ed_ec;
-	int		     rc;
-	int		     created = 0;
+	int rc;
+	int created = 0;
 
 	if (!(oa->o_valid & OBD_MD_FLID) ||
 	    !(oa->o_valid & OBD_MD_FLGROUP) ||
@@ -1133,7 +1133,7 @@ static int echo_create_object(const struct lu_env *env, struct echo_device *ed,
 
 	CDEBUG(D_INFO, "oa oid " DOSTID "\n", POSTID(&oa->o_oi));
 
- failed:
+failed:
 	if (created && rc)
 		obd_destroy(env, ec->ec_exp, oa);
 	if (rc)
@@ -1144,8 +1144,8 @@ static int echo_create_object(const struct lu_env *env, struct echo_device *ed,
 static int echo_get_object(struct echo_object **ecop, struct echo_device *ed,
 			   struct obdo *oa)
 {
-	struct echo_object     *eco;
-	int		     rc;
+	struct echo_object *eco;
+	int rc;
 
 	if (!(oa->o_valid & OBD_MD_FLID) || !(oa->o_valid & OBD_MD_FLGROUP) ||
 	    !ostid_id(&oa->o_oi)) {
@@ -1176,10 +1176,10 @@ static void echo_put_object(struct echo_object *eco)
 echo_client_page_debug_setup(struct page *page, int rw, u64 id,
 			     u64 offset, u64 count)
 {
-	char    *addr;
-	u64	 stripe_off;
-	u64	 stripe_id;
-	int      delta;
+	char *addr;
+	u64 stripe_off;
+	u64 stripe_id;
+	int delta;
 
 	/* no partial pages on the client */
 	LASSERT(count == PAGE_SIZE);
@@ -1204,12 +1204,12 @@ static void echo_put_object(struct echo_object *eco)
 static int echo_client_page_debug_check(struct page *page, u64 id,
 					u64 offset, u64 count)
 {
-	u64	stripe_off;
-	u64	stripe_id;
-	char   *addr;
-	int     delta;
-	int     rc;
-	int     rc2;
+	u64 stripe_off;
+	u64 stripe_id;
+	char *addr;
+	int delta;
+	int rc;
+	int rc2;
 
 	/* no partial pages on the client */
 	LASSERT(count == PAGE_SIZE);
@@ -1237,16 +1237,16 @@ static int echo_client_kbrw(struct echo_device *ed, int rw, struct obdo *oa,
 			    struct echo_object *eco, u64 offset,
 			    u64 count, int async)
 {
-	u32	       npages;
+	u32 npages;
 	struct brw_page	*pga;
 	struct brw_page	*pgp;
-	struct page	    **pages;
-	u64		 off;
-	int		     i;
-	int		     rc;
-	int		     verify;
-	gfp_t		     gfp_mask;
-	int		     brw_flags = 0;
+	struct page **pages;
+	u64 off;
+	int i;
+	int rc;
+	int verify;
+	gfp_t gfp_mask;
+	int brw_flags = 0;
 
 	verify = (ostid_id(&oa->o_oi) != ECHO_PERSISTENT_OBJID &&
 		  (oa->o_valid & OBD_MD_FLFLAGS) != 0 &&
@@ -1301,7 +1301,7 @@ static int echo_client_kbrw(struct echo_device *ed, int rw, struct obdo *oa,
 	LASSERT(ed->ed_next);
 	rc = cl_echo_object_brw(eco, rw, offset, pages, npages, async);
 
- out:
+out:
 	if (rc != 0 || rw != OBD_BRW_READ)
 		verify = 0;
 
@@ -1474,16 +1474,16 @@ static int echo_client_brw_ioctl(const struct lu_env *env, int rw,
 echo_client_iocontrol(unsigned int cmd, struct obd_export *exp, int len,
 		      void *karg, void __user *uarg)
 {
-	struct obd_device      *obd = exp->exp_obd;
-	struct echo_device     *ed = obd2echo_dev(obd);
+	struct obd_device *obd = exp->exp_obd;
+	struct echo_device *ed = obd2echo_dev(obd);
 	struct echo_client_obd *ec = ed->ed_ec;
-	struct echo_object     *eco;
-	struct obd_ioctl_data  *data = karg;
-	struct lu_env	  *env;
-	struct obdo	    *oa;
-	struct lu_fid	   fid;
-	int		     rw = OBD_BRW_READ;
-	int		     rc = 0;
+	struct echo_object *eco;
+	struct obd_ioctl_data *data = karg;
+	struct lu_env *env;
+	struct obdo *oa;
+	struct lu_fid fid;
+	int rw = OBD_BRW_READ;
+	int rc = 0;
 
 	oa = &data->ioc_obdo1;
 	if (!(oa->o_valid & OBD_MD_FLGROUP)) {
@@ -1652,7 +1652,7 @@ static int echo_client_connect(const struct lu_env *env,
 			       struct obd_device *src, struct obd_uuid *cluuid,
 			       struct obd_connect_data *data, void *localdata)
 {
-	int		rc;
+	int rc;
 	struct lustre_handle conn = { 0 };
 
 	rc = class_connect(&conn, src, cluuid);
@@ -1664,7 +1664,7 @@ static int echo_client_connect(const struct lu_env *env,
 
 static int echo_client_disconnect(struct obd_export *exp)
 {
-	int		     rc;
+	int rc;
 
 	if (!exp) {
 		rc = -EINVAL;
@@ -1673,15 +1673,15 @@ static int echo_client_disconnect(struct obd_export *exp)
 
 	rc = class_disconnect(exp);
 	goto out;
- out:
+out:
 	return rc;
 }
 
 static struct obd_ops echo_client_obd_ops = {
-	.owner          = THIS_MODULE,
-	.iocontrol      = echo_client_iocontrol,
-	.connect        = echo_client_connect,
-	.disconnect     = echo_client_disconnect
+	.owner			= THIS_MODULE,
+	.iocontrol		= echo_client_iocontrol,
+	.connect		= echo_client_connect,
+	.disconnect		= echo_client_disconnect
 };
 
 static int echo_client_init(void)
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_internal.h b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
index ac7a209..8094a94 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_internal.h
+++ b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
@@ -33,8 +33,8 @@
 #define _ECHO_INTERNAL_H
 
 /* The persistent object (i.e. actually stores stuff!) */
-#define ECHO_PERSISTENT_OBJID    1ULL
-#define ECHO_PERSISTENT_SIZE     ((u64)(1 << 20))
+#define ECHO_PERSISTENT_OBJID	1ULL
+#define ECHO_PERSISTENT_SIZE	((u64)(1 << 20))
 
 /* block size to use for data verification */
 #define OBD_ECHO_BLOCK_SIZE	(4 << 10)
-- 
1.8.3.1


* [lustre-devel] [PATCH 18/26] osc: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (16 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 17/26] obdecho: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 19/26] ptlrpc: " James Simmons
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The osc code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
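For illustration only, a minimal sketch (hypothetical names, not part of
the osc code) of the tab-aligned designated-initializer style applied to
the ops tables in this patch:

struct example_ops {
	int	(*eop_init)(void);
	void	(*eop_fini)(void);
};

static int example_init(void)
{
	return 0;
}

static void example_fini(void)
{
}

/* initializer values aligned on a single tab column */
static const struct example_ops example_ops = {
	.eop_init		= example_init,
	.eop_fini		= example_fini,
};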

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/osc/lproc_osc.c      |  8 +--
 drivers/staging/lustre/lustre/osc/osc_cache.c      | 61 +++++++++++-----------
 .../staging/lustre/lustre/osc/osc_cl_internal.h    |  4 +-
 drivers/staging/lustre/lustre/osc/osc_dev.c        | 42 +++++++--------
 drivers/staging/lustre/lustre/osc/osc_io.c         | 32 ++++++------
 drivers/staging/lustre/lustre/osc/osc_lock.c       | 30 +++++------
 drivers/staging/lustre/lustre/osc/osc_object.c     | 28 +++++-----
 drivers/staging/lustre/lustre/osc/osc_page.c       |  8 +--
 drivers/staging/lustre/lustre/osc/osc_request.c    | 10 ++--
 9 files changed, 111 insertions(+), 112 deletions(-)

diff --git a/drivers/staging/lustre/lustre/osc/lproc_osc.c b/drivers/staging/lustre/lustre/osc/lproc_osc.c
index 39a2a26..299a69f 100644
--- a/drivers/staging/lustre/lustre/osc/lproc_osc.c
+++ b/drivers/staging/lustre/lustre/osc/lproc_osc.c
@@ -501,7 +501,7 @@ static ssize_t contention_seconds_show(struct kobject *kobj,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct osc_device *od  = obd2osc_dev(obd);
+	struct osc_device *od = obd2osc_dev(obd);
 
 	return sprintf(buf, "%u\n", od->od_contention_time);
 }
@@ -513,7 +513,7 @@ static ssize_t contention_seconds_store(struct kobject *kobj,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct osc_device *od  = obd2osc_dev(obd);
+	struct osc_device *od = obd2osc_dev(obd);
 	unsigned int val;
 	int rc;
 
@@ -533,7 +533,7 @@ static ssize_t lockless_truncate_show(struct kobject *kobj,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct osc_device *od  = obd2osc_dev(obd);
+	struct osc_device *od = obd2osc_dev(obd);
 
 	return sprintf(buf, "%u\n", od->od_lockless_truncate);
 }
@@ -545,7 +545,7 @@ static ssize_t lockless_truncate_store(struct kobject *kobj,
 {
 	struct obd_device *obd = container_of(kobj, struct obd_device,
 					      obd_kset.kobj);
-	struct osc_device *od  = obd2osc_dev(obd);
+	struct osc_device *od = obd2osc_dev(obd);
 	bool val;
 	int rc;
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index bef422c..673e139 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -1304,7 +1304,7 @@ static int osc_make_ready(const struct lu_env *env, struct osc_async_page *oap,
 			  int cmd)
 {
 	struct osc_page *opg = oap2osc_page(oap);
-	struct cl_page  *page = oap2cl_page(oap);
+	struct cl_page *page = oap2cl_page(oap);
 	int result;
 
 	LASSERT(cmd == OBD_BRW_WRITE); /* no cached reads */
@@ -1323,7 +1323,6 @@ static int osc_refresh_count(const struct lu_env *env,
 	pgoff_t index = osc_index(oap2osc(oap));
 	struct cl_object *obj;
 	struct cl_attr *attr = &osc_env_info(env)->oti_attr;
-
 	int result;
 	loff_t kms;
 
@@ -1395,21 +1394,21 @@ static int osc_completion(const struct lu_env *env, struct osc_async_page *oap,
 	return 0;
 }
 
-#define OSC_DUMP_GRANT(lvl, cli, fmt, args...) do {			      \
-	struct client_obd *__tmp = (cli);				      \
+#define OSC_DUMP_GRANT(lvl, cli, fmt, args...) do {			\
+	struct client_obd *__tmp = (cli);				\
 	CDEBUG(lvl, "%s: grant { dirty: %ld/%ld dirty_pages: %ld/%lu "	\
 	       "dropped: %ld avail: %ld, dirty_grant: %ld, "		\
 	       "reserved: %ld, flight: %d } lru {in list: %ld, "	\
 	       "left: %ld, waiters: %d }" fmt "\n",			\
-	       cli_name(__tmp),						      \
-	       __tmp->cl_dirty_pages, __tmp->cl_dirty_max_pages,	      \
-	       atomic_long_read(&obd_dirty_pages), obd_max_dirty_pages,	      \
-	       __tmp->cl_lost_grant, __tmp->cl_avail_grant,		      \
+	       cli_name(__tmp),						\
+	       __tmp->cl_dirty_pages, __tmp->cl_dirty_max_pages,	\
+	       atomic_long_read(&obd_dirty_pages), obd_max_dirty_pages,	\
+	       __tmp->cl_lost_grant, __tmp->cl_avail_grant,		\
 	       __tmp->cl_dirty_grant,					\
-	       __tmp->cl_reserved_grant, __tmp->cl_w_in_flight,		      \
-	       atomic_long_read(&__tmp->cl_lru_in_list),		      \
-	       atomic_long_read(&__tmp->cl_lru_busy),			      \
-	       atomic_read(&__tmp->cl_lru_shrinkers), ##args);		      \
+	       __tmp->cl_reserved_grant, __tmp->cl_w_in_flight,		\
+	       atomic_long_read(&__tmp->cl_lru_in_list),		\
+	       atomic_long_read(&__tmp->cl_lru_busy),			\
+	       atomic_read(&__tmp->cl_lru_shrinkers), ##args);		\
 } while (0)
 
 /* caller must hold loi_list_lock */
@@ -1471,7 +1470,7 @@ static void __osc_unreserve_grant(struct client_obd *cli,
 	cli->cl_reserved_grant -= reserved;
 	if (unused > reserved) {
 		cli->cl_avail_grant += reserved;
-		cli->cl_lost_grant  += unused - reserved;
+		cli->cl_lost_grant += unused - reserved;
 		cli->cl_dirty_grant -= unused - reserved;
 	} else {
 		cli->cl_avail_grant += unused;
@@ -1984,11 +1983,11 @@ static unsigned int get_write_extents(struct osc_object *obj,
 	struct client_obd *cli = osc_cli(obj);
 	struct osc_extent *ext;
 	struct extent_rpc_data data = {
-		.erd_rpc_list = rpclist,
-		.erd_page_count = 0,
-		.erd_max_pages = cli->cl_max_pages_per_rpc,
-		.erd_max_chunks = osc_max_write_chunks(cli),
-		.erd_max_extents = 256,
+		.erd_rpc_list		= rpclist,
+		.erd_page_count		= 0,
+		.erd_max_pages		= cli->cl_max_pages_per_rpc,
+		.erd_max_chunks		= osc_max_write_chunks(cli),
+		.erd_max_extents	= 256,
 	};
 
 	assert_osc_object_is_locked(obj);
@@ -2121,11 +2120,11 @@ static unsigned int get_write_extents(struct osc_object *obj,
 	struct osc_extent *next;
 	LIST_HEAD(rpclist);
 	struct extent_rpc_data data = {
-		.erd_rpc_list = &rpclist,
-		.erd_page_count = 0,
-		.erd_max_pages = cli->cl_max_pages_per_rpc,
-		.erd_max_chunks = UINT_MAX,
-		.erd_max_extents = UINT_MAX,
+		.erd_rpc_list		= &rpclist,
+		.erd_page_count		= 0,
+		.erd_max_pages		= cli->cl_max_pages_per_rpc,
+		.erd_max_chunks		= UINT_MAX,
+		.erd_max_extents	= UINT_MAX,
 	};
 	int rc = 0;
 
@@ -2545,7 +2544,7 @@ int osc_flush_async_page(const struct lu_env *env, struct cl_io *io,
 	struct osc_extent *ext = NULL;
 	struct osc_object *obj = cl2osc(ops->ops_cl.cpl_obj);
 	struct cl_page *cp = ops->ops_cl.cpl_page;
-	pgoff_t            index = osc_index(ops);
+	pgoff_t index = osc_index(ops);
 	struct osc_async_page *oap = &ops->ops_oap;
 	bool unplug = false;
 	int rc = 0;
@@ -3033,13 +3032,13 @@ bool osc_page_gang_lookup(const struct lu_env *env, struct cl_io *io,
 			  osc_page_gang_cbt cb, void *cbdata)
 {
 	struct osc_page *ops;
-	void            **pvec;
-	pgoff_t         idx;
-	unsigned int    nr;
-	unsigned int    i;
-	unsigned int    j;
-	bool            res = true;
-	bool            tree_lock = true;
+	void **pvec;
+	pgoff_t idx;
+	unsigned int nr;
+	unsigned int i;
+	unsigned int j;
+	bool res = true;
+	bool tree_lock = true;
 
 	idx = start;
 	pvec = osc_env_info(env)->oti_pvec;
diff --git a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
index c89c894..8bede94 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
@@ -407,9 +407,9 @@ void osc_io_unplug(const struct lu_env *env, struct client_obd *cli,
 
 void osc_object_set_contended(struct osc_object *obj);
 void osc_object_clear_contended(struct osc_object *obj);
-int  osc_object_is_contended(struct osc_object *obj);
+int osc_object_is_contended(struct osc_object *obj);
 
-int  osc_lock_is_lockless(const struct osc_lock *olck);
+int osc_lock_is_lockless(const struct osc_lock *olck);
 
 /****************************************************************************
  *
diff --git a/drivers/staging/lustre/lustre/osc/osc_dev.c b/drivers/staging/lustre/lustre/osc/osc_dev.c
index c767a3c..997fa35 100644
--- a/drivers/staging/lustre/lustre/osc/osc_dev.c
+++ b/drivers/staging/lustre/lustre/osc/osc_dev.c
@@ -127,9 +127,9 @@ static void osc_key_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key osc_key = {
-	.lct_tags = LCT_CL_THREAD,
-	.lct_init = osc_key_init,
-	.lct_fini = osc_key_fini
+	.lct_tags		= LCT_CL_THREAD,
+	.lct_init		= osc_key_init,
+	.lct_fini		= osc_key_fini
 };
 
 static void *osc_session_init(const struct lu_context *ctx,
@@ -152,9 +152,9 @@ static void osc_session_fini(const struct lu_context *ctx,
 }
 
 struct lu_context_key osc_session_key = {
-	.lct_tags = LCT_SESSION,
-	.lct_init = osc_session_init,
-	.lct_fini = osc_session_fini
+	.lct_tags		= LCT_SESSION,
+	.lct_init		= osc_session_init,
+	.lct_fini		= osc_session_fini
 };
 
 /* type constructor/destructor: osc_type_{init,fini,start,stop}(). */
@@ -167,9 +167,9 @@ static int osc_cl_process_config(const struct lu_env *env,
 }
 
 static const struct lu_device_operations osc_lu_ops = {
-	.ldo_object_alloc      = osc_object_alloc,
-	.ldo_process_config    = osc_cl_process_config,
-	.ldo_recovery_complete = NULL
+	.ldo_object_alloc	= osc_object_alloc,
+	.ldo_process_config	= osc_cl_process_config,
+	.ldo_recovery_complete	= NULL
 };
 
 static int osc_device_init(const struct lu_env *env, struct lu_device *d,
@@ -224,24 +224,24 @@ static struct lu_device *osc_device_alloc(const struct lu_env *env,
 }
 
 static const struct lu_device_type_operations osc_device_type_ops = {
-	.ldto_init = osc_type_init,
-	.ldto_fini = osc_type_fini,
+	.ldto_init		= osc_type_init,
+	.ldto_fini		= osc_type_fini,
 
-	.ldto_start = osc_type_start,
-	.ldto_stop  = osc_type_stop,
+	.ldto_start		= osc_type_start,
+	.ldto_stop		= osc_type_stop,
 
-	.ldto_device_alloc = osc_device_alloc,
-	.ldto_device_free  = osc_device_free,
+	.ldto_device_alloc	= osc_device_alloc,
+	.ldto_device_free	= osc_device_free,
 
-	.ldto_device_init = osc_device_init,
-	.ldto_device_fini = osc_device_fini
+	.ldto_device_init	= osc_device_init,
+	.ldto_device_fini	= osc_device_fini
 };
 
 struct lu_device_type osc_device_type = {
-	.ldt_tags = LU_DEVICE_CL,
-	.ldt_name = LUSTRE_OSC_NAME,
-	.ldt_ops = &osc_device_type_ops,
-	.ldt_ctx_tags = LCT_CL_THREAD
+	.ldt_tags		= LU_DEVICE_CL,
+	.ldt_name		= LUSTRE_OSC_NAME,
+	.ldt_ops		= &osc_device_type_ops,
+	.ldt_ctx_tags		= LCT_CL_THREAD
 };
 
 /** @} osc */
diff --git a/drivers/staging/lustre/lustre/osc/osc_io.c b/drivers/staging/lustre/lustre/osc/osc_io.c
index cf5b3cc..0b9ed01 100644
--- a/drivers/staging/lustre/lustre/osc/osc_io.c
+++ b/drivers/staging/lustre/lustre/osc/osc_io.c
@@ -935,21 +935,21 @@ static void osc_io_end(const struct lu_env *env,
 		[CIT_READ] = {
 			.cio_iter_init	= osc_io_iter_init,
 			.cio_iter_fini	= osc_io_iter_fini,
-			.cio_start  = osc_io_read_start,
-			.cio_fini   = osc_io_fini
+			.cio_start	= osc_io_read_start,
+			.cio_fini	= osc_io_fini
 		},
 		[CIT_WRITE] = {
 			.cio_iter_init	= osc_io_write_iter_init,
 			.cio_iter_fini	= osc_io_write_iter_fini,
-			.cio_start  = osc_io_write_start,
-			.cio_end    = osc_io_end,
-			.cio_fini   = osc_io_fini
+			.cio_start	= osc_io_write_start,
+			.cio_end	= osc_io_end,
+			.cio_fini	= osc_io_fini
 		},
 		[CIT_SETATTR] = {
 			.cio_iter_init	= osc_io_iter_init,
 			.cio_iter_fini	= osc_io_iter_fini,
-			.cio_start  = osc_io_setattr_start,
-			.cio_end    = osc_io_setattr_end
+			.cio_start	= osc_io_setattr_start,
+			.cio_end	= osc_io_setattr_end
 		},
 		[CIT_DATA_VERSION] = {
 			.cio_start	= osc_io_data_version_start,
@@ -958,14 +958,14 @@ static void osc_io_end(const struct lu_env *env,
 		[CIT_FAULT] = {
 			.cio_iter_init	= osc_io_iter_init,
 			.cio_iter_fini	= osc_io_iter_fini,
-			.cio_start  = osc_io_fault_start,
-			.cio_end    = osc_io_end,
-			.cio_fini   = osc_io_fini
+			.cio_start	= osc_io_fault_start,
+			.cio_end	= osc_io_end,
+			.cio_fini	= osc_io_fini
 		},
 		[CIT_FSYNC] = {
-			.cio_start  = osc_io_fsync_start,
-			.cio_end    = osc_io_fsync_end,
-			.cio_fini   = osc_io_fini
+			.cio_start	= osc_io_fsync_start,
+			.cio_end	= osc_io_fsync_end,
+			.cio_fini	= osc_io_fini
 		},
 		[CIT_LADVISE] = {
 			.cio_start	= osc_io_ladvise_start,
@@ -973,12 +973,12 @@ static void osc_io_end(const struct lu_env *env,
 			.cio_fini	= osc_io_fini
 		},
 		[CIT_MISC] = {
-			.cio_fini   = osc_io_fini
+			.cio_fini	= osc_io_fini
 		}
 	},
 	.cio_read_ahead			= osc_io_read_ahead,
-	.cio_submit                 = osc_io_submit,
-	.cio_commit_async           = osc_io_commit_async
+	.cio_submit			= osc_io_submit,
+	.cio_commit_async		= osc_io_commit_async
 };
 
 /*****************************************************************************
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index 06d813e..5a1717c 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -676,11 +676,11 @@ static unsigned long osc_lock_weight(const struct lu_env *env,
  */
 unsigned long osc_ldlm_weigh_ast(struct ldlm_lock *dlmlock)
 {
-	struct lu_env           *env;
-	struct osc_object	*obj;
-	struct osc_lock		*oscl;
-	unsigned long            weight;
-	bool			 found = false;
+	struct lu_env *env;
+	struct osc_object *obj;
+	struct osc_lock	*oscl;
+	unsigned long weight;
+	bool found = false;
 	u16 refcheck;
 
 	might_sleep();
@@ -739,9 +739,9 @@ static void osc_lock_build_einfo(const struct lu_env *env,
 				 struct ldlm_enqueue_info *einfo)
 {
 	einfo->ei_type = LDLM_EXTENT;
-	einfo->ei_mode   = osc_cl_lock2ldlm(lock->cll_descr.cld_mode);
+	einfo->ei_mode = osc_cl_lock2ldlm(lock->cll_descr.cld_mode);
 	einfo->ei_cb_bl = osc_ldlm_blocking_ast;
-	einfo->ei_cb_cp  = ldlm_completion_ast;
+	einfo->ei_cb_cp = ldlm_completion_ast;
 	einfo->ei_cb_gl = osc_ldlm_glimpse_ast;
 	einfo->ei_cbdata = osc; /* value to be put into ->l_ast_data */
 }
@@ -814,9 +814,9 @@ static bool osc_lock_compatible(const struct osc_lock *qing,
 	if (qed->ols_state < OLS_GRANTED)
 		return true;
 
-	if (qed_descr->cld_mode  >= qing_descr->cld_mode &&
+	if (qed_descr->cld_mode >= qing_descr->cld_mode &&
 	    qed_descr->cld_start <= qing_descr->cld_start &&
-	    qed_descr->cld_end   >= qing_descr->cld_end)
+	    qed_descr->cld_end >= qing_descr->cld_end)
 		return true;
 
 	return false;
@@ -865,7 +865,7 @@ static int osc_lock_enqueue_wait(const struct lu_env *env,
 
 		descr = &tmp_oscl->ols_cl.cls_lock->cll_descr;
 		if (descr->cld_start > need->cld_end ||
-		    descr->cld_end   < need->cld_start)
+		    descr->cld_end < need->cld_start)
 			continue;
 
 		/* We're not supposed to give up group lock */
@@ -1053,7 +1053,7 @@ static void osc_lock_detach(const struct lu_env *env, struct osc_lock *olck)
 static void osc_lock_cancel(const struct lu_env *env,
 			    const struct cl_lock_slice *slice)
 {
-	struct osc_object *obj  = cl2osc(slice->cls_obj);
+	struct osc_object *obj = cl2osc(slice->cls_obj);
 	struct osc_lock *oscl = cl2osc_lock(slice);
 
 	LINVRNT(osc_lock_invariant(oscl));
@@ -1078,10 +1078,10 @@ static int osc_lock_print(const struct lu_env *env, void *cookie,
 }
 
 static const struct cl_lock_operations osc_lock_ops = {
-	.clo_fini    = osc_lock_fini,
-	.clo_enqueue = osc_lock_enqueue,
-	.clo_cancel  = osc_lock_cancel,
-	.clo_print   = osc_lock_print,
+	.clo_fini	= osc_lock_fini,
+	.clo_enqueue	= osc_lock_enqueue,
+	.clo_cancel	= osc_lock_cancel,
+	.clo_print	= osc_lock_print,
 };
 
 static void osc_lock_lockless_cancel(const struct lu_env *env,
diff --git a/drivers/staging/lustre/lustre/osc/osc_object.c b/drivers/staging/lustre/lustre/osc/osc_object.c
index 1097380..98a0b6c 100644
--- a/drivers/staging/lustre/lustre/osc/osc_object.c
+++ b/drivers/staging/lustre/lustre/osc/osc_object.c
@@ -198,8 +198,8 @@ static int osc_object_ast_clear(struct ldlm_lock *lock, void *data)
 
 static int osc_object_prune(const struct lu_env *env, struct cl_object *obj)
 {
-	struct osc_object       *osc = cl2osc(obj);
-	struct ldlm_res_id      *resname = &osc_env_info(env)->oti_resname;
+	struct osc_object *osc = cl2osc(obj);
+	struct ldlm_res_id *resname = &osc_env_info(env)->oti_resname;
 
 	/* DLM locks don't hold a reference of osc_object so we have to
 	 * clear it before the object is being destroyed.
@@ -413,23 +413,23 @@ static void osc_req_attr_set(const struct lu_env *env, struct cl_object *obj,
 }
 
 static const struct cl_object_operations osc_ops = {
-	.coo_page_init = osc_page_init,
-	.coo_lock_init = osc_lock_init,
-	.coo_io_init   = osc_io_init,
-	.coo_attr_get  = osc_attr_get,
-	.coo_attr_update = osc_attr_update,
-	.coo_glimpse   = osc_object_glimpse,
-	.coo_prune	 = osc_object_prune,
+	.coo_page_init		= osc_page_init,
+	.coo_lock_init		= osc_lock_init,
+	.coo_io_init		= osc_io_init,
+	.coo_attr_get		= osc_attr_get,
+	.coo_attr_update	= osc_attr_update,
+	.coo_glimpse		= osc_object_glimpse,
+	.coo_prune		= osc_object_prune,
 	.coo_fiemap		= osc_object_fiemap,
 	.coo_req_attr_set	= osc_req_attr_set
 };
 
 static const struct lu_object_operations osc_lu_obj_ops = {
-	.loo_object_init      = osc_object_init,
-	.loo_object_release   = NULL,
-	.loo_object_free      = osc_object_free,
-	.loo_object_print     = osc_object_print,
-	.loo_object_invariant = NULL
+	.loo_object_init	= osc_object_init,
+	.loo_object_release	= NULL,
+	.loo_object_free	= osc_object_free,
+	.loo_object_print	= osc_object_print,
+	.loo_object_invariant	= NULL
 };
 
 struct lu_object *osc_object_alloc(const struct lu_env *env,
diff --git a/drivers/staging/lustre/lustre/osc/osc_page.c b/drivers/staging/lustre/lustre/osc/osc_page.c
index e0187fa..71f5485 100644
--- a/drivers/staging/lustre/lustre/osc/osc_page.c
+++ b/drivers/staging/lustre/lustre/osc/osc_page.c
@@ -227,10 +227,10 @@ static int osc_page_flush(const struct lu_env *env,
 }
 
 static const struct cl_page_operations osc_page_ops = {
-	.cpo_print	 = osc_page_print,
+	.cpo_print	= osc_page_print,
 	.cpo_delete	= osc_page_delete,
-	.cpo_clip	   = osc_page_clip,
-	.cpo_flush	  = osc_page_flush
+	.cpo_clip	= osc_page_clip,
+	.cpo_flush	= osc_page_flush
 };
 
 int osc_page_init(const struct lu_env *env, struct cl_object *obj,
@@ -931,7 +931,7 @@ void osc_dec_unstable_pages(struct ptlrpc_request *req)
  */
 void osc_inc_unstable_pages(struct ptlrpc_request *req)
 {
-	struct client_obd *cli  = &req->rq_import->imp_obd->u.cli;
+	struct client_obd *cli = &req->rq_import->imp_obd->u.cli;
 	struct ptlrpc_bulk_desc *desc = req->rq_bulk;
 	long page_count = desc->bd_iov_count;
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index e92c8ac..86f9de6 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -162,7 +162,7 @@ static int osc_getattr(const struct lu_env *env, struct obd_export *exp,
 	oa->o_blksize = cli_brw_size(exp->exp_obd);
 	oa->o_valid |= OBD_MD_FLBLKSZ;
 
- out:
+out:
 	ptlrpc_req_finished(req);
 	return rc;
 }
@@ -1322,7 +1322,7 @@ static int osc_brw_prep_request(int cmd, struct client_obd *cli,
 
 	return 0;
 
- out:
+out:
 	ptlrpc_req_finished(req);
 	return rc;
 }
@@ -1753,7 +1753,7 @@ static int brw_interpret(const struct lu_env *env,
 
 	if (rc == 0) {
 		struct obdo *oa = aa->aa_oa;
-		struct cl_attr *attr  = &osc_env_info(env)->oti_attr;
+		struct cl_attr *attr = &osc_env_info(env)->oti_attr;
 		unsigned long valid = 0;
 		struct cl_object *obj;
 		struct osc_async_page *last;
@@ -2260,7 +2260,7 @@ int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 			lustre_handle_copy(&aa->oa_lockh, &lockh);
 			aa->oa_upcall = upcall;
 			aa->oa_cookie = cookie;
-			aa->oa_agl    = !!agl;
+			aa->oa_agl = !!agl;
 			if (!agl) {
 				aa->oa_flags = flags;
 				aa->oa_lvb = lvb;
@@ -2474,7 +2474,7 @@ static int osc_statfs(const struct lu_env *env, struct obd_export *exp,
 
 	*osfs = *msfs;
 
- out:
+out:
 	ptlrpc_req_finished(req);
 	return rc;
 }
-- 
1.8.3.1


* [lustre-devel] [PATCH 19/26] ptlrpc: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (17 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 18/26] osc: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-02-04  3:18   ` NeilBrown
  2019-01-31 17:19 ` [lustre-devel] [PATCH 20/26] lustre: first batch to cleanup white spaces in internal headers James Simmons
                   ` (7 subsequent siblings)
  26 siblings, 1 reply; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The ptlrpc code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.
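As a rough sketch of the local declaration style after this cleanup
(hypothetical helper, not taken from ptlrpc): one space between type and
name, the '*' attached to the name, and no padding columns.

static int example_sum(const int *vals, int nr)
{
	const int *end = vals + nr;
	int sum = 0;

	while (vals < end)
		sum += *vals++;

	return sum;
}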

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ptlrpc/client.c      |  45 ++--
 drivers/staging/lustre/lustre/ptlrpc/import.c      |   2 +-
 drivers/staging/lustre/lustre/ptlrpc/layout.c      |   3 -
 .../staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c    | 278 ++++++++++-----------
 drivers/staging/lustre/lustre/ptlrpc/niobuf.c      |   7 +-
 drivers/staging/lustre/lustre/ptlrpc/nrs.c         |   1 -
 .../staging/lustre/lustre/ptlrpc/ptlrpc_internal.h |  14 +-
 drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |  20 +-
 drivers/staging/lustre/lustre/ptlrpc/recover.c     |   1 +
 drivers/staging/lustre/lustre/ptlrpc/sec.c         |   4 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c    |  74 +++---
 drivers/staging/lustre/lustre/ptlrpc/sec_config.c  |  22 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_null.c    |  34 +--
 drivers/staging/lustre/lustre/ptlrpc/sec_plain.c   |  68 ++---
 drivers/staging/lustre/lustre/ptlrpc/service.c     |  20 +-
 15 files changed, 293 insertions(+), 300 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ptlrpc/client.c b/drivers/staging/lustre/lustre/ptlrpc/client.c
index f4b3875..0831810 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/client.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/client.c
@@ -49,14 +49,14 @@
 #include "ptlrpc_internal.h"
 
 const struct ptlrpc_bulk_frag_ops ptlrpc_bulk_kiov_pin_ops = {
-	.add_kiov_frag	= ptlrpc_prep_bulk_page_pin,
-	.release_frags	= ptlrpc_release_bulk_page_pin,
+	.add_kiov_frag		= ptlrpc_prep_bulk_page_pin,
+	.release_frags		= ptlrpc_release_bulk_page_pin,
 };
 EXPORT_SYMBOL(ptlrpc_bulk_kiov_pin_ops);
 
 const struct ptlrpc_bulk_frag_ops ptlrpc_bulk_kiov_nopin_ops = {
-	.add_kiov_frag	= ptlrpc_prep_bulk_page_nopin,
-	.release_frags	= NULL,
+	.add_kiov_frag		= ptlrpc_prep_bulk_page_nopin,
+	.release_frags		= NULL,
 };
 EXPORT_SYMBOL(ptlrpc_bulk_kiov_nopin_ops);
 
@@ -658,15 +658,14 @@ static void __ptlrpc_free_req_to_pool(struct ptlrpc_request *request)
 
 void ptlrpc_add_unreplied(struct ptlrpc_request *req)
 {
-	struct obd_import	*imp = req->rq_import;
-	struct ptlrpc_request	*iter;
+	struct obd_import *imp = req->rq_import;
+	struct ptlrpc_request *iter;
 
 	assert_spin_locked(&imp->imp_lock);
 	LASSERT(list_empty(&req->rq_unreplied_list));
 
 	/* unreplied list is sorted by xid in ascending order */
 	list_for_each_entry_reverse(iter, &imp->imp_unreplied_list, rq_unreplied_list) {
-
 		LASSERT(req->rq_xid != iter->rq_xid);
 		if (req->rq_xid < iter->rq_xid)
 			continue;
@@ -1318,10 +1317,10 @@ static int after_reply(struct ptlrpc_request *req)
 		 * reply).  NB: no need to round up because alloc_repbuf will
 		 * round it up
 		 */
-		req->rq_replen       = req->rq_nob_received;
+		req->rq_replen = req->rq_nob_received;
 		req->rq_nob_received = 0;
 		spin_lock(&req->rq_lock);
-		req->rq_resend       = 1;
+		req->rq_resend = 1;
 		spin_unlock(&req->rq_lock);
 		return 0;
 	}
@@ -1359,7 +1358,7 @@ static int after_reply(struct ptlrpc_request *req)
 		spin_unlock(&req->rq_lock);
 		req->rq_nr_resend++;
 
-		/* Readjust the timeout for current conditions */
+		/* Read just the timeout for current conditions */
 		ptlrpc_at_set_req_timeout(req);
 		/*
 		 * delay resend to give a chance to the server to get ready.
@@ -1620,7 +1619,7 @@ static inline int ptlrpc_set_producer(struct ptlrpc_request_set *set)
 		rc = set->set_producer(set, set->set_producer_arg);
 		if (rc == -ENOENT) {
 			/* no more RPC to produce */
-			set->set_producer     = NULL;
+			set->set_producer = NULL;
 			set->set_producer_arg = NULL;
 			return 0;
 		}
@@ -1654,7 +1653,7 @@ int ptlrpc_check_set(const struct lu_env *env, struct ptlrpc_request_set *set)
 
 		/*
 		 * This schedule point is mainly for the ptlrpcd caller of this
-		 * function.  Most ptlrpc sets are not long-lived and unbounded
+		 * function. Most ptlrpc sets are not long-lived and unbounded
 		 * in length, but at the least the set used by the ptlrpcd is.
 		 * Since the processing time is unbounded, we need to insert an
 		 * explicit schedule point to make the thread well-behaved.
@@ -2130,7 +2129,6 @@ void ptlrpc_expired_set(struct ptlrpc_request_set *set)
 
 	/* A timeout expired. See which reqs it applies to...  */
 	list_for_each_entry(req, &set->set_requests, rq_set_chain) {
-
 		/* don't expire request waiting for context */
 		if (req->rq_wait_ctx)
 			continue;
@@ -2185,7 +2183,6 @@ int ptlrpc_set_next_timeout(struct ptlrpc_request_set *set)
 	time64_t deadline;
 
 	list_for_each_entry(req, &set->set_requests, rq_set_chain) {
-
 		/* Request in-flight? */
 		if (!(((req->rq_phase == RQ_PHASE_RPC) && !req->rq_waiting) ||
 		      (req->rq_phase == RQ_PHASE_BULK) ||
@@ -2568,7 +2565,7 @@ static void ptlrpc_free_request(struct ptlrpc_request *req)
  */
 void ptlrpc_request_committed(struct ptlrpc_request *req, int force)
 {
-	struct obd_import	*imp = req->rq_import;
+	struct obd_import *imp = req->rq_import;
 
 	spin_lock(&imp->imp_lock);
 	if (list_empty(&req->rq_replay_list)) {
@@ -2896,7 +2893,7 @@ static int ptlrpc_replay_interpret(const struct lu_env *env,
 
 	/* continue with recovery */
 	rc = ptlrpc_import_recovery_state_machine(imp);
- out:
+out:
 	req->rq_send_state = aa->praa_old_state;
 
 	if (rc != 0)
@@ -3031,7 +3028,7 @@ void ptlrpc_abort_set(struct ptlrpc_request_set *set)
 /**
  * Initialize the XID for the node.  This is common among all requests on
  * this node, and only requires the property that it is monotonically
- * increasing.  It does not need to be sequential.  Since this is also used
+ * increasing. It does not need to be sequential.  Since this is also used
  * as the RDMA match bits, it is important that a single client NOT have
  * the same match bits for two different in-flight requests, hence we do
  * NOT want to have an XID per target or similar.
@@ -3198,12 +3195,12 @@ struct ptlrpc_work_async_args {
 static void ptlrpcd_add_work_req(struct ptlrpc_request *req)
 {
 	/* re-initialize the req */
-	req->rq_timeout		= obd_timeout;
-	req->rq_sent		= ktime_get_real_seconds();
-	req->rq_deadline	= req->rq_sent + req->rq_timeout;
-	req->rq_phase		= RQ_PHASE_INTERPRET;
-	req->rq_next_phase	= RQ_PHASE_COMPLETE;
-	req->rq_xid		= ptlrpc_next_xid();
+	req->rq_timeout	= obd_timeout;
+	req->rq_sent = ktime_get_real_seconds();
+	req->rq_deadline = req->rq_sent + req->rq_timeout;
+	req->rq_phase = RQ_PHASE_INTERPRET;
+	req->rq_next_phase = RQ_PHASE_COMPLETE;
+	req->rq_xid = ptlrpc_next_xid();
 	req->rq_import_generation = req->rq_import->imp_generation;
 
 	ptlrpcd_add_req(req);
@@ -3241,7 +3238,7 @@ static int ptlrpcd_check_work(struct ptlrpc_request *req)
 void *ptlrpcd_alloc_work(struct obd_import *imp,
 			 int (*cb)(const struct lu_env *, void *), void *cbdata)
 {
-	struct ptlrpc_request	 *req = NULL;
+	struct ptlrpc_request *req = NULL;
 	struct ptlrpc_work_async_args *args;
 
 	might_sleep();
diff --git a/drivers/staging/lustre/lustre/ptlrpc/import.c b/drivers/staging/lustre/lustre/ptlrpc/import.c
index 56a0b76..7bb2e06 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/import.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/import.c
@@ -51,7 +51,7 @@
 #include "ptlrpc_internal.h"
 
 struct ptlrpc_connect_async_args {
-	 u64 pcaa_peer_committed;
+	u64 pcaa_peer_committed;
 	int pcaa_initial_connect;
 };
 
diff --git a/drivers/staging/lustre/lustre/ptlrpc/layout.c b/drivers/staging/lustre/lustre/ptlrpc/layout.c
index 2848f2f..f1f7d70 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/layout.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/layout.c
@@ -1907,9 +1907,7 @@ static void *__req_capsule_get(struct req_capsule *pill,
 	void *value;
 	u32 len;
 	u32 offset;
-
 	void *(*getter)(struct lustre_msg *m, u32 n, u32 minlen);
-
 	static const char *rcl_names[RCL_NR] = {
 		[RCL_CLIENT] = "client",
 		[RCL_SERVER] = "server"
@@ -2176,7 +2174,6 @@ void req_capsule_extend(struct req_capsule *pill, const struct req_format *fmt)
 {
 	int i;
 	size_t j;
-
 	const struct req_format *old;
 
 	LASSERT(pill->rc_fmt);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
index 92e3e0f..25858b8 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
@@ -42,115 +42,115 @@
 #include "ptlrpc_internal.h"
 
 static struct ll_rpc_opcode {
-	u32       opcode;
-	const char *opname;
+	u32				opcode;
+	const char			*opname;
 } ll_rpc_opcode_table[LUSTRE_MAX_OPCODES] = {
-	{ OST_REPLY,	"ost_reply" },
-	{ OST_GETATTR,      "ost_getattr" },
-	{ OST_SETATTR,      "ost_setattr" },
-	{ OST_READ,	 "ost_read" },
-	{ OST_WRITE,	"ost_write" },
-	{ OST_CREATE,       "ost_create" },
-	{ OST_DESTROY,      "ost_destroy" },
-	{ OST_GET_INFO,     "ost_get_info" },
-	{ OST_CONNECT,      "ost_connect" },
-	{ OST_DISCONNECT,   "ost_disconnect" },
-	{ OST_PUNCH,	"ost_punch" },
-	{ OST_OPEN,	 "ost_open" },
-	{ OST_CLOSE,	"ost_close" },
-	{ OST_STATFS,       "ost_statfs" },
-	{ 14,		NULL },    /* formerly OST_SAN_READ */
-	{ 15,		NULL },    /* formerly OST_SAN_WRITE */
-	{ OST_SYNC,	 "ost_sync" },
-	{ OST_SET_INFO,     "ost_set_info" },
-	{ OST_QUOTACHECK,   "ost_quotacheck" },
-	{ OST_QUOTACTL,     "ost_quotactl" },
-	{ OST_QUOTA_ADJUST_QUNIT, "ost_quota_adjust_qunit" },
-	{ OST_LADVISE,			"ost_ladvise" },
-	{ MDS_GETATTR,      "mds_getattr" },
-	{ MDS_GETATTR_NAME, "mds_getattr_lock" },
-	{ MDS_CLOSE,	"mds_close" },
-	{ MDS_REINT,	"mds_reint" },
-	{ MDS_READPAGE,     "mds_readpage" },
-	{ MDS_CONNECT,      "mds_connect" },
-	{ MDS_DISCONNECT,   "mds_disconnect" },
-	{ MDS_GET_ROOT,			"mds_get_root" },
-	{ MDS_STATFS,       "mds_statfs" },
-	{ MDS_PIN,	  "mds_pin" },
-	{ MDS_UNPIN,	"mds_unpin" },
-	{ MDS_SYNC,	 "mds_sync" },
-	{ MDS_DONE_WRITING, "mds_done_writing" },
-	{ MDS_SET_INFO,     "mds_set_info" },
-	{ MDS_QUOTACHECK,   "mds_quotacheck" },
-	{ MDS_QUOTACTL,     "mds_quotactl" },
-	{ MDS_GETXATTR,     "mds_getxattr" },
-	{ MDS_SETXATTR,     "mds_setxattr" },
-	{ MDS_WRITEPAGE,    "mds_writepage" },
-	{ MDS_IS_SUBDIR,    "mds_is_subdir" },
-	{ MDS_GET_INFO,     "mds_get_info" },
-	{ MDS_HSM_STATE_GET, "mds_hsm_state_get" },
-	{ MDS_HSM_STATE_SET, "mds_hsm_state_set" },
-	{ MDS_HSM_ACTION,   "mds_hsm_action" },
-	{ MDS_HSM_PROGRESS, "mds_hsm_progress" },
-	{ MDS_HSM_REQUEST,  "mds_hsm_request" },
-	{ MDS_HSM_CT_REGISTER, "mds_hsm_ct_register" },
-	{ MDS_HSM_CT_UNREGISTER, "mds_hsm_ct_unregister" },
-	{ MDS_SWAP_LAYOUTS,	"mds_swap_layouts" },
-	{ LDLM_ENQUEUE,     "ldlm_enqueue" },
-	{ LDLM_CONVERT,     "ldlm_convert" },
-	{ LDLM_CANCEL,      "ldlm_cancel" },
-	{ LDLM_BL_CALLBACK, "ldlm_bl_callback" },
-	{ LDLM_CP_CALLBACK, "ldlm_cp_callback" },
-	{ LDLM_GL_CALLBACK, "ldlm_gl_callback" },
-	{ LDLM_SET_INFO,    "ldlm_set_info" },
-	{ MGS_CONNECT,      "mgs_connect" },
-	{ MGS_DISCONNECT,   "mgs_disconnect" },
-	{ MGS_EXCEPTION,    "mgs_exception" },
-	{ MGS_TARGET_REG,   "mgs_target_reg" },
-	{ MGS_TARGET_DEL,   "mgs_target_del" },
-	{ MGS_SET_INFO,     "mgs_set_info" },
-	{ MGS_CONFIG_READ,  "mgs_config_read" },
-	{ OBD_PING,	 "obd_ping" },
-	{ OBD_LOG_CANCEL,	"llog_cancel" },
-	{ OBD_QC_CALLBACK,  "obd_quota_callback" },
-	{ OBD_IDX_READ,	    "dt_index_read" },
-	{ LLOG_ORIGIN_HANDLE_CREATE,	 "llog_origin_handle_open" },
-	{ LLOG_ORIGIN_HANDLE_NEXT_BLOCK, "llog_origin_handle_next_block" },
-	{ LLOG_ORIGIN_HANDLE_READ_HEADER, "llog_origin_handle_read_header" },
-	{ LLOG_ORIGIN_HANDLE_WRITE_REC,  "llog_origin_handle_write_rec" },
-	{ LLOG_ORIGIN_HANDLE_CLOSE,      "llog_origin_handle_close" },
-	{ LLOG_ORIGIN_CONNECT,	   "llog_origin_connect" },
-	{ LLOG_CATINFO,		  "llog_catinfo" },
-	{ LLOG_ORIGIN_HANDLE_PREV_BLOCK, "llog_origin_handle_prev_block" },
-	{ LLOG_ORIGIN_HANDLE_DESTROY,    "llog_origin_handle_destroy" },
-	{ QUOTA_DQACQ,      "quota_acquire" },
-	{ QUOTA_DQREL,      "quota_release" },
-	{ SEQ_QUERY,	"seq_query" },
-	{ SEC_CTX_INIT,     "sec_ctx_init" },
-	{ SEC_CTX_INIT_CONT, "sec_ctx_init_cont" },
-	{ SEC_CTX_FINI,     "sec_ctx_fini" },
-	{ FLD_QUERY,	"fld_query" },
-	{ FLD_READ,	"fld_read" },
+	{ OST_REPLY,				"ost_reply" },
+	{ OST_GETATTR,				"ost_getattr" },
+	{ OST_SETATTR,				"ost_setattr" },
+	{ OST_READ,				"ost_read" },
+	{ OST_WRITE,				"ost_write" },
+	{ OST_CREATE,				"ost_create" },
+	{ OST_DESTROY,				"ost_destroy" },
+	{ OST_GET_INFO,				"ost_get_info" },
+	{ OST_CONNECT,				"ost_connect" },
+	{ OST_DISCONNECT,			"ost_disconnect" },
+	{ OST_PUNCH,				"ost_punch" },
+	{ OST_OPEN,				"ost_open" },
+	{ OST_CLOSE,				"ost_close" },
+	{ OST_STATFS,				"ost_statfs" },
+	{ 14,					NULL },	/* formerly OST_SAN_READ */
+	{ 15,					NULL }, /* formerly OST_SAN_WRITE */
+	{ OST_SYNC,				"ost_sync" },
+	{ OST_SET_INFO,				"ost_set_info" },
+	{ OST_QUOTACHECK,			"ost_quotacheck" },
+	{ OST_QUOTACTL,				"ost_quotactl" },
+	{ OST_QUOTA_ADJUST_QUNIT,		"ost_quota_adjust_qunit" },
+	{ OST_LADVISE,				"ost_ladvise" },
+	{ MDS_GETATTR,				"mds_getattr" },
+	{ MDS_GETATTR_NAME,			"mds_getattr_lock" },
+	{ MDS_CLOSE,				"mds_close" },
+	{ MDS_REINT,				"mds_reint" },
+	{ MDS_READPAGE,				"mds_readpage" },
+	{ MDS_CONNECT,				"mds_connect" },
+	{ MDS_DISCONNECT,			"mds_disconnect" },
+	{ MDS_GET_ROOT,				"mds_get_root" },
+	{ MDS_STATFS,				"mds_statfs" },
+	{ MDS_PIN,				"mds_pin" },
+	{ MDS_UNPIN,				"mds_unpin" },
+	{ MDS_SYNC,				"mds_sync" },
+	{ MDS_DONE_WRITING,			"mds_done_writing" },
+	{ MDS_SET_INFO,				"mds_set_info" },
+	{ MDS_QUOTACHECK,			"mds_quotacheck" },
+	{ MDS_QUOTACTL,				"mds_quotactl" },
+	{ MDS_GETXATTR,				"mds_getxattr" },
+	{ MDS_SETXATTR,				"mds_setxattr" },
+	{ MDS_WRITEPAGE,			"mds_writepage" },
+	{ MDS_IS_SUBDIR,			"mds_is_subdir" },
+	{ MDS_GET_INFO,				"mds_get_info" },
+	{ MDS_HSM_STATE_GET,			"mds_hsm_state_get" },
+	{ MDS_HSM_STATE_SET,			"mds_hsm_state_set" },
+	{ MDS_HSM_ACTION,			"mds_hsm_action" },
+	{ MDS_HSM_PROGRESS,			"mds_hsm_progress" },
+	{ MDS_HSM_REQUEST,			"mds_hsm_request" },
+	{ MDS_HSM_CT_REGISTER,			"mds_hsm_ct_register" },
+	{ MDS_HSM_CT_UNREGISTER,		"mds_hsm_ct_unregister" },
+	{ MDS_SWAP_LAYOUTS,			"mds_swap_layouts" },
+	{ LDLM_ENQUEUE,				"ldlm_enqueue" },
+	{ LDLM_CONVERT,				"ldlm_convert" },
+	{ LDLM_CANCEL,				"ldlm_cancel" },
+	{ LDLM_BL_CALLBACK,			"ldlm_bl_callback" },
+	{ LDLM_CP_CALLBACK,			"ldlm_cp_callback" },
+	{ LDLM_GL_CALLBACK,			"ldlm_gl_callback" },
+	{ LDLM_SET_INFO,			"ldlm_set_info" },
+	{ MGS_CONNECT,				"mgs_connect" },
+	{ MGS_DISCONNECT,			"mgs_disconnect" },
+	{ MGS_EXCEPTION,			"mgs_exception" },
+	{ MGS_TARGET_REG,			"mgs_target_reg" },
+	{ MGS_TARGET_DEL,			"mgs_target_del" },
+	{ MGS_SET_INFO,				"mgs_set_info" },
+	{ MGS_CONFIG_READ,			"mgs_config_read" },
+	{ OBD_PING,				"obd_ping" },
+	{ OBD_LOG_CANCEL,			"llog_cancel" },
+	{ OBD_QC_CALLBACK,			"obd_quota_callback" },
+	{ OBD_IDX_READ,				"dt_index_read" },
+	{ LLOG_ORIGIN_HANDLE_CREATE,		 "llog_origin_handle_open" },
+	{ LLOG_ORIGIN_HANDLE_NEXT_BLOCK,	"llog_origin_handle_next_block" },
+	{ LLOG_ORIGIN_HANDLE_READ_HEADER,	"llog_origin_handle_read_header" },
+	{ LLOG_ORIGIN_HANDLE_WRITE_REC,		"llog_origin_handle_write_rec" },
+	{ LLOG_ORIGIN_HANDLE_CLOSE,		"llog_origin_handle_close" },
+	{ LLOG_ORIGIN_CONNECT,			"llog_origin_connect" },
+	{ LLOG_CATINFO,				"llog_catinfo" },
+	{ LLOG_ORIGIN_HANDLE_PREV_BLOCK,	"llog_origin_handle_prev_block" },
+	{ LLOG_ORIGIN_HANDLE_DESTROY,		"llog_origin_handle_destroy" },
+	{ QUOTA_DQACQ,				"quota_acquire" },
+	{ QUOTA_DQREL,				"quota_release" },
+	{ SEQ_QUERY,				"seq_query" },
+	{ SEC_CTX_INIT,				"sec_ctx_init" },
+	{ SEC_CTX_INIT_CONT,			"sec_ctx_init_cont" },
+	{ SEC_CTX_FINI,				"sec_ctx_fini" },
+	{ FLD_QUERY,				"fld_query" },
+	{ FLD_READ,				"fld_read" },
 };
 
 static struct ll_eopcode {
-	u32       opcode;
-	const char *opname;
+	u32			opcode;
+	const char		*opname;
 } ll_eopcode_table[EXTRA_LAST_OPC] = {
-	{ LDLM_GLIMPSE_ENQUEUE, "ldlm_glimpse_enqueue" },
-	{ LDLM_PLAIN_ENQUEUE,   "ldlm_plain_enqueue" },
-	{ LDLM_EXTENT_ENQUEUE,  "ldlm_extent_enqueue" },
-	{ LDLM_FLOCK_ENQUEUE,   "ldlm_flock_enqueue" },
-	{ LDLM_IBITS_ENQUEUE,   "ldlm_ibits_enqueue" },
-	{ MDS_REINT_SETATTR,    "mds_reint_setattr" },
-	{ MDS_REINT_CREATE,     "mds_reint_create" },
-	{ MDS_REINT_LINK,       "mds_reint_link" },
-	{ MDS_REINT_UNLINK,     "mds_reint_unlink" },
-	{ MDS_REINT_RENAME,     "mds_reint_rename" },
-	{ MDS_REINT_OPEN,       "mds_reint_open" },
-	{ MDS_REINT_SETXATTR,   "mds_reint_setxattr" },
-	{ BRW_READ_BYTES,       "read_bytes" },
-	{ BRW_WRITE_BYTES,      "write_bytes" },
+	{ LDLM_GLIMPSE_ENQUEUE,			"ldlm_glimpse_enqueue" },
+	{ LDLM_PLAIN_ENQUEUE,			"ldlm_plain_enqueue" },
+	{ LDLM_EXTENT_ENQUEUE,			"ldlm_extent_enqueue" },
+	{ LDLM_FLOCK_ENQUEUE,			"ldlm_flock_enqueue" },
+	{ LDLM_IBITS_ENQUEUE,			"ldlm_ibits_enqueue" },
+	{ MDS_REINT_SETATTR,			"mds_reint_setattr" },
+	{ MDS_REINT_CREATE,			"mds_reint_create" },
+	{ MDS_REINT_LINK,			"mds_reint_link" },
+	{ MDS_REINT_UNLINK,			"mds_reint_unlink" },
+	{ MDS_REINT_RENAME,			"mds_reint_rename" },
+	{ MDS_REINT_OPEN,			"mds_reint_open" },
+	{ MDS_REINT_SETXATTR,			"mds_reint_setxattr" },
+	{ BRW_READ_BYTES,			"read_bytes" },
+	{ BRW_WRITE_BYTES,			"write_bytes" },
 };
 
 const char *ll_opcode2str(u32 opcode)
@@ -450,13 +450,13 @@ static void nrs_policy_get_info_locked(struct ptlrpc_nrs_policy *policy,
 
 	memcpy(info->pi_name, policy->pol_desc->pd_name, NRS_POL_NAME_MAX);
 
-	info->pi_fallback    = !!(policy->pol_flags & PTLRPC_NRS_FL_FALLBACK);
-	info->pi_state	     = policy->pol_state;
+	info->pi_fallback = !!(policy->pol_flags & PTLRPC_NRS_FL_FALLBACK);
+	info->pi_state = policy->pol_state;
 	/**
 	 * XXX: These are accessed without holding
 	 * ptlrpc_service_part::scp_req_lock.
 	 */
-	info->pi_req_queued  = policy->pol_req_queued;
+	info->pi_req_queued = policy->pol_req_queued;
 	info->pi_req_started = policy->pol_req_started;
 }
 
@@ -788,18 +788,18 @@ struct ptlrpc_srh_iterator {
 /* convert position to sequence */
 #define PTLRPC_REQ_POS2SEQ(svc, pos)			\
 	((svc)->srv_cpt_bits == 0 ? (pos) :		\
-	 ((u64)(pos) << (svc)->srv_cpt_bits) |	\
+	 ((u64)(pos) << (svc)->srv_cpt_bits) |		\
 	 ((u64)(pos) >> (64 - (svc)->srv_cpt_bits)))
 
 static void *
 ptlrpc_lprocfs_svc_req_history_start(struct seq_file *s, loff_t *pos)
 {
-	struct ptlrpc_service		*svc = s->private;
-	struct ptlrpc_service_part	*svcpt;
-	struct ptlrpc_srh_iterator	*srhi;
-	unsigned int			cpt;
-	int				rc;
-	int				i;
+	struct ptlrpc_service *svc = s->private;
+	struct ptlrpc_service_part *svcpt;
+	struct ptlrpc_srh_iterator *srhi;
+	unsigned int cpt;
+	int rc;
+	int i;
 
 	if (sizeof(loff_t) != sizeof(u64)) { /* can't support */
 		CWARN("Failed to read request history because size of loff_t %d can't match size of u64\n",
@@ -940,10 +940,10 @@ static int ptlrpc_lprocfs_svc_req_history_show(struct seq_file *s, void *iter)
 ptlrpc_lprocfs_svc_req_history_open(struct inode *inode, struct file *file)
 {
 	static const struct seq_operations sops = {
-		.start = ptlrpc_lprocfs_svc_req_history_start,
-		.stop  = ptlrpc_lprocfs_svc_req_history_stop,
-		.next  = ptlrpc_lprocfs_svc_req_history_next,
-		.show  = ptlrpc_lprocfs_svc_req_history_show,
+		.start	= ptlrpc_lprocfs_svc_req_history_start,
+		.stop	= ptlrpc_lprocfs_svc_req_history_stop,
+		.next	= ptlrpc_lprocfs_svc_req_history_next,
+		.show	= ptlrpc_lprocfs_svc_req_history_show,
 	};
 	struct seq_file *seqf;
 	int rc;
@@ -975,9 +975,9 @@ static int ptlrpc_lprocfs_timeouts_seq_show(struct seq_file *m, void *n)
 	}
 
 	ptlrpc_service_for_each_part(svcpt, i, svc) {
-		cur	= at_get(&svcpt->scp_at_estimate);
-		worst	= svcpt->scp_at_estimate.at_worst_ever;
-		worstt	= svcpt->scp_at_estimate.at_worst_time;
+		cur = at_get(&svcpt->scp_at_estimate);
+		worst = svcpt->scp_at_estimate.at_worst_ever;
+		worstt = svcpt->scp_at_estimate.at_worst_time;
 		s2dhms(&ts, ktime_get_real_seconds() - worstt);
 
 		seq_printf(m, "%10s : cur %3u  worst %3u (at %lld, "
@@ -1074,26 +1074,26 @@ void ptlrpc_ldebugfs_register_service(struct dentry *entry,
 				      struct ptlrpc_service *svc)
 {
 	struct lprocfs_vars lproc_vars[] = {
-		{.name       = "req_buffer_history_len",
-		 .fops	     = &ptlrpc_lprocfs_req_history_len_fops,
-		 .data       = svc},
-		{.name       = "req_buffer_history_max",
-		 .fops	     = &ptlrpc_lprocfs_req_history_max_fops,
-		 .data       = svc},
-		{.name       = "timeouts",
-		 .fops	     = &ptlrpc_lprocfs_timeouts_fops,
-		 .data       = svc},
-		{.name       = "nrs_policies",
-		 .fops	     = &ptlrpc_lprocfs_nrs_fops,
-		 .data	     = svc},
-		{NULL}
+		{ .name		= "req_buffer_history_len",
+		  .fops		= &ptlrpc_lprocfs_req_history_len_fops,
+		  .data		= svc },
+		{ .name		= "req_buffer_history_max",
+		  .fops		= &ptlrpc_lprocfs_req_history_max_fops,
+		  .data		= svc },
+		{ .name		= "timeouts",
+		  .fops		= &ptlrpc_lprocfs_timeouts_fops,
+		  .data		= svc },
+		{ .name		= "nrs_policies",
+		  .fops		= &ptlrpc_lprocfs_nrs_fops,
+		  .data		= svc },
+		{ NULL }
 	};
 	static const struct file_operations req_history_fops = {
-		.owner       = THIS_MODULE,
-		.open	= ptlrpc_lprocfs_svc_req_history_open,
-		.read	= seq_read,
-		.llseek      = seq_lseek,
-		.release     = lprocfs_seq_release,
+		.owner		= THIS_MODULE,
+		.open		= ptlrpc_lprocfs_svc_req_history_open,
+		.read		= seq_read,
+		.llseek		= seq_lseek,
+		.release	= lprocfs_seq_release,
 	};
 
 	ptlrpc_ldebugfs_register(entry, svc->srv_name,
diff --git a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
index d3044a7..ea7a7f9 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
@@ -280,6 +280,7 @@ int ptlrpc_unregister_bulk(struct ptlrpc_request *req, int async)
 		 * timeout lets us CWARN for visibility of sluggish LNDs
 		 */
 		int cnt = 0;
+
 		while (cnt < LONG_UNLINK &&
 		       (rc = wait_event_idle_timeout(*wq,
 						     !ptlrpc_client_bulk_active(req),
@@ -685,7 +686,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
 	 * add the network latency for our local timeout.
 	 */
 	request->rq_deadline = request->rq_sent + request->rq_timeout +
-		ptlrpc_at_get_net_latency(request);
+			       ptlrpc_at_get_net_latency(request);
 
 	ptlrpc_pinger_sending_on_import(imp);
 
@@ -705,7 +706,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
 	if (noreply)
 		goto out;
 
- cleanup_me:
+cleanup_me:
 	/* MEUnlink is safe; the PUT didn't even get off the ground, and
 	 * nobody apart from the PUT's target has the right nid+XID to
 	 * access the reply buffer.
@@ -715,7 +716,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
 	/* UNLINKED callback called synchronously */
 	LASSERT(!request->rq_receiving_reply);
 
- cleanup_bulk:
+cleanup_bulk:
 	/* We do sync unlink here as there was no real transfer here so
 	 * the chance to have long unlink to sluggish net is smaller here.
 	 */
diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs.c b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
index 248ba04..ef7dd5d 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/nrs.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
@@ -118,7 +118,6 @@ static int nrs_policy_stop_locked(struct ptlrpc_nrs_policy *policy)
 	/* Immediately make it invisible */
 	if (nrs->nrs_policy_primary == policy) {
 		nrs->nrs_policy_primary = NULL;
-
 	} else {
 		LASSERT(nrs->nrs_policy_fallback == policy);
 		nrs->nrs_policy_fallback = NULL;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
index 10c2520..5383b68 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
@@ -111,12 +111,12 @@ struct nrs_core {
 	 * Protects nrs_core::nrs_policies, serializes external policy
 	 * registration/unregistration, and NRS core lprocfs operations.
 	 */
-	struct mutex nrs_mutex;
+	struct mutex		nrs_mutex;
 	/**
 	 * List of all policy descriptors registered with NRS core; protected
 	 * by nrs_core::nrs_mutex.
 	 */
-	struct list_head nrs_policies;
+	struct list_head	nrs_policies;
 
 };
 
@@ -251,15 +251,15 @@ struct ptlrpc_reply_state *
 void ptlrpc_pinger_wake_up(void);
 
 /* sec_null.c */
-int  sptlrpc_null_init(void);
+int sptlrpc_null_init(void);
 void sptlrpc_null_fini(void);
 
 /* sec_plain.c */
-int  sptlrpc_plain_init(void);
+int sptlrpc_plain_init(void);
 void sptlrpc_plain_fini(void);
 
 /* sec_bulk.c */
-int  sptlrpc_enc_pool_init(void);
+int sptlrpc_enc_pool_init(void);
 void sptlrpc_enc_pool_fini(void);
 int sptlrpc_proc_enc_pool_seq_show(struct seq_file *m, void *v);
 
@@ -277,11 +277,11 @@ void sptlrpc_conf_choose_flavor(enum lustre_sec_part from,
 				struct obd_uuid *target,
 				lnet_nid_t nid,
 				struct sptlrpc_flavor *sf);
-int  sptlrpc_conf_init(void);
+int sptlrpc_conf_init(void);
 void sptlrpc_conf_fini(void);
 
 /* sec.c */
-int  sptlrpc_init(void);
+int sptlrpc_init(void);
 void sptlrpc_fini(void);
 
 /* layout.c */
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
index e39c38a..f0ac296 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
@@ -69,13 +69,13 @@
 
 /* One of these per CPT. */
 struct ptlrpcd {
-	int pd_size;
-	int pd_index;
-	int pd_cpt;
-	int pd_cursor;
-	int pd_nthreads;
-	int pd_groupsize;
-	struct ptlrpcd_ctl pd_threads[0];
+	int			pd_size;
+	int			pd_index;
+	int			pd_cpt;
+	int			pd_cursor;
+	int			pd_nthreads;
+	int			pd_groupsize;
+	struct ptlrpcd_ctl	pd_threads[0];
 };
 
 /*
@@ -171,9 +171,9 @@ void ptlrpcd_wake(struct ptlrpc_request *req)
 static struct ptlrpcd_ctl *
 ptlrpcd_select_pc(struct ptlrpc_request *req)
 {
-	struct ptlrpcd	*pd;
-	int		cpt;
-	int		idx;
+	struct ptlrpcd *pd;
+	int cpt;
+	int idx;
 
 	if (req && req->rq_send_state != LUSTRE_IMP_FULL)
 		return &ptlrpcd_rcv;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/recover.c b/drivers/staging/lustre/lustre/ptlrpc/recover.c
index ed769a4..af672ab 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/recover.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/recover.c
@@ -119,6 +119,7 @@ int ptlrpc_replay_next(struct obd_import *imp, int *inflight)
 	 */
 	if (!req) {
 		struct ptlrpc_request *tmp;
+
 		list_for_each_entry_safe(tmp, pos, &imp->imp_replay_list,
 					 rq_replay_list) {
 			if (tmp->rq_transno > last_transno) {
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec.c b/drivers/staging/lustre/lustre/ptlrpc/sec.c
index 165082a..6dc7731 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec.c
@@ -171,7 +171,7 @@ u32 sptlrpc_name2flavor_base(const char *name)
 
 const char *sptlrpc_flavor2name_base(u32 flvr)
 {
-	u32   base = SPTLRPC_FLVR_BASE(flvr);
+	u32 base = SPTLRPC_FLVR_BASE(flvr);
 
 	if (base == SPTLRPC_FLVR_BASE(SPTLRPC_FLVR_NULL))
 		return "null";
@@ -365,7 +365,7 @@ int sptlrpc_req_get_ctx(struct ptlrpc_request *req)
 {
 	struct obd_import *imp = req->rq_import;
 	struct ptlrpc_sec *sec;
-	int		rc;
+	int rc;
 
 	LASSERT(!req->rq_cli_ctx);
 	LASSERT(imp);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
index 93dcb6d..74cfdd8 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
@@ -57,8 +57,8 @@
 #define POINTERS_PER_PAGE	(PAGE_SIZE / sizeof(void *))
 #define PAGES_PER_POOL		(POINTERS_PER_PAGE)
 
-#define IDLE_IDX_MAX	 (100)
-#define IDLE_IDX_WEIGHT	 (3)
+#define IDLE_IDX_MAX		(100)
+#define IDLE_IDX_WEIGHT		(3)
 
 #define CACHE_QUIESCENT_PERIOD  (20)
 
@@ -66,16 +66,16 @@
 	/*
 	 * constants
 	 */
-	unsigned long    epp_max_pages;   /* maximum pages can hold, const */
-	unsigned int     epp_max_pools;   /* number of pools, const */
+	unsigned long		epp_max_pages;	/* maximum pages can hold, const */
+	unsigned int		epp_max_pools;	/* number of pools, const */
 
 	/*
 	 * wait queue in case of not enough free pages.
 	 */
-	wait_queue_head_t      epp_waitq;       /* waiting threads */
-	unsigned int     epp_waitqlen;    /* wait queue length */
-	unsigned long    epp_pages_short; /* # of pages wanted of in-q users */
-	unsigned int     epp_growing:1;   /* during adding pages */
+	wait_queue_head_t	epp_waitq;	/* waiting threads */
+	unsigned int		epp_waitqlen;	/* wait queue length */
+	unsigned long		epp_pages_short; /* # of pages wanted of in-q users */
+	unsigned int		epp_growing:1;	/* during adding pages */
 
 	/*
 	 * indicating how idle the pools are, from 0 to MAX_IDLE_IDX
@@ -84,36 +84,36 @@
 	 * is idled for a while but the idle_idx might still be low if no
 	 * activities happened in the pools.
 	 */
-	unsigned long    epp_idle_idx;
+	unsigned long		epp_idle_idx;
 
 	/* last shrink time due to mem tight */
-	time64_t         epp_last_shrink;
-	time64_t         epp_last_access;
+	time64_t		epp_last_shrink;
+	time64_t		epp_last_access;
 
 	/*
 	 * in-pool pages bookkeeping
 	 */
-	spinlock_t	 epp_lock;	   /* protect following fields */
-	unsigned long    epp_total_pages; /* total pages in pools */
-	unsigned long    epp_free_pages;  /* current pages available */
+	spinlock_t		epp_lock;	 /* protect following fields */
+	unsigned long		epp_total_pages; /* total pages in pools */
+	unsigned long		epp_free_pages;	 /* current pages available */
 
 	/*
 	 * statistics
 	 */
-	unsigned long    epp_st_max_pages;      /* # of pages ever reached */
-	unsigned int     epp_st_grows;	  /* # of grows */
-	unsigned int     epp_st_grow_fails;     /* # of add pages failures */
-	unsigned int     epp_st_shrinks;	/* # of shrinks */
-	unsigned long    epp_st_access;	 /* # of access */
-	unsigned long    epp_st_missings;       /* # of cache missing */
-	unsigned long    epp_st_lowfree;	/* lowest free pages reached */
-	unsigned int     epp_st_max_wqlen;      /* highest waitqueue length */
-	unsigned long       epp_st_max_wait;       /* in jiffies */
-	unsigned long	 epp_st_outofmem;	/* # of out of mem requests */
+	unsigned long		epp_st_max_pages;	/* # of pages ever reached */
+	unsigned int		epp_st_grows;		/* # of grows */
+	unsigned int		epp_st_grow_fails;	/* # of add pages failures */
+	unsigned int		epp_st_shrinks;		/* # of shrinks */
+	unsigned long		epp_st_access;		/* # of access */
+	unsigned long		epp_st_missings;	/* # of cache missing */
+	unsigned long		epp_st_lowfree;		/* lowest free pages reached */
+	unsigned int		epp_st_max_wqlen;	/* highest waitqueue length */
+	unsigned long		epp_st_max_wait;	/* in jiffies */
+	unsigned long		epp_st_outofmem;	/* # of out of mem requests */
 	/*
 	 * pointers to pools
 	 */
-	struct page    ***epp_pools;
+	struct page		***epp_pools;
 } page_pools;
 
 /*
@@ -394,9 +394,9 @@ static inline void enc_pools_free(void)
 }
 
 static struct shrinker pools_shrinker = {
-	.count_objects	= enc_pools_shrink_count,
-	.scan_objects	= enc_pools_shrink_scan,
-	.seeks		= DEFAULT_SEEKS,
+	.count_objects		= enc_pools_shrink_count,
+	.scan_objects		= enc_pools_shrink_scan,
+	.seeks			= DEFAULT_SEEKS,
 };
 
 int sptlrpc_enc_pool_init(void)
@@ -475,14 +475,14 @@ void sptlrpc_enc_pool_fini(void)
 }
 
 static int cfs_hash_alg_id[] = {
-	[BULK_HASH_ALG_NULL]	= CFS_HASH_ALG_NULL,
-	[BULK_HASH_ALG_ADLER32]	= CFS_HASH_ALG_ADLER32,
-	[BULK_HASH_ALG_CRC32]	= CFS_HASH_ALG_CRC32,
-	[BULK_HASH_ALG_MD5]	= CFS_HASH_ALG_MD5,
-	[BULK_HASH_ALG_SHA1]	= CFS_HASH_ALG_SHA1,
-	[BULK_HASH_ALG_SHA256]	= CFS_HASH_ALG_SHA256,
-	[BULK_HASH_ALG_SHA384]	= CFS_HASH_ALG_SHA384,
-	[BULK_HASH_ALG_SHA512]	= CFS_HASH_ALG_SHA512,
+	[BULK_HASH_ALG_NULL]		= CFS_HASH_ALG_NULL,
+	[BULK_HASH_ALG_ADLER32]		= CFS_HASH_ALG_ADLER32,
+	[BULK_HASH_ALG_CRC32]		= CFS_HASH_ALG_CRC32,
+	[BULK_HASH_ALG_MD5]		= CFS_HASH_ALG_MD5,
+	[BULK_HASH_ALG_SHA1]		= CFS_HASH_ALG_SHA1,
+	[BULK_HASH_ALG_SHA256]		= CFS_HASH_ALG_SHA256,
+	[BULK_HASH_ALG_SHA384]		= CFS_HASH_ALG_SHA384,
+	[BULK_HASH_ALG_SHA512]		= CFS_HASH_ALG_SHA512,
 };
 
 const char *sptlrpc_get_hash_name(u8 hash_alg)
@@ -498,7 +498,7 @@ u8 sptlrpc_get_hash_alg(const char *algname)
 int bulk_sec_desc_unpack(struct lustre_msg *msg, int offset, int swabbed)
 {
 	struct ptlrpc_bulk_sec_desc *bsd;
-	int			  size = msg->lm_buflens[offset];
+	int size = msg->lm_buflens[offset];
 
 	bsd = lustre_msg_buf(msg, offset, sizeof(*bsd));
 	if (!bsd) {
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
index 1844ada..54130ae 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
@@ -408,19 +408,19 @@ static int sptlrpc_rule_set_choose(struct sptlrpc_rule_set *rset,
  **********************************/
 
 struct sptlrpc_conf_tgt {
-	struct list_head	      sct_list;
-	char		    sct_name[MAX_OBD_NAME];
-	struct sptlrpc_rule_set sct_rset;
+	struct list_head		sct_list;
+	char				sct_name[MAX_OBD_NAME];
+	struct sptlrpc_rule_set		sct_rset;
 };
 
 struct sptlrpc_conf {
-	struct list_head	      sc_list;
-	char		    sc_fsname[MTI_NAME_MAXLEN];
-	unsigned int	    sc_modified;  /* modified during updating */
-	unsigned int	    sc_updated:1, /* updated copy from MGS */
-				sc_local:1;   /* local copy from target */
-	struct sptlrpc_rule_set sc_rset;      /* fs general rules */
-	struct list_head	      sc_tgts;      /* target-specific rules */
+	struct list_head		sc_list;
+	char				sc_fsname[MTI_NAME_MAXLEN];
+	unsigned int			sc_modified;	/* modified during updating */
+	unsigned int			sc_updated:1,	/* updated copy from MGS */
+					sc_local:1;	/* local copy from target */
+	struct sptlrpc_rule_set		sc_rset;	/* fs general rules */
+	struct list_head		sc_tgts;	/* target-specific rules */
 };
 
 static struct mutex sptlrpc_conf_lock;
@@ -801,7 +801,7 @@ void sptlrpc_conf_choose_flavor(enum lustre_sec_part from,
 	flavor_set_flags(sf, from, to, 1);
 }
 
-#define SEC_ADAPT_DELAY	 (10)
+#define SEC_ADAPT_DELAY		(10)
 
 /**
  * called by client devices, notify the sptlrpc config has changed and
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
index 6933a53..df6ef4f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
@@ -277,8 +277,8 @@ int null_enlarge_reqbuf(struct ptlrpc_sec *sec,
 }
 
 static struct ptlrpc_svc_ctx null_svc_ctx = {
-	.sc_refcount    = ATOMIC_INIT(1),
-	.sc_policy      = &null_policy,
+	.sc_refcount	= ATOMIC_INIT(1),
+	.sc_policy	= &null_policy,
 };
 
 static
@@ -373,33 +373,33 @@ int null_authorize(struct ptlrpc_request *req)
 
 static struct ptlrpc_ctx_ops null_ctx_ops = {
 	.refresh		= null_ctx_refresh,
-	.sign		   = null_ctx_sign,
-	.verify		 = null_ctx_verify,
+	.sign			= null_ctx_sign,
+	.verify			= null_ctx_verify,
 };
 
 static struct ptlrpc_sec_cops null_sec_cops = {
-	.create_sec	     = null_create_sec,
-	.destroy_sec	    = null_destroy_sec,
-	.lookup_ctx	     = null_lookup_ctx,
+	.create_sec		= null_create_sec,
+	.destroy_sec		= null_destroy_sec,
+	.lookup_ctx		= null_lookup_ctx,
 	.flush_ctx_cache	= null_flush_ctx_cache,
-	.alloc_reqbuf	   = null_alloc_reqbuf,
-	.alloc_repbuf	   = null_alloc_repbuf,
-	.free_reqbuf	    = null_free_reqbuf,
-	.free_repbuf	    = null_free_repbuf,
-	.enlarge_reqbuf	 = null_enlarge_reqbuf,
+	.alloc_reqbuf		= null_alloc_reqbuf,
+	.alloc_repbuf		= null_alloc_repbuf,
+	.free_reqbuf		= null_free_reqbuf,
+	.free_repbuf		= null_free_repbuf,
+	.enlarge_reqbuf		= null_enlarge_reqbuf,
 };
 
 static struct ptlrpc_sec_sops null_sec_sops = {
-	.accept		 = null_accept,
-	.alloc_rs	       = null_alloc_rs,
-	.authorize	      = null_authorize,
+	.accept			= null_accept,
+	.alloc_rs		= null_alloc_rs,
+	.authorize		= null_authorize,
 	.free_rs		= null_free_rs,
 };
 
 static struct ptlrpc_sec_policy null_policy = {
-	.sp_owner	       = THIS_MODULE,
+	.sp_owner		= THIS_MODULE,
 	.sp_name		= "sec.null",
-	.sp_policy	      = SPTLRPC_POLICY_NULL,
+	.sp_policy		= SPTLRPC_POLICY_NULL,
 	.sp_cops		= &null_sec_cops,
 	.sp_sops		= &null_sec_sops,
 };
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
index 0a31ff4..021bf7f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
@@ -46,9 +46,9 @@
 #include "ptlrpc_internal.h"
 
 struct plain_sec {
-	struct ptlrpc_sec       pls_base;
-	rwlock_t	    pls_lock;
-	struct ptlrpc_cli_ctx  *pls_ctx;
+	struct ptlrpc_sec	 pls_base;
+	rwlock_t		 pls_lock;
+	struct ptlrpc_cli_ctx	*pls_ctx;
 };
 
 static inline struct plain_sec *sec2plsec(struct ptlrpc_sec *sec)
@@ -65,15 +65,15 @@ static inline struct plain_sec *sec2plsec(struct ptlrpc_sec *sec)
 /*
  * for simplicity, plain policy rpc use fixed layout.
  */
-#define PLAIN_PACK_SEGMENTS	     (4)
+#define PLAIN_PACK_SEGMENTS	(4)
 
-#define PLAIN_PACK_HDR_OFF	      (0)
-#define PLAIN_PACK_MSG_OFF	      (1)
-#define PLAIN_PACK_USER_OFF	     (2)
-#define PLAIN_PACK_BULK_OFF	     (3)
+#define PLAIN_PACK_HDR_OFF	(0)
+#define PLAIN_PACK_MSG_OFF	(1)
+#define PLAIN_PACK_USER_OFF	(2)
+#define PLAIN_PACK_BULK_OFF	(3)
 
-#define PLAIN_FL_USER		   (0x01)
-#define PLAIN_FL_BULK		   (0x02)
+#define PLAIN_FL_USER		(0x01)
+#define PLAIN_FL_BULK		(0x02)
 
 struct plain_header {
 	u8	    ph_ver;	    /* 0 */
@@ -711,8 +711,8 @@ int plain_enlarge_reqbuf(struct ptlrpc_sec *sec,
  ****************************************/
 
 static struct ptlrpc_svc_ctx plain_svc_ctx = {
-	.sc_refcount    = ATOMIC_INIT(1),
-	.sc_policy      = &plain_policy,
+	.sc_refcount	= ATOMIC_INIT(1),
+	.sc_policy	= &plain_policy,
 };
 
 static
@@ -961,40 +961,40 @@ int plain_svc_wrap_bulk(struct ptlrpc_request *req,
 
 static struct ptlrpc_ctx_ops plain_ctx_ops = {
 	.refresh		= plain_ctx_refresh,
-	.validate	       = plain_ctx_validate,
-	.sign		   = plain_ctx_sign,
-	.verify		 = plain_ctx_verify,
-	.wrap_bulk	      = plain_cli_wrap_bulk,
-	.unwrap_bulk	    = plain_cli_unwrap_bulk,
+	.validate		= plain_ctx_validate,
+	.sign			= plain_ctx_sign,
+	.verify			= plain_ctx_verify,
+	.wrap_bulk		= plain_cli_wrap_bulk,
+	.unwrap_bulk		= plain_cli_unwrap_bulk,
 };
 
 static struct ptlrpc_sec_cops plain_sec_cops = {
-	.create_sec	     = plain_create_sec,
-	.destroy_sec	    = plain_destroy_sec,
-	.kill_sec	       = plain_kill_sec,
-	.lookup_ctx	     = plain_lookup_ctx,
-	.release_ctx	    = plain_release_ctx,
+	.create_sec		= plain_create_sec,
+	.destroy_sec		= plain_destroy_sec,
+	.kill_sec		= plain_kill_sec,
+	.lookup_ctx		= plain_lookup_ctx,
+	.release_ctx		= plain_release_ctx,
 	.flush_ctx_cache	= plain_flush_ctx_cache,
-	.alloc_reqbuf	   = plain_alloc_reqbuf,
-	.free_reqbuf	    = plain_free_reqbuf,
-	.alloc_repbuf	   = plain_alloc_repbuf,
-	.free_repbuf	    = plain_free_repbuf,
-	.enlarge_reqbuf	 = plain_enlarge_reqbuf,
+	.alloc_reqbuf		= plain_alloc_reqbuf,
+	.free_reqbuf		= plain_free_reqbuf,
+	.alloc_repbuf		= plain_alloc_repbuf,
+	.free_repbuf		= plain_free_repbuf,
+	.enlarge_reqbuf		= plain_enlarge_reqbuf,
 };
 
 static struct ptlrpc_sec_sops plain_sec_sops = {
-	.accept		 = plain_accept,
-	.alloc_rs	       = plain_alloc_rs,
-	.authorize	      = plain_authorize,
+	.accept			= plain_accept,
+	.alloc_rs		= plain_alloc_rs,
+	.authorize		= plain_authorize,
 	.free_rs		= plain_free_rs,
-	.unwrap_bulk	    = plain_svc_unwrap_bulk,
-	.wrap_bulk	      = plain_svc_wrap_bulk,
+	.unwrap_bulk		= plain_svc_unwrap_bulk,
+	.wrap_bulk		= plain_svc_wrap_bulk,
 };
 
 static struct ptlrpc_sec_policy plain_policy = {
-	.sp_owner	       = THIS_MODULE,
+	.sp_owner		= THIS_MODULE,
 	.sp_name		= "plain",
-	.sp_policy	      = SPTLRPC_POLICY_PLAIN,
+	.sp_policy		= SPTLRPC_POLICY_PLAIN,
 	.sp_cops		= &plain_sec_cops,
 	.sp_sops		= &plain_sec_sops,
 };
diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
index 1030f65..5b97f2a 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/service.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
@@ -173,7 +173,7 @@
 	       svc->srv_name, i, svc->srv_buf_size, svcpt->scp_nrqbds_posted,
 	       svcpt->scp_nrqbds_total, rc);
 
- try_post:
+try_post:
 	if (post && rc == 0)
 		rc = ptlrpc_server_post_idle_rqbds(svcpt);
 
@@ -185,8 +185,8 @@
 struct ptlrpc_hr_thread {
 	int				hrt_id;		/* thread ID */
 	spinlock_t			hrt_lock;
-	wait_queue_head_t			hrt_waitq;
-	struct list_head			hrt_queue;	/* RS queue */
+	wait_queue_head_t		hrt_waitq;
+	struct list_head		hrt_queue;	/* RS queue */
 	struct ptlrpc_hr_partition	*hrt_partition;
 };
 
@@ -212,7 +212,7 @@ struct ptlrpc_hr_service {
 	/* CPU partition table, it's just cfs_cpt_tab for now */
 	struct cfs_cpt_table		*hr_cpt_table;
 	/** controller sleep waitq */
-	wait_queue_head_t			hr_waitq;
+	wait_queue_head_t		hr_waitq;
 	unsigned int			hr_stopping;
 	/** roundrobin rotor for non-affinity service */
 	unsigned int			hr_rotor;
@@ -236,7 +236,6 @@ struct ptlrpc_hr_service {
 	    svcpt->scp_service->srv_cptable == ptlrpc_hr.hr_cpt_table) {
 		/* directly match partition */
 		hrp = ptlrpc_hr.hr_partitions[svcpt->scp_cpt];
-
 	} else {
 		rotor = ptlrpc_hr.hr_rotor++;
 		rotor %= cfs_cpt_number(ptlrpc_hr.hr_cpt_table);
@@ -440,7 +439,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
 		nthrs = max(tc->tc_nthrs_base,
 			    tc->tc_nthrs_max / svc->srv_ncpts);
 	}
- out:
+out:
 	nthrs = max(nthrs, tc->tc_nthrs_init);
 	svc->srv_nthrs_cpt_limit = nthrs;
 	svc->srv_nthrs_cpt_init = init;
@@ -459,7 +458,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
 ptlrpc_service_part_init(struct ptlrpc_service *svc,
 			 struct ptlrpc_service_part *svcpt, int cpt)
 {
-	struct ptlrpc_at_array	*array;
+	struct ptlrpc_at_array *array;
 	int size;
 	int index;
 	int rc;
@@ -1125,7 +1124,6 @@ static int ptlrpc_at_send_early_reply(struct ptlrpc_request *req)
 		goto out_put;
 
 	rc = ptlrpc_send_reply(reqcopy, PTLRPC_REPLY_EARLY);
-
 	if (!rc) {
 		/* Adjust our own deadline to what we told the client */
 		req->rq_deadline = newdl;
@@ -1316,7 +1314,7 @@ static void ptlrpc_server_hpreq_fini(struct ptlrpc_request *req)
 static int ptlrpc_server_request_add(struct ptlrpc_service_part *svcpt,
 				     struct ptlrpc_request *req)
 {
-	int	rc;
+	int rc;
 
 	rc = ptlrpc_server_hpreq_init(svcpt, req);
 	if (rc < 0)
@@ -2412,7 +2410,7 @@ int ptlrpc_start_threads(struct ptlrpc_service *svc)
 	}
 
 	return 0;
- failed:
+failed:
 	CERROR("cannot start %s thread #%d_%d: rc %d\n",
 	       svc->srv_thread_name, i, j, rc);
 	ptlrpc_stop_all_threads(svc);
@@ -2432,7 +2430,7 @@ int ptlrpc_start_thread(struct ptlrpc_service_part *svcpt, int wait)
 	       svc->srv_name, svcpt->scp_cpt, svcpt->scp_nthrs_running,
 	       svc->srv_nthrs_cpt_init, svc->srv_nthrs_cpt_limit);
 
- again:
+again:
 	if (unlikely(svc->srv_is_stopping))
 		return -ESRCH;
 
-- 
1.8.3.1


* [lustre-devel] [PATCH 20/26] lustre: first batch to cleanup white spaces in internal headers
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (18 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 19/26] ptlrpc: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 21/26] lustre: second " James Simmons
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The internal headers are messy and difficult to read. Remove the
excess white space and properly align the data structure fields so
they are easy on the eyes. Because the cleanup covers many lines of
changes it is split into batches; this is the first one.
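
To make the intended style concrete, here is a minimal sketch of the
alignment convention applied throughout this series. The struct below
is hypothetical and not taken from the Lustre tree; it only shows the
pattern of tab-aligning member names (and trailing comments) into a
single column.

#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Hypothetical example only - not a struct from the Lustre tree.
 * Field names and trailing comments are tab-aligned into a single
 * column, which is the convention this series applies to the real
 * Lustre internal headers.
 */
struct example_stats {
	spinlock_t		es_lock;	/* protects the counters below */
	unsigned long		es_hits;	/* number of cache hits */
	unsigned long		es_misses;	/* number of cache misses */
	struct list_head	es_linkage;	/* linkage into a global list */
};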

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/cl_object.h  | 338 ++++++++++-----------
 .../staging/lustre/lustre/include/lprocfs_status.h |  77 ++---
 drivers/staging/lustre/lustre/include/lu_object.h  | 322 ++++++++++----------
 .../staging/lustre/lustre/include/lustre_disk.h    |  42 +--
 drivers/staging/lustre/lustre/include/lustre_dlm.h | 256 ++++++++--------
 .../lustre/lustre/include/lustre_dlm_flags.h       | 326 ++++++++++----------
 6 files changed, 682 insertions(+), 679 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h
index 3109c04..b8ae41d 100644
--- a/drivers/staging/lustre/lustre/include/cl_object.h
+++ b/drivers/staging/lustre/lustre/include/cl_object.h
@@ -49,16 +49,16 @@
  *
  *   - cl_page
  *
- *   - cl_lock     represents an extent lock on an object.
+ *   - cl_lock	represents an extent lock on an object.
  *
- *   - cl_io       represents high-level i/o activity such as whole read/write
- *		 system call, or write-out of pages from under the lock being
- *		 canceled. cl_io has sub-ios that can be stopped and resumed
- *		 independently, thus achieving high degree of transfer
- *		 parallelism. Single cl_io can be advanced forward by
- *		 the multiple threads (although in the most usual case of
- *		 read/write system call it is associated with the single user
- *		 thread, that issued the system call).
+ *   - cl_io	represents high-level i/o activity such as whole read/write
+ *		system call, or write-out of pages from under the lock being
+ *		canceled. cl_io has sub-ios that can be stopped and resumed
+ *		independently, thus achieving high degree of transfer
+ *		parallelism. Single cl_io can be advanced forward by
+ *		the multiple threads (although in the most usual case of
+ *		read/write system call it is associated with the single user
+ *		thread, that issued the system call).
  *
  * Terminology
  *
@@ -135,39 +135,39 @@ struct cl_device {
  */
 struct cl_attr {
 	/** Object size, in bytes */
-	loff_t cat_size;
+	loff_t		cat_size;
 	/**
 	 * Known minimal size, in bytes.
 	 *
 	 * This is only valid when at least one DLM lock is held.
 	 */
-	loff_t cat_kms;
+	loff_t		cat_kms;
 	/** Modification time. Measured in seconds since epoch. */
-	time64_t cat_mtime;
+	time64_t	cat_mtime;
 	/** Access time. Measured in seconds since epoch. */
-	time64_t cat_atime;
+	time64_t	cat_atime;
 	/** Change time. Measured in seconds since epoch. */
-	time64_t cat_ctime;
+	time64_t	cat_ctime;
 	/**
 	 * Blocks allocated to this cl_object on the server file system.
 	 *
 	 * \todo XXX An interface for block size is needed.
 	 */
-	u64  cat_blocks;
+	u64		cat_blocks;
 	/**
 	 * User identifier for quota purposes.
 	 */
-	uid_t  cat_uid;
+	uid_t		cat_uid;
 	/**
 	 * Group identifier for quota purposes.
 	 */
-	gid_t  cat_gid;
+	gid_t		cat_gid;
 
 	/* nlink of the directory */
-	u64  cat_nlink;
+	u64		cat_nlink;
 
 	/* Project identifier for quota purpose. */
-	u32	cat_projid;
+	u32		cat_projid;
 };
 
 /**
@@ -223,11 +223,11 @@ enum cl_attr_valid {
  */
 struct cl_object {
 	/** super class */
-	struct lu_object		   co_lu;
+	struct lu_object			co_lu;
 	/** per-object-layer operations */
-	const struct cl_object_operations *co_ops;
+	const struct cl_object_operations	*co_ops;
 	/** offset of page slice in cl_page buffer */
-	int				   co_slice_off;
+	int					co_slice_off;
 };
 
 /**
@@ -237,30 +237,30 @@ struct cl_object {
  */
 struct cl_object_conf {
 	/** Super-class. */
-	struct lu_object_conf     coc_lu;
+	struct lu_object_conf	coc_lu;
 	union {
 		/**
 		 * Object layout. This is consumed by lov.
 		 */
-		struct lu_buf	  coc_layout;
+		struct lu_buf		coc_layout;
 		/**
 		 * Description of particular stripe location in the
 		 * cluster. This is consumed by osc.
 		 */
-		struct lov_oinfo *coc_oinfo;
+		struct lov_oinfo	*coc_oinfo;
 	} u;
 	/**
 	 * VFS inode. This is consumed by vvp.
 	 */
-	struct inode	     *coc_inode;
+	struct inode		*coc_inode;
 	/**
 	 * Layout lock handle.
 	 */
-	struct ldlm_lock	 *coc_lock;
+	struct ldlm_lock	*coc_lock;
 	/**
 	 * Operation to handle layout, OBJECT_CONF_XYZ.
 	 */
-	int			  coc_opc;
+	int			coc_opc;
 };
 
 enum {
@@ -283,13 +283,13 @@ enum {
 
 struct cl_layout {
 	/** the buffer to return the layout in lov_mds_md format. */
-	struct lu_buf	cl_buf;
+	struct lu_buf		cl_buf;
 	/** size of layout in lov_mds_md format. */
-	size_t		cl_size;
+	size_t			cl_size;
 	/** Layout generation. */
-	u32		cl_layout_gen;
+	u32			cl_layout_gen;
 	/** whether layout is a composite one */
-	bool		cl_is_composite;
+	bool			cl_is_composite;
 };
 
 /**
@@ -421,7 +421,7 @@ struct cl_object_header {
 	/** Standard lu_object_header. cl_object::co_lu::lo_header points
 	 * here.
 	 */
-	struct lu_object_header  coh_lu;
+	struct lu_object_header	 coh_lu;
 
 	/**
 	 * Parent object. It is assumed that an object has a well-defined
@@ -454,16 +454,16 @@ struct cl_object_header {
  * Helper macro: iterate over all layers of the object \a obj, assigning every
  * layer top-to-bottom to \a slice.
  */
-#define cl_object_for_each(slice, obj)				      \
-	list_for_each_entry((slice),				    \
-				&(obj)->co_lu.lo_header->loh_layers,	\
-				co_lu.lo_linkage)
+#define cl_object_for_each(slice, obj)					\
+	list_for_each_entry((slice),					\
+			    &(obj)->co_lu.lo_header->loh_layers,	\
+			    co_lu.lo_linkage)
 /**
  * Helper macro: iterate over all layers of the object \a obj, assigning every
  * layer bottom-to-top to \a slice.
  */
-#define cl_object_for_each_reverse(slice, obj)			       \
-	list_for_each_entry_reverse((slice),			     \
+#define cl_object_for_each_reverse(slice, obj)				\
+	list_for_each_entry_reverse((slice),				\
 					&(obj)->co_lu.lo_header->loh_layers, \
 					co_lu.lo_linkage)
 /** @} cl_object */
@@ -717,39 +717,39 @@ enum cl_page_type {
  */
 struct cl_page {
 	/** Reference counter. */
-	atomic_t	     cp_ref;
+	atomic_t			 cp_ref;
 	/** An object this page is a part of. Immutable after creation. */
-	struct cl_object	*cp_obj;
+	struct cl_object		*cp_obj;
 	/** vmpage */
-	struct page		*cp_vmpage;
+	struct page			*cp_vmpage;
 	/** Linkage of pages within group. Pages must be owned */
-	struct list_head	 cp_batch;
+	struct list_head		 cp_batch;
 	/** List of slices. Immutable after creation. */
-	struct list_head	 cp_layers;
+	struct list_head		 cp_layers;
 	/**
 	 * Page state. This field is const to avoid accidental update, it is
 	 * modified only internally within cl_page.c. Protected by a VM lock.
 	 */
-	const enum cl_page_state cp_state;
+	const enum cl_page_state	 cp_state;
 	/**
 	 * Page type. Only CPT_TRANSIENT is used so far. Immutable after
 	 * creation.
 	 */
-	enum cl_page_type	cp_type;
+	enum cl_page_type		 cp_type;
 
 	/**
 	 * Owning IO in cl_page_state::CPS_OWNED state. Sub-page can be owned
 	 * by sub-io. Protected by a VM lock.
 	 */
-	struct cl_io	    *cp_owner;
+	struct cl_io			*cp_owner;
 	/** List of references to this page, for debugging. */
-	struct lu_ref	    cp_reference;
+	struct lu_ref			 cp_reference;
 	/** Link to an object, for debugging. */
-	struct lu_ref_link       cp_obj_ref;
+	struct lu_ref_link		 cp_obj_ref;
 	/** Link to a queue, for debugging. */
-	struct lu_ref_link       cp_queue_ref;
+	struct lu_ref_link		 cp_queue_ref;
 	/** Assigned if doing a sync_io */
-	struct cl_sync_io       *cp_sync_io;
+	struct cl_sync_io		*cp_sync_io;
 };
 
 /**
@@ -758,7 +758,7 @@ struct cl_page {
  * \see vvp_page, lov_page, osc_page
  */
 struct cl_page_slice {
-	struct cl_page		  *cpl_page;
+	struct cl_page			*cpl_page;
 	pgoff_t				 cpl_index;
 	/**
 	 * Object slice corresponding to this page slice. Immutable after
@@ -767,7 +767,7 @@ struct cl_page_slice {
 	struct cl_object		*cpl_obj;
 	const struct cl_page_operations *cpl_ops;
 	/** Linkage into cl_page::cp_layers. Immutable after creation. */
-	struct list_head		       cpl_linkage;
+	struct list_head		 cpl_linkage;
 };
 
 /**
@@ -986,25 +986,25 @@ struct cl_page_operations {
 /**
  * Helper macro, dumping detailed information about \a page into a log.
  */
-#define CL_PAGE_DEBUG(mask, env, page, format, ...)		     \
-do {								    \
-	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {		   \
+#define CL_PAGE_DEBUG(mask, env, page, format, ...)			\
+do {									\
+	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			\
 		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);	\
 		cl_page_print(env, &msgdata, lu_cdebug_printer, page);  \
-		CDEBUG(mask, format, ## __VA_ARGS__);		  \
-	}							       \
+		CDEBUG(mask, format, ## __VA_ARGS__);			\
+	}								\
 } while (0)
 
 /**
  * Helper macro, dumping shorter information about \a page into a log.
  */
-#define CL_PAGE_HEADER(mask, env, page, format, ...)			  \
-do {									  \
-	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			 \
-		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);		\
+#define CL_PAGE_HEADER(mask, env, page, format, ...)			\
+do {									\
+	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			\
+		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);	\
 		cl_page_header_print(env, &msgdata, lu_cdebug_printer, page); \
 		CDEBUG(mask, format, ## __VA_ARGS__);			\
-	}								     \
+	}								\
 } while (0)
 
 static inline struct page *cl_page_vmpage(struct cl_page *page)
@@ -1145,24 +1145,24 @@ static inline bool __page_in_use(const struct cl_page *page, int refc)
  */
 struct cl_lock_descr {
 	/** Object this lock is granted for. */
-	struct cl_object *cld_obj;
+	struct cl_object		*cld_obj;
 	/** Index of the first page protected by this lock. */
-	pgoff_t	   cld_start;
+	pgoff_t				cld_start;
 	/** Index of the last page (inclusive) protected by this lock. */
-	pgoff_t	   cld_end;
+	pgoff_t				cld_end;
 	/** Group ID, for group lock */
-	u64	     cld_gid;
+	u64				cld_gid;
 	/** Lock mode. */
-	enum cl_lock_mode cld_mode;
+	enum cl_lock_mode		cld_mode;
 	/**
 	 * flags to enqueue lock. A combination of bit-flags from
 	 * enum cl_enq_flags.
 	 */
-	u32	     cld_enq_flags;
+	u32				cld_enq_flags;
 };
 
 #define DDESCR "%s(%d):[%lu, %lu]:%x"
-#define PDESCR(descr)						   \
+#define PDESCR(descr)							\
 	cl_lock_mode_name((descr)->cld_mode), (descr)->cld_mode,	\
 	(descr)->cld_start, (descr)->cld_end, (descr)->cld_enq_flags
 
@@ -1173,9 +1173,9 @@ struct cl_lock_descr {
  */
 struct cl_lock {
 	/** List of slices. Immutable after creation. */
-	struct list_head	    cll_layers;
+	struct list_head		cll_layers;
 	/** lock attribute, extent, cl_object, etc. */
-	struct cl_lock_descr  cll_descr;
+	struct cl_lock_descr		cll_descr;
 };
 
 /**
@@ -1184,14 +1184,14 @@ struct cl_lock {
  * \see vvp_lock, lov_lock, lovsub_lock, osc_lock
  */
 struct cl_lock_slice {
-	struct cl_lock		  *cls_lock;
+	struct cl_lock			*cls_lock;
 	/** Object slice corresponding to this lock slice. Immutable after
 	 * creation.
 	 */
 	struct cl_object		*cls_obj;
 	const struct cl_lock_operations *cls_ops;
 	/** Linkage into cl_lock::cll_layers. Immutable after creation. */
-	struct list_head		       cls_linkage;
+	struct list_head		 cls_linkage;
 };
 
 /**
@@ -1236,22 +1236,22 @@ struct cl_lock_operations {
 			 const struct cl_lock_slice *slice);
 };
 
-#define CL_LOCK_DEBUG(mask, env, lock, format, ...)		     \
-do {								    \
+#define CL_LOCK_DEBUG(mask, env, lock, format, ...)			\
+do {									\
 	LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);		\
 									\
-	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {		   \
+	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			\
 		cl_lock_print(env, &msgdata, lu_cdebug_printer, lock);  \
-		CDEBUG(mask, format, ## __VA_ARGS__);		  \
-	}							       \
+		CDEBUG(mask, format, ## __VA_ARGS__);			\
+	}								\
 } while (0)
 
-#define CL_LOCK_ASSERT(expr, env, lock) do {			    \
-	if (likely(expr))					       \
-		break;						  \
+#define CL_LOCK_ASSERT(expr, env, lock) do {				\
+	if (likely(expr))						\
+		break;							\
 									\
 	CL_LOCK_DEBUG(D_ERROR, env, lock, "failed at %s.\n", #expr);    \
-	LBUG();							 \
+	LBUG();								\
 } while (0)
 
 /** @} cl_lock */
@@ -1276,9 +1276,9 @@ struct cl_lock_operations {
  * @{
  */
 struct cl_page_list {
-	unsigned int		 pl_nr;
-	struct list_head	   pl_pages;
-	struct task_struct	*pl_owner;
+	unsigned int			pl_nr;
+	struct list_head		pl_pages;
+	struct task_struct		*pl_owner;
 };
 
 /**
@@ -1286,8 +1286,8 @@ struct cl_page_list {
  * contains an incoming page list and an outgoing page list.
  */
 struct cl_2queue {
-	struct cl_page_list c2_qin;
-	struct cl_page_list c2_qout;
+	struct cl_page_list		c2_qin;
+	struct cl_page_list		c2_qout;
 };
 
 /** @} cl_page_list */
@@ -1424,16 +1424,16 @@ enum cl_io_state {
  * \see vvp_io, lov_io, osc_io
  */
 struct cl_io_slice {
-	struct cl_io		  *cis_io;
+	struct cl_io			*cis_io;
 	/** corresponding object slice. Immutable after creation. */
-	struct cl_object	      *cis_obj;
+	struct cl_object		*cis_obj;
 	/** io operations. Immutable after creation. */
-	const struct cl_io_operations *cis_iop;
+	const struct cl_io_operations	*cis_iop;
 	/**
 	 * linkage into a list of all slices for a given cl_io, hanging off
 	 * cl_io::ci_layers. Immutable after creation.
 	 */
-	struct list_head		     cis_linkage;
+	struct list_head		cis_linkage;
 };
 
 typedef void (*cl_commit_cbt)(const struct lu_env *, struct cl_io *,
@@ -1445,16 +1445,16 @@ struct cl_read_ahead {
 	 * This is determined DLM lock coverage, RPC and stripe boundary.
 	 * cra_end is included.
 	 */
-	pgoff_t cra_end;
+	pgoff_t				cra_end;
 	/* optimal RPC size for this read, by pages */
-	unsigned long cra_rpc_size;
+	unsigned long			cra_rpc_size;
 	/*
 	 * Release callback. If readahead holds resources underneath, this
 	 * function should be called to release it.
 	 */
 	void (*cra_release)(const struct lu_env *env, void *cbdata);
 	/* Callback data for cra_release routine */
-	void *cra_cbdata;
+	void				*cra_cbdata;
 };
 
 static inline void cl_read_ahead_release(const struct lu_env *env,
@@ -1594,18 +1594,18 @@ enum cl_enq_flags {
 	 * instruct server to not block, if conflicting lock is found. Instead
 	 * -EWOULDBLOCK is returned immediately.
 	 */
-	CEF_NONBLOCK     = 0x00000001,
+	CEF_NONBLOCK		= 0x00000001,
 	/**
 	 * take lock asynchronously (out of order), as it cannot
 	 * deadlock. This is for LDLM_FL_HAS_INTENT locks used for glimpsing.
 	 */
-	CEF_ASYNC	= 0x00000002,
+	CEF_ASYNC		= 0x00000002,
 	/**
 	 * tell the server to instruct (though a flag in the blocking ast) an
 	 * owner of the conflicting lock, that it can drop dirty pages
 	 * protected by this lock, without sending them to the server.
 	 */
-	CEF_DISCARD_DATA = 0x00000004,
+	CEF_DISCARD_DATA	= 0x00000004,
 	/**
 	 * tell the sub layers that it must be a `real' lock. This is used for
 	 * mmapped-buffer locks and glimpse locks that must be never converted
@@ -1613,7 +1613,7 @@ enum cl_enq_flags {
 	 *
 	 * \see vvp_mmap_locks(), cl_glimpse_lock().
 	 */
-	CEF_MUST	 = 0x00000008,
+	CEF_MUST		= 0x00000008,
 	/**
 	 * tell the sub layers that never request a `real' lock. This flag is
 	 * not used currently.
@@ -1624,24 +1624,24 @@ enum cl_enq_flags {
 	 * object doing IO; however, lock itself may have precise requirements
 	 * that are described by the enqueue flags.
 	 */
-	CEF_NEVER	= 0x00000010,
+	CEF_NEVER		= 0x00000010,
 	/**
 	 * for async glimpse lock.
 	 */
-	CEF_AGL	  = 0x00000020,
+	CEF_AGL			= 0x00000020,
 	/**
 	 * enqueue a lock to test DLM lock existence.
 	 */
-	CEF_PEEK	= 0x00000040,
+	CEF_PEEK		= 0x00000040,
 	/**
 	 * Lock match only. Used by group lock in I/O as group lock
 	 * is known to exist.
 	 */
-	CEF_LOCK_MATCH	= BIT(7),
+	CEF_LOCK_MATCH		= BIT(7),
 	/**
 	 * mask of enq_flags.
 	 */
-	CEF_MASK	= 0x000000ff,
+	CEF_MASK		= 0x000000ff,
 };
 
 /**
@@ -1650,8 +1650,8 @@ enum cl_enq_flags {
  */
 struct cl_io_lock_link {
 	/** linkage into one of cl_lockset lists. */
-	struct list_head	   cill_linkage;
-	struct cl_lock          cill_lock;
+	struct list_head	cill_linkage;
+	struct cl_lock		cill_lock;
 	/** optional destructor */
 	void	       (*cill_fini)(const struct lu_env *env,
 				    struct cl_io_lock_link *link);
@@ -1689,9 +1689,9 @@ struct cl_io_lock_link {
  */
 struct cl_lockset {
 	/** locks to be acquired. */
-	struct list_head  cls_todo;
+	struct list_head	cls_todo;
 	/** locks acquired. */
-	struct list_head  cls_done;
+	struct list_head	cls_done;
 };
 
 /**
@@ -1709,21 +1709,21 @@ enum cl_io_lock_dmd {
 
 enum cl_fsync_mode {
 	/** start writeback, do not wait for them to finish */
-	CL_FSYNC_NONE  = 0,
+	CL_FSYNC_NONE		= 0,
 	/** start writeback and wait for them to finish */
-	CL_FSYNC_LOCAL = 1,
+	CL_FSYNC_LOCAL		= 1,
 	/** discard all of dirty pages in a specific file range */
-	CL_FSYNC_DISCARD = 2,
+	CL_FSYNC_DISCARD	= 2,
 	/** start writeback and make sure they have reached storage before
 	 * return. OST_SYNC RPC must be issued and finished
 	 */
-	CL_FSYNC_ALL   = 3
+	CL_FSYNC_ALL		= 3
 };
 
 struct cl_io_rw_common {
-	loff_t      crw_pos;
-	size_t      crw_count;
-	int	 crw_nonblock;
+	loff_t			crw_pos;
+	size_t			crw_count;
+	int			crw_nonblock;
 };
 
 /**
@@ -1739,65 +1739,65 @@ struct cl_io {
 	/** type of this IO. Immutable after creation. */
 	enum cl_io_type		ci_type;
 	/** current state of cl_io state machine. */
-	enum cl_io_state	       ci_state;
+	enum cl_io_state	ci_state;
 	/** main object this io is against. Immutable after creation. */
-	struct cl_object	      *ci_obj;
+	struct cl_object	*ci_obj;
 	/**
 	 * Upper layer io, of which this io is a part of. Immutable after
 	 * creation.
 	 */
-	struct cl_io		  *ci_parent;
+	struct cl_io		*ci_parent;
 	/** List of slices. Immutable after creation. */
-	struct list_head		     ci_layers;
+	struct list_head	ci_layers;
 	/** list of locks (to be) acquired by this io. */
-	struct cl_lockset	      ci_lockset;
+	struct cl_lockset	ci_lockset;
 	/** lock requirements, this is just a help info for sublayers. */
-	enum cl_io_lock_dmd	    ci_lockreq;
+	enum cl_io_lock_dmd	ci_lockreq;
 	union {
 		struct cl_rd_io {
-			struct cl_io_rw_common rd;
+			struct cl_io_rw_common	rd;
 		} ci_rd;
 		struct cl_wr_io {
-			struct cl_io_rw_common wr;
-			int		    wr_append;
-			int		    wr_sync;
+			struct cl_io_rw_common	wr;
+			int			wr_append;
+			int			wr_sync;
 		} ci_wr;
-		struct cl_io_rw_common ci_rw;
+		struct cl_io_rw_common	ci_rw;
 		struct cl_setattr_io {
-			struct ost_lvb   sa_attr;
-			unsigned int		 sa_attr_flags;
-			unsigned int     sa_avalid;
+			struct ost_lvb		sa_attr;
+			unsigned int		sa_attr_flags;
+			unsigned int		sa_avalid;
 			unsigned int		sa_xvalid; /* OP_XVALID */
-			int		sa_stripe_index;
-			struct ost_layout	 sa_layout;
+			int			sa_stripe_index;
+			struct ost_layout	sa_layout;
 			const struct lu_fid	*sa_parent_fid;
 		} ci_setattr;
 		struct cl_data_version_io {
-			u64 dv_data_version;
-			int dv_flags;
+			u64			dv_data_version;
+			int			dv_flags;
 		} ci_data_version;
 		struct cl_fault_io {
 			/** page index within file. */
-			pgoff_t	 ft_index;
+			pgoff_t			ft_index;
 			/** bytes valid byte on a faulted page. */
-			size_t	     ft_nob;
+			size_t			ft_nob;
 			/** writable page? for nopage() only */
-			int	     ft_writable;
+			int			ft_writable;
 			/** page of an executable? */
-			int	     ft_executable;
+			int			ft_executable;
 			/** page_mkwrite() */
-			int	     ft_mkwrite;
+			int			ft_mkwrite;
 			/** resulting page */
-			struct cl_page *ft_page;
+			struct cl_page		*ft_page;
 		} ci_fault;
 		struct cl_fsync_io {
-			loff_t	     fi_start;
-			loff_t	     fi_end;
+			loff_t			fi_start;
+			loff_t			fi_end;
 			/** file system level fid */
-			struct lu_fid     *fi_fid;
-			enum cl_fsync_mode fi_mode;
+			struct lu_fid		*fi_fid;
+			enum cl_fsync_mode	fi_mode;
 			/* how many pages were written/discarded */
-			unsigned int       fi_nr_written;
+			unsigned int		fi_nr_written;
 		} ci_fsync;
 		struct cl_ladvise_io {
 			u64			li_start;
@@ -1808,30 +1808,30 @@ struct cl_io {
 			u64			li_flags;
 		} ci_ladvise;
 	} u;
-	struct cl_2queue     ci_queue;
-	size_t	       ci_nob;
-	int		  ci_result;
-	unsigned int	 ci_continue:1,
+	struct cl_2queue	ci_queue;
+	size_t			ci_nob;
+	int			ci_result;
+	unsigned int		ci_continue:1,
 	/**
 	 * This io has held grouplock, to inform sublayers that
 	 * don't do lockless i/o.
 	 */
-			     ci_no_srvlock:1,
+				ci_no_srvlock:1,
 	/**
 	 * The whole IO need to be restarted because layout has been changed
 	 */
-			     ci_need_restart:1,
+				ci_need_restart:1,
 	/**
 	 * to not refresh layout - the IO issuer knows that the layout won't
 	 * change(page operations, layout change causes all page to be
 	 * discarded), or it doesn't matter if it changes(sync).
 	 */
-			     ci_ignore_layout:1,
+				ci_ignore_layout:1,
 	/**
 	 * Need MDS intervention to complete a write. This usually means the
 	 * corresponding component is not initialized for the writing extent.
 	 */
-			ci_need_write_intent:1,
+				ci_need_write_intent:1,
 	/**
 	 * Check if layout changed after the IO finishes. Mainly for HSM
 	 * requirement. If IO occurs to openning files, it doesn't need to
@@ -1839,19 +1839,19 @@ struct cl_io {
 	 * Right now, only two operations need to verify layout: glimpse
 	 * and setattr.
 	 */
-			     ci_verify_layout:1,
+				ci_verify_layout:1,
 	/**
 	 * file is released, restore has to be triggered by vvp layer
 	 */
-			     ci_restore_needed:1,
+				ci_restore_needed:1,
 	/**
 	 * O_NOATIME
 	 */
-			     ci_noatime:1;
+				ci_noatime:1;
 	/**
 	 * Number of pages owned by this IO. For invariant checking.
 	 */
-	unsigned int	     ci_owned_nr;
+	unsigned int		ci_owned_nr;
 };
 
 /** @} cl_io */
@@ -1860,14 +1860,14 @@ struct cl_io {
  * Per-transfer attributes.
  */
 struct cl_req_attr {
-	enum cl_req_type cra_type;
-	u64		 cra_flags;
-	struct cl_page	*cra_page;
+	enum cl_req_type	cra_type;
+	u64			cra_flags;
+	struct cl_page	       *cra_page;
 
 	/** Generic attributes for the server consumption. */
-	struct obdo	*cra_oa;
+	struct obdo	       *cra_oa;
 	/** Jobid */
-	char		 cra_jobid[LUSTRE_JOBID_SIZE];
+	char			cra_jobid[LUSTRE_JOBID_SIZE];
 };
 
 enum cache_stats_item {
@@ -1892,8 +1892,8 @@ enum cache_stats_item {
  * Stats for a generic cache (similar to inode, lu_object, etc. caches).
  */
 struct cache_stats {
-	const char    *cs_name;
-	atomic_t   cs_stats[CS_NR];
+	const char	       *cs_name;
+	atomic_t		cs_stats[CS_NR];
 };
 
 /** These are not exported so far */
@@ -1905,7 +1905,7 @@ struct cache_stats {
  * clients to co-exist in the single address space.
  */
 struct cl_site {
-	struct lu_site	cs_lu;
+	struct lu_site		cs_lu;
 	/**
 	 * Statistical counters. Atomics do not scale, something better like
 	 * per-cpu counters is needed.
@@ -1915,8 +1915,8 @@ struct cl_site {
 	 * When interpreting keep in mind that both sub-locks (and sub-pages)
 	 * and top-locks (and top-pages) are accounted here.
 	 */
-	struct cache_stats    cs_pages;
-	atomic_t	  cs_pages_state[CPS_NR];
+	struct cache_stats	cs_pages;
+	atomic_t		cs_pages_state[CPS_NR];
 };
 
 int  cl_site_init(struct cl_site *s, struct cl_device *top);
@@ -2341,13 +2341,13 @@ static inline struct cl_page *cl_page_list_first(struct cl_page_list *plist)
 /**
  * Iterate over pages in a page list.
  */
-#define cl_page_list_for_each(page, list)			       \
+#define cl_page_list_for_each(page, list)				\
 	list_for_each_entry((page), &(list)->pl_pages, cp_batch)
 
 /**
  * Iterate over pages in a page list, taking possible removals into account.
  */
-#define cl_page_list_for_each_safe(page, temp, list)		    \
+#define cl_page_list_for_each_safe(page, temp, list)			\
 	list_for_each_entry_safe((page), (temp), &(list)->pl_pages, cp_batch)
 
 void cl_page_list_init(struct cl_page_list *plist);
@@ -2394,7 +2394,7 @@ struct cl_sync_io {
 	/** barrier of destroy this structure */
 	atomic_t		csi_barrier;
 	/** completion to be signaled when transfer is complete. */
-	wait_queue_head_t		csi_waitq;
+	wait_queue_head_t	csi_waitq;
 	/** callback to invoke when this IO is finished */
 	void			(*csi_end_io)(const struct lu_env *,
 					      struct cl_sync_io *);
diff --git a/drivers/staging/lustre/lustre/include/lprocfs_status.h b/drivers/staging/lustre/lustre/include/lprocfs_status.h
index 7649040..d69f395 100644
--- a/drivers/staging/lustre/lustre/include/lprocfs_status.h
+++ b/drivers/staging/lustre/lustre/include/lprocfs_status.h
@@ -49,25 +49,25 @@
 #include <uapi/linux/lustre/lustre_idl.h>
 
 struct lprocfs_vars {
-	const char		*name;
+	const char			*name;
 	const struct file_operations	*fops;
-	void			*data;
+	void				*data;
 	/**
 	 * sysfs file mode.
 	 */
-	umode_t			proc_mode;
+	umode_t				proc_mode;
 };
 
 struct lprocfs_static_vars {
-	struct lprocfs_vars *obd_vars;
-	const struct attribute_group *sysfs_vars;
+	struct lprocfs_vars		*obd_vars;
+	const struct attribute_group	*sysfs_vars;
 };
 
 /* if we find more consumers this could be generalized */
 #define OBD_HIST_MAX 32
 struct obd_histogram {
-	spinlock_t	oh_lock;
-	unsigned long	oh_buckets[OBD_HIST_MAX];
+	spinlock_t			oh_lock;
+	unsigned long			oh_buckets[OBD_HIST_MAX];
 };
 
 enum {
@@ -125,37 +125,37 @@ struct rename_stats {
  */
 
 enum {
-	LPROCFS_CNTR_EXTERNALLOCK = 0x0001,
-	LPROCFS_CNTR_AVGMINMAX    = 0x0002,
-	LPROCFS_CNTR_STDDEV       = 0x0004,
+	LPROCFS_CNTR_EXTERNALLOCK	= 0x0001,
+	LPROCFS_CNTR_AVGMINMAX		= 0x0002,
+	LPROCFS_CNTR_STDDEV		= 0x0004,
 
 	/* counter data type */
-	LPROCFS_TYPE_REGS	 = 0x0100,
-	LPROCFS_TYPE_BYTES	= 0x0200,
-	LPROCFS_TYPE_PAGES	= 0x0400,
-	LPROCFS_TYPE_CYCLE	= 0x0800,
+	LPROCFS_TYPE_REGS		= 0x0100,
+	LPROCFS_TYPE_BYTES		= 0x0200,
+	LPROCFS_TYPE_PAGES		= 0x0400,
+	LPROCFS_TYPE_CYCLE		= 0x0800,
 };
 
 #define LC_MIN_INIT ((~(u64)0) >> 1)
 
 struct lprocfs_counter_header {
-	unsigned int		lc_config;
-	const char		*lc_name;   /* must be static */
-	const char		*lc_units;  /* must be static */
+	unsigned int	 lc_config;
+	const char	*lc_name;   /* must be static */
+	const char	*lc_units;  /* must be static */
 };
 
 struct lprocfs_counter {
-	s64	lc_count;
-	s64	lc_min;
-	s64	lc_max;
-	s64	lc_sumsquare;
+	s64		lc_count;
+	s64		lc_min;
+	s64		lc_max;
+	s64		lc_sumsquare;
 	/*
 	 * Every counter has lc_array_sum[0], while lc_array_sum[1] is only
 	 * for irq context counter, i.e. stats with
 	 * LPROCFS_STATS_FLAG_IRQ_SAFE flag, its counter need
 	 * lc_array_sum[1]
 	 */
-	s64	lc_array_sum[1];
+	s64		lc_array_sum[1];
 };
 
 #define lc_sum		lc_array_sum[0]
@@ -165,20 +165,23 @@ struct lprocfs_percpu {
 #ifndef __GNUC__
 	s64			pad;
 #endif
-	struct lprocfs_counter lp_cntr[0];
+	struct lprocfs_counter	lp_cntr[0];
 };
 
 enum lprocfs_stats_lock_ops {
-	LPROCFS_GET_NUM_CPU	= 0x0001, /* number allocated per-CPU stats */
-	LPROCFS_GET_SMP_ID	= 0x0002, /* current stat to be updated */
+	LPROCFS_GET_NUM_CPU		= 0x0001, /* number allocated per-CPU
+						   * stats
+						   */
+	LPROCFS_GET_SMP_ID		= 0x0002, /* current stat to be updated
+						   */
 };
 
 enum lprocfs_stats_flags {
-	LPROCFS_STATS_FLAG_NONE     = 0x0000, /* per cpu counter */
-	LPROCFS_STATS_FLAG_NOPERCPU = 0x0001, /* stats have no percpu
-					       * area and need locking
-					       */
-	LPROCFS_STATS_FLAG_IRQ_SAFE = 0x0002, /* alloc need irq safe */
+	LPROCFS_STATS_FLAG_NONE		= 0x0000, /* per cpu counter */
+	LPROCFS_STATS_FLAG_NOPERCPU	= 0x0001, /* stats have no percpu
+						   * area and need locking
+						   */
+	LPROCFS_STATS_FLAG_IRQ_SAFE	= 0x0002, /* alloc need irq safe */
 };
 
 enum lprocfs_fields_flags {
@@ -187,7 +190,7 @@ enum lprocfs_fields_flags {
 	LPROCFS_FIELDS_FLAGS_MIN	= 0x0003,
 	LPROCFS_FIELDS_FLAGS_MAX	= 0x0004,
 	LPROCFS_FIELDS_FLAGS_AVG	= 0x0005,
-	LPROCFS_FIELDS_FLAGS_SUMSQUARE  = 0x0006,
+	LPROCFS_FIELDS_FLAGS_SUMSQUARE	= 0x0006,
 	LPROCFS_FIELDS_FLAGS_COUNT      = 0x0007,
 };
 
@@ -513,12 +516,12 @@ void lprocfs_stats_collect(struct lprocfs_stats *stats, int idx,
 	return single_open(file, name##_seq_show, inode->i_private);	\
 }									\
 static const struct file_operations name##_fops = {			\
-	.owner   = THIS_MODULE,					    \
-	.open    = name##_single_open,				     \
-	.read    = seq_read,					       \
-	.write   = custom_seq_write,				       \
-	.llseek  = seq_lseek,					      \
-	.release = lprocfs_single_release,				 \
+	.owner   = THIS_MODULE,						\
+	.open    = name##_single_open,					\
+	.read    = seq_read,						\
+	.write   = custom_seq_write,					\
+	.llseek  = seq_lseek,						\
+	.release = lprocfs_single_release,				\
 }
 
 #define LPROC_SEQ_FOPS_RO(name)	 __LPROC_SEQ_FOPS(name, NULL)
diff --git a/drivers/staging/lustre/lustre/include/lu_object.h b/drivers/staging/lustre/lustre/include/lu_object.h
index 3e663a9..68aa0d0 100644
--- a/drivers/staging/lustre/lustre/include/lu_object.h
+++ b/drivers/staging/lustre/lustre/include/lu_object.h
@@ -178,7 +178,7 @@ struct lu_object_conf {
 	/**
 	 * Some hints for obj find and alloc.
 	 */
-	enum loc_flags     loc_flags;
+	enum loc_flags	loc_flags;
 };
 
 /**
@@ -261,30 +261,30 @@ struct lu_device {
 	 *
 	 * \todo XXX which means that atomic_t is probably too small.
 	 */
-	atomic_t		       ld_ref;
+	atomic_t				ld_ref;
 	/**
 	 * Pointer to device type. Never modified once set.
 	 */
-	struct lu_device_type       *ld_type;
+	struct lu_device_type			*ld_type;
 	/**
 	 * Operation vector for this device.
 	 */
-	const struct lu_device_operations *ld_ops;
+	const struct lu_device_operations	*ld_ops;
 	/**
 	 * Stack this device belongs to.
 	 */
-	struct lu_site		    *ld_site;
+	struct lu_site				*ld_site;
 
 	/** \todo XXX: temporary back pointer into obd. */
-	struct obd_device		 *ld_obd;
+	struct obd_device			*ld_obd;
 	/**
 	 * A list of references to this object, for debugging.
 	 */
-	struct lu_ref		      ld_reference;
+	struct lu_ref				ld_reference;
 	/**
 	 * Link the device to the site.
 	 **/
-	struct list_head			 ld_linkage;
+	struct list_head			ld_linkage;
 };
 
 struct lu_device_type_operations;
@@ -309,23 +309,23 @@ struct lu_device_type {
 	/**
 	 * Tag bits. Taken from enum lu_device_tag. Never modified once set.
 	 */
-	u32				   ldt_tags;
+	u32					ldt_tags;
 	/**
 	 * Name of this class. Unique system-wide. Never modified once set.
 	 */
-	char				   *ldt_name;
+	char					*ldt_name;
 	/**
 	 * Operations for this type.
 	 */
-	const struct lu_device_type_operations *ldt_ops;
+	const struct lu_device_type_operations	*ldt_ops;
 	/**
 	 * \todo XXX: temporary pointer to associated obd_type.
 	 */
-	struct obd_type			*ldt_obd_type;
+	struct obd_type				*ldt_obd_type;
 	/**
 	 * \todo XXX: temporary: context tags used by obd_*() calls.
 	 */
-	u32				   ldt_ctx_tags;
+	u32					ldt_ctx_tags;
 	/**
 	 * Number of existing device type instances.
 	 */
@@ -427,21 +427,21 @@ struct lu_attr {
 
 /** Bit-mask of valid attributes */
 enum la_valid {
-	LA_ATIME = 1 << 0,
-	LA_MTIME = 1 << 1,
-	LA_CTIME = 1 << 2,
-	LA_SIZE  = 1 << 3,
-	LA_MODE  = 1 << 4,
-	LA_UID   = 1 << 5,
-	LA_GID   = 1 << 6,
-	LA_BLOCKS = 1 << 7,
-	LA_TYPE   = 1 << 8,
-	LA_FLAGS  = 1 << 9,
-	LA_NLINK  = 1 << 10,
-	LA_RDEV   = 1 << 11,
-	LA_BLKSIZE = 1 << 12,
-	LA_KILL_SUID = 1 << 13,
-	LA_KILL_SGID = 1 << 14,
+	LA_ATIME	= 1 << 0,
+	LA_MTIME	= 1 << 1,
+	LA_CTIME	= 1 << 2,
+	LA_SIZE		= 1 << 3,
+	LA_MODE		= 1 << 4,
+	LA_UID		= 1 << 5,
+	LA_GID		= 1 << 6,
+	LA_BLOCKS	= 1 << 7,
+	LA_TYPE		= 1 << 8,
+	LA_FLAGS	= 1 << 9,
+	LA_NLINK	= 1 << 10,
+	LA_RDEV		= 1 << 11,
+	LA_BLKSIZE	= 1 << 12,
+	LA_KILL_SUID	= 1 << 13,
+	LA_KILL_SGID	= 1 << 14,
 };
 
 /**
@@ -451,15 +451,15 @@ struct lu_object {
 	/**
 	 * Header for this object.
 	 */
-	struct lu_object_header	   *lo_header;
+	struct lu_object_header			*lo_header;
 	/**
 	 * Device for this layer.
 	 */
-	struct lu_device		  *lo_dev;
+	struct lu_device			*lo_dev;
 	/**
 	 * Operations for this object.
 	 */
-	const struct lu_object_operations *lo_ops;
+	const struct lu_object_operations	*lo_ops;
 	/**
 	 * Linkage into list of all layers.
 	 */
@@ -467,7 +467,7 @@ struct lu_object {
 	/**
 	 * Link to the device, for debugging.
 	 */
-	struct lu_ref_link                 lo_dev_ref;
+	struct lu_ref_link			 lo_dev_ref;
 };
 
 enum lu_object_header_flags {
@@ -484,13 +484,13 @@ enum lu_object_header_flags {
 };
 
 enum lu_object_header_attr {
-	LOHA_EXISTS   = 1 << 0,
-	LOHA_REMOTE   = 1 << 1,
+	LOHA_EXISTS	= 1 << 0,
+	LOHA_REMOTE	= 1 << 1,
 	/**
 	 * UNIX file type is stored in S_IFMT bits.
 	 */
-	LOHA_FT_START = 001 << 12, /**< S_IFIFO */
-	LOHA_FT_END   = 017 << 12, /**< S_IFMT */
+	LOHA_FT_START	= 001 << 12, /**< S_IFIFO */
+	LOHA_FT_END	= 017 << 12, /**< S_IFMT */
 };
 
 /**
@@ -513,33 +513,33 @@ struct lu_object_header {
 	 * Object flags from enum lu_object_header_flags. Set and checked
 	 * atomically.
 	 */
-	unsigned long	  loh_flags;
+	unsigned long		loh_flags;
 	/**
 	 * Object reference count. Protected by lu_site::ls_guard.
 	 */
-	atomic_t	   loh_ref;
+	atomic_t		loh_ref;
 	/**
 	 * Common object attributes, cached for efficiency. From enum
 	 * lu_object_header_attr.
 	 */
-	u32		  loh_attr;
+	u32			loh_attr;
 	/**
 	 * Linkage into per-site hash table. Protected by lu_site::ls_guard.
 	 */
-	struct hlist_node       loh_hash;
+	struct hlist_node	loh_hash;
 	/**
 	 * Linkage into per-site LRU list. Protected by lu_site::ls_guard.
 	 */
-	struct list_head	     loh_lru;
+	struct list_head	loh_lru;
 	/**
 	 * Linkage into list of layers. Never modified once set (except lately
 	 * during object destruction). No locking is necessary.
 	 */
-	struct list_head	     loh_layers;
+	struct list_head	loh_layers;
 	/**
 	 * A list of references to this object, for debugging.
 	 */
-	struct lu_ref	  loh_reference;
+	struct lu_ref		loh_reference;
 };
 
 struct fld;
@@ -577,7 +577,7 @@ struct lu_site {
 	/**
 	 * Top-level device for this stack.
 	 */
-	struct lu_device	 *ls_top_dev;
+	struct lu_device	*ls_top_dev;
 	/**
 	 * Bottom-level device for this stack
 	 */
@@ -585,12 +585,12 @@ struct lu_site {
 	/**
 	 * Linkage into global list of sites.
 	 */
-	struct list_head		ls_linkage;
+	struct list_head	ls_linkage;
 	/**
 	 * List for lu device for this site, protected
 	 * by ls_ld_lock.
 	 **/
-	struct list_head		ls_ld_linkage;
+	struct list_head	ls_ld_linkage;
 	spinlock_t		ls_ld_lock;
 
 	/**
@@ -609,7 +609,7 @@ struct lu_site {
 	/**
 	 * Number of objects in lsb_lru_lists - used for shrinking
 	 */
-	struct percpu_counter	 ls_lru_len_counter;
+	struct percpu_counter	ls_lru_len_counter;
 };
 
 wait_queue_head_t *
@@ -753,31 +753,31 @@ int lu_cdebug_printer(const struct lu_env *env,
 /**
  * Print object description followed by a user-supplied message.
  */
-#define LU_OBJECT_DEBUG(mask, env, object, format, ...)		   \
-do {								      \
-	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {		     \
+#define LU_OBJECT_DEBUG(mask, env, object, format, ...)			\
+do {									\
+	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			\
 		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);	\
 		lu_object_print(env, &msgdata, lu_cdebug_printer, object);\
-		CDEBUG(mask, format "\n", ## __VA_ARGS__);		    \
-	}								 \
+		CDEBUG(mask, format "\n", ## __VA_ARGS__);		\
+	}								\
 } while (0)
 
 /**
  * Print short object description followed by a user-supplied message.
  */
 #define LU_OBJECT_HEADER(mask, env, object, format, ...)		\
-do {								    \
-	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {		   \
+do {									\
+	if (cfs_cdebug_show(mask, DEBUG_SUBSYSTEM)) {			\
 		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, NULL);	\
 		lu_object_header_print(env, &msgdata, lu_cdebug_printer,\
-				       (object)->lo_header);	    \
-		lu_cdebug_printer(env, &msgdata, "\n");		 \
-		CDEBUG(mask, format, ## __VA_ARGS__);		  \
-	}							       \
+				       (object)->lo_header);		\
+		lu_cdebug_printer(env, &msgdata, "\n");			\
+		CDEBUG(mask, format, ## __VA_ARGS__);			\
+	}								\
 } while (0)
 
-void lu_object_print       (const struct lu_env *env, void *cookie,
-			    lu_printer_t printer, const struct lu_object *o);
+void lu_object_print(const struct lu_env *env, void *cookie,
+		     lu_printer_t printer, const struct lu_object *o);
 void lu_object_header_print(const struct lu_env *env, void *cookie,
 			    lu_printer_t printer,
 			    const struct lu_object_header *hdr);
@@ -849,15 +849,15 @@ static inline void lu_object_ref_del_at(struct lu_object *o,
 /** input params, should be filled out by mdt */
 struct lu_rdpg {
 	/** hash */
-	u64		   rp_hash;
+	u64			rp_hash;
 	/** count in bytes */
-	unsigned int	    rp_count;
+	unsigned int		rp_count;
 	/** number of pages */
-	unsigned int	    rp_npages;
+	unsigned int		rp_npages;
 	/** requested attr */
-	u32		   rp_attrs;
+	u32			rp_attrs;
 	/** pointers to pages */
-	struct page	   **rp_pages;
+	struct page		**rp_pages;
 };
 
 enum lu_xattr_flags {
@@ -912,24 +912,24 @@ struct lu_context {
 	 * of tags has non-empty intersection with one for key. Tags are taken
 	 * from enum lu_context_tag.
 	 */
-	u32		  lc_tags;
-	enum lu_context_state  lc_state;
+	u32			lc_tags;
+	enum lu_context_state	lc_state;
 	/**
 	 * Pointer to the home service thread. NULL for other execution
 	 * contexts.
 	 */
-	struct ptlrpc_thread  *lc_thread;
+	struct ptlrpc_thread	*lc_thread;
 	/**
 	 * Pointer to an array with key values. Internal implementation
 	 * detail.
 	 */
-	void		 **lc_value;
+	void			**lc_value;
 	/**
 	 * Linkage into a list of all remembered contexts. Only
 	 * `non-transient' contexts, i.e., ones created for service threads
 	 * are placed here.
 	 */
-	struct list_head	     lc_remember;
+	struct list_head	lc_remember;
 	/**
 	 * Version counter used to skip calls to lu_context_refill() when no
 	 * keys were registered.
@@ -949,59 +949,59 @@ enum lu_context_tag {
 	/**
 	 * Thread on md server
 	 */
-	LCT_MD_THREAD = 1 << 0,
+	LCT_MD_THREAD		= 1 << 0,
 	/**
 	 * Thread on dt server
 	 */
-	LCT_DT_THREAD = 1 << 1,
+	LCT_DT_THREAD		= 1 << 1,
 	/**
 	 * Context for transaction handle
 	 */
-	LCT_TX_HANDLE = 1 << 2,
+	LCT_TX_HANDLE		= 1 << 2,
 	/**
 	 * Thread on client
 	 */
-	LCT_CL_THREAD = 1 << 3,
+	LCT_CL_THREAD		= 1 << 3,
 	/**
 	 * A per-request session on a server, and a per-system-call session on
 	 * a client.
 	 */
-	LCT_SESSION   = 1 << 4,
+	LCT_SESSION		= 1 << 4,
 	/**
 	 * A per-request data on OSP device
 	 */
-	LCT_OSP_THREAD = 1 << 5,
+	LCT_OSP_THREAD		= 1 << 5,
 	/**
 	 * MGS device thread
 	 */
-	LCT_MG_THREAD = 1 << 6,
+	LCT_MG_THREAD		= 1 << 6,
 	/**
 	 * Context for local operations
 	 */
-	LCT_LOCAL = 1 << 7,
+	LCT_LOCAL		= 1 << 7,
 	/**
 	 * session for server thread
 	 **/
-	LCT_SERVER_SESSION = BIT(8),
+	LCT_SERVER_SESSION	= BIT(8),
 	/**
 	 * Set when at least one of keys, having values in this context has
 	 * non-NULL lu_context_key::lct_exit() method. This is used to
 	 * optimize lu_context_exit() call.
 	 */
-	LCT_HAS_EXIT  = 1 << 28,
+	LCT_HAS_EXIT		= 1 << 28,
 	/**
 	 * Don't add references for modules creating key values in that context.
 	 * This is only for contexts used internally by lu_object framework.
 	 */
-	LCT_NOREF     = 1 << 29,
+	LCT_NOREF		= 1 << 29,
 	/**
 	 * Key is being prepared for retiring, don't create new values for it.
 	 */
-	LCT_QUIESCENT = 1 << 30,
+	LCT_QUIESCENT		= 1 << 30,
 	/**
 	 * Context should be remembered.
 	 */
-	LCT_REMEMBER  = 1 << 31,
+	LCT_REMEMBER		= 1 << 31,
 	/**
 	 * Contexts usable in cache shrinker thread.
 	 */
@@ -1049,7 +1049,7 @@ struct lu_context_key {
 	/**
 	 * Set of tags for which values of this key are to be instantiated.
 	 */
-	u32 lct_tags;
+	u32		lct_tags;
 	/**
 	 * Value constructor. This is called when new value is created for a
 	 * context. Returns pointer to new value or error pointer.
@@ -1074,62 +1074,62 @@ struct lu_context_key {
 	 * Internal implementation detail: index within lu_context::lc_value[]
 	 * reserved for this key.
 	 */
-	int      lct_index;
+	int		lct_index;
 	/**
 	 * Internal implementation detail: number of values created for this
 	 * key.
 	 */
-	atomic_t lct_used;
+	atomic_t	lct_used;
 	/**
 	 * Internal implementation detail: module for this key.
 	 */
-	struct module *lct_owner;
+	struct module	*lct_owner;
 	/**
 	 * References to this key. For debugging.
 	 */
-	struct lu_ref  lct_reference;
+	struct lu_ref	lct_reference;
 };
 
-#define LU_KEY_INIT(mod, type)				    \
-	static void *mod##_key_init(const struct lu_context *ctx, \
-				    struct lu_context_key *key)   \
-	{							 \
-		type *value;				      \
-								  \
-		BUILD_BUG_ON(sizeof(*value) > PAGE_SIZE);        \
-								  \
-		value = kzalloc(sizeof(*value), GFP_NOFS);	\
-		if (!value)				\
-			value = ERR_PTR(-ENOMEM);		 \
-								  \
-		return value;				     \
-	}							 \
+#define LU_KEY_INIT(mod, type)						\
+	static void *mod##_key_init(const struct lu_context *ctx,	\
+				    struct lu_context_key *key)		\
+	{								\
+		type *value;						\
+									\
+		BUILD_BUG_ON(sizeof(*value) > PAGE_SIZE);		\
+									\
+		value = kzalloc(sizeof(*value), GFP_NOFS);		\
+		if (!value)						\
+			value = ERR_PTR(-ENOMEM);			\
+									\
+		return value;						\
+	}								\
 	struct __##mod##__dummy_init {; } /* semicolon catcher */
 
-#define LU_KEY_FINI(mod, type)					      \
-	static void mod##_key_fini(const struct lu_context *ctx,	    \
+#define LU_KEY_FINI(mod, type)						\
+	static void mod##_key_fini(const struct lu_context *ctx,	\
 				    struct lu_context_key *key, void *data) \
-	{								   \
-		type *info = data;					  \
-									    \
-		kfree(info);					 \
-	}								   \
+	{								\
+		type *info = data;					\
+									\
+		kfree(info);						\
+	}								\
 	struct __##mod##__dummy_fini {; } /* semicolon catcher */
 
-#define LU_KEY_INIT_FINI(mod, type)   \
-	LU_KEY_INIT(mod, type);	\
+#define LU_KEY_INIT_FINI(mod, type)			\
+	LU_KEY_INIT(mod, type);				\
 	LU_KEY_FINI(mod, type)
 
 #define LU_CONTEXT_KEY_DEFINE(mod, tags)		\
-	struct lu_context_key mod##_thread_key = {      \
-		.lct_tags = tags,		       \
-		.lct_init = mod##_key_init,	     \
-		.lct_fini = mod##_key_fini	      \
+	struct lu_context_key mod##_thread_key = {	\
+		.lct_tags = tags,			\
+		.lct_init = mod##_key_init,		\
+		.lct_fini = mod##_key_fini		\
 	}
 
 #define LU_CONTEXT_KEY_INIT(key)			\
-do {						    \
-	(key)->lct_owner = THIS_MODULE;		 \
+do {							\
+	(key)->lct_owner = THIS_MODULE;			\
 } while (0)
 
 int lu_context_key_register(struct lu_context_key *key);
@@ -1144,53 +1144,53 @@ void *lu_context_key_get(const struct lu_context *ctx,
  * owning module.
  */
 
-#define LU_KEY_INIT_GENERIC(mod)					\
+#define LU_KEY_INIT_GENERIC(mod)					  \
 	static void mod##_key_init_generic(struct lu_context_key *k, ...) \
-	{							       \
-		struct lu_context_key *key = k;			 \
-		va_list args;					   \
-									\
-		va_start(args, k);				      \
-		do {						    \
-			LU_CONTEXT_KEY_INIT(key);		       \
-			key = va_arg(args, struct lu_context_key *);    \
-		} while (key);				  \
-		va_end(args);					   \
+	{								  \
+		struct lu_context_key *key = k;				  \
+		va_list args;						  \
+									  \
+		va_start(args, k);					  \
+		do {							  \
+			LU_CONTEXT_KEY_INIT(key);			  \
+			key = va_arg(args, struct lu_context_key *);	  \
+		} while (key);						  \
+		va_end(args);						  \
 	}
 
-#define LU_TYPE_INIT(mod, ...)					  \
-	LU_KEY_INIT_GENERIC(mod)					\
-	static int mod##_type_init(struct lu_device_type *t)	    \
-	{							       \
-		mod##_key_init_generic(__VA_ARGS__, NULL);	      \
-		return lu_context_key_register_many(__VA_ARGS__, NULL); \
-	}							       \
+#define LU_TYPE_INIT(mod, ...)						  \
+	LU_KEY_INIT_GENERIC(mod)					  \
+	static int mod##_type_init(struct lu_device_type *t)		  \
+	{								  \
+		mod##_key_init_generic(__VA_ARGS__, NULL);		  \
+		return lu_context_key_register_many(__VA_ARGS__, NULL);	  \
+	}								  \
 	struct __##mod##_dummy_type_init {; }
 
-#define LU_TYPE_FINI(mod, ...)					  \
-	static void mod##_type_fini(struct lu_device_type *t)	   \
-	{							       \
+#define LU_TYPE_FINI(mod, ...)						\
+	static void mod##_type_fini(struct lu_device_type *t)		\
+	{								\
 		lu_context_key_degister_many(__VA_ARGS__, NULL);	\
-	}							       \
+	}								\
 	struct __##mod##_dummy_type_fini {; }
 
-#define LU_TYPE_START(mod, ...)				 \
-	static void mod##_type_start(struct lu_device_type *t)  \
-	{						       \
-		lu_context_key_revive_many(__VA_ARGS__, NULL);  \
-	}						       \
+#define LU_TYPE_START(mod, ...)						\
+	static void mod##_type_start(struct lu_device_type *t)		\
+	{								\
+		lu_context_key_revive_many(__VA_ARGS__, NULL);		\
+	}								\
 	struct __##mod##_dummy_type_start {; }
 
-#define LU_TYPE_STOP(mod, ...)				  \
-	static void mod##_type_stop(struct lu_device_type *t)   \
-	{						       \
-		lu_context_key_quiesce_many(__VA_ARGS__, NULL); \
-	}						       \
+#define LU_TYPE_STOP(mod, ...)						\
+	static void mod##_type_stop(struct lu_device_type *t)		\
+	{								\
+		lu_context_key_quiesce_many(__VA_ARGS__, NULL);		\
+	}								\
 	struct __##mod##_dummy_type_stop {; }
 
-#define LU_TYPE_INIT_FINI(mod, ...)	     \
-	LU_TYPE_INIT(mod, __VA_ARGS__);	 \
-	LU_TYPE_FINI(mod, __VA_ARGS__);	 \
+#define LU_TYPE_INIT_FINI(mod, ...)		\
+	LU_TYPE_INIT(mod, __VA_ARGS__);		\
+	LU_TYPE_FINI(mod, __VA_ARGS__);		\
 	LU_TYPE_START(mod, __VA_ARGS__);	\
 	LU_TYPE_STOP(mod, __VA_ARGS__)
 
@@ -1217,11 +1217,11 @@ struct lu_env {
 	/**
 	 * "Local" context, used to store data instead of stack.
 	 */
-	struct lu_context  le_ctx;
+	struct lu_context	le_ctx;
 	/**
 	 * "Session" context for per-request data.
 	 */
-	struct lu_context *le_ses;
+	struct lu_context	*le_ses;
 };
 
 int lu_env_init(struct lu_env *env, u32 tags);
@@ -1240,8 +1240,8 @@ struct lu_env {
  * Common name structure to be passed around for various name related methods.
  */
 struct lu_name {
-	const char    *ln_name;
-	int	    ln_namelen;
+	const char	*ln_name;
+	int		ln_namelen;
 };
 
 /**
@@ -1265,8 +1265,8 @@ static inline bool lu_name_is_valid_2(const char *name, size_t name_len)
  * methods.
  */
 struct lu_buf {
-	void   *lb_buf;
-	size_t	lb_len;
+	void		*lb_buf;
+	size_t		lb_len;
 };
 
 /**
@@ -1285,9 +1285,9 @@ struct lu_buf {
 void lu_global_fini(void);
 
 struct lu_kmem_descr {
-	struct kmem_cache **ckd_cache;
-	const char       *ckd_name;
-	const size_t      ckd_size;
+	struct kmem_cache     **ckd_cache;
+	const char	       *ckd_name;
+	const size_t		ckd_size;
 };
 
 int  lu_kmem_init(struct lu_kmem_descr *caches);
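
For reference, the context-key macros realigned above are usually combined per module
roughly as sketched below; "bar", "struct bar_thread_info" and the tag choice are
invented for illustration and assume the surrounding lu_object machinery.

/* Sketch only (not part of the patch): a module declares a per-thread info
 * type, lets the macros generate the key constructor/destructor, and then
 * defines the lu_context_key with the tags it cares about.
 */
struct bar_thread_info {
	struct lu_buf	bti_buf;	/* per-thread scratch space */
};

LU_KEY_INIT_FINI(bar, struct bar_thread_info);
LU_CONTEXT_KEY_DEFINE(bar, LCT_MD_THREAD | LCT_DT_THREAD);

/* A running thread later fetches its value with something like
 * lu_context_key_get(&env->le_ctx, &bar_thread_key).
 */
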
diff --git a/drivers/staging/lustre/lustre/include/lustre_disk.h b/drivers/staging/lustre/lustre/include/lustre_disk.h
index 091a09f..07c074e 100644
--- a/drivers/staging/lustre/lustre/include/lustre_disk.h
+++ b/drivers/staging/lustre/lustre/include/lustre_disk.h
@@ -51,13 +51,13 @@
 
 /****************** persistent mount data *********************/
 
-#define LDD_F_SV_TYPE_MDT   0x0001
-#define LDD_F_SV_TYPE_OST   0x0002
-#define LDD_F_SV_TYPE_MGS   0x0004
-#define LDD_F_SV_TYPE_MASK (LDD_F_SV_TYPE_MDT  | \
-			    LDD_F_SV_TYPE_OST  | \
-			    LDD_F_SV_TYPE_MGS)
-#define LDD_F_SV_ALL	0x0008
+#define LDD_F_SV_TYPE_MDT	0x0001
+#define LDD_F_SV_TYPE_OST	0x0002
+#define LDD_F_SV_TYPE_MGS	0x0004
+#define LDD_F_SV_TYPE_MASK	(LDD_F_SV_TYPE_MDT  | \
+				 LDD_F_SV_TYPE_OST  | \
+				 LDD_F_SV_TYPE_MGS)
+#define LDD_F_SV_ALL		0x0008
 
 /****************** mount command *********************/
 
@@ -65,7 +65,7 @@
  * everything as string options
  */
 
-#define LMD_MAGIC    0xbdacbd03
+#define LMD_MAGIC		0xbdacbd03
 #define LMD_PARAMS_MAXLEN	4096
 
 /* gleaned from the mount command - no persistent info here */
@@ -117,19 +117,19 @@ struct lustre_mount_data {
 struct kobject;
 
 struct lustre_sb_info {
-	int			lsi_flags;
-	struct obd_device	*lsi_mgc;     /* mgc obd */
-	struct lustre_mount_data *lsi_lmd;     /* mount command info */
-	struct ll_sb_info	*lsi_llsbi;   /* add'l client sbi info */
-	struct dt_device	*lsi_dt_dev;  /* dt device to access disk fs*/
-	atomic_t		lsi_mounts;  /* references to the srv_mnt */
-	struct kobject		*lsi_kobj;
-	char			lsi_svname[MTI_NAME_MAXLEN];
-	char			lsi_osd_obdname[64];
-	char			lsi_osd_uuid[64];
-	struct obd_export	*lsi_osd_exp;
-	char			lsi_osd_type[16];
-	char			lsi_fstype[16];
+	int			  lsi_flags;
+	struct obd_device	 *lsi_mgc;    /* mgc obd */
+	struct lustre_mount_data *lsi_lmd;    /* mount command info */
+	struct ll_sb_info	 *lsi_llsbi;  /* add'l client sbi info */
+	struct dt_device	 *lsi_dt_dev; /* dt device to access disk fs */
+	atomic_t		  lsi_mounts; /* references to the srv_mnt */
+	struct kobject		 *lsi_kobj;
+	char			  lsi_svname[MTI_NAME_MAXLEN];
+	char			  lsi_osd_obdname[64];
+	char			  lsi_osd_uuid[64];
+	struct obd_export	 *lsi_osd_exp;
+	char			  lsi_osd_type[16];
+	char			  lsi_fstype[16];
 };
 
 #define LSI_UMOUNT_FAILOVER	0x00200000
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm.h b/drivers/staging/lustre/lustre/include/lustre_dlm.h
index 7c12087..c561d61 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm.h
@@ -66,17 +66,17 @@
  * LDLM non-error return states
  */
 enum ldlm_error {
-	ELDLM_OK = 0,
-	ELDLM_LOCK_MATCHED = 1,
+	ELDLM_OK		= 0,
+	ELDLM_LOCK_MATCHED	= 1,
 
-	ELDLM_LOCK_CHANGED = 300,
-	ELDLM_LOCK_ABORTED = 301,
-	ELDLM_LOCK_REPLACED = 302,
-	ELDLM_NO_LOCK_DATA = 303,
-	ELDLM_LOCK_WOULDBLOCK = 304,
+	ELDLM_LOCK_CHANGED	= 300,
+	ELDLM_LOCK_ABORTED	= 301,
+	ELDLM_LOCK_REPLACED	= 302,
+	ELDLM_NO_LOCK_DATA	= 303,
+	ELDLM_LOCK_WOULDBLOCK	= 304,
 
-	ELDLM_NAMESPACE_EXISTS = 400,
-	ELDLM_BAD_NAMESPACE    = 401
+	ELDLM_NAMESPACE_EXISTS	= 400,
+	ELDLM_BAD_NAMESPACE	= 401
 };
 
 /**
@@ -389,7 +389,7 @@ struct ldlm_namespace {
 	 * Position in global namespace list linking all namespaces on
 	 * the node.
 	 */
-	struct list_head		ns_list_chain;
+	struct list_head	ns_list_chain;
 
 	/**
 	 * List of unused locks for this namespace. This list is also called
@@ -401,7 +401,7 @@ struct ldlm_namespace {
 	 * to release from the head of this list.
 	 * Locks are linked via l_lru field in \see struct ldlm_lock.
 	 */
-	struct list_head		ns_unused_list;
+	struct list_head	ns_unused_list;
 	/** Number of locks in the LRU list above */
 	int			ns_nr_unused;
 
@@ -437,7 +437,7 @@ struct ldlm_namespace {
 	 * Wait queue used by __ldlm_namespace_free. Gets woken up every time
 	 * a resource is removed.
 	 */
-	wait_queue_head_t		ns_waitq;
+	wait_queue_head_t	ns_waitq;
 	/** LDLM pool structure for this namespace */
 	struct ldlm_pool	ns_pool;
 	/** Definition of how eagerly unused locks will be released from LRU */
@@ -502,7 +502,7 @@ typedef int (*ldlm_completion_callback)(struct ldlm_lock *lock, u64 flags,
 /** Work list for sending GL ASTs to multiple locks. */
 struct ldlm_glimpse_work {
 	struct ldlm_lock	*gl_lock; /* lock to glimpse */
-	struct list_head		 gl_list; /* linkage to other gl work structs */
+	struct list_head	 gl_list; /* linkage to other gl work structs */
 	u32			 gl_flags;/* see LDLM_GL_WORK_* below */
 	union ldlm_gl_desc	*gl_desc; /* glimpse descriptor to be packed in
 					   * glimpse callback request
@@ -538,18 +538,18 @@ enum ldlm_cancel_flags {
 };
 
 struct ldlm_flock {
-	u64 start;
-	u64 end;
-	u64 owner;
-	u64 blocking_owner;
-	struct obd_export *blocking_export;
-	u32 pid;
+	u64			start;
+	u64			end;
+	u64			owner;
+	u64			blocking_owner;
+	struct obd_export	*blocking_export;
+	u32			pid;
 };
 
 union ldlm_policy_data {
-	struct ldlm_extent l_extent;
-	struct ldlm_flock l_flock;
-	struct ldlm_inodebits l_inodebits;
+	struct ldlm_extent	l_extent;
+	struct ldlm_flock	l_flock;
+	struct ldlm_inodebits	l_inodebits;
 };
 
 void ldlm_convert_policy_to_local(struct obd_export *exp, enum ldlm_type type,
@@ -589,23 +589,23 @@ struct ldlm_lock {
 	 *
 	 * Must be first in the structure.
 	 */
-	struct portals_handle	l_handle;
+	struct portals_handle		l_handle;
 	/**
 	 * Lock reference count.
 	 * This is how many users have pointers to actual structure, so that
 	 * we do not accidentally free lock structure that is in use.
 	 */
-	atomic_t		l_refc;
+	atomic_t			l_refc;
 	/**
 	 * Internal spinlock protects l_resource.  We should hold this lock
 	 * first before taking res_lock.
 	 */
-	spinlock_t		l_lock;
+	spinlock_t			l_lock;
 	/**
 	 * Pointer to actual resource this lock is in.
 	 * ldlm_lock_change_resource() can change this.
 	 */
-	struct ldlm_resource	*l_resource;
+	struct ldlm_resource		*l_resource;
 	/**
 	 * List item for client side LRU list.
 	 * Protected by ns_lock in struct ldlm_namespace.
@@ -620,20 +620,20 @@ struct ldlm_lock {
 	/**
 	 * Interval-tree node for ldlm_extent.
 	 */
-	struct rb_node		l_rb;
-	u64			__subtree_last;
+	struct rb_node			l_rb;
+	u64				__subtree_last;
 
 	/**
 	 * Requested mode.
 	 * Protected by lr_lock.
 	 */
-	enum ldlm_mode		l_req_mode;
+	enum ldlm_mode			l_req_mode;
 	/**
 	 * Granted mode, also protected by lr_lock.
 	 */
-	enum ldlm_mode		l_granted_mode;
+	enum ldlm_mode			l_granted_mode;
 	/** Lock completion handler pointer. Called when lock is granted. */
-	ldlm_completion_callback l_completion_ast;
+	ldlm_completion_callback	l_completion_ast;
 	/**
 	 * Lock blocking AST handler pointer.
 	 * It plays two roles:
@@ -644,51 +644,51 @@ struct ldlm_lock {
 	 * and then once more when the last user went away and the lock is
 	 * cancelled (could happen recursively).
 	 */
-	ldlm_blocking_callback	l_blocking_ast;
+	ldlm_blocking_callback		l_blocking_ast;
 	/**
 	 * Lock glimpse handler.
 	 * Glimpse handler is used to obtain LVB updates from a client by
 	 * server
 	 */
-	ldlm_glimpse_callback	l_glimpse_ast;
+	ldlm_glimpse_callback		l_glimpse_ast;
 
 	/**
 	 * Lock export.
 	 * This is a pointer to actual client export for locks that were granted
 	 * to clients. Used server-side.
 	 */
-	struct obd_export	*l_export;
+	struct obd_export		*l_export;
 	/**
 	 * Lock connection export.
 	 * Pointer to server export on a client.
 	 */
-	struct obd_export	*l_conn_export;
+	struct obd_export		*l_conn_export;
 
 	/**
 	 * Remote lock handle.
 	 * If the lock is remote, this is the handle of the other side lock
 	 * (l_handle)
 	 */
-	struct lustre_handle	l_remote_handle;
+	struct lustre_handle		l_remote_handle;
 
 	/**
 	 * Representation of private data specific for a lock type.
 	 * Examples are: extent range for extent lock or bitmask for ibits locks
 	 */
-	union ldlm_policy_data	l_policy_data;
+	union ldlm_policy_data		l_policy_data;
 
 	/**
 	 * Lock state flags. Protected by lr_lock.
 	 * \see lustre_dlm_flags.h where the bits are defined.
 	 */
-	u64			l_flags;
+	u64				l_flags;
 
 	/**
 	 * Lock r/w usage counters.
 	 * Protected by lr_lock.
 	 */
-	u32			l_readers;
-	u32			l_writers;
+	u32				l_readers;
+	u32				l_writers;
 	/**
 	 * If the lock is granted, a process sleeps on this waitq to learn when
 	 * it's no longer in use.  If the lock is not granted, a process sleeps
@@ -700,31 +700,31 @@ struct ldlm_lock {
 	 * Seconds. It will be updated if there is any activity related to
 	 * the lock, e.g. enqueue the lock or send blocking AST.
 	 */
-	time64_t		l_last_activity;
+	time64_t			l_last_activity;
 
 	/**
 	 * Time last used by e.g. being matched by lock match.
 	 * Jiffies. Should be converted to time if needed.
 	 */
-	unsigned long		l_last_used;
+	unsigned long			l_last_used;
 
 	/** Originally requested extent for the extent lock. */
-	struct ldlm_extent	l_req_extent;
+	struct ldlm_extent		l_req_extent;
 
 	/*
 	 * Client-side-only members.
 	 */
 
-	enum lvb_type	      l_lvb_type;
+	enum lvb_type			l_lvb_type;
 
 	/**
 	 * Temporary storage for a LVB received during an enqueue operation.
 	 */
-	u32			l_lvb_len;
-	void			*l_lvb_data;
+	u32				l_lvb_len;
+	void				*l_lvb_data;
 
 	/** Private storage for lock user. Opaque to LDLM. */
-	void			*l_ast_data;
+	void				*l_ast_data;
 
 	/*
 	 * Server-side-only members.
@@ -735,7 +735,7 @@ struct ldlm_lock {
 	 * Used by Commit on Share (COS) code. Currently only used for
 	 * inodebits locks on MDS.
 	 */
-	u64			l_client_cookie;
+	u64				l_client_cookie;
 
 	/**
 	 * List item for locks waiting for cancellation from clients.
@@ -753,10 +753,10 @@ struct ldlm_lock {
 	 * under this lock.
 	 * \see ost_rw_prolong_locks
 	 */
-	unsigned long		l_callback_timeout;
+	unsigned long			l_callback_timeout;
 
 	/** Local PID of process which created this lock. */
-	u32			l_pid;
+	u32				l_pid;
 
 	/**
 	 * Number of times blocking AST was sent for this lock.
@@ -764,7 +764,7 @@ struct ldlm_lock {
 	 * attempt to send blocking AST more than once, an assertion would be
 	 * hit. \see ldlm_work_bl_ast_lock
 	 */
-	int			l_bl_ast_run;
+	int				l_bl_ast_run;
 	/** List item ldlm_add_ast_work_item() for case of blocking ASTs. */
 	struct list_head		l_bl_ast;
 	/** List item ldlm_add_ast_work_item() for case of completion ASTs. */
@@ -776,7 +776,7 @@ struct ldlm_lock {
 	 * Pointer to a conflicting lock that caused blocking AST to be sent
 	 * for this lock
 	 */
-	struct ldlm_lock	*l_blocking_lock;
+	struct ldlm_lock		*l_blocking_lock;
 
 	/**
 	 * Protected by lr_lock, linkages to "skip lists".
@@ -786,15 +786,15 @@ struct ldlm_lock {
 	struct list_head		l_sl_policy;
 
 	/** Reference tracking structure to debug leaked locks. */
-	struct lu_ref		l_reference;
+	struct lu_ref			l_reference;
 #if LUSTRE_TRACKS_LOCK_EXP_REFS
 	/* Debugging stuff for bug 20498, for tracking export references. */
 	/** number of export references taken */
-	int			l_exp_refs_nr;
+	int				l_exp_refs_nr;
 	/** link all locks referencing one export */
 	struct list_head		l_exp_refs_link;
 	/** referenced export object */
-	struct obd_export	*l_exp_refs_target;
+	struct obd_export		*l_exp_refs_target;
 #endif
 };
 
@@ -810,19 +810,19 @@ struct ldlm_lock {
  * whether the locks are conflicting or not.
  */
 struct ldlm_resource {
-	struct ldlm_ns_bucket	*lr_ns_bucket;
+	struct ldlm_ns_bucket		*lr_ns_bucket;
 
 	/**
 	 * List item for list in namespace hash.
 	 * protected by ns_lock
 	 */
-	struct hlist_node	lr_hash;
+	struct hlist_node		lr_hash;
 
 	/** Reference count for this resource */
-	atomic_t		lr_refcount;
+	atomic_t			lr_refcount;
 
 	/** Spinlock to protect locks under this resource. */
-	spinlock_t		lr_lock;
+	spinlock_t			lr_lock;
 
 	/**
 	 * protected by lr_lock
@@ -838,30 +838,30 @@ struct ldlm_resource {
 	/** @} */
 
 	/** Resource name */
-	struct ldlm_res_id	lr_name;
+	struct ldlm_res_id		lr_name;
 
 	/**
 	 * Interval trees (only for extent locks) for all modes of this resource
 	 */
-	struct ldlm_interval_tree *lr_itree;
+	struct ldlm_interval_tree	*lr_itree;
 
 	/** Type of locks this resource can hold. Only one type per resource. */
-	enum ldlm_type		lr_type; /* LDLM_{PLAIN,EXTENT,FLOCK,IBITS} */
+	enum ldlm_type			lr_type; /* LDLM_{PLAIN,EXTENT,FLOCK,IBITS} */
 
 	/**
 	 * Server-side-only lock value block elements.
 	 * To serialize lvbo_init.
 	 */
-	int			lr_lvb_len;
-	struct mutex		lr_lvb_mutex;
+	int				lr_lvb_len;
+	struct mutex			lr_lvb_mutex;
 
 	/**
 	 * Associated inode, used only on client side.
 	 */
-	struct inode		*lr_lvb_inode;
+	struct inode			*lr_lvb_inode;
 
 	/** List of references to this resource. For debugging. */
-	struct lu_ref		lr_reference;
+	struct lu_ref			lr_reference;
 };
 
 static inline bool ldlm_has_layout(struct ldlm_lock *lock)
@@ -931,26 +931,26 @@ static inline int ldlm_lvbo_fill(struct ldlm_lock *lock, void *buf, int len)
 }
 
 struct ldlm_ast_work {
-	struct ldlm_lock      *w_lock;
-	int		    w_blocking;
-	struct ldlm_lock_desc  w_desc;
-	struct list_head	     w_list;
-	int		    w_flags;
-	void		  *w_data;
-	int		    w_datalen;
+	struct ldlm_lock       *w_lock;
+	int			w_blocking;
+	struct ldlm_lock_desc	w_desc;
+	struct list_head	w_list;
+	int			w_flags;
+	void		       *w_data;
+	int			w_datalen;
 };
 
 /**
  * Common ldlm_enqueue parameters
  */
 struct ldlm_enqueue_info {
-	enum ldlm_type	ei_type;  /** Type of the lock being enqueued. */
-	enum ldlm_mode	ei_mode;  /** Mode of the lock being enqueued. */
-	void *ei_cb_bl;  /** blocking lock callback */
-	void *ei_cb_cp;  /** lock completion callback */
-	void *ei_cb_gl;  /** lock glimpse callback */
-	void *ei_cbdata; /** Data to be passed into callbacks. */
-	unsigned int ei_enq_slave:1; /* whether enqueue slave stripes */
+	enum ldlm_type		ei_type;  /** Type of the lock being enqueued. */
+	enum ldlm_mode		ei_mode;  /** Mode of the lock being enqueued. */
+	void			*ei_cb_bl;  /** blocking lock callback */
+	void			*ei_cb_cp;  /** lock completion callback */
+	void			*ei_cb_gl;  /** lock glimpse callback */
+	void			*ei_cbdata; /** Data to be passed into callbacks. */
+	unsigned int		ei_enq_slave:1; /* whether enqueue slave stripes */
 };
 
 extern struct obd_ops ldlm_obd_ops;
@@ -971,12 +971,12 @@ struct ldlm_enqueue_info {
  * \see LDLM_DEBUG
  */
 #define ldlm_lock_debug(msgdata, mask, cdls, lock, fmt, a...) do {      \
-	CFS_CHECK_STACK(msgdata, mask, cdls);			   \
+	CFS_CHECK_STACK(msgdata, mask, cdls);				\
 									\
-	if (((mask) & D_CANTMASK) != 0 ||			       \
-	    ((libcfs_debug & (mask)) != 0 &&			    \
-	     (libcfs_subsystem_debug & DEBUG_SUBSYSTEM) != 0))	  \
-		_ldlm_lock_debug(lock, msgdata, fmt, ##a);	      \
+	if (((mask) & D_CANTMASK) != 0 ||				\
+	    ((libcfs_debug & (mask)) != 0 &&				\
+	     (libcfs_subsystem_debug & DEBUG_SUBSYSTEM) != 0))		\
+		_ldlm_lock_debug(lock, msgdata, fmt, ##a);		\
 } while (0)
 
 void _ldlm_lock_debug(struct ldlm_lock *lock,
@@ -987,9 +987,9 @@ void _ldlm_lock_debug(struct ldlm_lock *lock,
 /**
  * Rate-limited version of lock printing function.
  */
-#define LDLM_DEBUG_LIMIT(mask, lock, fmt, a...) do {			 \
-	static struct cfs_debug_limit_state _ldlm_cdls;			   \
-	LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, &_ldlm_cdls);	      \
+#define LDLM_DEBUG_LIMIT(mask, lock, fmt, a...) do {			\
+	static struct cfs_debug_limit_state _ldlm_cdls;			\
+	LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, mask, &_ldlm_cdls);		\
 	ldlm_lock_debug(&msgdata, mask, &_ldlm_cdls, lock, "### " fmt, ##a);\
 } while (0)
 
@@ -997,14 +997,14 @@ void _ldlm_lock_debug(struct ldlm_lock *lock,
 #define LDLM_WARN(lock, fmt, a...)  LDLM_DEBUG_LIMIT(D_WARNING, lock, fmt, ## a)
 
 /** Non-rate-limited lock printing function for debugging purposes. */
-#define LDLM_DEBUG(lock, fmt, a...)   do {				  \
-	if (likely(lock)) {						    \
-		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_DLMTRACE, NULL);      \
-		ldlm_lock_debug(&msgdata, D_DLMTRACE, NULL, lock,	    \
-				"### " fmt, ##a);			    \
-	} else {							    \
-		LDLM_DEBUG_NOLOCK("no dlm lock: " fmt, ##a);		    \
-	}								    \
+#define LDLM_DEBUG(lock, fmt, a...)   do {				\
+	if (likely(lock)) {						\
+		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_DLMTRACE, NULL);	\
+		ldlm_lock_debug(&msgdata, D_DLMTRACE, NULL, lock,	\
+				"### " fmt, ##a);			\
+	} else {							\
+		LDLM_DEBUG_NOLOCK("no dlm lock: " fmt, ##a);		\
+	}								\
 } while (0)
 
 typedef int (*ldlm_processing_policy)(struct ldlm_lock *lock, u64 *flags,
@@ -1040,9 +1040,9 @@ int ldlm_resource_iterate(struct ldlm_namespace *, const struct ldlm_res_id *,
 u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, u64 old_kms);
 
 struct ldlm_callback_suite {
-	ldlm_completion_callback lcs_completion;
-	ldlm_blocking_callback   lcs_blocking;
-	ldlm_glimpse_callback    lcs_glimpse;
+	ldlm_completion_callback	lcs_completion;
+	ldlm_blocking_callback		lcs_blocking;
+	ldlm_glimpse_callback		lcs_glimpse;
 };
 
 /* ldlm_lockd.c */
@@ -1105,41 +1105,41 @@ static inline int ldlm_res_lvbo_update(struct ldlm_resource *res,
  * Release a temporary lock reference obtained by ldlm_handle2lock() or
  * __ldlm_handle2lock().
  */
-#define LDLM_LOCK_PUT(lock)		     \
-do {					    \
-	LDLM_LOCK_REF_DEL(lock);		\
-	/*LDLM_DEBUG((lock), "put");*/	  \
-	ldlm_lock_put(lock);		    \
+#define LDLM_LOCK_PUT(lock)		\
+do {					\
+	LDLM_LOCK_REF_DEL(lock);	\
+	/*LDLM_DEBUG((lock), "put");*/	\
+	ldlm_lock_put(lock);		\
 } while (0)
 
 /**
  * Release a lock reference obtained by some other means (see
  * LDLM_LOCK_PUT()).
  */
-#define LDLM_LOCK_RELEASE(lock)		 \
-do {					    \
-	/*LDLM_DEBUG((lock), "put");*/	  \
-	ldlm_lock_put(lock);		    \
+#define LDLM_LOCK_RELEASE(lock)		\
+do {					\
+	/*LDLM_DEBUG((lock), "put");*/	\
+	ldlm_lock_put(lock);		\
 } while (0)
 
-#define LDLM_LOCK_GET(lock)		     \
-({					      \
-	ldlm_lock_get(lock);		    \
-	/*LDLM_DEBUG((lock), "get");*/	  \
-	lock;				   \
+#define LDLM_LOCK_GET(lock)		\
+({					\
+	ldlm_lock_get(lock);		\
+	/*LDLM_DEBUG((lock), "get");*/	\
+	lock;				\
 })
 
-#define ldlm_lock_list_put(head, member, count)		     \
-({								  \
-	struct ldlm_lock *_lock, *_next;			    \
-	int c = count;					      \
-	list_for_each_entry_safe(_lock, _next, head, member) {  \
-		if (c-- == 0)				       \
-			break;				      \
-		list_del_init(&_lock->member);		  \
-		LDLM_LOCK_RELEASE(_lock);			   \
-	}							   \
-	LASSERT(c <= 0);					    \
+#define ldlm_lock_list_put(head, member, count)			\
+({								\
+	struct ldlm_lock *_lock, *_next;			\
+	int c = count;						\
+	list_for_each_entry_safe(_lock, _next, head, member) {	\
+		if (c-- == 0)					\
+			break;					\
+		list_del_init(&_lock->member);			\
+		LDLM_LOCK_RELEASE(_lock);			\
+	}							\
+	LASSERT(c <= 0);					\
 })
 
 struct ldlm_lock *ldlm_lock_get(struct ldlm_lock *lock);
@@ -1198,12 +1198,12 @@ void ldlm_resource_add_lock(struct ldlm_resource *res,
 int ldlm_lock_change_resource(struct ldlm_namespace *, struct ldlm_lock *,
 			      const struct ldlm_res_id *);
 
-#define LDLM_RESOURCE_ADDREF(res) do {				  \
-	lu_ref_add_atomic(&(res)->lr_reference, __func__, current);  \
+#define LDLM_RESOURCE_ADDREF(res) do {					\
+	lu_ref_add_atomic(&(res)->lr_reference, __func__, current);	\
 } while (0)
 
-#define LDLM_RESOURCE_DELREF(res) do {				  \
-	lu_ref_del(&(res)->lr_reference, __func__, current);	  \
+#define LDLM_RESOURCE_DELREF(res) do {				\
+	lu_ref_del(&(res)->lr_reference, __func__, current);	\
 } while (0)
 
 /* ldlm_request.c */
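
For reference, the reference-counting and debug helpers realigned above are normally
paired as in the hedged sketch below; ldlm_handle2lock() is the lookup mentioned in
the comments above, while "example_peek_lock" and its body are invented to show the
pattern only.

/* Sketch only (not part of the patch): a temporary reference obtained via
 * ldlm_handle2lock() is dropped with LDLM_LOCK_PUT(); LDLM_DEBUG() prints
 * through the non-rate-limited lock-debug path.
 */
static void example_peek_lock(const struct lustre_handle *lockh)
{
	struct ldlm_lock *lock;

	lock = ldlm_handle2lock(lockh);	/* takes a temporary reference */
	if (!lock)
		return;

	LDLM_DEBUG(lock, "inspecting lock");
	/* ... look at lock->l_granted_mode, lock->l_policy_data, ... */

	LDLM_LOCK_PUT(lock);		/* drops that reference */
}
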
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
index 487ea17..abeb651 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
@@ -27,100 +27,100 @@
 #ifndef LDLM_ALL_FLAGS_MASK
 
 /** l_flags bits marked as "all_flags" bits */
-#define LDLM_FL_ALL_FLAGS_MASK          0x00FFFFFFC08F932FULL
+#define LDLM_FL_ALL_FLAGS_MASK		0x00FFFFFFC08F932FULL
 
 /** extent, mode, or resource changed */
-#define LDLM_FL_LOCK_CHANGED            0x0000000000000001ULL /* bit 0 */
-#define ldlm_is_lock_changed(_l)        LDLM_TEST_FLAG((_l), 1ULL <<  0)
-#define ldlm_set_lock_changed(_l)       LDLM_SET_FLAG((_l), 1ULL <<  0)
-#define ldlm_clear_lock_changed(_l)     LDLM_CLEAR_FLAG((_l), 1ULL <<  0)
+#define LDLM_FL_LOCK_CHANGED		0x0000000000000001ULL /* bit 0 */
+#define ldlm_is_lock_changed(_l)	LDLM_TEST_FLAG((_l), 1ULL <<  0)
+#define ldlm_set_lock_changed(_l)	LDLM_SET_FLAG((_l), 1ULL <<  0)
+#define ldlm_clear_lock_changed(_l)	LDLM_CLEAR_FLAG((_l), 1ULL <<  0)
 
 /**
  * Server placed lock on granted list, or a recovering client wants the
  * lock added to the granted list, no questions asked.
  */
-#define LDLM_FL_BLOCK_GRANTED           0x0000000000000002ULL /* bit 1 */
-#define ldlm_is_block_granted(_l)       LDLM_TEST_FLAG((_l), 1ULL <<  1)
-#define ldlm_set_block_granted(_l)      LDLM_SET_FLAG((_l), 1ULL <<  1)
-#define ldlm_clear_block_granted(_l)    LDLM_CLEAR_FLAG((_l), 1ULL <<  1)
+#define LDLM_FL_BLOCK_GRANTED		0x0000000000000002ULL /* bit 1 */
+#define ldlm_is_block_granted(_l)	LDLM_TEST_FLAG((_l), 1ULL <<  1)
+#define ldlm_set_block_granted(_l)	LDLM_SET_FLAG((_l), 1ULL <<  1)
+#define ldlm_clear_block_granted(_l)	LDLM_CLEAR_FLAG((_l), 1ULL <<  1)
 
 /**
  * Server placed lock on conv list, or a recovering client wants the lock
  * added to the conv list, no questions asked.
  */
-#define LDLM_FL_BLOCK_CONV              0x0000000000000004ULL /* bit 2 */
-#define ldlm_is_block_conv(_l)          LDLM_TEST_FLAG((_l), 1ULL <<  2)
-#define ldlm_set_block_conv(_l)         LDLM_SET_FLAG((_l), 1ULL <<  2)
-#define ldlm_clear_block_conv(_l)       LDLM_CLEAR_FLAG((_l), 1ULL <<  2)
+#define LDLM_FL_BLOCK_CONV		0x0000000000000004ULL /* bit 2 */
+#define ldlm_is_block_conv(_l)		LDLM_TEST_FLAG((_l), 1ULL <<  2)
+#define ldlm_set_block_conv(_l)		LDLM_SET_FLAG((_l), 1ULL <<  2)
+#define ldlm_clear_block_conv(_l)	LDLM_CLEAR_FLAG((_l), 1ULL <<  2)
 
 /**
  * Server placed lock on wait list, or a recovering client wants the lock
  * added to the wait list, no questions asked.
  */
-#define LDLM_FL_BLOCK_WAIT              0x0000000000000008ULL /* bit 3 */
-#define ldlm_is_block_wait(_l)          LDLM_TEST_FLAG((_l), 1ULL <<  3)
-#define ldlm_set_block_wait(_l)         LDLM_SET_FLAG((_l), 1ULL <<  3)
-#define ldlm_clear_block_wait(_l)       LDLM_CLEAR_FLAG((_l), 1ULL <<  3)
+#define LDLM_FL_BLOCK_WAIT		0x0000000000000008ULL /* bit 3 */
+#define ldlm_is_block_wait(_l)		LDLM_TEST_FLAG((_l), 1ULL <<  3)
+#define ldlm_set_block_wait(_l)		LDLM_SET_FLAG((_l), 1ULL <<  3)
+#define ldlm_clear_block_wait(_l)	LDLM_CLEAR_FLAG((_l), 1ULL <<  3)
 
 /** blocking or cancel packet was queued for sending. */
-#define LDLM_FL_AST_SENT                0x0000000000000020ULL /* bit 5 */
-#define ldlm_is_ast_sent(_l)            LDLM_TEST_FLAG((_l), 1ULL <<  5)
-#define ldlm_set_ast_sent(_l)           LDLM_SET_FLAG((_l), 1ULL <<  5)
-#define ldlm_clear_ast_sent(_l)         LDLM_CLEAR_FLAG((_l), 1ULL <<  5)
+#define LDLM_FL_AST_SENT		0x0000000000000020ULL /* bit 5 */
+#define ldlm_is_ast_sent(_l)		LDLM_TEST_FLAG((_l), 1ULL <<  5)
+#define ldlm_set_ast_sent(_l)		LDLM_SET_FLAG((_l), 1ULL <<  5)
+#define ldlm_clear_ast_sent(_l)		LDLM_CLEAR_FLAG((_l), 1ULL <<  5)
 
 /**
  * Lock is being replayed.  This could probably be implied by the fact that
  * one of BLOCK_{GRANTED,CONV,WAIT} is set, but that is pretty dangerous.
  */
-#define LDLM_FL_REPLAY                  0x0000000000000100ULL /* bit 8 */
-#define ldlm_is_replay(_l)              LDLM_TEST_FLAG((_l), 1ULL <<  8)
-#define ldlm_set_replay(_l)             LDLM_SET_FLAG((_l), 1ULL <<  8)
-#define ldlm_clear_replay(_l)           LDLM_CLEAR_FLAG((_l), 1ULL <<  8)
+#define LDLM_FL_REPLAY			0x0000000000000100ULL /* bit 8 */
+#define ldlm_is_replay(_l)		LDLM_TEST_FLAG((_l), 1ULL <<  8)
+#define ldlm_set_replay(_l)		LDLM_SET_FLAG((_l), 1ULL <<  8)
+#define ldlm_clear_replay(_l)		LDLM_CLEAR_FLAG((_l), 1ULL <<  8)
 
 /** Don't grant lock, just do intent. */
-#define LDLM_FL_INTENT_ONLY             0x0000000000000200ULL /* bit 9 */
-#define ldlm_is_intent_only(_l)         LDLM_TEST_FLAG((_l), 1ULL <<  9)
-#define ldlm_set_intent_only(_l)        LDLM_SET_FLAG((_l), 1ULL <<  9)
-#define ldlm_clear_intent_only(_l)      LDLM_CLEAR_FLAG((_l), 1ULL <<  9)
+#define LDLM_FL_INTENT_ONLY		0x0000000000000200ULL /* bit 9 */
+#define ldlm_is_intent_only(_l)		LDLM_TEST_FLAG((_l), 1ULL <<  9)
+#define ldlm_set_intent_only(_l)	LDLM_SET_FLAG((_l), 1ULL <<  9)
+#define ldlm_clear_intent_only(_l)	LDLM_CLEAR_FLAG((_l), 1ULL <<  9)
 
 /** lock request has intent */
-#define LDLM_FL_HAS_INTENT              0x0000000000001000ULL /* bit 12 */
-#define ldlm_is_has_intent(_l)          LDLM_TEST_FLAG((_l), 1ULL << 12)
-#define ldlm_set_has_intent(_l)         LDLM_SET_FLAG((_l), 1ULL << 12)
-#define ldlm_clear_has_intent(_l)       LDLM_CLEAR_FLAG((_l), 1ULL << 12)
+#define LDLM_FL_HAS_INTENT		0x0000000000001000ULL /* bit 12 */
+#define ldlm_is_has_intent(_l)		LDLM_TEST_FLAG((_l), 1ULL << 12)
+#define ldlm_set_has_intent(_l)		LDLM_SET_FLAG((_l), 1ULL << 12)
+#define ldlm_clear_has_intent(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 12)
 
 /** flock deadlock detected */
-#define LDLM_FL_FLOCK_DEADLOCK          0x0000000000008000ULL /* bit  15 */
-#define ldlm_is_flock_deadlock(_l)      LDLM_TEST_FLAG((_l), 1ULL << 15)
-#define ldlm_set_flock_deadlock(_l)     LDLM_SET_FLAG((_l), 1ULL << 15)
-#define ldlm_clear_flock_deadlock(_l)   LDLM_CLEAR_FLAG((_l), 1ULL << 15)
+#define LDLM_FL_FLOCK_DEADLOCK		0x0000000000008000ULL /* bit  15 */
+#define ldlm_is_flock_deadlock(_l)	LDLM_TEST_FLAG((_l), 1ULL << 15)
+#define ldlm_set_flock_deadlock(_l)	LDLM_SET_FLAG((_l), 1ULL << 15)
+#define ldlm_clear_flock_deadlock(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 15)
 
 /** discard (no writeback) (PW locks) or page retention (PR locks) on cancel */
-#define LDLM_FL_DISCARD_DATA            0x0000000000010000ULL /* bit 16 */
-#define ldlm_is_discard_data(_l)        LDLM_TEST_FLAG((_l), 1ULL << 16)
-#define ldlm_set_discard_data(_l)       LDLM_SET_FLAG((_l), 1ULL << 16)
-#define ldlm_clear_discard_data(_l)     LDLM_CLEAR_FLAG((_l), 1ULL << 16)
+#define LDLM_FL_DISCARD_DATA		0x0000000000010000ULL /* bit 16 */
+#define ldlm_is_discard_data(_l)	LDLM_TEST_FLAG((_l), 1ULL << 16)
+#define ldlm_set_discard_data(_l)	LDLM_SET_FLAG((_l), 1ULL << 16)
+#define ldlm_clear_discard_data(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 16)
 
 /** Blocked by group lock - wait indefinitely */
-#define LDLM_FL_NO_TIMEOUT              0x0000000000020000ULL /* bit 17 */
-#define ldlm_is_no_timeout(_l)          LDLM_TEST_FLAG((_l), 1ULL << 17)
-#define ldlm_set_no_timeout(_l)         LDLM_SET_FLAG((_l), 1ULL << 17)
-#define ldlm_clear_no_timeout(_l)       LDLM_CLEAR_FLAG((_l), 1ULL << 17)
+#define LDLM_FL_NO_TIMEOUT		0x0000000000020000ULL /* bit 17 */
+#define ldlm_is_no_timeout(_l)		LDLM_TEST_FLAG((_l), 1ULL << 17)
+#define ldlm_set_no_timeout(_l)		LDLM_SET_FLAG((_l), 1ULL << 17)
+#define ldlm_clear_no_timeout(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 17)
 
 /**
  * Server told not to wait if blocked. For AGL, OST will not send glimpse
  * callback.
  */
-#define LDLM_FL_BLOCK_NOWAIT            0x0000000000040000ULL /* bit 18 */
-#define ldlm_is_block_nowait(_l)        LDLM_TEST_FLAG((_l), 1ULL << 18)
-#define ldlm_set_block_nowait(_l)       LDLM_SET_FLAG((_l), 1ULL << 18)
-#define ldlm_clear_block_nowait(_l)     LDLM_CLEAR_FLAG((_l), 1ULL << 18)
+#define LDLM_FL_BLOCK_NOWAIT		0x0000000000040000ULL /* bit 18 */
+#define ldlm_is_block_nowait(_l)	LDLM_TEST_FLAG((_l), 1ULL << 18)
+#define ldlm_set_block_nowait(_l)	LDLM_SET_FLAG((_l), 1ULL << 18)
+#define ldlm_clear_block_nowait(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 18)
 
 /** return blocking lock */
-#define LDLM_FL_TEST_LOCK               0x0000000000080000ULL /* bit 19 */
-#define ldlm_is_test_lock(_l)           LDLM_TEST_FLAG((_l), 1ULL << 19)
-#define ldlm_set_test_lock(_l)          LDLM_SET_FLAG((_l), 1ULL << 19)
-#define ldlm_clear_test_lock(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 19)
+#define LDLM_FL_TEST_LOCK		0x0000000000080000ULL /* bit 19 */
+#define ldlm_is_test_lock(_l)		LDLM_TEST_FLAG((_l), 1ULL << 19)
+#define ldlm_set_test_lock(_l)		LDLM_SET_FLAG((_l), 1ULL << 19)
+#define ldlm_clear_test_lock(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 19)
 
 /** match lock only */
 #define LDLM_FL_MATCH_LOCK		0x0000000000100000ULL /* bit  20 */
@@ -131,87 +131,87 @@
  * is for clients (like liblustre) that cannot be expected to reliably
  * respond to blocking AST.
  */
-#define LDLM_FL_CANCEL_ON_BLOCK         0x0000000000800000ULL /* bit 23 */
-#define ldlm_is_cancel_on_block(_l)     LDLM_TEST_FLAG((_l), 1ULL << 23)
-#define ldlm_set_cancel_on_block(_l)    LDLM_SET_FLAG((_l), 1ULL << 23)
-#define ldlm_clear_cancel_on_block(_l)  LDLM_CLEAR_FLAG((_l), 1ULL << 23)
+#define LDLM_FL_CANCEL_ON_BLOCK		0x0000000000800000ULL /* bit 23 */
+#define ldlm_is_cancel_on_block(_l)	LDLM_TEST_FLAG((_l), 1ULL << 23)
+#define ldlm_set_cancel_on_block(_l)	LDLM_SET_FLAG((_l), 1ULL << 23)
+#define ldlm_clear_cancel_on_block(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 23)
 
 /**
  * measure lock contention and return -EUSERS if locking contention is high
  */
-#define LDLM_FL_DENY_ON_CONTENTION        0x0000000040000000ULL /* bit 30 */
-#define ldlm_is_deny_on_contention(_l)    LDLM_TEST_FLAG((_l), 1ULL << 30)
-#define ldlm_set_deny_on_contention(_l)   LDLM_SET_FLAG((_l), 1ULL << 30)
+#define LDLM_FL_DENY_ON_CONTENTION	  0x0000000040000000ULL /* bit 30 */
+#define ldlm_is_deny_on_contention(_l)	  LDLM_TEST_FLAG((_l), 1ULL << 30)
+#define ldlm_set_deny_on_contention(_l)	  LDLM_SET_FLAG((_l), 1ULL << 30)
 #define ldlm_clear_deny_on_contention(_l) LDLM_CLEAR_FLAG((_l), 1ULL << 30)
 
 /**
  * These are flags that are mapped into the flags and ASTs of blocking
  * locks Add FL_DISCARD to blocking ASTs
  */
-#define LDLM_FL_AST_DISCARD_DATA        0x0000000080000000ULL /* bit 31 */
-#define ldlm_is_ast_discard_data(_l)    LDLM_TEST_FLAG((_l), 1ULL << 31)
-#define ldlm_set_ast_discard_data(_l)   LDLM_SET_FLAG((_l), 1ULL << 31)
-#define ldlm_clear_ast_discard_data(_l) LDLM_CLEAR_FLAG((_l), 1ULL << 31)
+#define LDLM_FL_AST_DISCARD_DATA	0x0000000080000000ULL /* bit 31 */
+#define ldlm_is_ast_discard_data(_l)	LDLM_TEST_FLAG((_l), 1ULL << 31)
+#define ldlm_set_ast_discard_data(_l)	LDLM_SET_FLAG((_l), 1ULL << 31)
+#define ldlm_clear_ast_discard_data(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 31)
 
 /**
  * Used for marking lock as a target for -EINTR while cp_ast sleep emulation
  * + race with upcoming bl_ast.
  */
-#define LDLM_FL_FAIL_LOC                0x0000000100000000ULL /* bit 32 */
-#define ldlm_is_fail_loc(_l)            LDLM_TEST_FLAG((_l), 1ULL << 32)
-#define ldlm_set_fail_loc(_l)           LDLM_SET_FLAG((_l), 1ULL << 32)
-#define ldlm_clear_fail_loc(_l)         LDLM_CLEAR_FLAG((_l), 1ULL << 32)
+#define LDLM_FL_FAIL_LOC		0x0000000100000000ULL /* bit 32 */
+#define ldlm_is_fail_loc(_l)		LDLM_TEST_FLAG((_l), 1ULL << 32)
+#define ldlm_set_fail_loc(_l)		LDLM_SET_FLAG((_l), 1ULL << 32)
+#define ldlm_clear_fail_loc(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 32)
 
 /**
  * Used while processing the unused list to know that we have already
  * handled this lock and decided to skip it.
  */
-#define LDLM_FL_SKIPPED                 0x0000000200000000ULL /* bit 33 */
-#define ldlm_is_skipped(_l)             LDLM_TEST_FLAG((_l), 1ULL << 33)
-#define ldlm_set_skipped(_l)            LDLM_SET_FLAG((_l), 1ULL << 33)
-#define ldlm_clear_skipped(_l)          LDLM_CLEAR_FLAG((_l), 1ULL << 33)
+#define LDLM_FL_SKIPPED			0x0000000200000000ULL /* bit 33 */
+#define ldlm_is_skipped(_l)		LDLM_TEST_FLAG((_l), 1ULL << 33)
+#define ldlm_set_skipped(_l)		LDLM_SET_FLAG((_l), 1ULL << 33)
+#define ldlm_clear_skipped(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 33)
 
 /** this lock is being destroyed */
-#define LDLM_FL_CBPENDING               0x0000000400000000ULL /* bit 34 */
-#define ldlm_is_cbpending(_l)           LDLM_TEST_FLAG((_l), 1ULL << 34)
-#define ldlm_set_cbpending(_l)          LDLM_SET_FLAG((_l), 1ULL << 34)
-#define ldlm_clear_cbpending(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 34)
+#define LDLM_FL_CBPENDING		0x0000000400000000ULL /* bit 34 */
+#define ldlm_is_cbpending(_l)		LDLM_TEST_FLAG((_l), 1ULL << 34)
+#define ldlm_set_cbpending(_l)		LDLM_SET_FLAG((_l), 1ULL << 34)
+#define ldlm_clear_cbpending(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 34)
 
 /** not a real flag, not saved in lock */
-#define LDLM_FL_WAIT_NOREPROC           0x0000000800000000ULL /* bit 35 */
-#define ldlm_is_wait_noreproc(_l)       LDLM_TEST_FLAG((_l), 1ULL << 35)
-#define ldlm_set_wait_noreproc(_l)      LDLM_SET_FLAG((_l), 1ULL << 35)
-#define ldlm_clear_wait_noreproc(_l)    LDLM_CLEAR_FLAG((_l), 1ULL << 35)
+#define LDLM_FL_WAIT_NOREPROC		0x0000000800000000ULL /* bit 35 */
+#define ldlm_is_wait_noreproc(_l)	LDLM_TEST_FLAG((_l), 1ULL << 35)
+#define ldlm_set_wait_noreproc(_l)	LDLM_SET_FLAG((_l), 1ULL << 35)
+#define ldlm_clear_wait_noreproc(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 35)
 
 /** cancellation callback already run */
-#define LDLM_FL_CANCEL                  0x0000001000000000ULL /* bit 36 */
-#define ldlm_is_cancel(_l)              LDLM_TEST_FLAG((_l), 1ULL << 36)
-#define ldlm_set_cancel(_l)             LDLM_SET_FLAG((_l), 1ULL << 36)
-#define ldlm_clear_cancel(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 36)
+#define LDLM_FL_CANCEL			0x0000001000000000ULL /* bit 36 */
+#define ldlm_is_cancel(_l)		LDLM_TEST_FLAG((_l), 1ULL << 36)
+#define ldlm_set_cancel(_l)		LDLM_SET_FLAG((_l), 1ULL << 36)
+#define ldlm_clear_cancel(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 36)
 
 /** whatever it might mean -- never transmitted? */
-#define LDLM_FL_LOCAL_ONLY              0x0000002000000000ULL /* bit 37 */
-#define ldlm_is_local_only(_l)          LDLM_TEST_FLAG((_l), 1ULL << 37)
-#define ldlm_set_local_only(_l)         LDLM_SET_FLAG((_l), 1ULL << 37)
-#define ldlm_clear_local_only(_l)       LDLM_CLEAR_FLAG((_l), 1ULL << 37)
+#define LDLM_FL_LOCAL_ONLY		0x0000002000000000ULL /* bit 37 */
+#define ldlm_is_local_only(_l)		LDLM_TEST_FLAG((_l), 1ULL << 37)
+#define ldlm_set_local_only(_l)		LDLM_SET_FLAG((_l), 1ULL << 37)
+#define ldlm_clear_local_only(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 37)
 
 /** don't run the cancel callback under ldlm_cli_cancel_unused */
-#define LDLM_FL_FAILED                  0x0000004000000000ULL /* bit 38 */
-#define ldlm_is_failed(_l)              LDLM_TEST_FLAG((_l), 1ULL << 38)
-#define ldlm_set_failed(_l)             LDLM_SET_FLAG((_l), 1ULL << 38)
-#define ldlm_clear_failed(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 38)
+#define LDLM_FL_FAILED			0x0000004000000000ULL /* bit 38 */
+#define ldlm_is_failed(_l)		LDLM_TEST_FLAG((_l), 1ULL << 38)
+#define ldlm_set_failed(_l)		LDLM_SET_FLAG((_l), 1ULL << 38)
+#define ldlm_clear_failed(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 38)
 
 /** lock cancel has already been sent */
-#define LDLM_FL_CANCELING               0x0000008000000000ULL /* bit 39 */
-#define ldlm_is_canceling(_l)           LDLM_TEST_FLAG((_l), 1ULL << 39)
-#define ldlm_set_canceling(_l)          LDLM_SET_FLAG((_l), 1ULL << 39)
-#define ldlm_clear_canceling(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 39)
+#define LDLM_FL_CANCELING		0x0000008000000000ULL /* bit 39 */
+#define ldlm_is_canceling(_l)		LDLM_TEST_FLAG((_l), 1ULL << 39)
+#define ldlm_set_canceling(_l)		LDLM_SET_FLAG((_l), 1ULL << 39)
+#define ldlm_clear_canceling(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 39)
 
 /** local lock (ie, no srv/cli split) */
-#define LDLM_FL_LOCAL                   0x0000010000000000ULL /* bit 40 */
-#define ldlm_is_local(_l)               LDLM_TEST_FLAG((_l), 1ULL << 40)
-#define ldlm_set_local(_l)              LDLM_SET_FLAG((_l), 1ULL << 40)
-#define ldlm_clear_local(_l)            LDLM_CLEAR_FLAG((_l), 1ULL << 40)
+#define LDLM_FL_LOCAL			0x0000010000000000ULL /* bit 40 */
+#define ldlm_is_local(_l)		LDLM_TEST_FLAG((_l), 1ULL << 40)
+#define ldlm_set_local(_l)		LDLM_SET_FLAG((_l), 1ULL << 40)
+#define ldlm_clear_local(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 40)
 
 /**
  * XXX FIXME: This is being added to b_size as a low-risk fix to the
@@ -226,10 +226,10 @@
  * That change is pretty high-risk, though, and would need a lot more
  * testing.
  */
-#define LDLM_FL_LVB_READY               0x0000020000000000ULL /* bit 41 */
-#define ldlm_is_lvb_ready(_l)           LDLM_TEST_FLAG((_l), 1ULL << 41)
-#define ldlm_set_lvb_ready(_l)          LDLM_SET_FLAG((_l), 1ULL << 41)
-#define ldlm_clear_lvb_ready(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 41)
+#define LDLM_FL_LVB_READY		0x0000020000000000ULL /* bit 41 */
+#define ldlm_is_lvb_ready(_l)		LDLM_TEST_FLAG((_l), 1ULL << 41)
+#define ldlm_set_lvb_ready(_l)		LDLM_SET_FLAG((_l), 1ULL << 41)
+#define ldlm_clear_lvb_ready(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 41)
 
 /**
  * A lock contributes to the known minimum size (KMS) calculation until it
@@ -239,31 +239,31 @@
  * to know to exclude each other's locks from the calculation as they walk
  * the granted list.
  */
-#define LDLM_FL_KMS_IGNORE              0x0000040000000000ULL /* bit 42 */
-#define ldlm_is_kms_ignore(_l)          LDLM_TEST_FLAG((_l), 1ULL << 42)
-#define ldlm_set_kms_ignore(_l)         LDLM_SET_FLAG((_l), 1ULL << 42)
-#define ldlm_clear_kms_ignore(_l)       LDLM_CLEAR_FLAG((_l), 1ULL << 42)
+#define LDLM_FL_KMS_IGNORE		0x0000040000000000ULL /* bit 42 */
+#define ldlm_is_kms_ignore(_l)		LDLM_TEST_FLAG((_l), 1ULL << 42)
+#define ldlm_set_kms_ignore(_l)		LDLM_SET_FLAG((_l), 1ULL << 42)
+#define ldlm_clear_kms_ignore(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 42)
 
 /** completion AST to be executed */
-#define LDLM_FL_CP_REQD                 0x0000080000000000ULL /* bit 43 */
-#define ldlm_is_cp_reqd(_l)             LDLM_TEST_FLAG((_l), 1ULL << 43)
-#define ldlm_set_cp_reqd(_l)            LDLM_SET_FLAG((_l), 1ULL << 43)
-#define ldlm_clear_cp_reqd(_l)          LDLM_CLEAR_FLAG((_l), 1ULL << 43)
+#define LDLM_FL_CP_REQD			0x0000080000000000ULL /* bit 43 */
+#define ldlm_is_cp_reqd(_l)		LDLM_TEST_FLAG((_l), 1ULL << 43)
+#define ldlm_set_cp_reqd(_l)		LDLM_SET_FLAG((_l), 1ULL << 43)
+#define ldlm_clear_cp_reqd(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 43)
 
 /** cleanup_resource has already handled the lock */
-#define LDLM_FL_CLEANED                 0x0000100000000000ULL /* bit 44 */
-#define ldlm_is_cleaned(_l)             LDLM_TEST_FLAG((_l), 1ULL << 44)
-#define ldlm_set_cleaned(_l)            LDLM_SET_FLAG((_l), 1ULL << 44)
-#define ldlm_clear_cleaned(_l)          LDLM_CLEAR_FLAG((_l), 1ULL << 44)
+#define LDLM_FL_CLEANED			0x0000100000000000ULL /* bit 44 */
+#define ldlm_is_cleaned(_l)		LDLM_TEST_FLAG((_l), 1ULL << 44)
+#define ldlm_set_cleaned(_l)		LDLM_SET_FLAG((_l), 1ULL << 44)
+#define ldlm_clear_cleaned(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 44)
 
 /**
  * optimization hint: LDLM can run blocking callback from current context
  * w/o involving separate thread. in order to decrease cs rate
  */
-#define LDLM_FL_ATOMIC_CB               0x0000200000000000ULL /* bit 45 */
-#define ldlm_is_atomic_cb(_l)           LDLM_TEST_FLAG((_l), 1ULL << 45)
-#define ldlm_set_atomic_cb(_l)          LDLM_SET_FLAG((_l), 1ULL << 45)
-#define ldlm_clear_atomic_cb(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 45)
+#define LDLM_FL_ATOMIC_CB		0x0000200000000000ULL /* bit 45 */
+#define ldlm_is_atomic_cb(_l)		LDLM_TEST_FLAG((_l), 1ULL << 45)
+#define ldlm_set_atomic_cb(_l)		LDLM_SET_FLAG((_l), 1ULL << 45)
+#define ldlm_clear_atomic_cb(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 45)
 
 /**
  * It may happen that a client initiates two operations, e.g. unlink and
@@ -273,10 +273,10 @@
  * the first operation. LDLM_FL_BL_AST is set by ldlm_callback_handler() in
  * the lock to prevent the Early Lock Cancel (ELC) code from cancelling it.
  */
-#define LDLM_FL_BL_AST                  0x0000400000000000ULL /* bit 46 */
-#define ldlm_is_bl_ast(_l)              LDLM_TEST_FLAG((_l), 1ULL << 46)
-#define ldlm_set_bl_ast(_l)             LDLM_SET_FLAG((_l), 1ULL << 46)
-#define ldlm_clear_bl_ast(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 46)
+#define LDLM_FL_BL_AST			0x0000400000000000ULL /* bit 46 */
+#define ldlm_is_bl_ast(_l)		LDLM_TEST_FLAG((_l), 1ULL << 46)
+#define ldlm_set_bl_ast(_l)		LDLM_SET_FLAG((_l), 1ULL << 46)
+#define ldlm_clear_bl_ast(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 46)
 
 /**
  * Set by ldlm_cancel_callback() when lock cache is dropped to let
@@ -284,30 +284,30 @@
  * ELC RPC is already prepared and is waiting for rpc_lock, too late to
  * send a separate CANCEL RPC.
  */
-#define LDLM_FL_BL_DONE                 0x0000800000000000ULL /* bit 47 */
-#define ldlm_is_bl_done(_l)             LDLM_TEST_FLAG((_l), 1ULL << 47)
-#define ldlm_set_bl_done(_l)            LDLM_SET_FLAG((_l), 1ULL << 47)
-#define ldlm_clear_bl_done(_l)          LDLM_CLEAR_FLAG((_l), 1ULL << 47)
+#define LDLM_FL_BL_DONE			0x0000800000000000ULL /* bit 47 */
+#define ldlm_is_bl_done(_l)		LDLM_TEST_FLAG((_l), 1ULL << 47)
+#define ldlm_set_bl_done(_l)		LDLM_SET_FLAG((_l), 1ULL << 47)
+#define ldlm_clear_bl_done(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 47)
 
 /**
  * Don't put lock into the LRU list, so that it is not canceled due
 * to aging.  Used by MGC locks, they are cancelled only at unmount or
  * by callback.
  */
-#define LDLM_FL_NO_LRU                  0x0001000000000000ULL /* bit 48 */
-#define ldlm_is_no_lru(_l)              LDLM_TEST_FLAG((_l), 1ULL << 48)
-#define ldlm_set_no_lru(_l)             LDLM_SET_FLAG((_l), 1ULL << 48)
-#define ldlm_clear_no_lru(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 48)
+#define LDLM_FL_NO_LRU			0x0001000000000000ULL /* bit 48 */
+#define ldlm_is_no_lru(_l)		LDLM_TEST_FLAG((_l), 1ULL << 48)
+#define ldlm_set_no_lru(_l)		LDLM_SET_FLAG((_l), 1ULL << 48)
+#define ldlm_clear_no_lru(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 48)
 
 /**
  * Set for locks that failed and where the server has been notified.
  *
  * Protected by lock and resource locks.
  */
-#define LDLM_FL_FAIL_NOTIFIED           0x0002000000000000ULL /* bit 49 */
-#define ldlm_is_fail_notified(_l)       LDLM_TEST_FLAG((_l), 1ULL << 49)
-#define ldlm_set_fail_notified(_l)      LDLM_SET_FLAG((_l), 1ULL << 49)
-#define ldlm_clear_fail_notified(_l)    LDLM_CLEAR_FLAG((_l), 1ULL << 49)
+#define LDLM_FL_FAIL_NOTIFIED		0x0002000000000000ULL /* bit 49 */
+#define ldlm_is_fail_notified(_l)	LDLM_TEST_FLAG((_l), 1ULL << 49)
+#define ldlm_set_fail_notified(_l)	LDLM_SET_FLAG((_l), 1ULL << 49)
+#define ldlm_clear_fail_notified(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 49)
 
 /**
  * Set for locks that were removed from class hash table and will
@@ -316,16 +316,16 @@
  *
  * Protected by lock and resource locks.
  */
-#define LDLM_FL_DESTROYED               0x0004000000000000ULL /* bit 50 */
-#define ldlm_is_destroyed(_l)           LDLM_TEST_FLAG((_l), 1ULL << 50)
-#define ldlm_set_destroyed(_l)          LDLM_SET_FLAG((_l), 1ULL << 50)
-#define ldlm_clear_destroyed(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 50)
+#define LDLM_FL_DESTROYED		0x0004000000000000ULL /* bit 50 */
+#define ldlm_is_destroyed(_l)		LDLM_TEST_FLAG((_l), 1ULL << 50)
+#define ldlm_set_destroyed(_l)		LDLM_SET_FLAG((_l), 1ULL << 50)
+#define ldlm_clear_destroyed(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 50)
 
 /** flag whether this is a server namespace lock */
-#define LDLM_FL_SERVER_LOCK             0x0008000000000000ULL /* bit 51 */
-#define ldlm_is_server_lock(_l)         LDLM_TEST_FLAG((_l), 1ULL << 51)
-#define ldlm_set_server_lock(_l)        LDLM_SET_FLAG((_l), 1ULL << 51)
-#define ldlm_clear_server_lock(_l)      LDLM_CLEAR_FLAG((_l), 1ULL << 51)
+#define LDLM_FL_SERVER_LOCK		0x0008000000000000ULL /* bit 51 */
+#define ldlm_is_server_lock(_l)		LDLM_TEST_FLAG((_l), 1ULL << 51)
+#define ldlm_set_server_lock(_l)	LDLM_SET_FLAG((_l), 1ULL << 51)
+#define ldlm_clear_server_lock(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 51)
 
 /**
  * It's set in lock_res_and_lock() and unset in unlock_res_and_lock().
@@ -335,10 +335,10 @@
  * because it works only for SMP so user needs to add extra macros like
  * LASSERT_SPIN_LOCKED for uniprocessor kernels.
  */
-#define LDLM_FL_RES_LOCKED              0x0010000000000000ULL /* bit 52 */
-#define ldlm_is_res_locked(_l)          LDLM_TEST_FLAG((_l), 1ULL << 52)
-#define ldlm_set_res_locked(_l)         LDLM_SET_FLAG((_l), 1ULL << 52)
-#define ldlm_clear_res_locked(_l)       LDLM_CLEAR_FLAG((_l), 1ULL << 52)
+#define LDLM_FL_RES_LOCKED		0x0010000000000000ULL /* bit 52 */
+#define ldlm_is_res_locked(_l)		LDLM_TEST_FLAG((_l), 1ULL << 52)
+#define ldlm_set_res_locked(_l)		LDLM_SET_FLAG((_l), 1ULL << 52)
+#define ldlm_clear_res_locked(_l)	LDLM_CLEAR_FLAG((_l), 1ULL << 52)
 
 /**
  * It's set once we call ldlm_add_waiting_lock_res_locked() to start the
@@ -346,22 +346,22 @@
  *
  * Protected by lock and resource locks.
  */
-#define LDLM_FL_WAITED                  0x0020000000000000ULL /* bit 53 */
-#define ldlm_is_waited(_l)              LDLM_TEST_FLAG((_l), 1ULL << 53)
-#define ldlm_set_waited(_l)             LDLM_SET_FLAG((_l), 1ULL << 53)
-#define ldlm_clear_waited(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 53)
+#define LDLM_FL_WAITED			0x0020000000000000ULL /* bit 53 */
+#define ldlm_is_waited(_l)		LDLM_TEST_FLAG((_l), 1ULL << 53)
+#define ldlm_set_waited(_l)		LDLM_SET_FLAG((_l), 1ULL << 53)
+#define ldlm_clear_waited(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 53)
 
 /** Flag whether this is a server namespace lock. */
-#define LDLM_FL_NS_SRV                  0x0040000000000000ULL /* bit 54 */
-#define ldlm_is_ns_srv(_l)              LDLM_TEST_FLAG((_l), 1ULL << 54)
-#define ldlm_set_ns_srv(_l)             LDLM_SET_FLAG((_l), 1ULL << 54)
-#define ldlm_clear_ns_srv(_l)           LDLM_CLEAR_FLAG((_l), 1ULL << 54)
+#define LDLM_FL_NS_SRV			0x0040000000000000ULL /* bit 54 */
+#define ldlm_is_ns_srv(_l)		LDLM_TEST_FLAG((_l), 1ULL << 54)
+#define ldlm_set_ns_srv(_l)		LDLM_SET_FLAG((_l), 1ULL << 54)
+#define ldlm_clear_ns_srv(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 54)
 
 /** Flag whether this lock can be reused. Used by exclusive open. */
-#define LDLM_FL_EXCL                    0x0080000000000000ULL /* bit  55 */
-#define ldlm_is_excl(_l)                LDLM_TEST_FLAG((_l), 1ULL << 55)
-#define ldlm_set_excl(_l)               LDLM_SET_FLAG((_l), 1ULL << 55)
-#define ldlm_clear_excl(_l)             LDLM_CLEAR_FLAG((_l), 1ULL << 55)
+#define LDLM_FL_EXCL			0x0080000000000000ULL /* bit  55 */
+#define ldlm_is_excl(_l)		LDLM_TEST_FLAG((_l), 1ULL << 55)
+#define ldlm_set_excl(_l)		LDLM_SET_FLAG((_l), 1ULL << 55)
+#define ldlm_clear_excl(_l)		LDLM_CLEAR_FLAG((_l), 1ULL << 55)
 
 /** l_flags bits marked as "ast" bits */
 #define LDLM_FL_AST_MASK		(LDLM_FL_FLOCK_DEADLOCK		|\
@@ -385,16 +385,16 @@
 					 LDLM_FL_TEST_LOCK)
 
 /** test for ldlm_lock flag bit set */
-#define LDLM_TEST_FLAG(_l, _b)        (((_l)->l_flags & (_b)) != 0)
+#define LDLM_TEST_FLAG(_l, _b)		(((_l)->l_flags & (_b)) != 0)
 
 /** multi-bit test: are any of mask bits set? */
 #define LDLM_HAVE_MASK(_l, _m)		((_l)->l_flags & LDLM_FL_##_m##_MASK)
 
 /** set a ldlm_lock flag bit */
-#define LDLM_SET_FLAG(_l, _b)         ((_l)->l_flags |= (_b))
+#define LDLM_SET_FLAG(_l, _b)		((_l)->l_flags |= (_b))
 
 /** clear a ldlm_lock flag bit */
-#define LDLM_CLEAR_FLAG(_l, _b)       ((_l)->l_flags &= ~(_b))
+#define LDLM_CLEAR_FLAG(_l, _b)		((_l)->l_flags &= ~(_b))
 
 /** @} subgroup */
 /** @} group */
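 
 For context on the accessors being realigned above: each LDLM_FL_* bit has
 is/set/clear wrappers built on LDLM_TEST_FLAG(), LDLM_SET_FLAG() and
 LDLM_CLEAR_FLAG(), all of which operate on the lock's l_flags word. A
 minimal, illustrative sketch of how a caller typically uses them follows;
 the helper functions here are made up for illustration, only the macros
 are from the header above:
 
 	/* sketch only: lvb_is_usable()/lvb_mark_ready() are hypothetical helpers */
 	static bool lvb_is_usable(struct ldlm_lock *lock)
 	{
 		/* ldlm_is_lvb_ready() expands to (lock->l_flags & (1ULL << 41)) != 0 */
 		if (!ldlm_is_lvb_ready(lock))
 			return false;
 
 		/* locks marked KMS-ignore are excluded from size calculations */
 		if (ldlm_is_kms_ignore(lock))
 			return false;
 
 		return true;
 	}
 
 	static void lvb_mark_ready(struct ldlm_lock *lock)
 	{
 		ldlm_set_lvb_ready(lock);	/* sets bit 41 in l_flags */
 	}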
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 21/26] lustre: second batch to cleanup white spaces in internal headers
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (19 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 20/26] lustre: first batch to cleanup white spaces in internal headers James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 22/26] lustre: last " James Simmons
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The internal headers are very messy and difficult to read. Remove
excess white space and properly align data structures so they are
easy on the eyes. This is the second batch, since the overall
cleanup covers many lines of changes.

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
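The convention applied in every hunk below is the same: member names are
pushed out with tabs to a common column and trailing comments are kept
aligned, replacing the previous mix of spaces and ragged columns. As an
illustrative, made-up example (this struct and its fields are not taken
from the patch):

	/* before: mixed spaces, ragged columns */
	struct example_state {
		spinlock_t   es_lock;
		struct list_head       es_queue;     /* pending items */
		atomic_t  es_refcount;
	};

	/* after: tab-aligned declarations, aligned comments */
	struct example_state {
		spinlock_t		es_lock;
		struct list_head	es_queue;	/* pending items */
		atomic_t		es_refcount;
	};
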
 .../staging/lustre/lustre/include/lustre_export.h  |  76 ++--
 drivers/staging/lustre/lustre/include/lustre_fid.h |  12 +-
 drivers/staging/lustre/lustre/include/lustre_fld.h |  14 +-
 .../staging/lustre/lustre/include/lustre_handles.h |   2 +-
 .../staging/lustre/lustre/include/lustre_import.h  | 225 ++++++------
 .../staging/lustre/lustre/include/lustre_intent.h  |  24 +-
 drivers/staging/lustre/lustre/include/lustre_lib.h |   2 -
 drivers/staging/lustre/lustre/include/lustre_log.h |  38 +-
 drivers/staging/lustre/lustre/include/lustre_mdc.h |   2 +-
 drivers/staging/lustre/lustre/include/lustre_mds.h |   4 +-
 drivers/staging/lustre/lustre/include/lustre_net.h | 388 +++++++++++----------
 .../lustre/lustre/include/lustre_nrs_fifo.h        |   4 +-
 .../lustre/lustre/include/lustre_req_layout.h      |   8 +-
 drivers/staging/lustre/lustre/include/lustre_sec.h | 300 ++++++++--------
 14 files changed, 550 insertions(+), 549 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/lustre_export.h b/drivers/staging/lustre/lustre/include/lustre_export.h
index 1c70259..63fa656 100644
--- a/drivers/staging/lustre/lustre/include/lustre_export.h
+++ b/drivers/staging/lustre/lustre/include/lustre_export.h
@@ -48,9 +48,9 @@
 #include <lustre_dlm.h>
 
 enum obd_option {
-	OBD_OPT_FORCE =	 0x0001,
-	OBD_OPT_FAILOVER =      0x0002,
-	OBD_OPT_ABORT_RECOV =   0x0004,
+	OBD_OPT_FORCE		= 0x0001,
+	OBD_OPT_FAILOVER	= 0x0002,
+	OBD_OPT_ABORT_RECOV	= 0x0004,
 };
 
 /**
@@ -66,77 +66,77 @@ struct obd_export {
 	 * Subsequent client RPCs contain this handle id to identify
 	 * what export they are talking to.
 	 */
-	struct portals_handle     exp_handle;
-	atomic_t	      exp_refcount;
+	struct portals_handle		exp_handle;
+	atomic_t			exp_refcount;
 	/**
 	 * Set of counters below is to track where export references are
 	 * kept. The exp_rpc_count is used for reconnect handling also,
 	 * the cb_count and locks_count are for debug purposes only for now.
 	 * The sum of them should be less than exp_refcount by 3
 	 */
-	atomic_t	      exp_rpc_count; /* RPC references */
-	atomic_t	      exp_cb_count; /* Commit callback references */
+	atomic_t			exp_rpc_count; /* RPC references */
+	atomic_t			exp_cb_count; /* Commit callback references */
 	/** Number of queued replay requests to be processes */
-	atomic_t		  exp_replay_count;
-	atomic_t	      exp_locks_count; /** Lock references */
+	atomic_t			exp_replay_count;
+	atomic_t			exp_locks_count; /** Lock references */
 #if LUSTRE_TRACKS_LOCK_EXP_REFS
 	struct list_head		exp_locks_list;
-	spinlock_t		  exp_locks_list_guard;
+	spinlock_t			exp_locks_list_guard;
 #endif
 	/** UUID of client connected to this export */
-	struct obd_uuid	   exp_client_uuid;
+	struct obd_uuid			exp_client_uuid;
 	/** To link all exports on an obd device */
 	struct list_head		exp_obd_chain;
 	/** work_struct for destruction of export */
-	struct work_struct	exp_zombie_work;
-	struct rhash_head	exp_uuid_hash; /** uuid-export hash*/
+	struct work_struct		exp_zombie_work;
+	struct rhash_head		exp_uuid_hash; /** uuid-export hash*/
 	/** Obd device of this export */
-	struct obd_device	*exp_obd;
+	struct obd_device		*exp_obd;
 	/**
 	 * "reverse" import to send requests (e.g. from ldlm) back to client
 	 * exp_lock protect its change
 	 */
-	struct obd_import	*exp_imp_reverse;
-	struct lprocfs_stats     *exp_md_stats;
+	struct obd_import		*exp_imp_reverse;
+	struct lprocfs_stats		*exp_md_stats;
 	/** Active connection */
-	struct ptlrpc_connection *exp_connection;
+	struct ptlrpc_connection	*exp_connection;
 	/** Connection count value from last successful reconnect rpc */
-	u32		     exp_conn_cnt;
+	u32				exp_conn_cnt;
 	struct list_head		exp_outstanding_replies;
 	struct list_head		exp_uncommitted_replies;
-	spinlock_t		  exp_uncommitted_replies_lock;
+	spinlock_t			exp_uncommitted_replies_lock;
 	/** Last committed transno for this export */
-	u64		     exp_last_committed;
+	u64				exp_last_committed;
 	/** On replay all requests waiting for replay are linked here */
 	struct list_head		exp_req_replay_queue;
 	/**
 	 * protects exp_flags, exp_outstanding_replies and the change
 	 * of exp_imp_reverse
 	 */
-	spinlock_t		  exp_lock;
+	spinlock_t			exp_lock;
 	/** Compatibility flags for this export are embedded into
 	 *  exp_connect_data
 	 */
-	struct obd_connect_data   exp_connect_data;
-	enum obd_option	   exp_flags;
-	unsigned long	     exp_failed:1,
-				  exp_disconnected:1,
-				  exp_connecting:1,
-				  exp_flvr_changed:1,
-				  exp_flvr_adapt:1;
+	struct obd_connect_data		exp_connect_data;
+	enum obd_option			exp_flags;
+	unsigned long			exp_failed:1,
+					exp_disconnected:1,
+					exp_connecting:1,
+					exp_flvr_changed:1,
+					exp_flvr_adapt:1;
 	/* also protected by exp_lock */
-	enum lustre_sec_part      exp_sp_peer;
-	struct sptlrpc_flavor     exp_flvr;	     /* current */
-	struct sptlrpc_flavor     exp_flvr_old[2];      /* about-to-expire */
-	time64_t		  exp_flvr_expire[2];   /* seconds */
+	enum lustre_sec_part		exp_sp_peer;
+	struct sptlrpc_flavor		exp_flvr;	    /* current */
+	struct sptlrpc_flavor		exp_flvr_old[2];    /* about-to-expire */
+	time64_t			exp_flvr_expire[2]; /* seconds */
 
 	/** protects exp_hp_rpcs */
-	spinlock_t		  exp_rpc_lock;
-	struct list_head		  exp_hp_rpcs;	/* (potential) HP RPCs */
+	spinlock_t			exp_rpc_lock;
+	struct list_head		exp_hp_rpcs;	/* (potential) HP RPCs */
 
 	/** blocking dlm lock list, protected by exp_bl_list_lock */
 	struct list_head		exp_bl_list;
-	spinlock_t		  exp_bl_list_lock;
+	spinlock_t			exp_bl_list_lock;
 };
 
 static inline u64 *exp_connect_flags_ptr(struct obd_export *exp)
@@ -239,9 +239,9 @@ static inline bool imp_connect_disp_stripe(struct obd_import *imp)
 
 #define KKUC_CT_DATA_MAGIC	0x092013cea
 struct kkuc_ct_data {
-	u32		kcd_magic;
-	struct obd_uuid	kcd_uuid;
-	u32		kcd_archive;
+	u32			kcd_magic;
+	struct obd_uuid		kcd_uuid;
+	u32			kcd_archive;
 };
 
 /** @} export */
diff --git a/drivers/staging/lustre/lustre/include/lustre_fid.h b/drivers/staging/lustre/lustre/include/lustre_fid.h
index f0afa8d..5108864 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fid.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fid.h
@@ -331,13 +331,13 @@ struct lu_client_seq {
 	 * clients, this contains meta-sequence range. And for servers this
 	 * contains super-sequence range.
 	 */
-	struct lu_seq_range	 lcs_space;
+	struct lu_seq_range	lcs_space;
 
 	/* Seq related proc */
 	struct dentry		*lcs_debugfs_entry;
 
 	/* This holds last allocated fid in last obtained seq */
-	struct lu_fid	   lcs_fid;
+	struct lu_fid		lcs_fid;
 
 	/* LUSTRE_SEQ_METADATA or LUSTRE_SEQ_DATA */
 	enum lu_cli_type	lcs_type;
@@ -346,17 +346,17 @@ struct lu_client_seq {
 	 * Service uuid, passed from MDT + seq name to form unique seq name to
 	 * use it with procfs.
 	 */
-	char		    lcs_name[LUSTRE_MDT_MAXNAMELEN];
+	char			lcs_name[LUSTRE_MDT_MAXNAMELEN];
 
 	/*
 	 * Sequence width, that is how many objects may be allocated in one
 	 * sequence. Default value for it is LUSTRE_SEQ_MAX_WIDTH.
 	 */
-	u64		   lcs_width;
+	u64			lcs_width;
 
 	/* wait queue for fid allocation and update indicator */
-	wait_queue_head_t	     lcs_waitq;
-	int		     lcs_update;
+	wait_queue_head_t	lcs_waitq;
+	int			lcs_update;
 };
 
 /* Client methods */
diff --git a/drivers/staging/lustre/lustre/include/lustre_fld.h b/drivers/staging/lustre/lustre/include/lustre_fld.h
index 4bcabf7..92074ab 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fld.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fld.h
@@ -59,10 +59,10 @@ enum {
 };
 
 struct lu_fld_target {
-	struct list_head	       ft_chain;
-	struct obd_export       *ft_exp;
-	struct lu_server_fld    *ft_srv;
-	u64		    ft_idx;
+	struct list_head	 ft_chain;
+	struct obd_export	*ft_exp;
+	struct lu_server_fld	*ft_srv;
+	u64			 ft_idx;
 };
 
 struct lu_server_fld {
@@ -79,7 +79,7 @@ struct lu_server_fld {
 	struct mutex		lsf_lock;
 
 	/** Fld service name in form "fld-srv-lustre-MDTXXX" */
-	char		     lsf_name[LUSTRE_MDT_MAXNAMELEN];
+	char			lsf_name[LUSTRE_MDT_MAXNAMELEN];
 
 };
 
@@ -88,13 +88,13 @@ struct lu_client_fld {
 	struct dentry		*lcf_debugfs_entry;
 
 	/** List of exports client FLD knows about. */
-	struct list_head	       lcf_targets;
+	struct list_head	 lcf_targets;
 
 	/** Current hash to be used to chose an export. */
 	struct lu_fld_hash      *lcf_hash;
 
 	/** Exports count. */
-	int		      lcf_count;
+	int			 lcf_count;
 
 	/** Lock protecting exports list and fld_hash. */
 	spinlock_t		 lcf_lock;
diff --git a/drivers/staging/lustre/lustre/include/lustre_handles.h b/drivers/staging/lustre/lustre/include/lustre_handles.h
index 84f70f3..6836808 100644
--- a/drivers/staging/lustre/lustre/include/lustre_handles.h
+++ b/drivers/staging/lustre/lustre/include/lustre_handles.h
@@ -63,7 +63,7 @@ struct portals_handle_ops {
  * to compute the start of the structure based on the handle field.
  */
 struct portals_handle {
-	struct list_head			h_link;
+	struct list_head		h_link;
 	u64				h_cookie;
 	const void			*h_owner;
 	struct portals_handle_ops	*h_ops;
diff --git a/drivers/staging/lustre/lustre/include/lustre_import.h b/drivers/staging/lustre/lustre/include/lustre_import.h
index db075be..7d52665 100644
--- a/drivers/staging/lustre/lustre/include/lustre_import.h
+++ b/drivers/staging/lustre/lustre/include/lustre_import.h
@@ -58,26 +58,26 @@
 #define AT_FLG_NOHIST 0x1	  /* use last reported value only */
 
 struct adaptive_timeout {
-	time64_t	at_binstart;	 /* bin start time */
-	unsigned int	at_hist[AT_BINS];    /* timeout history bins */
+	time64_t	at_binstart;		/* bin start time */
+	unsigned int	at_hist[AT_BINS];	/* timeout history bins */
 	unsigned int	at_flags;
-	unsigned int	at_current;	  /* current timeout value */
-	unsigned int	at_worst_ever;       /* worst-ever timeout value */
-	time64_t	at_worst_time;       /* worst-ever timeout timestamp */
+	unsigned int	at_current;		/* current timeout value */
+	unsigned int	at_worst_ever;		/* worst-ever timeout value */
+	time64_t	at_worst_time;		/* worst-ever timeout timestamp */
 	spinlock_t	at_lock;
 };
 
 struct ptlrpc_at_array {
-	struct list_head       *paa_reqs_array; /** array to hold requests */
-	u32	     paa_size;       /** the size of array */
-	u32	     paa_count;      /** the total count of reqs */
-	time64_t     paa_deadline;   /** the earliest deadline of reqs */
-	u32	    *paa_reqs_count; /** the count of reqs in each entry */
+	struct list_head	*paa_reqs_array; /** array to hold requests */
+	u32			paa_size;        /** the size of array */
+	u32			paa_count;       /** the total count of reqs */
+	time64_t		paa_deadline;    /** the earliest deadline of reqs */
+	u32			*paa_reqs_count; /** the count of reqs in each entry */
 };
 
 #define IMP_AT_MAX_PORTALS 8
 struct imp_at {
-	int		     iat_portal[IMP_AT_MAX_PORTALS];
+	int			iat_portal[IMP_AT_MAX_PORTALS];
 	struct adaptive_timeout iat_net_latency;
 	struct adaptive_timeout iat_service_estimate[IMP_AT_MAX_PORTALS];
 };
@@ -86,16 +86,16 @@ struct imp_at {
 
 /** Possible import states */
 enum lustre_imp_state {
-	LUSTRE_IMP_CLOSED     = 1,
-	LUSTRE_IMP_NEW	= 2,
-	LUSTRE_IMP_DISCON     = 3,
-	LUSTRE_IMP_CONNECTING = 4,
-	LUSTRE_IMP_REPLAY     = 5,
+	LUSTRE_IMP_CLOSED	= 1,
+	LUSTRE_IMP_NEW		= 2,
+	LUSTRE_IMP_DISCON	= 3,
+	LUSTRE_IMP_CONNECTING	= 4,
+	LUSTRE_IMP_REPLAY	= 5,
 	LUSTRE_IMP_REPLAY_LOCKS = 6,
-	LUSTRE_IMP_REPLAY_WAIT  = 7,
-	LUSTRE_IMP_RECOVER    = 8,
-	LUSTRE_IMP_FULL       = 9,
-	LUSTRE_IMP_EVICTED    = 10,
+	LUSTRE_IMP_REPLAY_WAIT	= 7,
+	LUSTRE_IMP_RECOVER	= 8,
+	LUSTRE_IMP_FULL		= 9,
+	LUSTRE_IMP_EVICTED	= 10,
 };
 
 /** Returns test string representation of numeric import state \a state */
@@ -115,13 +115,13 @@ static inline char *ptlrpc_import_state_name(enum lustre_imp_state state)
  * List of import event types
  */
 enum obd_import_event {
-	IMP_EVENT_DISCON     = 0x808001,
-	IMP_EVENT_INACTIVE   = 0x808002,
-	IMP_EVENT_INVALIDATE = 0x808003,
-	IMP_EVENT_ACTIVE     = 0x808004,
-	IMP_EVENT_OCD	= 0x808005,
-	IMP_EVENT_DEACTIVATE = 0x808006,
-	IMP_EVENT_ACTIVATE   = 0x808007,
+	IMP_EVENT_DISCON	= 0x808001,
+	IMP_EVENT_INACTIVE	= 0x808002,
+	IMP_EVENT_INVALIDATE	= 0x808003,
+	IMP_EVENT_ACTIVE	= 0x808004,
+	IMP_EVENT_OCD		= 0x808005,
+	IMP_EVENT_DEACTIVATE	= 0x808006,
+	IMP_EVENT_ACTIVATE	= 0x808007,
 };
 
 /**
@@ -131,20 +131,20 @@ struct obd_import_conn {
 	/** Item for linking connections together */
 	struct list_head		oic_item;
 	/** Pointer to actual PortalRPC connection */
-	struct ptlrpc_connection *oic_conn;
+	struct ptlrpc_connection	*oic_conn;
 	/** uuid of remote side */
-	struct obd_uuid	   oic_uuid;
+	struct obd_uuid			oic_uuid;
 	/**
 	 * Time (64 bit jiffies) of last connection attempt on this connection
 	 */
-	u64		     oic_last_attempt;
+	u64				oic_last_attempt;
 };
 
 /* state history */
 #define IMP_STATE_HIST_LEN 16
 struct import_state_hist {
-	enum lustre_imp_state ish_state;
-	time64_t	ish_time;
+	enum lustre_imp_state		ish_state;
+	time64_t			ish_time;
 };
 
 /**
@@ -153,14 +153,14 @@ struct import_state_hist {
  */
 struct obd_import {
 	/** Local handle (== id) for this import. */
-	struct portals_handle     imp_handle;
+	struct portals_handle		imp_handle;
 	/** Reference counter */
-	atomic_t	      imp_refcount;
-	struct lustre_handle      imp_dlm_handle; /* client's ldlm export */
+	atomic_t			imp_refcount;
+	struct lustre_handle		imp_dlm_handle; /* client's ldlm export */
 	/** Currently active connection */
-	struct ptlrpc_connection *imp_connection;
+	struct ptlrpc_connection       *imp_connection;
 	/** PortalRPC client structure for this import */
-	struct ptlrpc_client     *imp_client;
+	struct ptlrpc_client	       *imp_client;
 	/** List element for linking into pinger chain */
 	struct list_head		imp_pinger_chain;
 	/** work struct for destruction of import */
@@ -188,133 +188,134 @@ struct obd_import {
 	/** @} */
 
 	/** List of not replied requests */
-	struct list_head	imp_unreplied_list;
+	struct list_head		imp_unreplied_list;
 	/** Known maximal replied XID */
-	u64			imp_known_replied_xid;
+	u64				imp_known_replied_xid;
 
 	/** obd device for this import */
-	struct obd_device	*imp_obd;
+	struct obd_device	       *imp_obd;
 
 	/**
 	 * some seciruty-related fields
 	 * @{
 	 */
-	struct ptlrpc_sec	*imp_sec;
-	struct mutex		  imp_sec_mutex;
-	time64_t		imp_sec_expire;
+	struct ptlrpc_sec	       *imp_sec;
+	struct mutex			imp_sec_mutex;
+	time64_t			imp_sec_expire;
 	/** @} */
 
 	/** Wait queue for those who need to wait for recovery completion */
-	wait_queue_head_t	       imp_recovery_waitq;
+	wait_queue_head_t		imp_recovery_waitq;
 
 	/** Number of requests currently in-flight */
-	atomic_t	      imp_inflight;
+	atomic_t			imp_inflight;
 	/** Number of requests currently unregistering */
-	atomic_t	      imp_unregistering;
+	atomic_t			imp_unregistering;
 	/** Number of replay requests inflight */
-	atomic_t	      imp_replay_inflight;
+	atomic_t			imp_replay_inflight;
 	/** Number of currently happening import invalidations */
-	atomic_t	      imp_inval_count;
+	atomic_t			imp_inval_count;
 	/** Numbner of request timeouts */
-	atomic_t	      imp_timeouts;
+	atomic_t			imp_timeouts;
 	/** Current import state */
-	enum lustre_imp_state     imp_state;
+	enum lustre_imp_state		imp_state;
 	/** Last replay state */
-	enum lustre_imp_state	  imp_replay_state;
+	enum lustre_imp_state		imp_replay_state;
 	/** History of import states */
-	struct import_state_hist  imp_state_hist[IMP_STATE_HIST_LEN];
-	int		       imp_state_hist_idx;
+	struct import_state_hist	imp_state_hist[IMP_STATE_HIST_LEN];
+	int				imp_state_hist_idx;
 	/** Current import generation. Incremented on every reconnect */
-	int		       imp_generation;
+	int				imp_generation;
 	/** Incremented every time we send reconnection request */
-	u32		     imp_conn_cnt;
+	u32				imp_conn_cnt;
        /**
 	* \see ptlrpc_free_committed remembers imp_generation value here
 	* after a check to save on unnecessary replay list iterations
 	*/
-	int		       imp_last_generation_checked;
+	int				imp_last_generation_checked;
 	/** Last transno we replayed */
-	u64		     imp_last_replay_transno;
+	u64				imp_last_replay_transno;
 	/** Last transno committed on remote side */
-	u64		     imp_peer_committed_transno;
+	u64				imp_peer_committed_transno;
 	/**
 	 * \see ptlrpc_free_committed remembers last_transno since its last
 	 * check here and if last_transno did not change since last run of
 	 * ptlrpc_free_committed and import generation is the same, we can
 	 * skip looking for requests to remove from replay list as optimisation
 	 */
-	u64		     imp_last_transno_checked;
+	u64				imp_last_transno_checked;
 	/**
 	 * Remote export handle. This is how remote side knows what export
 	 * we are talking to. Filled from response to connect request
 	 */
-	struct lustre_handle      imp_remote_handle;
+	struct lustre_handle		imp_remote_handle;
 	/** When to perform next ping. time in jiffies. */
-	unsigned long		imp_next_ping;
+	unsigned long			imp_next_ping;
 	/** When we last successfully connected. time in 64bit jiffies */
-	u64		     imp_last_success_conn;
+	u64				imp_last_success_conn;
 
 	/** List of all possible connection for import. */
 	struct list_head		imp_conn_list;
 	/**
 	 * Current connection. \a imp_connection is imp_conn_current->oic_conn
 	 */
-	struct obd_import_conn   *imp_conn_current;
+	struct obd_import_conn	       *imp_conn_current;
 
 	/** Protects flags, level, generation, conn_cnt, *_list */
-	spinlock_t		  imp_lock;
+	spinlock_t			imp_lock;
 
 	/* flags */
-	unsigned long	     imp_no_timeout:1, /* timeouts are disabled */
-				  imp_invalid:1,    /* evicted */
-				  /* administratively disabled */
-				  imp_deactive:1,
-				  /* try to recover the import */
-				  imp_replayable:1,
-				  /* don't run recovery (timeout instead) */
-				  imp_dlm_fake:1,
-				  /* use 1/2 timeout on MDS' OSCs */
-				  imp_server_timeout:1,
-				  /* VBR: imp in delayed recovery */
-				  imp_delayed_recovery:1,
-				  /* VBR: if gap was found then no lock replays
-				   */
-				  imp_no_lock_replay:1,
-				  /* recovery by versions was failed */
-				  imp_vbr_failed:1,
-				  /* force an immediate ping */
-				  imp_force_verify:1,
-				  /* force a scheduled ping */
-				  imp_force_next_verify:1,
-				  /* pingable */
-				  imp_pingable:1,
-				  /* resend for replay */
-				  imp_resend_replay:1,
-				  /* disable normal recovery, for test only. */
-				  imp_no_pinger_recover:1,
+	unsigned long			imp_no_timeout:1, /* timeouts are disabled */
+					imp_invalid:1,    /* evicted */
+					/* administratively disabled */
+					imp_deactive:1,
+					/* try to recover the import */
+					imp_replayable:1,
+					/* don't run recovery (timeout instead) */
+					imp_dlm_fake:1,
+					/* use 1/2 timeout on MDS' OSCs */
+					imp_server_timeout:1,
+					/* VBR: imp in delayed recovery */
+					imp_delayed_recovery:1,
+					/* VBR: if gap was found then no lock replays
+					 */
+					imp_no_lock_replay:1,
+					/* recovery by versions was failed */
+					imp_vbr_failed:1,
+					/* force an immediate ping */
+					imp_force_verify:1,
+					/* force a scheduled ping */
+					imp_force_next_verify:1,
+					/* pingable */
+					imp_pingable:1,
+					/* resend for replay */
+					imp_resend_replay:1,
+					/* disable normal recovery, for test only. */
+					imp_no_pinger_recover:1,
 #if OBD_OCD_VERSION(3, 0, 53, 0) > LUSTRE_VERSION_CODE
-				  /* need IR MNE swab */
-				  imp_need_mne_swab:1,
+					/* need IR MNE swab */
+					imp_need_mne_swab:1,
 #endif
-				  /* import must be reconnected instead of
-				   * chosing new connection
-				   */
-				  imp_force_reconnect:1,
-				  /* import has tried to connect with server */
-				  imp_connect_tried:1,
-				 /* connected but not FULL yet */
-				 imp_connected:1;
-	u32		     imp_connect_op;
-	struct obd_connect_data   imp_connect_data;
-	u64		     imp_connect_flags_orig;
-	u64			imp_connect_flags2_orig;
-	int		       imp_connect_error;
-
-	u32		     imp_msg_magic;
-	u32		     imp_msghdr_flags;       /* adjusted based on server capability */
-
-	struct imp_at	     imp_at;		 /* adaptive timeout data */
-	time64_t	     imp_last_reply_time;    /* for health check */
+					/* import must be reconnected instead of
+					 * chosing new connection
+					 */
+					imp_force_reconnect:1,
+					/* import has tried to connect with server */
+					imp_connect_tried:1,
+					/* connected but not FULL yet */
+					imp_connected:1;
+
+	u32				imp_connect_op;
+	struct obd_connect_data		imp_connect_data;
+	u64				imp_connect_flags_orig;
+	u64				imp_connect_flags2_orig;
+	int				imp_connect_error;
+
+	u32				imp_msg_magic;
+	u32				imp_msghdr_flags; /* adjusted based on server capability */
+
+	struct imp_at			imp_at;	/* adaptive timeout data */
+	time64_t			imp_last_reply_time; /* for health check */
 };
 
 /* import.c */
diff --git a/drivers/staging/lustre/lustre/include/lustre_intent.h b/drivers/staging/lustre/lustre/include/lustre_intent.h
index 3f26d7a..f97c318 100644
--- a/drivers/staging/lustre/lustre/include/lustre_intent.h
+++ b/drivers/staging/lustre/lustre/include/lustre_intent.h
@@ -39,18 +39,18 @@
 /* intent IT_XXX are defined in lustre/include/obd.h */
 
 struct lookup_intent {
-	int		it_op;
-	int		it_create_mode;
-	u64		it_flags;
-	int		it_disposition;
-	int		it_status;
-	u64		it_lock_handle;
-	u64		it_lock_bits;
-	int		it_lock_mode;
-	int		it_remote_lock_mode;
-	u64	   it_remote_lock_handle;
-	struct ptlrpc_request *it_request;
-	unsigned int    it_lock_set:1;
+	int			it_op;
+	int			it_create_mode;
+	u64			it_flags;
+	int			it_disposition;
+	int			it_status;
+	u64			it_lock_handle;
+	u64			it_lock_bits;
+	int			it_lock_mode;
+	int			it_remote_lock_mode;
+	u64			it_remote_lock_handle;
+	struct ptlrpc_request	*it_request;
+	unsigned int		it_lock_set:1;
 };
 
 static inline int it_disposition(struct lookup_intent *it, int flag)
diff --git a/drivers/staging/lustre/lustre/include/lustre_lib.h b/drivers/staging/lustre/lustre/include/lustre_lib.h
index 87748e9..da86e46 100644
--- a/drivers/staging/lustre/lustre/include/lustre_lib.h
+++ b/drivers/staging/lustre/lustre/include/lustre_lib.h
@@ -85,8 +85,6 @@ static inline int l_fatal_signal_pending(struct task_struct *p)
 
 /** @} lib */
 
-
-
 /* l_wait_event_abortable() is a bit like wait_event_killable()
  * except there is a fixed set of signals which will abort:
  * LUSTRE_FATAL_SIGS
diff --git a/drivers/staging/lustre/lustre/include/lustre_log.h b/drivers/staging/lustre/lustre/include/lustre_log.h
index 4ba4501..a576d40 100644
--- a/drivers/staging/lustre/lustre/include/lustre_log.h
+++ b/drivers/staging/lustre/lustre/include/lustre_log.h
@@ -66,15 +66,15 @@ enum llog_open_param {
 };
 
 struct plain_handle_data {
-	struct list_head	  phd_entry;
-	struct llog_handle *phd_cat_handle;
-	struct llog_cookie  phd_cookie; /* cookie of this log in its cat */
+	struct list_head	 phd_entry;
+	struct llog_handle	*phd_cat_handle;
+	struct llog_cookie	 phd_cookie; /* cookie of this log in its cat */
 };
 
 struct cat_handle_data {
-	struct list_head	      chd_head;
+	struct list_head	chd_head;
 	struct llog_handle     *chd_current_log; /* currently open log */
-	struct llog_handle	*chd_next_log; /* llog to be used next */
+	struct llog_handle     *chd_next_log; /* llog to be used next */
 };
 
 struct llog_handle;
@@ -101,28 +101,28 @@ struct llog_process_data {
 	 * Any useful data needed while processing catalog. This is
 	 * passed later to process callback.
 	 */
-	void		*lpd_data;
+	void			*lpd_data;
 	/**
 	 * Catalog process callback function, called for each record
 	 * in catalog.
 	 */
-	llog_cb_t	    lpd_cb;
+	llog_cb_t		lpd_cb;
 	/**
 	 * Start processing the catalog from startcat/startidx
 	 */
-	int		  lpd_startcat;
-	int		  lpd_startidx;
+	int			lpd_startcat;
+	int			lpd_startidx;
 };
 
 struct llog_process_cat_data {
 	/**
 	 * Temporary stored first_idx while scanning log.
 	 */
-	int		  lpcd_first_idx;
+	int			lpcd_first_idx;
 	/**
 	 * Temporary stored last_idx while scanning log.
 	 */
-	int		  lpcd_last_idx;
+	int			lpcd_last_idx;
 };
 
 struct thandle;
@@ -234,23 +234,23 @@ struct llog_handle {
 #define LLOG_CTXT_FLAG_STOP		 0x00000002
 
 struct llog_ctxt {
-	int		      loc_idx; /* my index the obd array of ctxt's */
-	struct obd_device       *loc_obd; /* points back to the containing obd*/
-	struct obd_llog_group   *loc_olg; /* group containing that ctxt */
-	struct obd_export       *loc_exp; /* parent "disk" export (e.g. MDS) */
-	struct obd_import       *loc_imp; /* to use in RPC's: can be backward
+	int			 loc_idx; /* my index the obd array of ctxt's */
+	struct obd_device	*loc_obd; /* points back to the containing obd*/
+	struct obd_llog_group	*loc_olg; /* group containing that ctxt */
+	struct obd_export	*loc_exp; /* parent "disk" export (e.g. MDS) */
+	struct obd_import	*loc_imp; /* to use in RPC's: can be backward
 					   * pointing import
 					   */
 	struct llog_operations  *loc_logops;
 	struct llog_handle      *loc_handle;
 	struct mutex		 loc_mutex; /* protect loc_imp */
-	atomic_t	     loc_refcount;
-	long		     loc_flags; /* flags, see above defines */
+	atomic_t		 loc_refcount;
+	long			 loc_flags; /* flags, see above defines */
 	/*
 	 * llog chunk size, and llog record size can not be bigger than
 	 * loc_chunk_size
 	 */
-	u32			loc_chunk_size;
+	u32			 loc_chunk_size;
 };
 
 #define LLOG_PROC_BREAK 0x0001
diff --git a/drivers/staging/lustre/lustre/include/lustre_mdc.h b/drivers/staging/lustre/lustre/include/lustre_mdc.h
index c1fb324..90fcbae 100644
--- a/drivers/staging/lustre/lustre/include/lustre_mdc.h
+++ b/drivers/staging/lustre/lustre/include/lustre_mdc.h
@@ -106,7 +106,7 @@ static inline void mdc_get_rpc_lock(struct mdc_rpc_lock *lck,
 	 * Only when all fake requests are finished can normal requests
 	 * be sent, to ensure they are recoverable again.
 	 */
- again:
+again:
 	mutex_lock(&lck->rpcl_mutex);
 
 	if (CFS_FAIL_CHECK_QUIET(OBD_FAIL_MDC_RPCS_SEM)) {
diff --git a/drivers/staging/lustre/lustre/include/lustre_mds.h b/drivers/staging/lustre/lustre/include/lustre_mds.h
index f665556..df178cc 100644
--- a/drivers/staging/lustre/lustre/include/lustre_mds.h
+++ b/drivers/staging/lustre/lustre/include/lustre_mds.h
@@ -50,8 +50,8 @@
 #include <lustre_export.h>
 
 struct mds_group_info {
-	struct obd_uuid *uuid;
-	int group;
+	struct obd_uuid		*uuid;
+	int			group;
 };
 
 #define MDD_OBD_NAME     "mdd_obd"
diff --git a/drivers/staging/lustre/lustre/include/lustre_net.h b/drivers/staging/lustre/lustre/include/lustre_net.h
index 050a7ec..47b9632 100644
--- a/drivers/staging/lustre/lustre/include/lustre_net.h
+++ b/drivers/staging/lustre/lustre/include/lustre_net.h
@@ -136,9 +136,9 @@
  *
  * Constants determine how memory is used to buffer incoming service requests.
  *
- * ?_NBUFS	      # buffers to allocate when growing the pool
- * ?_BUFSIZE	    # bytes in a single request buffer
- * ?_MAXREQSIZE	 # maximum request service will receive
+ * ?_NBUFS		# buffers to allocate when growing the pool
+ * ?_BUFSIZE		# bytes in a single request buffer
+ * ?_MAXREQSIZE		# maximum request service will receive
  *
  * When fewer than ?_NBUFS/2 buffers are posted for receive, another chunk
  * of ?_NBUFS is added to the pool.
@@ -231,7 +231,7 @@
  *	top of this subset
  *     b) bind service threads on a few partitions, see modparameters of
  *	MDS and OSS for details
-*
+ *
  * NB: these calculations (and examples below) are simplified to help
  *     understanding, the real implementation is a little more complex,
  *     please see ptlrpc_server_nthreads_check() for details.
@@ -263,12 +263,12 @@
 #define LDLM_NTHRS_BASE		24
 #define LDLM_NTHRS_MAX		(num_online_cpus() == 1 ? 64 : 128)
 
-#define LDLM_BL_THREADS   LDLM_NTHRS_AUTO_INIT
-#define LDLM_CLIENT_NBUFS 1
-#define LDLM_SERVER_NBUFS 64
-#define LDLM_BUFSIZE      (8 * 1024)
-#define LDLM_MAXREQSIZE   (5 * 1024)
-#define LDLM_MAXREPSIZE   (1024)
+#define LDLM_BL_THREADS		LDLM_NTHRS_AUTO_INIT
+#define LDLM_CLIENT_NBUFS	1
+#define LDLM_SERVER_NBUFS	64
+#define LDLM_BUFSIZE		(8 * 1024)
+#define LDLM_MAXREQSIZE		(5 * 1024)
+#define LDLM_MAXREPSIZE		(1024)
 
 #define MDS_MAXREQSIZE		(5 * 1024)	/* >= 4736 */
 
@@ -292,23 +292,23 @@ struct ptlrpc_connection {
 	/** linkage for connections hash table */
 	struct rhash_head	c_hash;
 	/** Our own lnet nid for this connection */
-	lnet_nid_t	      c_self;
+	lnet_nid_t		c_self;
 	/** Remote side nid for this connection */
 	struct lnet_process_id	c_peer;
 	/** UUID of the other side */
-	struct obd_uuid	 c_remote_uuid;
+	struct obd_uuid		c_remote_uuid;
 	/** reference counter for this connection */
-	atomic_t	    c_refcount;
+	atomic_t		c_refcount;
 };
 
 /** Client definition for PortalRPC */
 struct ptlrpc_client {
 	/** What lnet portal does this client send messages to by default */
-	u32		   cli_request_portal;
+	u32			cli_request_portal;
 	/** What portal do we expect replies on */
-	u32		   cli_reply_portal;
+	u32			cli_reply_portal;
 	/** Name of the client */
-	char		   *cli_name;
+	char			*cli_name;
 };
 
 /** state flags of requests */
@@ -326,8 +326,8 @@ struct ptlrpc_client {
 	 * a pointer to it here.  The pointer_arg ensures this struct is at
 	 * least big enough for that.
 	 */
-	void      *pointer_arg[11];
-	u64      space[7];
+	void			*pointer_arg[11];
+	u64			space[7];
 };
 
 struct ptlrpc_request_set;
@@ -346,26 +346,26 @@ struct ptlrpc_client {
  * returned.
  */
 struct ptlrpc_request_set {
-	atomic_t	  set_refcount;
+	atomic_t		set_refcount;
 	/** number of in queue requests */
-	atomic_t	  set_new_count;
+	atomic_t		set_new_count;
 	/** number of uncompleted requests */
-	atomic_t	  set_remaining;
+	atomic_t		set_remaining;
 	/** wait queue to wait on for request events */
-	wait_queue_head_t	   set_waitq;
-	wait_queue_head_t	  *set_wakeup_ptr;
+	wait_queue_head_t	set_waitq;
+	wait_queue_head_t	*set_wakeup_ptr;
 	/** List of requests in the set */
-	struct list_head	    set_requests;
+	struct list_head	set_requests;
 	/**
 	 * List of completion callbacks to be called when the set is completed
 	 * This is only used if \a set_interpret is NULL.
 	 * Links struct ptlrpc_set_cbdata.
 	 */
-	struct list_head	    set_cblist;
+	struct list_head	set_cblist;
 	/** Completion callback, if only one. */
-	set_interpreter_func  set_interpret;
+	set_interpreter_func	set_interpret;
 	/** opaq argument passed to completion \a set_interpret callback. */
-	void		 *set_arg;
+	void			*set_arg;
 	/**
 	 * Lock for \a set_new_requests manipulations
 	 * locked so that any old caller can communicate requests to
@@ -373,17 +373,17 @@ struct ptlrpc_request_set {
 	 */
 	spinlock_t		set_new_req_lock;
 	/** List of new yet unsent requests. Only used with ptlrpcd now. */
-	struct list_head	    set_new_requests;
+	struct list_head	set_new_requests;
 
 	/** rq_status of requests that have been freed already */
-	int		   set_rc;
+	int			set_rc;
 	/** Additional fields used by the flow control extension */
 	/** Maximum number of RPCs in flight */
-	int		   set_max_inflight;
+	int			set_max_inflight;
 	/** Callback function used to generate RPCs */
-	set_producer_func     set_producer;
+	set_producer_func	set_producer;
 	/** opaq argument passed to the producer callback */
-	void		 *set_producer_arg;
+	void			*set_producer_arg;
 };
 
 /**
@@ -391,11 +391,11 @@ struct ptlrpc_request_set {
  */
 struct ptlrpc_set_cbdata {
 	/** List linkage item */
-	struct list_head	      psc_item;
+	struct list_head	psc_item;
 	/** Pointer to interpreting function */
 	set_interpreter_func    psc_interpret;
 	/** Opaq argument to pass to the callback */
-	void		   *psc_data;
+	void			*psc_data;
 };
 
 struct ptlrpc_bulk_desc;
@@ -423,76 +423,76 @@ struct ptlrpc_cb_id {
  */
 struct ptlrpc_reply_state {
 	/** Callback description */
-	struct ptlrpc_cb_id    rs_cb_id;
+	struct ptlrpc_cb_id	rs_cb_id;
 	/** Linkage for list of all reply states in a system */
-	struct list_head	     rs_list;
+	struct list_head	rs_list;
 	/** Linkage for list of all reply states on same export */
-	struct list_head	     rs_exp_list;
+	struct list_head	rs_exp_list;
 	/** Linkage for list of all reply states for same obd */
-	struct list_head	     rs_obd_list;
+	struct list_head	rs_obd_list;
 #if RS_DEBUG
-	struct list_head	     rs_debug_list;
+	struct list_head	rs_debug_list;
 #endif
 	/** A spinlock to protect the reply state flags */
 	spinlock_t		rs_lock;
 	/** Reply state flags */
-	unsigned long	  rs_difficult:1; /* ACK/commit stuff */
-	unsigned long	  rs_no_ack:1;    /* no ACK, even for
-					   * difficult requests
-					   */
-	unsigned long	  rs_scheduled:1;     /* being handled? */
-	unsigned long	  rs_scheduled_ever:1;/* any schedule attempts? */
-	unsigned long	  rs_handled:1;  /* been handled yet? */
-	unsigned long	  rs_on_net:1;   /* reply_out_callback pending? */
-	unsigned long	  rs_prealloc:1; /* rs from prealloc list */
-	unsigned long	  rs_committed:1;/* the transaction was committed
-					  * and the rs was dispatched
-					  */
+	unsigned long		rs_difficult:1; /* ACK/commit stuff */
+	unsigned long		rs_no_ack:1;    /* no ACK, even for
+						 * difficult requests
+						 */
+	unsigned long		rs_scheduled:1; /* being handled? */
+	unsigned long		rs_scheduled_ever:1; /* any schedule attempts? */
+	unsigned long		rs_handled:1;	/* been handled yet? */
+	unsigned long		rs_on_net:1;	/* reply_out_callback pending? */
+	unsigned long		rs_prealloc:1;	/* rs from prealloc list */
+	unsigned long		rs_committed:1;	/* the transaction was committed
+						 * and the rs was dispatched
+						 */
 	atomic_t		rs_refcount;	/* number of users */
 	/** Number of locks awaiting client ACK */
 	int			rs_nlocks;
 
 	/** Size of the state */
-	int		    rs_size;
+	int			rs_size;
 	/** opcode */
-	u32		  rs_opc;
+	u32			rs_opc;
 	/** Transaction number */
-	u64		  rs_transno;
+	u64			rs_transno;
 	/** xid */
-	u64		  rs_xid;
-	struct obd_export     *rs_export;
+	u64			rs_xid;
+	struct obd_export	*rs_export;
 	struct ptlrpc_service_part *rs_svcpt;
 	/** Lnet metadata handle for the reply */
-	struct lnet_handle_md		rs_md_h;
+	struct lnet_handle_md	rs_md_h;
 
 	/** Context for the service thread */
-	struct ptlrpc_svc_ctx *rs_svc_ctx;
+	struct ptlrpc_svc_ctx	*rs_svc_ctx;
 	/** Reply buffer (actually sent to the client), encoded if needed */
-	struct lustre_msg     *rs_repbuf;       /* wrapper */
+	struct lustre_msg	*rs_repbuf;	/* wrapper */
 	/** Size of the reply buffer */
-	int		    rs_repbuf_len;   /* wrapper buf length */
+	int			rs_repbuf_len;	/* wrapper buf length */
 	/** Size of the reply message */
-	int		    rs_repdata_len;  /* wrapper msg length */
+	int			rs_repdata_len;	/* wrapper msg length */
 	/**
 	 * Actual reply message. Its content is encrypted (if needed) to
 	 * produce reply buffer for actual sending. In simple case
 	 * of no network encryption we just set \a rs_repbuf to \a rs_msg
 	 */
-	struct lustre_msg     *rs_msg;	  /* reply message */
+	struct lustre_msg	*rs_msg;	/* reply message */
 
 	/** Handles of locks awaiting client reply ACK */
-	struct lustre_handle   rs_locks[RS_MAX_LOCKS];
+	struct lustre_handle	rs_locks[RS_MAX_LOCKS];
 	/** Lock modes of locks in \a rs_locks */
-	enum ldlm_mode	    rs_modes[RS_MAX_LOCKS];
+	enum ldlm_mode		rs_modes[RS_MAX_LOCKS];
 };
 
 struct ptlrpc_thread;
 
 /** RPC stages */
 enum rq_phase {
-	RQ_PHASE_NEW	    = 0xebc0de00,
-	RQ_PHASE_RPC	    = 0xebc0de01,
-	RQ_PHASE_BULK	   = 0xebc0de02,
+	RQ_PHASE_NEW		= 0xebc0de00,
+	RQ_PHASE_RPC		= 0xebc0de01,
+	RQ_PHASE_BULK		= 0xebc0de02,
 	RQ_PHASE_INTERPRET      = 0xebc0de03,
 	RQ_PHASE_COMPLETE       = 0xebc0de04,
 	RQ_PHASE_UNREG_RPC	= 0xebc0de05,
@@ -513,11 +513,11 @@ typedef int (*ptlrpc_interpterer_t)(const struct lu_env *env,
  */
 struct ptlrpc_request_pool {
 	/** Locks the list */
-	spinlock_t prp_lock;
+	spinlock_t		prp_lock;
 	/** list of ptlrpc_request structs */
-	struct list_head prp_req_list;
+	struct list_head	prp_req_list;
 	/** Maximum message size that would fit into a request from this pool */
-	int prp_rq_size;
+	int			prp_rq_size;
 	/** Function to allocate more requests for this pool */
 	int (*prp_populate)(struct ptlrpc_request_pool *, int);
 };
@@ -741,9 +741,10 @@ struct ptlrpc_request {
 	 */
 	spinlock_t			 rq_lock;
 	spinlock_t			 rq_early_free_lock;
+
 	/** client-side flags are serialized by rq_lock @{ */
 	unsigned int rq_intr:1, rq_replied:1, rq_err:1,
-		rq_timedout:1, rq_resend:1, rq_restart:1,
+		     rq_timedout:1, rq_resend:1, rq_restart:1,
 		/**
 		 * when ->rq_replay is set, request is kept by the client even
 		 * after server commits corresponding transaction. This is
@@ -797,21 +798,21 @@ struct ptlrpc_request {
 	 * !rq_truncate : # reply bytes actually received,
 	 *  rq_truncate : required repbuf_len for resend
 	 */
-	int rq_nob_received;
+	int				rq_nob_received;
 	/** Request length */
-	int rq_reqlen;
+	int				rq_reqlen;
 	/** Reply length */
-	int rq_replen;
+	int				rq_replen;
 	/** Pool if request is from preallocated list */
 	struct ptlrpc_request_pool     *rq_pool;
 	/** Request message - what client sent */
-	struct lustre_msg *rq_reqmsg;
+	struct lustre_msg	       *rq_reqmsg;
 	/** Reply message - server response */
-	struct lustre_msg *rq_repmsg;
+	struct lustre_msg	       *rq_repmsg;
 	/** Transaction number */
-	u64 rq_transno;
+	u64				rq_transno;
 	/** xid */
-	u64 rq_xid;
+	u64				rq_xid;
 	/** bulk match bits */
 	u64				rq_mbits;
 	/**
@@ -820,7 +821,7 @@ struct ptlrpc_request {
 	 * Also see \a rq_replay comment above.
 	 * It's also link chain on obd_export::exp_req_replay_queue
 	 */
-	struct list_head rq_replay_list;
+	struct list_head		rq_replay_list;
 	/** non-shared members for client & server request*/
 	union {
 		struct ptlrpc_cli_req    rq_cli;
@@ -857,32 +858,32 @@ struct ptlrpc_request {
 	char			*rq_repbuf;	/**< rep buffer */
 	struct lustre_msg       *rq_repdata;	/**< rep wrapper msg */
 	/** only in priv mode */
-	struct lustre_msg       *rq_clrbuf;
-	int		      rq_reqbuf_len;  /* req wrapper buf len */
-	int		      rq_reqdata_len; /* req wrapper msg len */
-	int		      rq_repbuf_len;  /* rep buffer len */
-	int		      rq_repdata_len; /* rep wrapper msg len */
-	int		      rq_clrbuf_len;  /* only in priv mode */
-	int		      rq_clrdata_len; /* only in priv mode */
+	struct lustre_msg      *rq_clrbuf;
+	int			rq_reqbuf_len;  /* req wrapper buf len */
+	int			rq_reqdata_len; /* req wrapper msg len */
+	int			rq_repbuf_len;  /* rep buffer len */
+	int			rq_repdata_len; /* rep wrapper msg len */
+	int			rq_clrbuf_len;  /* only in priv mode */
+	int			rq_clrdata_len; /* only in priv mode */
 
 	/** early replies go to offset 0, regular replies go after that */
-	unsigned int	     rq_reply_off;
+	unsigned int		rq_reply_off;
 
 	/** @} */
 
 	/** Fields that help to see if request and reply were swabbed or not */
-	u32 rq_req_swab_mask;
-	u32 rq_rep_swab_mask;
+	u32			rq_req_swab_mask;
+	u32			rq_rep_swab_mask;
 
 	/** how many early replies (for stats) */
-	int rq_early_count;
+	int			rq_early_count;
 
 	/** Server-side, export on which request was received */
-	struct obd_export		*rq_export;
+	struct obd_export	*rq_export;
 	/** import where request is being sent */
-	struct obd_import		*rq_import;
+	struct obd_import	*rq_import;
 	/** our LNet NID */
-	lnet_nid_t	   rq_self;
+	lnet_nid_t		rq_self;
 	/** Peer description (the other side) */
 	struct lnet_process_id	rq_peer;
 	/** Descriptor for the NID from which the peer sent the request. */
@@ -895,11 +896,11 @@ struct ptlrpc_request {
 	/**
 	 * when request/reply sent (secs), or time when request should be sent
 	 */
-	time64_t rq_sent;
+	time64_t		rq_sent;
 	/** when request must finish. */
-	time64_t		  rq_deadline;
+	time64_t		rq_deadline;
 	/** request format description */
-	struct req_capsule	  rq_pill;
+	struct req_capsule	rq_pill;
 };
 
 /**
@@ -1039,15 +1040,15 @@ static inline void lustre_set_rep_swabbed(struct ptlrpc_request *req,
 #define FLAG(field, str) (field ? str : "")
 
 /** Convert bit flags into a string */
-#define DEBUG_REQ_FLAGS(req)						    \
-	ptlrpc_rqphase2str(req),						\
-	FLAG(req->rq_intr, "I"), FLAG(req->rq_replied, "R"),		    \
-	FLAG(req->rq_err, "E"),	FLAG(req->rq_net_err, "e"),		    \
-	FLAG(req->rq_timedout, "X") /* eXpired */, FLAG(req->rq_resend, "S"),   \
-	FLAG(req->rq_restart, "T"), FLAG(req->rq_replay, "P"),		  \
-	FLAG(req->rq_no_resend, "N"),					   \
-	FLAG(req->rq_waiting, "W"),					     \
-	FLAG(req->rq_wait_ctx, "C"), FLAG(req->rq_hp, "H"),		     \
+#define DEBUG_REQ_FLAGS(req)						      \
+	ptlrpc_rqphase2str(req),					      \
+	FLAG(req->rq_intr, "I"), FLAG(req->rq_replied, "R"),		      \
+	FLAG(req->rq_err, "E"),	FLAG(req->rq_net_err, "e"),		      \
+	FLAG(req->rq_timedout, "X") /* eXpired */, FLAG(req->rq_resend, "S"), \
+	FLAG(req->rq_restart, "T"), FLAG(req->rq_replay, "P"),		      \
+	FLAG(req->rq_no_resend, "N"),					      \
+	FLAG(req->rq_waiting, "W"),					      \
+	FLAG(req->rq_wait_ctx, "C"), FLAG(req->rq_hp, "H"),		      \
 	FLAG(req->rq_committed, "M")
 
 #define REQ_FLAGS_FMT "%s:%s%s%s%s%s%s%s%s%s%s%s%s%s"
@@ -1060,14 +1061,14 @@ void _debug_req(struct ptlrpc_request *req,
  * Helper that decides if we need to print request according to current debug
  * level settings
  */
-#define debug_req(msgdata, mask, cdls, req, fmt, a...)			\
-do {									  \
-	CFS_CHECK_STACK(msgdata, mask, cdls);				 \
+#define debug_req(msgdata, mask, cdls, req, fmt, a...)			      \
+do {									      \
+	CFS_CHECK_STACK(msgdata, mask, cdls);				      \
 									      \
-	if (((mask) & D_CANTMASK) != 0 ||				     \
-	    ((libcfs_debug & (mask)) != 0 &&				  \
-	     (libcfs_subsystem_debug & DEBUG_SUBSYSTEM) != 0))		\
-		_debug_req((req), msgdata, fmt, ##a);			 \
+	if (((mask) & D_CANTMASK) != 0 ||				      \
+	    ((libcfs_debug & (mask)) != 0 &&				      \
+	     (libcfs_subsystem_debug & DEBUG_SUBSYSTEM) != 0))		      \
+		_debug_req((req), msgdata, fmt, ##a);			      \
 } while (0)
 
 /**
@@ -1075,16 +1076,16 @@ void _debug_req(struct ptlrpc_request *req,
  * content into lustre debug log.
  * for most callers (level is a constant) this is resolved at compile time
  */
-#define DEBUG_REQ(level, req, fmt, args...)				   \
-do {									  \
-	if ((level) & (D_ERROR | D_WARNING)) {				\
-		static struct cfs_debug_limit_state cdls;			  \
-		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, level, &cdls);	    \
+#define DEBUG_REQ(level, req, fmt, args...)				      \
+do {									      \
+	if ((level) & (D_ERROR | D_WARNING)) {				      \
+		static struct cfs_debug_limit_state cdls;		      \
+		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, level, &cdls);	      \
 		debug_req(&msgdata, level, &cdls, req, "@@@ "fmt" ", ## args);\
 	} else {							      \
-		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, level, NULL);	     \
+		LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, level, NULL);	      \
 		debug_req(&msgdata, level, NULL, req, "@@@ "fmt" ", ## args); \
-	}								     \
+	}								      \
 } while (0)
 /** @} */
 
@@ -1093,15 +1094,15 @@ void _debug_req(struct ptlrpc_request *req,
  */
 struct ptlrpc_bulk_page {
 	/** Linkage to list of pages in a bulk */
-	struct list_head       bp_link;
+	struct list_head	bp_link;
 	/**
 	 * Number of bytes in a page to transfer starting from \a bp_pageoffset
 	 */
-	int	      bp_buflen;
+	int			bp_buflen;
 	/** offset within a page */
-	int	      bp_pageoffset;
+	int			bp_pageoffset;
 	/** The page itself */
-	struct page     *bp_page;
+	struct page		*bp_page;
 };
 
 enum ptlrpc_bulk_op_type {
@@ -1204,38 +1205,38 @@ struct ptlrpc_bulk_frag_ops {
  */
 struct ptlrpc_bulk_desc {
 	/** completed with failure */
-	unsigned long bd_failure:1;
+	unsigned long			bd_failure:1;
 	/** client side */
-	unsigned long bd_registered:1;
+	unsigned long			bd_registered:1;
 	/** For serialization with callback */
-	spinlock_t bd_lock;
+	spinlock_t			bd_lock;
 	/** Import generation when request for this bulk was sent */
-	int bd_import_generation;
+	int				bd_import_generation;
 	/** {put,get}{source,sink}{kvec,kiov} */
-	enum ptlrpc_bulk_op_type bd_type;
+	enum ptlrpc_bulk_op_type	bd_type;
 	/** LNet portal for this bulk */
-	u32 bd_portal;
+	u32				bd_portal;
 	/** Server side - export this bulk created for */
-	struct obd_export *bd_export;
+	struct obd_export		*bd_export;
 	/** Client side - import this bulk was sent on */
-	struct obd_import *bd_import;
+	struct obd_import		*bd_import;
 	/** Back pointer to the request */
-	struct ptlrpc_request *bd_req;
-	struct ptlrpc_bulk_frag_ops *bd_frag_ops;
-	wait_queue_head_t	    bd_waitq;	/* server side only WQ */
-	int		    bd_iov_count;    /* # entries in bd_iov */
-	int		    bd_max_iov;      /* allocated size of bd_iov */
-	int		    bd_nob;	  /* # bytes covered */
-	int		    bd_nob_transferred; /* # bytes GOT/PUT */
-
-	u64			bd_last_mbits;
-
-	struct ptlrpc_cb_id    bd_cbid;	 /* network callback info */
-	lnet_nid_t	     bd_sender;       /* stash event::sender */
-	int			bd_md_count;	/* # valid entries in bd_mds */
-	int			bd_md_max_brw;	/* max entries in bd_mds */
+	struct ptlrpc_request		*bd_req;
+	struct ptlrpc_bulk_frag_ops	*bd_frag_ops;
+	wait_queue_head_t		bd_waitq;     /* server side only WQ */
+	int				bd_iov_count; /* # entries in bd_iov */
+	int				bd_max_iov;   /* allocated size of bd_iov */
+	int				bd_nob;	      /* # bytes covered */
+	int				bd_nob_transferred; /* # bytes GOT/PUT */
+
+	u64				bd_last_mbits;
+
+	struct ptlrpc_cb_id		bd_cbid;	/* network callback info */
+	lnet_nid_t			bd_sender;	/* stash event::sender */
+	int				bd_md_count;	/* # valid entries in bd_mds */
+	int				bd_md_max_brw;	/* max entries in bd_mds */
 	/** array of associated MDs */
-	struct lnet_handle_md	bd_mds[PTLRPC_BULK_OPS_COUNT];
+	struct lnet_handle_md		bd_mds[PTLRPC_BULK_OPS_COUNT];
 
 	union {
 		struct {
@@ -1277,20 +1278,20 @@ struct ptlrpc_thread {
 	/**
 	 * List of active threads in svc->srv_threads
 	 */
-	struct list_head t_link;
+	struct list_head		t_link;
 	/**
 	 * thread-private data (preallocated memory)
 	 */
-	void *t_data;
-	u32 t_flags;
+	void				*t_data;
+	u32				t_flags;
 	/**
 	 * service thread index, from ptlrpc_start_threads
 	 */
-	unsigned int t_id;
+	unsigned int			t_id;
 	/**
 	 * service thread pid
 	 */
-	pid_t t_pid;
+	pid_t				t_pid;
 	/**
 	 * put watchdog in the structure per thread b=14840
 	 *
@@ -1304,7 +1305,7 @@ struct ptlrpc_thread {
 	 * the svc this thread belonged to b=18582
 	 */
 	struct ptlrpc_service_part	*t_svcpt;
-	wait_queue_head_t			t_ctl_waitq;
+	wait_queue_head_t		t_ctl_waitq;
 	struct lu_env			*t_env;
 	char				t_name[PTLRPC_THR_NAME_LEN];
 };
@@ -1363,22 +1364,22 @@ static inline int thread_test_and_clear_flags(struct ptlrpc_thread *thread,
  */
 struct ptlrpc_request_buffer_desc {
 	/** Link item for rqbds on a service */
-	struct list_head	     rqbd_list;
+	struct list_head		rqbd_list;
 	/** History of requests for this buffer */
-	struct list_head	     rqbd_reqs;
+	struct list_head		rqbd_reqs;
 	/** Back pointer to service for which this buffer is registered */
-	struct ptlrpc_service_part *rqbd_svcpt;
+	struct ptlrpc_service_part	*rqbd_svcpt;
 	/** LNet descriptor */
 	struct lnet_handle_md		rqbd_md_h;
-	int		    rqbd_refcount;
+	int				rqbd_refcount;
 	/** The buffer itself */
-	char		  *rqbd_buffer;
-	struct ptlrpc_cb_id    rqbd_cbid;
+	char				*rqbd_buffer;
+	struct ptlrpc_cb_id		rqbd_cbid;
 	/**
 	 * This "embedded" request structure is only used for the
 	 * last request to fit into the buffer
 	 */
-	struct ptlrpc_request  rqbd_req;
+	struct ptlrpc_request		rqbd_req;
 };
 
 typedef int  (*svc_handler_t)(struct ptlrpc_request *req);
@@ -1431,44 +1432,44 @@ struct ptlrpc_service {
 	spinlock_t			srv_lock;
 	/** most often accessed fields */
 	/** chain thru all services */
-	struct list_head		      srv_list;
+	struct list_head		srv_list;
 	/** service operations table */
 	struct ptlrpc_service_ops	srv_ops;
 	/** only statically allocated strings here; we don't clean them */
-	char			   *srv_name;
+	char				*srv_name;
 	/** only statically allocated strings here; we don't clean them */
-	char			   *srv_thread_name;
+	char				*srv_thread_name;
 	/** service thread list */
-	struct list_head		      srv_threads;
+	struct list_head		srv_threads;
 	/** threads # should be created for each partition on initializing */
 	int				srv_nthrs_cpt_init;
 	/** limit of threads number for each partition */
 	int				srv_nthrs_cpt_limit;
 	/** Root of debugfs dir tree for this service */
-	struct dentry		   *srv_debugfs_entry;
+	struct dentry			*srv_debugfs_entry;
 	/** Pointer to statistic data for this service */
-	struct lprocfs_stats	   *srv_stats;
+	struct lprocfs_stats		*srv_stats;
 	/** # hp per lp reqs to handle */
-	int			     srv_hpreq_ratio;
+	int				srv_hpreq_ratio;
 	/** biggest request to receive */
-	int			     srv_max_req_size;
+	int				srv_max_req_size;
 	/** biggest reply to send */
-	int			     srv_max_reply_size;
+	int				srv_max_reply_size;
 	/** size of individual buffers */
-	int			     srv_buf_size;
+	int				srv_buf_size;
 	/** # buffers to allocate in 1 group */
-	int			     srv_nbuf_per_group;
+	int				srv_nbuf_per_group;
 	/** Local portal on which to receive requests */
-	u32			   srv_req_portal;
+	u32				srv_req_portal;
 	/** Portal on the client to send replies to */
-	u32			   srv_rep_portal;
+	u32				srv_rep_portal;
 	/**
 	 * Tags for lu_context associated with this thread, see struct
 	 * lu_context.
 	 */
-	u32			   srv_ctx_tags;
+	u32				srv_ctx_tags;
 	/** soft watchdog timeout multiplier */
-	int			     srv_watchdog_factor;
+	int				srv_watchdog_factor;
 	/** under unregister_service */
 	unsigned			srv_is_stopping:1;
 
@@ -1524,14 +1525,14 @@ struct ptlrpc_service_part {
 	/** # running threads */
 	int				scp_nthrs_running;
 	/** service threads list */
-	struct list_head			scp_threads;
+	struct list_head		scp_threads;
 
 	/**
 	 * serialize the following fields, used for protecting
 	 * rqbd list and incoming requests waiting for preprocess,
 	 * threads starting & stopping are also protected by this lock.
 	 */
-	spinlock_t scp_lock __cfs_cacheline_aligned;
+	spinlock_t			scp_lock __cfs_cacheline_aligned;
 	/** total # req buffer descs allocated */
 	int				scp_nrqbds_total;
 	/** # posted request buffers for receiving */
@@ -1541,23 +1542,23 @@ struct ptlrpc_service_part {
 	/** # incoming reqs */
 	int				scp_nreqs_incoming;
 	/** request buffers to be reposted */
-	struct list_head			scp_rqbd_idle;
+	struct list_head		scp_rqbd_idle;
 	/** req buffers receiving */
-	struct list_head			scp_rqbd_posted;
+	struct list_head		scp_rqbd_posted;
 	/** incoming reqs */
-	struct list_head			scp_req_incoming;
+	struct list_head		scp_req_incoming;
 	/** timeout before re-posting reqs, in tick */
-	long			scp_rqbd_timeout;
+	long				scp_rqbd_timeout;
 	/**
 	 * all threads sleep on this. This wait-queue is signalled when new
 	 * incoming request arrives and when difficult reply has to be handled.
 	 */
-	wait_queue_head_t			scp_waitq;
+	wait_queue_head_t		scp_waitq;
 
 	/** request history */
-	struct list_head			scp_hist_reqs;
+	struct list_head		scp_hist_reqs;
 	/** request buffer history */
-	struct list_head			scp_hist_rqbds;
+	struct list_head		scp_hist_rqbds;
 	/** # request buffers in history */
 	int				scp_hist_nrqbds;
 	/** sequence number for request */
@@ -1610,11 +1611,11 @@ struct ptlrpc_service_part {
 	 */
 	spinlock_t			scp_rep_lock __cfs_cacheline_aligned;
 	/** all the active replies */
-	struct list_head			scp_rep_active;
+	struct list_head		scp_rep_active;
 	/** List of free reply_states */
-	struct list_head			scp_rep_idle;
+	struct list_head		scp_rep_idle;
 	/** waitq to run, when adding stuff to srv_free_rs_list */
-	wait_queue_head_t			scp_rep_waitq;
+	wait_queue_head_t		scp_rep_waitq;
 	/** # 'difficult' replies */
 	atomic_t			scp_nreps_difficult;
 };
@@ -1648,11 +1649,11 @@ struct ptlrpcd_ctl {
 	/**
 	 * Thread requests set.
 	 */
-	struct ptlrpc_request_set  *pc_set;
+	struct ptlrpc_request_set	*pc_set;
 	/**
 	 * Thread name used in kthread_run()
 	 */
-	char			pc_name[16];
+	char				pc_name[16];
 	/**
 	 * CPT the thread is bound on.
 	 */
@@ -1664,7 +1665,7 @@ struct ptlrpcd_ctl {
 	/**
 	 * Pointer to the array of partners' ptlrpcd_ctl structure.
 	 */
-	struct ptlrpcd_ctl	**pc_partners;
+	struct ptlrpcd_ctl		**pc_partners;
 	/**
 	 * Number of the ptlrpcd's partners.
 	 */
@@ -1672,7 +1673,7 @@ struct ptlrpcd_ctl {
 	/**
 	 * Record the partner index to be processed next.
 	 */
-	int			 pc_cursor;
+	int				pc_cursor;
 	/**
 	 * Error code if the thread failed to fully start.
 	 */
@@ -1777,7 +1778,7 @@ struct ptlrpc_connection *ptlrpc_connection_get(struct lnet_process_id peer,
 static inline int ptlrpc_client_bulk_active(struct ptlrpc_request *req)
 {
 	struct ptlrpc_bulk_desc *desc;
-	int		      rc;
+	int rc;
 
 	desc = req->rq_bulk;
 
@@ -1793,8 +1794,9 @@ static inline int ptlrpc_client_bulk_active(struct ptlrpc_request *req)
 	return rc;
 }
 
-#define PTLRPC_REPLY_MAYBE_DIFFICULT 0x01
-#define PTLRPC_REPLY_EARLY	   0x02
+#define PTLRPC_REPLY_MAYBE_DIFFICULT	0x01
+#define PTLRPC_REPLY_EARLY		0x02
+
 int ptlrpc_send_reply(struct ptlrpc_request *req, int flags);
 int ptlrpc_reply(struct ptlrpc_request *req);
 int ptlrpc_send_error(struct ptlrpc_request *req, int difficult);
diff --git a/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h b/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
index 0db4345f..1c47c80 100644
--- a/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
+++ b/drivers/staging/lustre/lustre/include/lustre_nrs_fifo.h
@@ -63,8 +63,8 @@ struct nrs_fifo_head {
 };
 
 struct nrs_fifo_req {
-	struct list_head	fr_list;
-	u64			fr_sequence;
+	struct list_head		fr_list;
+	u64				fr_sequence;
 };
 
 /** @} fifo */
diff --git a/drivers/staging/lustre/lustre/include/lustre_req_layout.h b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
index 2aba99f..57ac618 100644
--- a/drivers/staging/lustre/lustre/include/lustre_req_layout.h
+++ b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
@@ -63,10 +63,10 @@ enum req_location {
 #define REQ_MAX_FIELD_NR 10
 
 struct req_capsule {
-	struct ptlrpc_request   *rc_req;
-	const struct req_format *rc_fmt;
-	enum req_location	rc_loc;
-	u32		    rc_area[RCL_NR][REQ_MAX_FIELD_NR];
+	struct ptlrpc_request		*rc_req;
+	const struct req_format		*rc_fmt;
+	enum req_location		 rc_loc;
+	u32				 rc_area[RCL_NR][REQ_MAX_FIELD_NR];
 };
 
 void req_capsule_init(struct req_capsule *pill, struct ptlrpc_request *req,
diff --git a/drivers/staging/lustre/lustre/include/lustre_sec.h b/drivers/staging/lustre/lustre/include/lustre_sec.h
index c622c8d..5a5625e 100644
--- a/drivers/staging/lustre/lustre/include/lustre_sec.h
+++ b/drivers/staging/lustre/lustre/include/lustre_sec.h
@@ -85,25 +85,25 @@
  * flavor constants
  */
 enum sptlrpc_policy {
-	SPTLRPC_POLICY_NULL	     = 0,
-	SPTLRPC_POLICY_PLAIN	    = 1,
-	SPTLRPC_POLICY_GSS	      = 2,
+	SPTLRPC_POLICY_NULL		= 0,
+	SPTLRPC_POLICY_PLAIN		= 1,
+	SPTLRPC_POLICY_GSS		= 2,
 	SPTLRPC_POLICY_MAX,
 };
 
 enum sptlrpc_mech_null {
-	SPTLRPC_MECH_NULL	       = 0,
+	SPTLRPC_MECH_NULL		= 0,
 	SPTLRPC_MECH_NULL_MAX,
 };
 
 enum sptlrpc_mech_plain {
-	SPTLRPC_MECH_PLAIN	      = 0,
+	SPTLRPC_MECH_PLAIN		= 0,
 	SPTLRPC_MECH_PLAIN_MAX,
 };
 
 enum sptlrpc_mech_gss {
-	SPTLRPC_MECH_GSS_NULL	   = 0,
-	SPTLRPC_MECH_GSS_KRB5	   = 1,
+	SPTLRPC_MECH_GSS_NULL		= 0,
+	SPTLRPC_MECH_GSS_KRB5		= 1,
 	SPTLRPC_MECH_GSS_MAX,
 };
 
@@ -116,113 +116,113 @@ enum sptlrpc_service_type {
 };
 
 enum sptlrpc_bulk_type {
-	SPTLRPC_BULK_DEFAULT	    = 0,    /**< follow rpc flavor */
-	SPTLRPC_BULK_HASH	       = 1,    /**< hash integrity */
+	SPTLRPC_BULK_DEFAULT		= 0,	/**< follow rpc flavor */
+	SPTLRPC_BULK_HASH		= 1,	/**< hash integrity */
 	SPTLRPC_BULK_MAX,
 };
 
 enum sptlrpc_bulk_service {
-	SPTLRPC_BULK_SVC_NULL	   = 0,    /**< no security */
-	SPTLRPC_BULK_SVC_AUTH	   = 1,    /**< authentication only */
-	SPTLRPC_BULK_SVC_INTG	   = 2,    /**< integrity */
-	SPTLRPC_BULK_SVC_PRIV	   = 3,    /**< privacy */
+	SPTLRPC_BULK_SVC_NULL		= 0,	/**< no security */
+	SPTLRPC_BULK_SVC_AUTH		= 1,	/**< authentication only */
+	SPTLRPC_BULK_SVC_INTG		= 2,	/**< integrity */
+	SPTLRPC_BULK_SVC_PRIV		= 3,	/**< privacy */
 	SPTLRPC_BULK_SVC_MAX,
 };
 
 /*
  * compose/extract macros
  */
-#define FLVR_POLICY_OFFSET	      (0)
+#define FLVR_POLICY_OFFSET		(0)
 #define FLVR_MECH_OFFSET		(4)
-#define FLVR_SVC_OFFSET		 (8)
-#define FLVR_BULK_TYPE_OFFSET	   (12)
-#define FLVR_BULK_SVC_OFFSET	    (16)
-
-#define MAKE_FLVR(policy, mech, svc, btype, bsvc)		       \
-	(((u32)(policy) << FLVR_POLICY_OFFSET) |		      \
-	 ((u32)(mech) << FLVR_MECH_OFFSET) |			  \
-	 ((u32)(svc) << FLVR_SVC_OFFSET) |			    \
-	 ((u32)(btype) << FLVR_BULK_TYPE_OFFSET) |		    \
+#define FLVR_SVC_OFFSET			(8)
+#define FLVR_BULK_TYPE_OFFSET		(12)
+#define FLVR_BULK_SVC_OFFSET		(16)
+
+#define MAKE_FLVR(policy, mech, svc, btype, bsvc)	\
+	(((u32)(policy) << FLVR_POLICY_OFFSET) |	\
+	 ((u32)(mech) << FLVR_MECH_OFFSET) |		\
+	 ((u32)(svc) << FLVR_SVC_OFFSET) |		\
+	 ((u32)(btype) << FLVR_BULK_TYPE_OFFSET) |	\
 	 ((u32)(bsvc) << FLVR_BULK_SVC_OFFSET))
 
 /*
  * extraction
  */
-#define SPTLRPC_FLVR_POLICY(flavor)				     \
+#define SPTLRPC_FLVR_POLICY(flavor)			\
 	((((u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xF)
-#define SPTLRPC_FLVR_MECH(flavor)				       \
+#define SPTLRPC_FLVR_MECH(flavor)			\
 	((((u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xF)
-#define SPTLRPC_FLVR_SVC(flavor)					\
+#define SPTLRPC_FLVR_SVC(flavor)			\
 	((((u32)(flavor)) >> FLVR_SVC_OFFSET) & 0xF)
-#define SPTLRPC_FLVR_BULK_TYPE(flavor)				  \
+#define SPTLRPC_FLVR_BULK_TYPE(flavor)			\
 	((((u32)(flavor)) >> FLVR_BULK_TYPE_OFFSET) & 0xF)
-#define SPTLRPC_FLVR_BULK_SVC(flavor)				   \
+#define SPTLRPC_FLVR_BULK_SVC(flavor)			\
 	((((u32)(flavor)) >> FLVR_BULK_SVC_OFFSET) & 0xF)
 
-#define SPTLRPC_FLVR_BASE(flavor)				       \
+#define SPTLRPC_FLVR_BASE(flavor)			\
 	((((u32)(flavor)) >> FLVR_POLICY_OFFSET) & 0xFFF)
-#define SPTLRPC_FLVR_BASE_SUB(flavor)				   \
+#define SPTLRPC_FLVR_BASE_SUB(flavor)			\
 	((((u32)(flavor)) >> FLVR_MECH_OFFSET) & 0xFF)
 
 /*
  * gss subflavors
  */
-#define MAKE_BASE_SUBFLVR(mech, svc)				    \
-	((u32)(mech) |						\
+#define MAKE_BASE_SUBFLVR(mech, svc)			\
+	((u32)(mech) |					\
 	 ((u32)(svc) << (FLVR_SVC_OFFSET - FLVR_MECH_OFFSET)))
 
-#define SPTLRPC_SUBFLVR_KRB5N					   \
+#define SPTLRPC_SUBFLVR_KRB5N				\
 	MAKE_BASE_SUBFLVR(SPTLRPC_MECH_GSS_KRB5, SPTLRPC_SVC_NULL)
-#define SPTLRPC_SUBFLVR_KRB5A					   \
+#define SPTLRPC_SUBFLVR_KRB5A				\
 	MAKE_BASE_SUBFLVR(SPTLRPC_MECH_GSS_KRB5, SPTLRPC_SVC_AUTH)
-#define SPTLRPC_SUBFLVR_KRB5I					   \
+#define SPTLRPC_SUBFLVR_KRB5I				\
 	MAKE_BASE_SUBFLVR(SPTLRPC_MECH_GSS_KRB5, SPTLRPC_SVC_INTG)
-#define SPTLRPC_SUBFLVR_KRB5P					   \
+#define SPTLRPC_SUBFLVR_KRB5P				\
 	MAKE_BASE_SUBFLVR(SPTLRPC_MECH_GSS_KRB5, SPTLRPC_SVC_PRIV)
 
 /*
  * "end user" flavors
  */
-#define SPTLRPC_FLVR_NULL			       \
-	MAKE_FLVR(SPTLRPC_POLICY_NULL,		  \
-		  SPTLRPC_MECH_NULL,		    \
-		  SPTLRPC_SVC_NULL,		     \
-		  SPTLRPC_BULK_DEFAULT,		 \
+#define SPTLRPC_FLVR_NULL				\
+	MAKE_FLVR(SPTLRPC_POLICY_NULL,			\
+		  SPTLRPC_MECH_NULL,			\
+		  SPTLRPC_SVC_NULL,			\
+		  SPTLRPC_BULK_DEFAULT,			\
 		  SPTLRPC_BULK_SVC_NULL)
-#define SPTLRPC_FLVR_PLAIN			      \
-	MAKE_FLVR(SPTLRPC_POLICY_PLAIN,		 \
-		  SPTLRPC_MECH_PLAIN,		   \
-		  SPTLRPC_SVC_NULL,		     \
-		  SPTLRPC_BULK_HASH,		    \
+#define SPTLRPC_FLVR_PLAIN				\
+	MAKE_FLVR(SPTLRPC_POLICY_PLAIN,			\
+		  SPTLRPC_MECH_PLAIN,			\
+		  SPTLRPC_SVC_NULL,			\
+		  SPTLRPC_BULK_HASH,			\
 		  SPTLRPC_BULK_SVC_INTG)
-#define SPTLRPC_FLVR_KRB5N			      \
-	MAKE_FLVR(SPTLRPC_POLICY_GSS,		   \
+#define SPTLRPC_FLVR_KRB5N				\
+	MAKE_FLVR(SPTLRPC_POLICY_GSS,			\
 		  SPTLRPC_MECH_GSS_KRB5,		\
-		  SPTLRPC_SVC_NULL,		     \
-		  SPTLRPC_BULK_DEFAULT,		 \
+		  SPTLRPC_SVC_NULL,			\
+		  SPTLRPC_BULK_DEFAULT,			\
 		  SPTLRPC_BULK_SVC_NULL)
-#define SPTLRPC_FLVR_KRB5A			      \
-	MAKE_FLVR(SPTLRPC_POLICY_GSS,		   \
+#define SPTLRPC_FLVR_KRB5A				\
+	MAKE_FLVR(SPTLRPC_POLICY_GSS,			\
 		  SPTLRPC_MECH_GSS_KRB5,		\
-		  SPTLRPC_SVC_AUTH,		     \
-		  SPTLRPC_BULK_DEFAULT,		 \
+		  SPTLRPC_SVC_AUTH,			\
+		  SPTLRPC_BULK_DEFAULT,			\
 		  SPTLRPC_BULK_SVC_NULL)
-#define SPTLRPC_FLVR_KRB5I			      \
-	MAKE_FLVR(SPTLRPC_POLICY_GSS,		   \
+#define SPTLRPC_FLVR_KRB5I				\
+	MAKE_FLVR(SPTLRPC_POLICY_GSS,			\
 		  SPTLRPC_MECH_GSS_KRB5,		\
-		  SPTLRPC_SVC_INTG,		     \
-		  SPTLRPC_BULK_DEFAULT,		 \
+		  SPTLRPC_SVC_INTG,			\
+		  SPTLRPC_BULK_DEFAULT,			\
 		  SPTLRPC_BULK_SVC_INTG)
-#define SPTLRPC_FLVR_KRB5P			      \
-	MAKE_FLVR(SPTLRPC_POLICY_GSS,		   \
+#define SPTLRPC_FLVR_KRB5P				\
+	MAKE_FLVR(SPTLRPC_POLICY_GSS,			\
 		  SPTLRPC_MECH_GSS_KRB5,		\
-		  SPTLRPC_SVC_PRIV,		     \
-		  SPTLRPC_BULK_DEFAULT,		 \
+		  SPTLRPC_SVC_PRIV,			\
+		  SPTLRPC_BULK_DEFAULT,			\
 		  SPTLRPC_BULK_SVC_PRIV)
 
-#define SPTLRPC_FLVR_DEFAULT	    SPTLRPC_FLVR_NULL
+#define SPTLRPC_FLVR_DEFAULT		SPTLRPC_FLVR_NULL
 
-#define SPTLRPC_FLVR_INVALID	    ((u32)0xFFFFFFFF)
+#define SPTLRPC_FLVR_INVALID		((u32)0xFFFFFFFF)
 #define SPTLRPC_FLVR_ANY		((u32)0xFFF00000)
 
 /**
@@ -253,7 +253,7 @@ static inline void flvr_set_bulk_svc(u32 *flvr, u32 svc)
 }
 
 struct bulk_spec_hash {
-	u8    hash_alg;
+	u8	hash_alg;
 };
 
 /**
@@ -264,11 +264,11 @@ struct sptlrpc_flavor {
 	/**
 	 * wire flavor, should be renamed to sf_wire.
 	 */
-	u32   sf_rpc;
+	u32	sf_rpc;
 	/**
 	 * general flags of PTLRPC_SEC_FL_*
 	 */
-	u32   sf_flags;
+	u32	sf_flags;
 	/**
 	 * rpc flavor specification
 	 */
@@ -288,12 +288,12 @@ struct sptlrpc_flavor {
  * RPC requests and to be checked by ptlrpc service.
  */
 enum lustre_sec_part {
-	LUSTRE_SP_CLI	   = 0,
+	LUSTRE_SP_CLI	= 0,
 	LUSTRE_SP_MDT,
 	LUSTRE_SP_OST,
 	LUSTRE_SP_MGC,
 	LUSTRE_SP_MGS,
-	LUSTRE_SP_ANY	   = 0xFF
+	LUSTRE_SP_ANY	= 0xFF
 };
 
 enum lustre_sec_part sptlrpc_target_sec_part(struct obd_device *obd);
@@ -303,11 +303,11 @@ enum lustre_sec_part {
  * two Lustre parts.
  */
 struct sptlrpc_rule {
-	u32		   sr_netid;   /* LNET network ID */
-	u8		    sr_from;    /* sec_part */
-	u8		    sr_to;      /* sec_part */
-	u16		   sr_padding;
-	struct sptlrpc_flavor   sr_flvr;
+	u32			sr_netid;	/* LNET network ID */
+	u8			sr_from;	/* sec_part */
+	u8			sr_to;		/* sec_part */
+	u16			sr_padding;
+	struct sptlrpc_flavor	sr_flvr;
 };
 
 /**
@@ -317,8 +317,8 @@ struct sptlrpc_rule {
  * and client when needed.
  */
 struct sptlrpc_rule_set {
-	int		     srs_nslot;
-	int		     srs_nrule;
+	int			srs_nslot;
+	int			srs_nrule;
 	struct sptlrpc_rule    *srs_rules;
 };
 
@@ -460,37 +460,37 @@ struct ptlrpc_ctx_ops {
 			   struct ptlrpc_bulk_desc *desc);
 };
 
-#define PTLRPC_CTX_NEW_BIT	     (0)  /* newly created */
-#define PTLRPC_CTX_UPTODATE_BIT	(1)  /* uptodate */
-#define PTLRPC_CTX_DEAD_BIT	    (2)  /* mark expired gracefully */
-#define PTLRPC_CTX_ERROR_BIT	   (3)  /* fatal error (refresh, etc.) */
-#define PTLRPC_CTX_CACHED_BIT	  (8)  /* in ctx cache (hash etc.) */
-#define PTLRPC_CTX_ETERNAL_BIT	 (9)  /* always valid */
-
-#define PTLRPC_CTX_NEW		 (1 << PTLRPC_CTX_NEW_BIT)
-#define PTLRPC_CTX_UPTODATE	    (1 << PTLRPC_CTX_UPTODATE_BIT)
-#define PTLRPC_CTX_DEAD		(1 << PTLRPC_CTX_DEAD_BIT)
-#define PTLRPC_CTX_ERROR	       (1 << PTLRPC_CTX_ERROR_BIT)
-#define PTLRPC_CTX_CACHED	      (1 << PTLRPC_CTX_CACHED_BIT)
-#define PTLRPC_CTX_ETERNAL	     (1 << PTLRPC_CTX_ETERNAL_BIT)
-
-#define PTLRPC_CTX_STATUS_MASK	 (PTLRPC_CTX_NEW_BIT    |       \
-					PTLRPC_CTX_UPTODATE   |       \
-					PTLRPC_CTX_DEAD       |       \
+#define PTLRPC_CTX_NEW_BIT		(0)  /* newly created */
+#define PTLRPC_CTX_UPTODATE_BIT		(1)  /* uptodate */
+#define PTLRPC_CTX_DEAD_BIT		(2)  /* mark expired gracefully */
+#define PTLRPC_CTX_ERROR_BIT		(3)  /* fatal error (refresh, etc.) */
+#define PTLRPC_CTX_CACHED_BIT		(8)  /* in ctx cache (hash etc.) */
+#define PTLRPC_CTX_ETERNAL_BIT		(9)  /* always valid */
+
+#define PTLRPC_CTX_NEW			(1 << PTLRPC_CTX_NEW_BIT)
+#define PTLRPC_CTX_UPTODATE		(1 << PTLRPC_CTX_UPTODATE_BIT)
+#define PTLRPC_CTX_DEAD			(1 << PTLRPC_CTX_DEAD_BIT)
+#define PTLRPC_CTX_ERROR		(1 << PTLRPC_CTX_ERROR_BIT)
+#define PTLRPC_CTX_CACHED		(1 << PTLRPC_CTX_CACHED_BIT)
+#define PTLRPC_CTX_ETERNAL		(1 << PTLRPC_CTX_ETERNAL_BIT)
+
+#define PTLRPC_CTX_STATUS_MASK	       (PTLRPC_CTX_NEW_BIT	| \
+					PTLRPC_CTX_UPTODATE	| \
+					PTLRPC_CTX_DEAD		| \
 					PTLRPC_CTX_ERROR)
 
 struct ptlrpc_cli_ctx {
-	struct hlist_node	cc_cache;      /* linked into ctx cache */
-	atomic_t	    cc_refcount;
+	struct hlist_node	cc_cache;	/* linked into ctx cache */
+	atomic_t		cc_refcount;
 	struct ptlrpc_sec      *cc_sec;
 	struct ptlrpc_ctx_ops  *cc_ops;
-	unsigned long	      cc_expire;     /* in seconds */
-	unsigned int	    cc_early_expire:1;
-	unsigned long	   cc_flags;
-	struct vfs_cred	 cc_vcred;
+	unsigned long		cc_expire;	/* in seconds */
+	unsigned int		cc_early_expire:1;
+	unsigned long		cc_flags;
+	struct vfs_cred		cc_vcred;
 	spinlock_t		cc_lock;
-	struct list_head	      cc_req_list;   /* waiting reqs linked here */
-	struct list_head	      cc_gc_chain;   /* linked to gc chain */
+	struct list_head	cc_req_list;	/* waiting reqs linked here */
+	struct list_head	cc_gc_chain;	/* linked to gc chain */
 };
 
 /**
@@ -755,18 +755,18 @@ struct ptlrpc_sec_sops {
 };
 
 struct ptlrpc_sec_policy {
-	struct module		   *sp_owner;
-	char			   *sp_name;
-	u16			   sp_policy; /* policy number */
-	struct ptlrpc_sec_cops	 *sp_cops;   /* client ops */
-	struct ptlrpc_sec_sops	 *sp_sops;   /* server ops */
+	struct module			*sp_owner;
+	char				*sp_name;
+	u16				 sp_policy; /* policy number */
+	struct ptlrpc_sec_cops		*sp_cops;   /* client ops */
+	struct ptlrpc_sec_sops		*sp_sops;   /* server ops */
 };
 
-#define PTLRPC_SEC_FL_REVERSE	   0x0001 /* reverse sec */
-#define PTLRPC_SEC_FL_ROOTONLY	  0x0002 /* treat everyone as root */
-#define PTLRPC_SEC_FL_UDESC	     0x0004 /* ship udesc */
-#define PTLRPC_SEC_FL_BULK	      0x0008 /* intensive bulk i/o expected */
-#define PTLRPC_SEC_FL_PAG	       0x0010 /* PAG mode */
+#define PTLRPC_SEC_FL_REVERSE		0x0001 /* reverse sec */
+#define PTLRPC_SEC_FL_ROOTONLY		0x0002 /* treat everyone as root */
+#define PTLRPC_SEC_FL_UDESC		0x0004 /* ship udesc */
+#define PTLRPC_SEC_FL_BULK		0x0008 /* intensive bulk i/o expected */
+#define PTLRPC_SEC_FL_PAG		0x0010 /* PAG mode */
 
 /**
  * The ptlrpc_sec represents the client side ptlrpc security facilities,
@@ -777,25 +777,25 @@ struct ptlrpc_sec_policy {
  */
 struct ptlrpc_sec {
 	struct ptlrpc_sec_policy       *ps_policy;
-	atomic_t		    ps_refcount;
+	atomic_t			ps_refcount;
 	/** statistic only */
-	atomic_t		    ps_nctx;
+	atomic_t			ps_nctx;
 	/** unique identifier */
-	int			     ps_id;
-	struct sptlrpc_flavor	   ps_flvr;
-	enum lustre_sec_part	    ps_part;
+	int				ps_id;
+	struct sptlrpc_flavor		ps_flvr;
+	enum lustre_sec_part		ps_part;
 	/** after set, no more new context will be created */
-	unsigned int		    ps_dying:1;
+	unsigned int			ps_dying:1;
 	/** owning import */
-	struct obd_import	      *ps_import;
+	struct obd_import	       *ps_import;
 	spinlock_t			ps_lock;
 
 	/*
 	 * garbage collection
 	 */
-	struct list_head		      ps_gc_list;
-	unsigned long		      ps_gc_interval; /* in seconds */
-	time64_t		      ps_gc_next;     /* in seconds */
+	struct list_head		ps_gc_list;
+	unsigned long			ps_gc_interval; /* in seconds */
+	time64_t			ps_gc_next;     /* in seconds */
 };
 
 static inline int sec_is_reverse(struct ptlrpc_sec *sec)
@@ -809,30 +809,30 @@ static inline int sec_is_rootonly(struct ptlrpc_sec *sec)
 }
 
 struct ptlrpc_svc_ctx {
-	atomic_t		    sc_refcount;
+	atomic_t			sc_refcount;
 	struct ptlrpc_sec_policy       *sc_policy;
 };
 
 /*
  * user identity descriptor
  */
-#define LUSTRE_MAX_GROUPS	       (128)
+#define LUSTRE_MAX_GROUPS		(128)
 
 struct ptlrpc_user_desc {
-	u32	   pud_uid;
-	u32	   pud_gid;
-	u32	   pud_fsuid;
-	u32	   pud_fsgid;
-	u32	   pud_cap;
-	u32	   pud_ngroups;
-	u32	   pud_groups[0];
+	u32	pud_uid;
+	u32	pud_gid;
+	u32	pud_fsuid;
+	u32	pud_fsgid;
+	u32	pud_cap;
+	u32	pud_ngroups;
+	u32	pud_groups[0];
 };
 
 /*
  * bulk flavors
  */
 enum sptlrpc_bulk_hash_alg {
-	BULK_HASH_ALG_NULL      = 0,
+	BULK_HASH_ALG_NULL	= 0,
 	BULK_HASH_ALG_ADLER32,
 	BULK_HASH_ALG_CRC32,
 	BULK_HASH_ALG_MD5,
@@ -847,16 +847,16 @@ enum sptlrpc_bulk_hash_alg {
 u8 sptlrpc_get_hash_alg(const char *algname);
 
 enum {
-	BSD_FL_ERR      = 1,
+	BSD_FL_ERR	= 1,
 };
 
 struct ptlrpc_bulk_sec_desc {
-	u8	    bsd_version;    /* 0 */
-	u8	    bsd_type;       /* SPTLRPC_BULK_XXX */
-	u8	    bsd_svc;	/* SPTLRPC_BULK_SVC_XXXX */
-	u8	    bsd_flags;      /* flags */
-	u32	   bsd_nob;	/* nob of bulk data */
-	u8	    bsd_data[0];    /* policy-specific token */
+	u8	bsd_version;	/* 0 */
+	u8	bsd_type;	/* SPTLRPC_BULK_XXX */
+	u8	bsd_svc;	/* SPTLRPC_BULK_SVC_XXXX */
+	u8	bsd_flags;	/* flags */
+	u32	bsd_nob;	/* nob of bulk data */
+	u8	bsd_data[0];	/* policy-specific token */
 };
 
 /*
@@ -979,8 +979,8 @@ int cli_ctx_is_eternal(struct ptlrpc_cli_ctx *ctx)
 int sptlrpc_cli_enlarge_reqbuf(struct ptlrpc_request *req,
 			       const struct req_msg_field *field,
 			       int newsize);
-int  sptlrpc_cli_unwrap_early_reply(struct ptlrpc_request *req,
-				    struct ptlrpc_request **req_ret);
+int sptlrpc_cli_unwrap_early_reply(struct ptlrpc_request *req,
+				   struct ptlrpc_request **req_ret);
 void sptlrpc_cli_finish_early_reply(struct ptlrpc_request *early_req);
 
 void sptlrpc_request_out_callback(struct ptlrpc_request *req);
@@ -994,13 +994,13 @@ int sptlrpc_import_sec_adapt(struct obd_import *imp,
 struct ptlrpc_sec *sptlrpc_import_sec_ref(struct obd_import *imp);
 void sptlrpc_import_sec_put(struct obd_import *imp);
 
-int  sptlrpc_import_check_ctx(struct obd_import *imp);
+int sptlrpc_import_check_ctx(struct obd_import *imp);
 void sptlrpc_import_flush_root_ctx(struct obd_import *imp);
 void sptlrpc_import_flush_my_ctx(struct obd_import *imp);
 void sptlrpc_import_flush_all_ctx(struct obd_import *imp);
-int  sptlrpc_req_get_ctx(struct ptlrpc_request *req);
+int sptlrpc_req_get_ctx(struct ptlrpc_request *req);
 void sptlrpc_req_put_ctx(struct ptlrpc_request *req, int sync);
-int  sptlrpc_req_refresh_ctx(struct ptlrpc_request *req, long timeout);
+int sptlrpc_req_refresh_ctx(struct ptlrpc_request *req, long timeout);
 void sptlrpc_req_set_flavor(struct ptlrpc_request *req, int opcode);
 
 /* gc */
@@ -1023,15 +1023,15 @@ enum secsvc_accept_res {
 	SECSVC_DROP,
 };
 
-int  sptlrpc_svc_unwrap_request(struct ptlrpc_request *req);
-int  sptlrpc_svc_alloc_rs(struct ptlrpc_request *req, int msglen);
-int  sptlrpc_svc_wrap_reply(struct ptlrpc_request *req);
+int sptlrpc_svc_unwrap_request(struct ptlrpc_request *req);
+int sptlrpc_svc_alloc_rs(struct ptlrpc_request *req, int msglen);
+int sptlrpc_svc_wrap_reply(struct ptlrpc_request *req);
 void sptlrpc_svc_free_rs(struct ptlrpc_reply_state *rs);
 void sptlrpc_svc_ctx_addref(struct ptlrpc_request *req);
 void sptlrpc_svc_ctx_decref(struct ptlrpc_request *req);
 
-int  sptlrpc_target_export_check(struct obd_export *exp,
-				 struct ptlrpc_request *req);
+int sptlrpc_target_export_check(struct obd_export *exp,
+				struct ptlrpc_request *req);
 
 /* bulk security api */
 void sptlrpc_enc_pool_put_pages(struct ptlrpc_bulk_desc *desc);
@@ -1063,10 +1063,10 @@ static inline int sptlrpc_user_desc_size(int ngroups)
 int sptlrpc_unpack_user_desc(struct lustre_msg *req, int offset, int swabbed);
 
 enum {
-	LUSTRE_SEC_NONE	 = 0,
+	LUSTRE_SEC_NONE		= 0,
 	LUSTRE_SEC_REMOTE       = 1,
 	LUSTRE_SEC_SPECIFY      = 2,
-	LUSTRE_SEC_ALL	  = 3
+	LUSTRE_SEC_ALL		= 3
 };
 
 /** @} sptlrpc */
-- 
1.8.3.1


* [lustre-devel] [PATCH 22/26] lustre: last batch to cleanup white spaces in internal headers
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (20 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 21/26] lustre: second " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 23/26] libcfs: cleanup white spaces James Simmons
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The internal headers are very messy and difficult to read. Remove
excess white space and properly align data structures so they are
easier on the eyes. The changes touch so many lines that they are
split into batches; this is the last one.
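
For illustration, here is a minimal sketch of the alignment convention
being applied; struct foo_stats is a made-up example, not a structure
taken from this patch:

/* Hypothetical example -- foo_stats is not a Lustre structure. */

/* Before: ragged, space-based alignment */
struct foo_stats {
	unsigned long    fs_hits;      /* cache hits */
	int   fs_refcount;
	char            *fs_name;
};

/* After: member names in a single tab-aligned column */
struct foo_stats {
	unsigned long	 fs_hits;	/* cache hits */
	int		 fs_refcount;
	char		*fs_name;
};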

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/obd.h        | 509 +++++++-------
 drivers/staging/lustre/lustre/include/obd_cksum.h  |   4 +-
 drivers/staging/lustre/lustre/include/obd_class.h  |  82 +--
 .../staging/lustre/lustre/include/obd_support.h    | 744 ++++++++++-----------
 drivers/staging/lustre/lustre/include/seq_range.h  |   2 +-
 5 files changed, 671 insertions(+), 670 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/obd.h b/drivers/staging/lustre/lustre/include/obd.h
index 0bb3cf8..171d2c2 100644
--- a/drivers/staging/lustre/lustre/include/obd.h
+++ b/drivers/staging/lustre/lustre/include/obd.h
@@ -53,20 +53,20 @@
 #define MAX_OBD_DEVICES 8192
 
 struct osc_async_rc {
-	int     ar_rc;
-	int     ar_force_sync;
-	u64   ar_min_xid;
+	int			ar_rc;
+	int			ar_force_sync;
+	u64			ar_min_xid;
 };
 
-struct lov_oinfo {		 /* per-stripe data structure */
-	struct ost_id   loi_oi;    /* object ID/Sequence on the target OST */
-	int loi_ost_idx;	   /* OST stripe index in lov_tgt_desc->tgts */
-	int loi_ost_gen;	   /* generation of this loi_ost_idx */
+struct lov_oinfo {				/* per-stripe data structure */
+	struct ost_id		loi_oi;		/* object ID/Sequence on the target OST */
+	int			loi_ost_idx;	/* OST stripe index in lov_tgt_desc->tgts */
+	int			loi_ost_gen;	/* generation of this loi_ost_idx */
 
-	unsigned long loi_kms_valid:1;
-	u64 loi_kms;	     /* known minimum size */
-	struct ost_lvb loi_lvb;
-	struct osc_async_rc     loi_ar;
+	unsigned long		loi_kms_valid:1;
+	u64			loi_kms;	/* known minimum size */
+	struct ost_lvb		loi_lvb;
+	struct osc_async_rc	loi_ar;
 };
 
 static inline void loi_kms_set(struct lov_oinfo *oinfo, u64 kms)
@@ -85,7 +85,7 @@ static inline void loi_kms_set(struct lov_oinfo *oinfo, u64 kms)
 /* obd info for a particular level (lov, osc). */
 struct obd_info {
 	/* OBD_STATFS_* flags */
-	u64		   oi_flags;
+	u64			oi_flags;
 	/* lsm data specific for every OSC. */
 	struct lov_stripe_md   *oi_md;
 	/* statfs data specific for every OSC, if needed at all. */
@@ -99,31 +99,31 @@ struct obd_info {
 };
 
 struct obd_type {
-	struct list_head typ_chain;
-	struct obd_ops *typ_dt_ops;
-	struct md_ops *typ_md_ops;
-	struct dentry *typ_debugfs_entry;
-	char *typ_name;
-	int  typ_refcnt;
-	struct lu_device_type *typ_lu;
-	spinlock_t obd_type_lock;
+	struct list_head	 typ_chain;
+	struct obd_ops		*typ_dt_ops;
+	struct md_ops		*typ_md_ops;
+	struct dentry		*typ_debugfs_entry;
+	char			*typ_name;
+	int			 typ_refcnt;
+	struct lu_device_type	*typ_lu;
+	spinlock_t		 obd_type_lock;
 	struct kobject		*typ_kobj;
 };
 
 struct brw_page {
-	u64 off;
-	struct page *pg;
-	unsigned int count;
-	u32 flag;
+	u64			off;
+	struct page	       *pg;
+	unsigned int		count;
+	u32			flag;
 };
 
 struct timeout_item {
-	enum timeout_event ti_event;
-	unsigned long	 ti_timeout;
-	timeout_cb_t       ti_cb;
-	void	      *ti_cb_data;
-	struct list_head	 ti_obd_list;
-	struct list_head	 ti_chain;
+	enum timeout_event	ti_event;
+	unsigned long		ti_timeout;
+	timeout_cb_t		ti_cb;
+	void		       *ti_cb_data;
+	struct list_head	ti_obd_list;
+	struct list_head	ti_chain;
 };
 
 #define OBD_MAX_RIF_DEFAULT	8
@@ -135,9 +135,9 @@ struct timeout_item {
 
 /* possible values for fo_sync_lock_cancel */
 enum {
-	NEVER_SYNC_ON_CANCEL = 0,
-	BLOCKING_SYNC_ON_CANCEL = 1,
-	ALWAYS_SYNC_ON_CANCEL = 2,
+	NEVER_SYNC_ON_CANCEL	= 0,
+	BLOCKING_SYNC_ON_CANCEL	= 1,
+	ALWAYS_SYNC_ON_CANCEL	= 2,
 	NUM_SYNC_ON_CANCEL_STATES
 };
 
@@ -159,10 +159,10 @@ enum obd_cl_sem_lock_class {
 struct mdc_rpc_lock;
 struct obd_import;
 struct client_obd {
-	struct rw_semaphore  cl_sem;
-	struct obd_uuid	  cl_target_uuid;
-	struct obd_import       *cl_import; /* ptlrpc connection state */
-	size_t			 cl_conn_count;
+	struct rw_semaphore	cl_sem;
+	struct obd_uuid		cl_target_uuid;
+	struct obd_import      *cl_import; /* ptlrpc connection state */
+	size_t			cl_conn_count;
 	/*
 	 * Cache maximum and default values for easize. This is
 	 * strictly a performance optimization to minimize calls to
@@ -203,22 +203,22 @@ struct client_obd {
 	 * grant before trying to dirty a page and unreserve the rest.
 	 * See osc_{reserve|unreserve}_grant for details.
 	 */
-	long		 cl_reserved_grant;
-	wait_queue_head_t cl_cache_waiters; /* waiting for cache/grant */
-	unsigned long	 cl_next_shrink_grant;   /* jiffies */
-	struct list_head cl_grant_shrink_list;  /* Timeout event list */
-	int		 cl_grant_shrink_interval; /* seconds */
+	long			cl_reserved_grant;
+	wait_queue_head_t	cl_cache_waiters;	/* waiting for cache/grant */
+	unsigned long		cl_next_shrink_grant;   /* jiffies */
+	struct list_head	cl_grant_shrink_list;	/* Timeout event list */
+	int			cl_grant_shrink_interval; /* seconds */
 
 	/* A chunk is an optimal size used by osc_extent to determine
 	 * the extent size. A chunk is max(PAGE_SIZE, OST block size)
 	 */
-	int		  cl_chunkbits;
+	int			cl_chunkbits;
 	/* extent insertion metadata overhead to be accounted in grant,
 	 * in bytes
 	 */
-	unsigned int	 cl_grant_extent_tax;
+	unsigned int		cl_grant_extent_tax;
 	/* maximum extent size, in number of pages */
-	unsigned int	 cl_max_extent_pages;
+	unsigned int		cl_max_extent_pages;
 
 	/* keep track of objects that have lois that contain pages which
 	 * have been queued for async brw.  this lock also protects the
@@ -238,29 +238,29 @@ struct client_obd {
 	 * NB by Jinshan: though field names are still _loi_, but actually
 	 * osc_object{}s are in the list.
 	 */
-	spinlock_t		       cl_loi_list_lock;
-	struct list_head	       cl_loi_ready_list;
-	struct list_head	       cl_loi_hp_ready_list;
-	struct list_head	       cl_loi_write_list;
-	struct list_head	       cl_loi_read_list;
-	u32			 cl_r_in_flight;
-	u32			 cl_w_in_flight;
+	spinlock_t		cl_loi_list_lock;
+	struct list_head	cl_loi_ready_list;
+	struct list_head	cl_loi_hp_ready_list;
+	struct list_head	cl_loi_write_list;
+	struct list_head	cl_loi_read_list;
+	u32			cl_r_in_flight;
+	u32			cl_w_in_flight;
 	/* just a sum of the loi/lop pending numbers to be exported by sysfs */
-	atomic_t	     cl_pending_w_pages;
-	atomic_t	     cl_pending_r_pages;
-	u32			 cl_max_pages_per_rpc;
-	u32			 cl_max_rpcs_in_flight;
-	struct obd_histogram     cl_read_rpc_hist;
-	struct obd_histogram     cl_write_rpc_hist;
-	struct obd_histogram     cl_read_page_hist;
-	struct obd_histogram     cl_write_page_hist;
-	struct obd_histogram     cl_read_offset_hist;
-	struct obd_histogram     cl_write_offset_hist;
+	atomic_t		cl_pending_w_pages;
+	atomic_t		cl_pending_r_pages;
+	u32			cl_max_pages_per_rpc;
+	u32			cl_max_rpcs_in_flight;
+	struct obd_histogram    cl_read_rpc_hist;
+	struct obd_histogram    cl_write_rpc_hist;
+	struct obd_histogram    cl_read_page_hist;
+	struct obd_histogram    cl_write_page_hist;
+	struct obd_histogram    cl_read_offset_hist;
+	struct obd_histogram    cl_write_offset_hist;
 
 	/* LRU for osc caching pages */
 	struct cl_client_cache	*cl_cache;
 	/** member of cl_cache->ccc_lru */
-	struct list_head	 cl_lru_osc;
+	struct list_head	cl_lru_osc;
 	/** # of available LRU slots left in the per-OSC cache.
 	 * Available LRU slots are shared by all OSCs of the same file system,
 	 * therefore this is a pointer to cl_client_cache::ccc_lru_left.
@@ -270,73 +270,74 @@ struct client_obd {
 	 * queue, or in transfer. Busy pages can't be discarded so they are not
 	 * in LRU cache.
 	 */
-	atomic_long_t		 cl_lru_busy;
+	atomic_long_t		cl_lru_busy;
 	/** # of LRU pages in the cache for this client_obd */
-	atomic_long_t		 cl_lru_in_list;
+	atomic_long_t		cl_lru_in_list;
 	/** # of threads are shrinking LRU cache. To avoid contention, it's not
 	 * allowed to have multiple threads shrinking LRU cache.
 	 */
-	atomic_t		 cl_lru_shrinkers;
+	atomic_t		cl_lru_shrinkers;
 	/** The time when this LRU cache was last used. */
-	time64_t		 cl_lru_last_used;
+	time64_t		cl_lru_last_used;
 	/** stats: how many reclaims have happened for this client_obd.
 	 * reclaim and shrink - shrink is async, voluntarily rebalancing;
 	 * reclaim is sync, initiated by IO thread when the LRU slots are
 	 * in shortage.
 	 */
-	u64			 cl_lru_reclaim;
+	u64			cl_lru_reclaim;
 	/** List of LRU pages for this client_obd */
-	struct list_head	 cl_lru_list;
+	struct list_head	cl_lru_list;
 	/** Lock for LRU page list */
-	spinlock_t		 cl_lru_list_lock;
+	spinlock_t		cl_lru_list_lock;
 	/** # of unstable pages in this client_obd.
 	 * An unstable page is a page state that WRITE RPC has finished but
 	 * the transaction has NOT yet committed.
 	 */
-	atomic_long_t		 cl_unstable_count;
+	atomic_long_t		cl_unstable_count;
 	/** Link to osc_shrinker_list */
-	struct list_head	 cl_shrink_list;
+	struct list_head	cl_shrink_list;
 
 	/* number of in flight destroy rpcs is limited to max_rpcs_in_flight */
-	atomic_t	     cl_destroy_in_flight;
-	wait_queue_head_t	      cl_destroy_waitq;
+	atomic_t		cl_destroy_in_flight;
+	wait_queue_head_t	cl_destroy_waitq;
 
 	struct mdc_rpc_lock     *cl_rpc_lock;
 
 	/* modify rpcs in flight
 	 * currently used for metadata only
 	 */
-	spinlock_t		 cl_mod_rpcs_lock;
-	u16			 cl_max_mod_rpcs_in_flight;
-	u16			 cl_mod_rpcs_in_flight;
-	u16			 cl_close_rpcs_in_flight;
-	wait_queue_head_t	 cl_mod_rpcs_waitq;
-	unsigned long		*cl_mod_tag_bitmap;
-	struct obd_histogram	 cl_mod_rpcs_hist;
+	spinlock_t		cl_mod_rpcs_lock;
+	u16			cl_max_mod_rpcs_in_flight;
+	u16			cl_mod_rpcs_in_flight;
+	u16			cl_close_rpcs_in_flight;
+	wait_queue_head_t	cl_mod_rpcs_waitq;
+	unsigned long	       *cl_mod_tag_bitmap;
+	struct obd_histogram	cl_mod_rpcs_hist;
 
 	/* mgc datastruct */
-	atomic_t	     cl_mgc_refcount;
-	struct obd_export       *cl_mgc_mgsexp;
+	atomic_t		cl_mgc_refcount;
+	struct obd_export      *cl_mgc_mgsexp;
 
 	/* checksumming for data sent over the network */
-	unsigned int		 cl_checksum:1,	/* 0 = disabled, 1 = enabled */
-				 cl_checksum_dump:1; /* same */
+	unsigned int		cl_checksum:1,	/* 0 = disabled, 1 = enabled */
+				cl_checksum_dump:1; /* same */
 	/* supported checksum types that are worked out at connect time */
-	u32		    cl_supp_cksum_types;
+	u32			cl_supp_cksum_types;
 	/* checksum algorithm to be used */
-	enum cksum_type	     cl_cksum_type;
+	enum cksum_type		cl_cksum_type;
 
 	/* also protected by the poorly named _loi_list_lock lock above */
-	struct osc_async_rc      cl_ar;
+	struct osc_async_rc     cl_ar;
 
 	/* sequence manager */
 	struct lu_client_seq    *cl_seq;
-	struct rw_semaphore	 cl_seq_rwsem;
+	struct rw_semaphore	cl_seq_rwsem;
+
+	atomic_t		cl_resends; /* resend count */
 
-	atomic_t	     cl_resends; /* resend count */
 
 	/* ptlrpc work for writeback in ptlrpcd context */
-	void		    *cl_writeback_work;
+	void			*cl_writeback_work;
 	void			*cl_lru_work;
 	/* hash tables for osc_quota_info */
 	struct rhashtable	cl_quota_hash[MAXQUOTAS];
@@ -347,55 +348,55 @@ struct client_obd {
 #define obd2cli_tgt(obd) ((char *)(obd)->u.cli.cl_target_uuid.uuid)
 
 struct obd_id_info {
-	u32   idx;
+	u32	idx;
 	u64	*data;
 };
 
 struct echo_client_obd {
 	struct obd_export	*ec_exp;   /* the local connection to osc/lov */
 	spinlock_t		ec_lock;
-	struct list_head	   ec_objects;
-	struct list_head	   ec_locks;
-	u64		ec_unique;
+	struct list_head	ec_objects;
+	struct list_head	ec_locks;
+	u64			ec_unique;
 };
 
 /* Generic subset of OSTs */
 struct ost_pool {
-	u32	      *op_array;      /* array of index of lov_obd->lov_tgts */
-	unsigned int	op_count;      /* number of OSTs in the array */
-	unsigned int	op_size;       /* allocated size of lp_array */
-	struct rw_semaphore op_rw_sem;     /* to protect ost_pool use */
+	u32			*op_array;  /* array of index of lov_obd->lov_tgts */
+	unsigned int		 op_count;  /* number of OSTs in the array */
+	unsigned int		 op_size;   /* allocated size of lp_array */
+	struct rw_semaphore	 op_rw_sem; /* to protect ost_pool use */
 };
 
 /* allow statfs data caching for 1 second */
 #define OBD_STATFS_CACHE_SECONDS 1
 
 struct lov_tgt_desc {
-	struct list_head	  ltd_kill;
-	struct obd_uuid     ltd_uuid;
-	struct obd_device  *ltd_obd;
-	struct obd_export  *ltd_exp;
-	u32	       ltd_gen;
-	u32	       ltd_index;   /* index in lov_obd->tgts */
-	unsigned long       ltd_active:1,/* is this target up for requests */
-			    ltd_activate:1,/* should  target be activated */
-			    ltd_reap:1;  /* should this target be deleted */
+	struct list_head	ltd_kill;
+	struct obd_uuid		ltd_uuid;
+	struct obd_device      *ltd_obd;
+	struct obd_export      *ltd_exp;
+	u32			ltd_gen;
+	u32			ltd_index;   /* index in lov_obd->tgts */
+	unsigned long		ltd_active:1,/* is this target up for requests */
+				ltd_activate:1,/* should  target be activated */
+				ltd_reap:1;  /* should this target be deleted */
 };
 
 struct lov_obd {
-	struct lov_desc	 desc;
-	struct lov_tgt_desc   **lov_tgts;	      /* sparse array */
-	struct ost_pool	 lov_packed;	    /* all OSTs in a packed array */
+	struct lov_desc		desc;
+	struct lov_tgt_desc   **lov_tgts;	/* sparse array */
+	struct ost_pool		lov_packed;	/* all OSTs in a packed array */
 	struct mutex		lov_lock;
 	struct obd_connect_data lov_ocd;
-	atomic_t	    lov_refcount;
-	u32		   lov_death_row;/* tgts scheduled to be deleted */
-	u32		   lov_tgt_size;   /* size of tgts array */
-	int		     lov_connects;
-	int		     lov_pool_count;
+	atomic_t		lov_refcount;
+	u32			lov_death_row;/* tgts scheduled to be deleted */
+	u32			lov_tgt_size;   /* size of tgts array */
+	int			lov_connects;
+	int			lov_pool_count;
 	struct rhashtable	lov_pools_hash_body; /* used for key access */
 	struct list_head	lov_pool_list; /* used for sequential access */
-	struct dentry		*lov_pool_debugfs_entry;
+	struct dentry	       *lov_pool_debugfs_entry;
 	enum lustre_sec_part    lov_sp_me;
 
 	/* Cached LRU and unstable data from upper layer */
@@ -403,12 +404,12 @@ struct lov_obd {
 
 	struct rw_semaphore     lov_notify_lock;
 
-	struct kobject		*lov_tgts_kobj;
+	struct kobject	       *lov_tgts_kobj;
 };
 
 struct lmv_tgt_desc {
 	struct obd_uuid		ltd_uuid;
-	struct obd_export	*ltd_exp;
+	struct obd_export      *ltd_exp;
 	u32			ltd_idx;
 	struct mutex		ltd_fid_mutex;
 	unsigned long		ltd_active:1; /* target up for requests */
@@ -433,52 +434,52 @@ struct lmv_obd {
 };
 
 struct niobuf_local {
-	u64		lnb_file_offset;
-	u32		lnb_page_offset;
-	u32		lnb_len;
-	u32		lnb_flags;
-	int		lnb_rc;
-	struct page	*lnb_page;
-	void		*lnb_data;
+	u64			lnb_file_offset;
+	u32			lnb_page_offset;
+	u32			lnb_len;
+	u32			lnb_flags;
+	int			lnb_rc;
+	struct page		*lnb_page;
+	void			*lnb_data;
 };
 
-#define LUSTRE_FLD_NAME	 "fld"
-#define LUSTRE_SEQ_NAME	 "seq"
+#define LUSTRE_FLD_NAME		"fld"
+#define LUSTRE_SEQ_NAME		"seq"
 
-#define LUSTRE_MDD_NAME	 "mdd"
+#define LUSTRE_MDD_NAME		"mdd"
 #define LUSTRE_OSD_LDISKFS_NAME	"osd-ldiskfs"
 #define LUSTRE_OSD_ZFS_NAME     "osd-zfs"
-#define LUSTRE_VVP_NAME	 "vvp"
-#define LUSTRE_LMV_NAME	 "lmv"
-#define LUSTRE_SLP_NAME	 "slp"
+#define LUSTRE_VVP_NAME		"vvp"
+#define LUSTRE_LMV_NAME		"lmv"
+#define LUSTRE_SLP_NAME		"slp"
 #define LUSTRE_LOD_NAME		"lod"
 #define LUSTRE_OSP_NAME		"osp"
 #define LUSTRE_LWP_NAME		"lwp"
 
 /* obd device type names */
  /* FIXME all the references to LUSTRE_MDS_NAME should be swapped with LUSTRE_MDT_NAME */
-#define LUSTRE_MDS_NAME	 "mds"
-#define LUSTRE_MDT_NAME	 "mdt"
-#define LUSTRE_MDC_NAME	 "mdc"
-#define LUSTRE_OSS_NAME	 "ost"       /* FIXME change name to oss */
-#define LUSTRE_OST_NAME	 "obdfilter" /* FIXME change name to ost */
-#define LUSTRE_OSC_NAME	 "osc"
-#define LUSTRE_LOV_NAME	 "lov"
-#define LUSTRE_MGS_NAME	 "mgs"
-#define LUSTRE_MGC_NAME	 "mgc"
+#define LUSTRE_MDS_NAME		"mds"
+#define LUSTRE_MDT_NAME		"mdt"
+#define LUSTRE_MDC_NAME		"mdc"
+#define LUSTRE_OSS_NAME		"ost"       /* FIXME change name to oss */
+#define LUSTRE_OST_NAME		"obdfilter" /* FIXME change name to ost */
+#define LUSTRE_OSC_NAME		"osc"
+#define LUSTRE_LOV_NAME		"lov"
+#define LUSTRE_MGS_NAME		"mgs"
+#define LUSTRE_MGC_NAME		"mgc"
 
 #define LUSTRE_ECHO_NAME	"obdecho"
 #define LUSTRE_ECHO_CLIENT_NAME "echo_client"
-#define LUSTRE_QMT_NAME	 "qmt"
+#define LUSTRE_QMT_NAME		"qmt"
 
 /* Constant obd names (post-rename) */
-#define LUSTRE_MDS_OBDNAME "MDS"
-#define LUSTRE_OSS_OBDNAME "OSS"
-#define LUSTRE_MGS_OBDNAME "MGS"
-#define LUSTRE_MGC_OBDNAME "MGC"
+#define LUSTRE_MDS_OBDNAME	"MDS"
+#define LUSTRE_OSS_OBDNAME	"OSS"
+#define LUSTRE_MGS_OBDNAME	"MGS"
+#define LUSTRE_MGC_OBDNAME	"MGC"
 
 /* Don't conflict with on-wire flags OBD_BRW_WRITE, etc */
-#define N_LOCAL_TEMP_PAGE 0x10000000
+#define N_LOCAL_TEMP_PAGE	0x10000000
 
 /*
  * Events signalled through obd_notify() upcall-chain.
@@ -516,21 +517,21 @@ struct target_recovery_data {
 };
 
 struct obd_llog_group {
-	struct llog_ctxt  *olg_ctxts[LLOG_MAX_CTXTS];
+	struct llog_ctxt       *olg_ctxts[LLOG_MAX_CTXTS];
 	wait_queue_head_t	olg_waitq;
-	spinlock_t	   olg_lock;
-	struct mutex	   olg_cat_processing;
+	spinlock_t		olg_lock;
+	struct mutex		olg_cat_processing;
 };
 
 /* corresponds to one of the obd's */
 #define OBD_DEVICE_MAGIC	0XAB5CD6EF
 
 struct lvfs_run_ctxt {
-	struct dt_device *dt;
+	struct dt_device       *dt;
 };
 
 struct obd_device {
-	struct obd_type	*obd_type;
+	struct obd_type		*obd_type;
 	u32			 obd_magic; /* OBD_DEVICE_MAGIC */
 	int			 obd_minor; /* device number: lctl dl */
 	struct lu_device	*obd_lu_dev;
@@ -562,35 +563,35 @@ struct obd_device {
 	 */
 	unsigned long obd_recovery_expired:1;
 	/* uuid-export hash body */
-	struct rhashtable	obd_uuid_hash;
-	wait_queue_head_t	     obd_refcount_waitq;
-	struct list_head	      obd_exports;
-	struct list_head	      obd_unlinked_exports;
-	struct list_head	      obd_delayed_exports;
-	atomic_t			obd_refcount;
-	int		     obd_num_exports;
-	spinlock_t		obd_nid_lock;
-	struct ldlm_namespace  *obd_namespace;
-	struct ptlrpc_client	obd_ldlm_client; /* XXX OST/MDS only */
+	struct rhashtable	 obd_uuid_hash;
+	wait_queue_head_t	 obd_refcount_waitq;
+	struct list_head	 obd_exports;
+	struct list_head	 obd_unlinked_exports;
+	struct list_head	 obd_delayed_exports;
+	atomic_t		 obd_refcount;
+	int			 obd_num_exports;
+	spinlock_t		 obd_nid_lock;
+	struct ldlm_namespace	*obd_namespace;
+	struct ptlrpc_client	 obd_ldlm_client; /* XXX OST/MDS only */
 	/* a spinlock is OK for what we do now, may need a semaphore later */
-	spinlock_t		obd_dev_lock; /* protect OBD bitfield above */
-	spinlock_t		obd_osfs_lock;
-	struct obd_statfs	obd_osfs;       /* locked by obd_osfs_lock */
-	u64			obd_osfs_age;
-	u64			obd_last_committed;
-	struct mutex		obd_dev_mutex;
-	struct lvfs_run_ctxt	obd_lvfs_ctxt;
-	struct obd_llog_group	obd_olg;	/* default llog group */
+	spinlock_t		 obd_dev_lock; /* protect OBD bitfield above */
+	spinlock_t		 obd_osfs_lock;
+	struct obd_statfs	 obd_osfs;       /* locked by obd_osfs_lock */
+	u64			 obd_osfs_age;
+	u64			 obd_last_committed;
+	struct mutex		 obd_dev_mutex;
+	struct lvfs_run_ctxt	 obd_lvfs_ctxt;
+	struct obd_llog_group	 obd_olg;	/* default llog group */
 	struct obd_device	*obd_observer;
-	struct rw_semaphore	obd_observer_link_sem;
+	struct rw_semaphore	 obd_observer_link_sem;
 	struct obd_notify_upcall obd_upcall;
-	struct obd_export       *obd_self_export;
+	struct obd_export	*obd_self_export;
 
 	union {
-		struct client_obd cli;
-		struct echo_client_obd echo_client;
-		struct lov_obd lov;
-		struct lmv_obd lmv;
+		struct client_obd	cli;
+		struct echo_client_obd	echo_client;
+		struct lov_obd		lov;
+		struct lmv_obd		lmv;
 	} u;
 
 	/* Fields used by LProcFS */
@@ -600,12 +601,12 @@ struct obd_device {
 
 	struct dentry		*obd_debugfs_entry;
 	struct dentry		*obd_svc_debugfs_entry;
-	struct lprocfs_stats  *obd_svc_stats;
-	const struct attribute	       **obd_attrs;
-	struct lprocfs_vars		*obd_vars;
-	atomic_t	   obd_evict_inprogress;
-	wait_queue_head_t	    obd_evict_inprogress_waitq;
-	struct list_head	obd_evict_list; /* protected with pet_lock */
+	struct lprocfs_stats	*obd_svc_stats;
+	const struct attribute **obd_attrs;
+	struct lprocfs_vars	*obd_vars;
+	atomic_t		 obd_evict_inprogress;
+	wait_queue_head_t	 obd_evict_inprogress_waitq;
+	struct list_head	 obd_evict_list; /* protected with pet_lock */
 
 	/**
 	 * Ldlm pool part. Save last calculated SLV and Limit.
@@ -620,39 +621,39 @@ struct obd_device {
 	 * A list of outstanding class_incref()'s against this obd. For
 	 * debugging.
 	 */
-	struct lu_ref	  obd_reference;
+	struct lu_ref		 obd_reference;
 
-	struct kset			obd_kset; /* sysfs object collection */
-	struct kobj_type		obd_ktype;
-	struct completion		obd_kobj_unregister;
+	struct kset		 obd_kset; /* sysfs object collection */
+	struct kobj_type	 obd_ktype;
+	struct completion	 obd_kobj_unregister;
 };
 
 int obd_uuid_add(struct obd_device *obd, struct obd_export *export);
 void obd_uuid_del(struct obd_device *obd, struct obd_export *export);
 
 /* get/set_info keys */
-#define KEY_ASYNC	       "async"
-#define KEY_CHANGELOG_CLEAR     "changelog_clear"
-#define KEY_FID2PATH	    "fid2path"
-#define KEY_CHECKSUM	    "checksum"
-#define KEY_CLEAR_FS	    "clear_fs"
-#define KEY_CONN_DATA	   "conn_data"
+#define KEY_ASYNC		"async"
+#define KEY_CHANGELOG_CLEAR	"changelog_clear"
+#define KEY_FID2PATH		"fid2path"
+#define KEY_CHECKSUM		"checksum"
+#define KEY_CLEAR_FS		"clear_fs"
+#define KEY_CONN_DATA		"conn_data"
 #define KEY_EVICT_BY_NID	"evict_by_nid"
-#define KEY_FIEMAP	      "fiemap"
-#define KEY_FLUSH_CTX	   "flush_ctx"
+#define KEY_FIEMAP		"fiemap"
+#define KEY_FLUSH_CTX		"flush_ctx"
 #define KEY_GRANT_SHRINK	"grant_shrink"
 #define KEY_HSM_COPYTOOL_SEND   "hsm_send"
 #define KEY_INIT_RECOV_BACKUP   "init_recov_bk"
-#define KEY_INTERMDS	    "inter_mds"
-#define KEY_LAST_ID	     "last_id"
+#define KEY_INTERMDS		"inter_mds"
+#define KEY_LAST_ID		"last_id"
 #define KEY_LAST_FID		"last_fid"
 #define KEY_MAX_EASIZE		"max_easize"
 #define KEY_DEFAULT_EASIZE	"default_easize"
-#define KEY_MGSSEC	      "mgssec"
-#define KEY_READ_ONLY	   "read-only"
-#define KEY_REGISTER_TARGET     "register_target"
-#define KEY_SET_FS	      "set_fs"
-#define KEY_TGT_COUNT	   "tgt_count"
+#define KEY_MGSSEC		"mgssec"
+#define KEY_READ_ONLY		"read-only"
+#define KEY_REGISTER_TARGET	"register_target"
+#define KEY_SET_FS		"set_fs"
+#define KEY_TGT_COUNT		"tgt_count"
 /*      KEY_SET_INFO in lustre_idl.h */
 #define KEY_SPTLRPC_CONF	"sptlrpc_conf"
 
@@ -698,11 +699,11 @@ enum md_op_flags {
 };
 
 enum md_cli_flags {
-	CLI_SET_MEA	= BIT(0),
-	CLI_RM_ENTRY	= BIT(1),
-	CLI_HASH64	= BIT(2),
-	CLI_API32	= BIT(3),
-	CLI_MIGRATE	= BIT(4),
+	CLI_SET_MEA		= BIT(0),
+	CLI_RM_ENTRY		= BIT(1),
+	CLI_HASH64		= BIT(2),
+	CLI_API32		= BIT(3),
+	CLI_MIGRATE		= BIT(4),
 };
 
 /**
@@ -716,39 +717,39 @@ static inline bool it_has_reply_body(const struct lookup_intent *it)
 }
 
 struct md_op_data {
-	struct lu_fid	   op_fid1; /* operation fid1 (usually parent) */
-	struct lu_fid	   op_fid2; /* operation fid2 (usually child) */
-	struct lu_fid	   op_fid3; /* 2 extra fids to find conflicting */
-	struct lu_fid	   op_fid4; /* to the operation locks. */
+	struct lu_fid		op_fid1; /* operation fid1 (usually parent) */
+	struct lu_fid		op_fid2; /* operation fid2 (usually child) */
+	struct lu_fid		op_fid3; /* 2 extra fids to find conflicting */
+	struct lu_fid		op_fid4; /* to the operation locks. */
 	u32			op_mds;  /* what mds server open will go to */
 	u32			op_mode;
-	struct lustre_handle    op_handle;
+	struct lustre_handle	op_handle;
 	s64			op_mod_time;
-	const char	     *op_name;
+	const char	       *op_name;
 	size_t			op_namelen;
 	struct lmv_stripe_md   *op_mea1;
 	struct lmv_stripe_md   *op_mea2;
-	u32		   op_suppgids[2];
-	u32		   op_fsuid;
-	u32		   op_fsgid;
-	kernel_cap_t	       op_cap;
-	void		   *op_data;
+	u32			op_suppgids[2];
+	u32			op_fsuid;
+	u32			op_fsgid;
+	kernel_cap_t		op_cap;
+	void		       *op_data;
 	size_t			op_data_size;
 
 	/* iattr fields and blocks. */
-	struct iattr	    op_attr;
+	struct iattr		op_attr;
 	enum op_xvalid		op_xvalid;	/* eXtra validity flags */
-	unsigned int	    op_attr_flags;
-	u64		   op_valid;
-	loff_t		  op_attr_blocks;
+	unsigned int		op_attr_flags;
+	u64			op_valid;
+	loff_t			op_attr_blocks;
 
-	u32		   op_flags;
+	u32			op_flags;
 
 	/* Various operation flags. */
 	enum mds_op_bias        op_bias;
 
 	/* Used by readdir */
-	u64		   op_offset;
+	u64			op_offset;
 
 	/* used to transfer info between the stacks of MD client
 	 * see enum op_cli_flags
@@ -780,14 +781,14 @@ struct md_callback {
 /* metadata stat-ahead */
 
 struct md_enqueue_info {
-	struct md_op_data       mi_data;
-	struct lookup_intent    mi_it;
-	struct lustre_handle    mi_lockh;
-	struct inode	   *mi_dir;
+	struct md_op_data		mi_data;
+	struct lookup_intent		mi_it;
+	struct lustre_handle		mi_lockh;
+	struct inode		       *mi_dir;
 	struct ldlm_enqueue_info	mi_einfo;
 	int (*mi_cb)(struct ptlrpc_request *req,
 		     struct md_enqueue_info *minfo, int rc);
-	void			*mi_cbdata;
+	void			       *mi_cbdata;
 };
 
 struct obd_ops {
@@ -886,33 +887,33 @@ struct obd_ops {
 
 /* lmv structures */
 struct lustre_md {
-	struct mdt_body	 *body;
-	struct lu_buf		 layout;
-	struct lmv_stripe_md    *lmv;
+	struct mdt_body			*body;
+	struct lu_buf			 layout;
+	struct lmv_stripe_md		*lmv;
 #ifdef CONFIG_FS_POSIX_ACL
-	struct posix_acl	*posix_acl;
+	struct posix_acl		*posix_acl;
 #endif
-	struct mdt_remote_perm  *remote_perm;
+	struct mdt_remote_perm		*remote_perm;
 };
 
 struct md_open_data {
-	struct obd_client_handle *mod_och;
-	struct ptlrpc_request    *mod_open_req;
-	struct ptlrpc_request    *mod_close_req;
-	atomic_t		  mod_refcount;
-	bool			  mod_is_create;
+	struct obd_client_handle	*mod_och;
+	struct ptlrpc_request		*mod_open_req;
+	struct ptlrpc_request		*mod_close_req;
+	atomic_t			 mod_refcount;
+	bool				 mod_is_create;
 };
 
 struct obd_client_handle {
-	struct lustre_handle	 och_fh;
-	struct lu_fid		 och_fid;
-	struct md_open_data	*och_mod;
-	struct lustre_handle	 och_lease_handle; /* open lock for lease */
-	u32			 och_magic;
-	fmode_t			 och_flags;
+	struct lustre_handle		och_fh;
+	struct lu_fid			och_fid;
+	struct md_open_data	       *och_mod;
+	struct lustre_handle		och_lease_handle; /* open lock for lease */
+	u32				och_magic;
+	fmode_t				och_flags;
 };
 
-#define OBD_CLIENT_HANDLE_MAGIC 0xd15ea5ed
+#define OBD_CLIENT_HANDLE_MAGIC	0xd15ea5ed
 
 struct lookup_intent;
 struct cl_attr;
@@ -1015,13 +1016,13 @@ static inline struct md_open_data *obd_mod_alloc(void)
 }
 
 #define obd_mod_get(mod) atomic_inc(&(mod)->mod_refcount)
-#define obd_mod_put(mod)					\
-({							      \
-	if (atomic_dec_and_test(&(mod)->mod_refcount)) {	  \
-		if ((mod)->mod_open_req)			  \
-			ptlrpc_req_finished((mod)->mod_open_req);   \
-		kfree(mod);			      \
-	}						       \
+#define obd_mod_put(mod)						\
+({									\
+	if (atomic_dec_and_test(&(mod)->mod_refcount)) {		\
+		if ((mod)->mod_open_req)				\
+			ptlrpc_req_finished((mod)->mod_open_req);	\
+		kfree(mod);						\
+	}								\
 })
 
 void obdo_from_inode(struct obdo *dst, struct inode *src, u32 valid);
@@ -1050,8 +1051,8 @@ static inline const char *lu_dev_name(const struct lu_device *lu_dev)
 static inline bool filename_is_volatile(const char *name, size_t namelen,
 					int *idx)
 {
-	const char	*start;
-	char		*end;
+	const char *start;
+	char *end;
 
 	if (strncmp(name, LUSTRE_VOLATILE_HDR, LUSTRE_VOLATILE_HDR_LEN) != 0)
 		return false;
diff --git a/drivers/staging/lustre/lustre/include/obd_cksum.h b/drivers/staging/lustre/lustre/include/obd_cksum.h
index e5f7bb2..26a9555 100644
--- a/drivers/staging/lustre/lustre/include/obd_cksum.h
+++ b/drivers/staging/lustre/lustre/include/obd_cksum.h
@@ -65,8 +65,8 @@ static inline unsigned char cksum_obd2cfs(enum cksum_type cksum_type)
  */
 static inline u32 cksum_type_pack(enum cksum_type cksum_type)
 {
-	unsigned int    performance = 0, tmp;
-	u32		flag = OBD_FL_CKSUM_ADLER;
+	unsigned int performance = 0, tmp;
+	u32 flag = OBD_FL_CKSUM_ADLER;
 
 	if (cksum_type & OBD_CKSUM_CRC32) {
 		tmp = cfs_crypto_hash_speed(cksum_obd2cfs(OBD_CKSUM_CRC32));
diff --git a/drivers/staging/lustre/lustre/include/obd_class.h b/drivers/staging/lustre/lustre/include/obd_class.h
index b64ba8b..30b3e2c 100644
--- a/drivers/staging/lustre/lustre/include/obd_class.h
+++ b/drivers/staging/lustre/lustre/include/obd_class.h
@@ -144,22 +144,22 @@ int class_config_llog_handler(const struct lu_env *env,
 /* obdecho */
 void lprocfs_echo_init_vars(struct lprocfs_static_vars *lvars);
 
-#define CFG_F_START     0x01   /* Set when we start updating from a log */
-#define CFG_F_MARKER    0x02   /* We are within a maker */
-#define CFG_F_SKIP      0x04   /* We should ignore this cfg command */
-#define CFG_F_COMPAT146 0x08   /* Allow old-style logs */
-#define CFG_F_EXCLUDE   0x10   /* OST exclusion list */
+#define CFG_F_START	0x01   /* Set when we start updating from a log */
+#define CFG_F_MARKER	0x02   /* We are within a maker */
+#define CFG_F_SKIP	0x04   /* We should ignore this cfg command */
+#define CFG_F_COMPAT146	0x08   /* Allow old-style logs */
+#define CFG_F_EXCLUDE	0x10   /* OST exclusion list */
 
 /* Passed as data param to class_config_parse_llog */
 struct config_llog_instance {
-	char		   *cfg_obdname;
-	void		   *cfg_instance;
-	struct super_block *cfg_sb;
-	struct obd_uuid     cfg_uuid;
-	llog_cb_t	    cfg_callback;
-	int		    cfg_last_idx; /* for partial llog processing */
-	int		    cfg_flags;
-	u32		    cfg_sub_clds;
+	char		       *cfg_obdname;
+	void		       *cfg_instance;
+	struct super_block     *cfg_sb;
+	struct obd_uuid		cfg_uuid;
+	llog_cb_t		cfg_callback;
+	int			cfg_last_idx; /* for partial llog processing */
+	int			cfg_flags;
+	u32			cfg_sub_clds;
 };
 
 int class_config_parse_llog(const struct lu_env *env, struct llog_ctxt *ctxt,
@@ -181,31 +181,31 @@ int class_config_parse_llog(const struct lu_env *env, struct llog_ctxt *ctxt,
 
 /* list of active configuration logs  */
 struct config_llog_data {
-	struct ldlm_res_id	    cld_resid;
-	struct config_llog_instance cld_cfg;
-	struct list_head	    cld_list_chain;
-	atomic_t		    cld_refcount;
-	struct config_llog_data    *cld_sptlrpc;/* depended sptlrpc log */
-	struct config_llog_data	   *cld_params;	/* common parameters log */
-	struct config_llog_data    *cld_recover;/* imperative recover log */
-	struct obd_export	   *cld_mgcexp;
-	struct mutex		    cld_lock;
-	int			    cld_type;
-	unsigned int		    cld_stopping:1, /*
-						     * we were told to stop
-						     * watching
-						     */
-				    cld_lostlock:1; /* lock not requeued */
-	char			    cld_logname[0];
+	struct ldlm_res_id		cld_resid;
+	struct config_llog_instance	cld_cfg;
+	struct list_head		cld_list_chain;
+	atomic_t			cld_refcount;
+	struct config_llog_data	       *cld_sptlrpc;	/* depended sptlrpc log */
+	struct config_llog_data	       *cld_params;	/* common parameters log */
+	struct config_llog_data	       *cld_recover;	/* imperative recover log */
+	struct obd_export	       *cld_mgcexp;
+	struct mutex			cld_lock;
+	int				cld_type;
+	unsigned int			cld_stopping:1, /*
+							 * we were told to stop
+							 * watching
+							 */
+					cld_lostlock:1; /* lock not requeued */
+	char				cld_logname[0];
 };
 
 struct lustre_profile {
-	struct list_head lp_list;
-	char		*lp_profile;
-	char		*lp_dt;
-	char		*lp_md;
-	int		 lp_refs;
-	bool		 lp_list_deleted;
+	struct list_head		lp_list;
+	char			       *lp_profile;
+	char			       *lp_dt;
+	char			       *lp_md;
+	int				lp_refs;
+	bool				lp_list_deleted;
 };
 
 struct lustre_profile *class_get_profile(const char *prof);
@@ -423,7 +423,7 @@ static inline int obd_setup(struct obd_device *obd, struct lustre_cfg *cfg)
 
 	ldt = obd->obd_type->typ_lu;
 	if (ldt) {
-		struct lu_context  session_ctx;
+		struct lu_context session_ctx;
 		struct lu_env env;
 
 		lu_context_init(&session_ctx, LCT_SESSION | LCT_SERVER_SESSION);
@@ -1642,11 +1642,11 @@ static inline int md_unpackmd(struct obd_export *exp,
 typedef int (*register_lwp_cb)(void *data);
 
 struct lwp_register_item {
-	struct obd_export **lri_exp;
-	register_lwp_cb	    lri_cb_func;
-	void		   *lri_cb_data;
-	struct list_head    lri_list;
-	char		    lri_name[MTI_NAME_MAXLEN];
+	struct obd_export     **lri_exp;
+	register_lwp_cb		lri_cb_func;
+	void		       *lri_cb_data;
+	struct list_head	lri_list;
+	char			lri_name[MTI_NAME_MAXLEN];
 };
 
 /*
diff --git a/drivers/staging/lustre/lustre/include/obd_support.h b/drivers/staging/lustre/lustre/include/obd_support.h
index 93a3745..3e15cac 100644
--- a/drivers/staging/lustre/lustre/include/obd_support.h
+++ b/drivers/staging/lustre/lustre/include/obd_support.h
@@ -59,151 +59,151 @@
 
 /* Some hash init argument constants */
 /* Timeout definitions */
-#define OBD_TIMEOUT_DEFAULT	     100
+#define OBD_TIMEOUT_DEFAULT	100
 /* Time to wait for all clients to reconnect during recovery (hard limit) */
-#define OBD_RECOVERY_TIME_HARD	  (obd_timeout * 9)
+#define OBD_RECOVERY_TIME_HARD	(obd_timeout * 9)
 /* Time to wait for all clients to reconnect during recovery (soft limit) */
 /* Should be very conservative; must catch the first reconnect after reboot */
-#define OBD_RECOVERY_TIME_SOFT	  (obd_timeout * 3)
+#define OBD_RECOVERY_TIME_SOFT	(obd_timeout * 3)
 /* Change recovery-small 26b time if you change this */
-#define PING_INTERVAL max(obd_timeout / 4, 1U)
+#define PING_INTERVAL		max(obd_timeout / 4, 1U)
 /* a bit more than maximal journal commit time in seconds */
-#define PING_INTERVAL_SHORT min(PING_INTERVAL, 7U)
+#define PING_INTERVAL_SHORT	min(PING_INTERVAL, 7U)
 /* Client may skip 1 ping; we must wait at least 2.5. But for multiple
  * failover targets the client only pings one server at a time, and pings
  * can be lost on a loaded network. Since eviction has serious consequences,
  * and there's no urgent need to evict a client just because it's idle, we
  * should be very conservative here.
  */
-#define PING_EVICT_TIMEOUT (PING_INTERVAL * 6)
-#define DISK_TIMEOUT 50	  /* Beyond this we warn about disk speed */
-#define CONNECTION_SWITCH_MIN 5U /* Connection switching rate limiter */
+#define PING_EVICT_TIMEOUT	(PING_INTERVAL * 6)
+#define DISK_TIMEOUT		50 /* Beyond this we warn about disk speed */
+#define CONNECTION_SWITCH_MIN	5U /* Connection switching rate limiter */
 /* Max connect interval for nonresponsive servers; ~50s to avoid building up
  * connect requests in the LND queues, but within obd_timeout so we don't
  * miss the recovery window
  */
-#define CONNECTION_SWITCH_MAX min(50U, max(CONNECTION_SWITCH_MIN, obd_timeout))
-#define CONNECTION_SWITCH_INC 5  /* Connection timeout backoff */
+#define CONNECTION_SWITCH_MAX	min(50U, max(CONNECTION_SWITCH_MIN, obd_timeout))
+#define CONNECTION_SWITCH_INC	5  /* Connection timeout backoff */
 /* In general this should be low to have quick detection of a system
  * running on a backup server. (If it's too low, import_select_connection
  * will increase the timeout anyhow.)
  */
-#define INITIAL_CONNECT_TIMEOUT max(CONNECTION_SWITCH_MIN, obd_timeout / 20)
+#define INITIAL_CONNECT_TIMEOUT	max(CONNECTION_SWITCH_MIN, obd_timeout / 20)
 /* The max delay between connects is SWITCH_MAX + SWITCH_INC + INITIAL */
-#define RECONNECT_DELAY_MAX (CONNECTION_SWITCH_MAX + CONNECTION_SWITCH_INC + \
-			     INITIAL_CONNECT_TIMEOUT)
+#define RECONNECT_DELAY_MAX	(CONNECTION_SWITCH_MAX + CONNECTION_SWITCH_INC + \
+				 INITIAL_CONNECT_TIMEOUT)
 /* The min time a target should wait for clients to reconnect in recovery */
-#define OBD_RECOVERY_TIME_MIN    (2 * RECONNECT_DELAY_MAX)
-#define OBD_IR_FACTOR_MIN	 1
-#define OBD_IR_FACTOR_MAX	 10
-#define OBD_IR_FACTOR_DEFAULT    (OBD_IR_FACTOR_MAX / 2)
+#define OBD_RECOVERY_TIME_MIN	(2 * RECONNECT_DELAY_MAX)
+#define OBD_IR_FACTOR_MIN	1
+#define OBD_IR_FACTOR_MAX	10
+#define OBD_IR_FACTOR_DEFAULT	(OBD_IR_FACTOR_MAX / 2)
 /* default timeout for the MGS to become IR_FULL */
-#define OBD_IR_MGS_TIMEOUT       (4 * obd_timeout)
-#define LONG_UNLINK 300	  /* Unlink should happen before now */
+#define OBD_IR_MGS_TIMEOUT	(4 * obd_timeout)
+#define LONG_UNLINK		300 /* Unlink should happen before now */
 
 /**
  * Time interval of shrink, if the client is "idle" more than this interval,
  * then the ll_grant thread will return the requested grant space to filter
  */
-#define GRANT_SHRINK_INTERVAL	    1200/*20 minutes*/
-
-#define OBD_FAIL_MDS		     0x100
-#define OBD_FAIL_MDS_HANDLE_UNPACK       0x101
-#define OBD_FAIL_MDS_GETATTR_NET	 0x102
-#define OBD_FAIL_MDS_GETATTR_PACK	0x103
-#define OBD_FAIL_MDS_READPAGE_NET	0x104
-#define OBD_FAIL_MDS_READPAGE_PACK       0x105
-#define OBD_FAIL_MDS_SENDPAGE	    0x106
-#define OBD_FAIL_MDS_REINT_NET	   0x107
-#define OBD_FAIL_MDS_REINT_UNPACK	0x108
-#define OBD_FAIL_MDS_REINT_SETATTR       0x109
-#define OBD_FAIL_MDS_REINT_SETATTR_WRITE 0x10a
-#define OBD_FAIL_MDS_REINT_CREATE	0x10b
-#define OBD_FAIL_MDS_REINT_CREATE_WRITE  0x10c
-#define OBD_FAIL_MDS_REINT_UNLINK	0x10d
-#define OBD_FAIL_MDS_REINT_UNLINK_WRITE  0x10e
-#define OBD_FAIL_MDS_REINT_LINK	  0x10f
-#define OBD_FAIL_MDS_REINT_LINK_WRITE    0x110
-#define OBD_FAIL_MDS_REINT_RENAME	0x111
-#define OBD_FAIL_MDS_REINT_RENAME_WRITE  0x112
-#define OBD_FAIL_MDS_OPEN_NET	    0x113
-#define OBD_FAIL_MDS_OPEN_PACK	   0x114
-#define OBD_FAIL_MDS_CLOSE_NET	   0x115
-#define OBD_FAIL_MDS_CLOSE_PACK	  0x116
-#define OBD_FAIL_MDS_CONNECT_NET	 0x117
-#define OBD_FAIL_MDS_CONNECT_PACK	0x118
-#define OBD_FAIL_MDS_REINT_NET_REP       0x119
-#define OBD_FAIL_MDS_DISCONNECT_NET      0x11a
-#define OBD_FAIL_MDS_GET_ROOT_NET	0x11b
-#define OBD_FAIL_MDS_GET_ROOT_PACK	0x11c
-#define OBD_FAIL_MDS_STATFS_PACK	 0x11d
-#define OBD_FAIL_MDS_STATFS_NET	  0x11e
-#define OBD_FAIL_MDS_GETATTR_NAME_NET    0x11f
-#define OBD_FAIL_MDS_PIN_NET	     0x120
-#define OBD_FAIL_MDS_UNPIN_NET	   0x121
-#define OBD_FAIL_MDS_ALL_REPLY_NET       0x122
-#define OBD_FAIL_MDS_ALL_REQUEST_NET     0x123
-#define OBD_FAIL_MDS_SYNC_NET	    0x124
-#define OBD_FAIL_MDS_SYNC_PACK	   0x125
+#define GRANT_SHRINK_INTERVAL	1200/*20 minutes*/
+
+#define OBD_FAIL_MDS					0x100
+#define OBD_FAIL_MDS_HANDLE_UNPACK			0x101
+#define OBD_FAIL_MDS_GETATTR_NET			0x102
+#define OBD_FAIL_MDS_GETATTR_PACK			0x103
+#define OBD_FAIL_MDS_READPAGE_NET			0x104
+#define OBD_FAIL_MDS_READPAGE_PACK			0x105
+#define OBD_FAIL_MDS_SENDPAGE				0x106
+#define OBD_FAIL_MDS_REINT_NET				0x107
+#define OBD_FAIL_MDS_REINT_UNPACK			0x108
+#define OBD_FAIL_MDS_REINT_SETATTR			0x109
+#define OBD_FAIL_MDS_REINT_SETATTR_WRITE		0x10a
+#define OBD_FAIL_MDS_REINT_CREATE			0x10b
+#define OBD_FAIL_MDS_REINT_CREATE_WRITE			0x10c
+#define OBD_FAIL_MDS_REINT_UNLINK			0x10d
+#define OBD_FAIL_MDS_REINT_UNLINK_WRITE			0x10e
+#define OBD_FAIL_MDS_REINT_LINK				0x10f
+#define OBD_FAIL_MDS_REINT_LINK_WRITE			0x110
+#define OBD_FAIL_MDS_REINT_RENAME			0x111
+#define OBD_FAIL_MDS_REINT_RENAME_WRITE			0x112
+#define OBD_FAIL_MDS_OPEN_NET				0x113
+#define OBD_FAIL_MDS_OPEN_PACK				0x114
+#define OBD_FAIL_MDS_CLOSE_NET				0x115
+#define OBD_FAIL_MDS_CLOSE_PACK				0x116
+#define OBD_FAIL_MDS_CONNECT_NET			0x117
+#define OBD_FAIL_MDS_CONNECT_PACK			0x118
+#define OBD_FAIL_MDS_REINT_NET_REP			0x119
+#define OBD_FAIL_MDS_DISCONNECT_NET			0x11a
+#define OBD_FAIL_MDS_GET_ROOT_NET			0x11b
+#define OBD_FAIL_MDS_GET_ROOT_PACK			0x11c
+#define OBD_FAIL_MDS_STATFS_PACK			0x11d
+#define OBD_FAIL_MDS_STATFS_NET				0x11e
+#define OBD_FAIL_MDS_GETATTR_NAME_NET			0x11f
+#define OBD_FAIL_MDS_PIN_NET				0x120
+#define OBD_FAIL_MDS_UNPIN_NET				0x121
+#define OBD_FAIL_MDS_ALL_REPLY_NET			0x122
+#define OBD_FAIL_MDS_ALL_REQUEST_NET			0x123
+#define OBD_FAIL_MDS_SYNC_NET				0x124
+#define OBD_FAIL_MDS_SYNC_PACK				0x125
 /*	OBD_FAIL_MDS_DONE_WRITING_NET	0x126 obsolete since 2.8.0 */
 /*	OBD_FAIL_MDS_DONE_WRITING_PACK	0x127 obsolete since 2.8.0 */
-#define OBD_FAIL_MDS_ALLOC_OBDO	  0x128
-#define OBD_FAIL_MDS_PAUSE_OPEN	  0x129
-#define OBD_FAIL_MDS_STATFS_LCW_SLEEP    0x12a
-#define OBD_FAIL_MDS_OPEN_CREATE	 0x12b
-#define OBD_FAIL_MDS_OST_SETATTR	 0x12c
+#define OBD_FAIL_MDS_ALLOC_OBDO				0x128
+#define OBD_FAIL_MDS_PAUSE_OPEN				0x129
+#define OBD_FAIL_MDS_STATFS_LCW_SLEEP			0x12a
+#define OBD_FAIL_MDS_OPEN_CREATE			0x12b
+#define OBD_FAIL_MDS_OST_SETATTR			0x12c
 /*	OBD_FAIL_MDS_QUOTACHECK_NET      0x12d obsolete since 2.4 */
-#define OBD_FAIL_MDS_QUOTACTL_NET	0x12e
-#define OBD_FAIL_MDS_CLIENT_ADD	  0x12f
-#define OBD_FAIL_MDS_GETXATTR_NET	0x130
-#define OBD_FAIL_MDS_GETXATTR_PACK       0x131
-#define OBD_FAIL_MDS_SETXATTR_NET	0x132
-#define OBD_FAIL_MDS_SETXATTR	    0x133
-#define OBD_FAIL_MDS_SETXATTR_WRITE      0x134
-#define OBD_FAIL_MDS_FS_SETUP	    0x135
-#define OBD_FAIL_MDS_RESEND	      0x136
-#define OBD_FAIL_MDS_LLOG_CREATE_FAILED  0x137
-#define OBD_FAIL_MDS_LOV_SYNC_RACE       0x138
-#define OBD_FAIL_MDS_OSC_PRECREATE       0x139
-#define OBD_FAIL_MDS_LLOG_SYNC_TIMEOUT   0x13a
-#define OBD_FAIL_MDS_CLOSE_NET_REP       0x13b
-#define OBD_FAIL_MDS_BLOCK_QUOTA_REQ     0x13c
-#define OBD_FAIL_MDS_DROP_QUOTA_REQ      0x13d
-#define OBD_FAIL_MDS_REMOVE_COMMON_EA    0x13e
-#define OBD_FAIL_MDS_ALLOW_COMMON_EA_SETTING   0x13f
-#define OBD_FAIL_MDS_FAIL_LOV_LOG_ADD    0x140
-#define OBD_FAIL_MDS_LOV_PREP_CREATE     0x141
-#define OBD_FAIL_MDS_REINT_DELAY	 0x142
-#define OBD_FAIL_MDS_READLINK_EPROTO     0x143
-#define OBD_FAIL_MDS_OPEN_WAIT_CREATE    0x144
-#define OBD_FAIL_MDS_PDO_LOCK	    0x145
-#define OBD_FAIL_MDS_PDO_LOCK2	   0x146
-#define OBD_FAIL_MDS_OSC_CREATE_FAIL     0x147
-#define OBD_FAIL_MDS_NEGATIVE_POSITIVE	 0x148
-#define OBD_FAIL_MDS_HSM_STATE_GET_NET		0x149
-#define OBD_FAIL_MDS_HSM_STATE_SET_NET		0x14a
-#define OBD_FAIL_MDS_HSM_PROGRESS_NET		0x14b
-#define OBD_FAIL_MDS_HSM_REQUEST_NET		0x14c
-#define OBD_FAIL_MDS_HSM_CT_REGISTER_NET	0x14d
-#define OBD_FAIL_MDS_HSM_CT_UNREGISTER_NET	0x14e
-#define OBD_FAIL_MDS_SWAP_LAYOUTS_NET		0x14f
-#define OBD_FAIL_MDS_HSM_ACTION_NET		0x150
-#define OBD_FAIL_MDS_CHANGELOG_INIT		0x151
+#define OBD_FAIL_MDS_QUOTACTL_NET			0x12e
+#define OBD_FAIL_MDS_CLIENT_ADD				0x12f
+#define OBD_FAIL_MDS_GETXATTR_NET			0x130
+#define OBD_FAIL_MDS_GETXATTR_PACK			0x131
+#define OBD_FAIL_MDS_SETXATTR_NET			0x132
+#define OBD_FAIL_MDS_SETXATTR				0x133
+#define OBD_FAIL_MDS_SETXATTR_WRITE			0x134
+#define OBD_FAIL_MDS_FS_SETUP				0x135
+#define OBD_FAIL_MDS_RESEND				0x136
+#define OBD_FAIL_MDS_LLOG_CREATE_FAILED			0x137
+#define OBD_FAIL_MDS_LOV_SYNC_RACE			0x138
+#define OBD_FAIL_MDS_OSC_PRECREATE			0x139
+#define OBD_FAIL_MDS_LLOG_SYNC_TIMEOUT			0x13a
+#define OBD_FAIL_MDS_CLOSE_NET_REP			0x13b
+#define OBD_FAIL_MDS_BLOCK_QUOTA_REQ			0x13c
+#define OBD_FAIL_MDS_DROP_QUOTA_REQ			0x13d
+#define OBD_FAIL_MDS_REMOVE_COMMON_EA			0x13e
+#define OBD_FAIL_MDS_ALLOW_COMMON_EA_SETTING		0x13f
+#define OBD_FAIL_MDS_FAIL_LOV_LOG_ADD			0x140
+#define OBD_FAIL_MDS_LOV_PREP_CREATE			0x141
+#define OBD_FAIL_MDS_REINT_DELAY			0x142
+#define OBD_FAIL_MDS_READLINK_EPROTO			0x143
+#define OBD_FAIL_MDS_OPEN_WAIT_CREATE			0x144
+#define OBD_FAIL_MDS_PDO_LOCK				0x145
+#define OBD_FAIL_MDS_PDO_LOCK2				0x146
+#define OBD_FAIL_MDS_OSC_CREATE_FAIL			0x147
+#define OBD_FAIL_MDS_NEGATIVE_POSITIVE			0x148
+#define OBD_FAIL_MDS_HSM_STATE_GET_NET			0x149
+#define OBD_FAIL_MDS_HSM_STATE_SET_NET			0x14a
+#define OBD_FAIL_MDS_HSM_PROGRESS_NET			0x14b
+#define OBD_FAIL_MDS_HSM_REQUEST_NET			0x14c
+#define OBD_FAIL_MDS_HSM_CT_REGISTER_NET		0x14d
+#define OBD_FAIL_MDS_HSM_CT_UNREGISTER_NET		0x14e
+#define OBD_FAIL_MDS_SWAP_LAYOUTS_NET			0x14f
+#define OBD_FAIL_MDS_HSM_ACTION_NET			0x150
+#define OBD_FAIL_MDS_CHANGELOG_INIT			0x151
 
 /* layout lock */
-#define OBD_FAIL_MDS_NO_LL_GETATTR	 0x170
-#define OBD_FAIL_MDS_NO_LL_OPEN		 0x171
-#define OBD_FAIL_MDS_LL_BLOCK		 0x172
+#define OBD_FAIL_MDS_NO_LL_GETATTR			0x170
+#define OBD_FAIL_MDS_NO_LL_OPEN				0x171
+#define OBD_FAIL_MDS_LL_BLOCK				0x172
 
 /* CMD */
-#define OBD_FAIL_MDS_IS_SUBDIR_NET       0x180
-#define OBD_FAIL_MDS_IS_SUBDIR_PACK      0x181
-#define OBD_FAIL_MDS_SET_INFO_NET	0x182
-#define OBD_FAIL_MDS_WRITEPAGE_NET       0x183
-#define OBD_FAIL_MDS_WRITEPAGE_PACK      0x184
-#define OBD_FAIL_MDS_RECOVERY_ACCEPTS_GAPS 0x185
-#define OBD_FAIL_MDS_GET_INFO_NET	0x186
-#define OBD_FAIL_MDS_DQACQ_NET	   0x187
+#define OBD_FAIL_MDS_IS_SUBDIR_NET			0x180
+#define OBD_FAIL_MDS_IS_SUBDIR_PACK			0x181
+#define OBD_FAIL_MDS_SET_INFO_NET			0x182
+#define OBD_FAIL_MDS_WRITEPAGE_NET			0x183
+#define OBD_FAIL_MDS_WRITEPAGE_PACK			0x184
+#define OBD_FAIL_MDS_RECOVERY_ACCEPTS_GAPS		0x185
+#define OBD_FAIL_MDS_GET_INFO_NET			0x186
+#define OBD_FAIL_MDS_DQACQ_NET				0x187
 
 /* OI scrub */
 #define OBD_FAIL_OSD_SCRUB_DELAY			0x190
@@ -213,278 +213,278 @@
 #define OBD_FAIL_OSD_LMA_INCOMPAT			0x194
 #define OBD_FAIL_OSD_COMPAT_INVALID_ENTRY		0x195
 
-#define OBD_FAIL_OST		     0x200
-#define OBD_FAIL_OST_CONNECT_NET	 0x201
-#define OBD_FAIL_OST_DISCONNECT_NET      0x202
-#define OBD_FAIL_OST_GET_INFO_NET	0x203
-#define OBD_FAIL_OST_CREATE_NET	  0x204
-#define OBD_FAIL_OST_DESTROY_NET	 0x205
-#define OBD_FAIL_OST_GETATTR_NET	 0x206
-#define OBD_FAIL_OST_SETATTR_NET	 0x207
-#define OBD_FAIL_OST_OPEN_NET	    0x208
-#define OBD_FAIL_OST_CLOSE_NET	   0x209
-#define OBD_FAIL_OST_BRW_NET	     0x20a
-#define OBD_FAIL_OST_PUNCH_NET	   0x20b
-#define OBD_FAIL_OST_STATFS_NET	  0x20c
-#define OBD_FAIL_OST_HANDLE_UNPACK       0x20d
-#define OBD_FAIL_OST_BRW_WRITE_BULK      0x20e
-#define OBD_FAIL_OST_BRW_READ_BULK       0x20f
-#define OBD_FAIL_OST_SYNC_NET	    0x210
-#define OBD_FAIL_OST_ALL_REPLY_NET       0x211
-#define OBD_FAIL_OST_ALL_REQUEST_NET     0x212
-#define OBD_FAIL_OST_LDLM_REPLY_NET      0x213
-#define OBD_FAIL_OST_BRW_PAUSE_BULK      0x214
-#define OBD_FAIL_OST_ENOSPC	      0x215
-#define OBD_FAIL_OST_EROFS	       0x216
-#define OBD_FAIL_OST_ENOENT	      0x217
+#define OBD_FAIL_OST					0x200
+#define OBD_FAIL_OST_CONNECT_NET			0x201
+#define OBD_FAIL_OST_DISCONNECT_NET			0x202
+#define OBD_FAIL_OST_GET_INFO_NET			0x203
+#define OBD_FAIL_OST_CREATE_NET				0x204
+#define OBD_FAIL_OST_DESTROY_NET			0x205
+#define OBD_FAIL_OST_GETATTR_NET			0x206
+#define OBD_FAIL_OST_SETATTR_NET			0x207
+#define OBD_FAIL_OST_OPEN_NET				0x208
+#define OBD_FAIL_OST_CLOSE_NET				0x209
+#define OBD_FAIL_OST_BRW_NET				0x20a
+#define OBD_FAIL_OST_PUNCH_NET				0x20b
+#define OBD_FAIL_OST_STATFS_NET				0x20c
+#define OBD_FAIL_OST_HANDLE_UNPACK			0x20d
+#define OBD_FAIL_OST_BRW_WRITE_BULK			0x20e
+#define OBD_FAIL_OST_BRW_READ_BULK			0x20f
+#define OBD_FAIL_OST_SYNC_NET				0x210
+#define OBD_FAIL_OST_ALL_REPLY_NET			0x211
+#define OBD_FAIL_OST_ALL_REQUEST_NET			0x212
+#define OBD_FAIL_OST_LDLM_REPLY_NET			0x213
+#define OBD_FAIL_OST_BRW_PAUSE_BULK			0x214
+#define OBD_FAIL_OST_ENOSPC				0x215
+#define OBD_FAIL_OST_EROFS				0x216
+#define OBD_FAIL_OST_ENOENT				0x217
 /*	OBD_FAIL_OST_QUOTACHECK_NET      0x218 obsolete since 2.4 */
-#define OBD_FAIL_OST_QUOTACTL_NET	0x219
-#define OBD_FAIL_OST_CHECKSUM_RECEIVE    0x21a
-#define OBD_FAIL_OST_CHECKSUM_SEND       0x21b
-#define OBD_FAIL_OST_BRW_SIZE	    0x21c
-#define OBD_FAIL_OST_DROP_REQ	    0x21d
-#define OBD_FAIL_OST_SETATTR_CREDITS     0x21e
-#define OBD_FAIL_OST_HOLD_WRITE_RPC      0x21f
-#define OBD_FAIL_OST_BRW_WRITE_BULK2     0x220
-#define OBD_FAIL_OST_LLOG_RECOVERY_TIMEOUT 0x221
-#define OBD_FAIL_OST_CANCEL_COOKIE_TIMEOUT 0x222
-#define OBD_FAIL_OST_PAUSE_CREATE	0x223
-#define OBD_FAIL_OST_BRW_PAUSE_PACK      0x224
-#define OBD_FAIL_OST_CONNECT_NET2	0x225
-#define OBD_FAIL_OST_NOMEM	       0x226
-#define OBD_FAIL_OST_BRW_PAUSE_BULK2     0x227
-#define OBD_FAIL_OST_MAPBLK_ENOSPC       0x228
-#define OBD_FAIL_OST_ENOINO	      0x229
-#define OBD_FAIL_OST_DQACQ_NET	   0x230
-#define OBD_FAIL_OST_STATFS_EINPROGRESS  0x231
-#define OBD_FAIL_OST_SET_INFO_NET		0x232
-
-#define OBD_FAIL_LDLM		    0x300
-#define OBD_FAIL_LDLM_NAMESPACE_NEW      0x301
+#define OBD_FAIL_OST_QUOTACTL_NET			0x219
+#define OBD_FAIL_OST_CHECKSUM_RECEIVE			0x21a
+#define OBD_FAIL_OST_CHECKSUM_SEND			0x21b
+#define OBD_FAIL_OST_BRW_SIZE				0x21c
+#define OBD_FAIL_OST_DROP_REQ				0x21d
+#define OBD_FAIL_OST_SETATTR_CREDITS			0x21e
+#define OBD_FAIL_OST_HOLD_WRITE_RPC			0x21f
+#define OBD_FAIL_OST_BRW_WRITE_BULK2			0x220
+#define OBD_FAIL_OST_LLOG_RECOVERY_TIMEOUT		0x221
+#define OBD_FAIL_OST_CANCEL_COOKIE_TIMEOUT		0x222
+#define OBD_FAIL_OST_PAUSE_CREATE			0x223
+#define OBD_FAIL_OST_BRW_PAUSE_PACK			0x224
+#define OBD_FAIL_OST_CONNECT_NET2			0x225
+#define OBD_FAIL_OST_NOMEM				0x226
+#define OBD_FAIL_OST_BRW_PAUSE_BULK2			0x227
+#define OBD_FAIL_OST_MAPBLK_ENOSPC			0x228
+#define OBD_FAIL_OST_ENOINO				0x229
+#define OBD_FAIL_OST_DQACQ_NET				0x230
+#define OBD_FAIL_OST_STATFS_EINPROGRESS			0x231
+#define OBD_FAIL_OST_SET_INFO_NET			0x232
+
+#define OBD_FAIL_LDLM					0x300
+#define OBD_FAIL_LDLM_NAMESPACE_NEW			0x301
 #define OBD_FAIL_LDLM_ENQUEUE_NET			0x302
 #define OBD_FAIL_LDLM_CONVERT_NET			0x303
 #define OBD_FAIL_LDLM_CANCEL_NET			0x304
 #define OBD_FAIL_LDLM_BL_CALLBACK_NET			0x305
 #define OBD_FAIL_LDLM_CP_CALLBACK_NET			0x306
 #define OBD_FAIL_LDLM_GL_CALLBACK_NET			0x307
-#define OBD_FAIL_LDLM_ENQUEUE_EXTENT_ERR 0x308
-#define OBD_FAIL_LDLM_ENQUEUE_INTENT_ERR 0x309
-#define OBD_FAIL_LDLM_CREATE_RESOURCE    0x30a
-#define OBD_FAIL_LDLM_ENQUEUE_BLOCKED    0x30b
-#define OBD_FAIL_LDLM_REPLY	      0x30c
-#define OBD_FAIL_LDLM_RECOV_CLIENTS      0x30d
-#define OBD_FAIL_LDLM_ENQUEUE_OLD_EXPORT 0x30e
-#define OBD_FAIL_LDLM_GLIMPSE	    0x30f
-#define OBD_FAIL_LDLM_CANCEL_RACE	0x310
-#define OBD_FAIL_LDLM_CANCEL_EVICT_RACE  0x311
-#define OBD_FAIL_LDLM_PAUSE_CANCEL       0x312
-#define OBD_FAIL_LDLM_CLOSE_THREAD       0x313
-#define OBD_FAIL_LDLM_CANCEL_BL_CB_RACE  0x314
-#define OBD_FAIL_LDLM_CP_CB_WAIT	 0x315
-#define OBD_FAIL_LDLM_OST_FAIL_RACE      0x316
-#define OBD_FAIL_LDLM_INTR_CP_AST	0x317
-#define OBD_FAIL_LDLM_CP_BL_RACE	 0x318
-#define OBD_FAIL_LDLM_NEW_LOCK	   0x319
-#define OBD_FAIL_LDLM_AGL_DELAY	  0x31a
-#define OBD_FAIL_LDLM_AGL_NOLOCK	 0x31b
-#define OBD_FAIL_LDLM_OST_LVB		 0x31c
-#define OBD_FAIL_LDLM_ENQUEUE_HANG	 0x31d
-#define OBD_FAIL_LDLM_PAUSE_CANCEL2	 0x31f
-#define OBD_FAIL_LDLM_CP_CB_WAIT2	 0x320
-#define OBD_FAIL_LDLM_CP_CB_WAIT3	 0x321
-#define OBD_FAIL_LDLM_CP_CB_WAIT4	 0x322
-#define OBD_FAIL_LDLM_CP_CB_WAIT5	 0x323
-
-#define OBD_FAIL_LDLM_GRANT_CHECK        0x32a
+#define OBD_FAIL_LDLM_ENQUEUE_EXTENT_ERR		0x308
+#define OBD_FAIL_LDLM_ENQUEUE_INTENT_ERR		0x309
+#define OBD_FAIL_LDLM_CREATE_RESOURCE			0x30a
+#define OBD_FAIL_LDLM_ENQUEUE_BLOCKED			0x30b
+#define OBD_FAIL_LDLM_REPLY				0x30c
+#define OBD_FAIL_LDLM_RECOV_CLIENTS			0x30d
+#define OBD_FAIL_LDLM_ENQUEUE_OLD_EXPORT		0x30e
+#define OBD_FAIL_LDLM_GLIMPSE				0x30f
+#define OBD_FAIL_LDLM_CANCEL_RACE			0x310
+#define OBD_FAIL_LDLM_CANCEL_EVICT_RACE			0x311
+#define OBD_FAIL_LDLM_PAUSE_CANCEL			0x312
+#define OBD_FAIL_LDLM_CLOSE_THREAD			0x313
+#define OBD_FAIL_LDLM_CANCEL_BL_CB_RACE			0x314
+#define OBD_FAIL_LDLM_CP_CB_WAIT			0x315
+#define OBD_FAIL_LDLM_OST_FAIL_RACE			0x316
+#define OBD_FAIL_LDLM_INTR_CP_AST			0x317
+#define OBD_FAIL_LDLM_CP_BL_RACE			0x318
+#define OBD_FAIL_LDLM_NEW_LOCK				0x319
+#define OBD_FAIL_LDLM_AGL_DELAY				0x31a
+#define OBD_FAIL_LDLM_AGL_NOLOCK			0x31b
+#define OBD_FAIL_LDLM_OST_LVB				0x31c
+#define OBD_FAIL_LDLM_ENQUEUE_HANG			0x31d
+#define OBD_FAIL_LDLM_PAUSE_CANCEL2			0x31f
+#define OBD_FAIL_LDLM_CP_CB_WAIT2			0x320
+#define OBD_FAIL_LDLM_CP_CB_WAIT3			0x321
+#define OBD_FAIL_LDLM_CP_CB_WAIT4			0x322
+#define OBD_FAIL_LDLM_CP_CB_WAIT5			0x323
+
+#define OBD_FAIL_LDLM_GRANT_CHECK			0x32a
 
 /* LOCKLESS IO */
-#define OBD_FAIL_LDLM_SET_CONTENTION     0x385
-
-#define OBD_FAIL_OSC		     0x400
-#define OBD_FAIL_OSC_BRW_READ_BULK       0x401
-#define OBD_FAIL_OSC_BRW_WRITE_BULK      0x402
-#define OBD_FAIL_OSC_LOCK_BL_AST	 0x403
-#define OBD_FAIL_OSC_LOCK_CP_AST	 0x404
-#define OBD_FAIL_OSC_MATCH	       0x405
-#define OBD_FAIL_OSC_BRW_PREP_REQ	0x406
-#define OBD_FAIL_OSC_SHUTDOWN	    0x407
-#define OBD_FAIL_OSC_CHECKSUM_RECEIVE    0x408
-#define OBD_FAIL_OSC_CHECKSUM_SEND       0x409
-#define OBD_FAIL_OSC_BRW_PREP_REQ2       0x40a
+#define OBD_FAIL_LDLM_SET_CONTENTION			0x385
+
+#define OBD_FAIL_OSC					0x400
+#define OBD_FAIL_OSC_BRW_READ_BULK			0x401
+#define OBD_FAIL_OSC_BRW_WRITE_BULK			0x402
+#define OBD_FAIL_OSC_LOCK_BL_AST			0x403
+#define OBD_FAIL_OSC_LOCK_CP_AST			0x404
+#define OBD_FAIL_OSC_MATCH				0x405
+#define OBD_FAIL_OSC_BRW_PREP_REQ			0x406
+#define OBD_FAIL_OSC_SHUTDOWN				0x407
+#define OBD_FAIL_OSC_CHECKSUM_RECEIVE			0x408
+#define OBD_FAIL_OSC_CHECKSUM_SEND			0x409
+#define OBD_FAIL_OSC_BRW_PREP_REQ2			0x40a
 /* #define OBD_FAIL_OSC_CONNECT_CKSUM	 0x40b Obsolete since 2.9 */
-#define OBD_FAIL_OSC_CKSUM_ADLER_ONLY    0x40c
-#define OBD_FAIL_OSC_DIO_PAUSE	   0x40d
-#define OBD_FAIL_OSC_OBJECT_CONTENTION   0x40e
-#define OBD_FAIL_OSC_CP_CANCEL_RACE      0x40f
-#define OBD_FAIL_OSC_CP_ENQ_RACE	 0x410
-#define OBD_FAIL_OSC_NO_GRANT	    0x411
-#define OBD_FAIL_OSC_DELAY_SETTIME	 0x412
-#define OBD_FAIL_OSC_CONNECT_GRANT_PARAM 0x413
-#define OBD_FAIL_OSC_DELAY_IO		 0x414
-
-#define OBD_FAIL_PTLRPC		  0x500
-#define OBD_FAIL_PTLRPC_ACK	      0x501
-#define OBD_FAIL_PTLRPC_RQBD	     0x502
-#define OBD_FAIL_PTLRPC_BULK_GET_NET     0x503
-#define OBD_FAIL_PTLRPC_BULK_PUT_NET     0x504
-#define OBD_FAIL_PTLRPC_DROP_RPC	 0x505
-#define OBD_FAIL_PTLRPC_DELAY_SEND       0x506
-#define OBD_FAIL_PTLRPC_DELAY_RECOV      0x507
-#define OBD_FAIL_PTLRPC_CLIENT_BULK_CB   0x508
-#define OBD_FAIL_PTLRPC_PAUSE_REQ	0x50a
-#define OBD_FAIL_PTLRPC_PAUSE_REP	0x50c
-#define OBD_FAIL_PTLRPC_IMP_DEACTIVE     0x50d
-#define OBD_FAIL_PTLRPC_DUMP_LOG	 0x50e
-#define OBD_FAIL_PTLRPC_LONG_REPL_UNLINK 0x50f
-#define OBD_FAIL_PTLRPC_LONG_BULK_UNLINK 0x510
-#define OBD_FAIL_PTLRPC_HPREQ_TIMEOUT    0x511
-#define OBD_FAIL_PTLRPC_HPREQ_NOTIMEOUT  0x512
-#define OBD_FAIL_PTLRPC_DROP_REQ_OPC     0x513
-#define OBD_FAIL_PTLRPC_FINISH_REPLAY    0x514
-#define OBD_FAIL_PTLRPC_CLIENT_BULK_CB2  0x515
-#define OBD_FAIL_PTLRPC_DELAY_IMP_FULL   0x516
-#define OBD_FAIL_PTLRPC_CANCEL_RESEND    0x517
-#define OBD_FAIL_PTLRPC_DROP_BULK	 0x51a
-#define OBD_FAIL_PTLRPC_LONG_REQ_UNLINK	 0x51b
-#define OBD_FAIL_PTLRPC_LONG_BOTH_UNLINK 0x51c
-
-#define OBD_FAIL_OBD_PING_NET	    0x600
-#define OBD_FAIL_OBD_LOG_CANCEL_NET      0x601
-#define OBD_FAIL_OBD_LOGD_NET	    0x602
+#define OBD_FAIL_OSC_CKSUM_ADLER_ONLY			0x40c
+#define OBD_FAIL_OSC_DIO_PAUSE				0x40d
+#define OBD_FAIL_OSC_OBJECT_CONTENTION			0x40e
+#define OBD_FAIL_OSC_CP_CANCEL_RACE			0x40f
+#define OBD_FAIL_OSC_CP_ENQ_RACE			0x410
+#define OBD_FAIL_OSC_NO_GRANT				0x411
+#define OBD_FAIL_OSC_DELAY_SETTIME			0x412
+#define OBD_FAIL_OSC_CONNECT_GRANT_PARAM		0x413
+#define OBD_FAIL_OSC_DELAY_IO				0x414
+
+#define OBD_FAIL_PTLRPC					0x500
+#define OBD_FAIL_PTLRPC_ACK				0x501
+#define OBD_FAIL_PTLRPC_RQBD				0x502
+#define OBD_FAIL_PTLRPC_BULK_GET_NET			0x503
+#define OBD_FAIL_PTLRPC_BULK_PUT_NET			0x504
+#define OBD_FAIL_PTLRPC_DROP_RPC			0x505
+#define OBD_FAIL_PTLRPC_DELAY_SEND			0x506
+#define OBD_FAIL_PTLRPC_DELAY_RECOV			0x507
+#define OBD_FAIL_PTLRPC_CLIENT_BULK_CB			0x508
+#define OBD_FAIL_PTLRPC_PAUSE_REQ			0x50a
+#define OBD_FAIL_PTLRPC_PAUSE_REP			0x50c
+#define OBD_FAIL_PTLRPC_IMP_DEACTIVE			0x50d
+#define OBD_FAIL_PTLRPC_DUMP_LOG			0x50e
+#define OBD_FAIL_PTLRPC_LONG_REPL_UNLINK		0x50f
+#define OBD_FAIL_PTLRPC_LONG_BULK_UNLINK		0x510
+#define OBD_FAIL_PTLRPC_HPREQ_TIMEOUT			0x511
+#define OBD_FAIL_PTLRPC_HPREQ_NOTIMEOUT			0x512
+#define OBD_FAIL_PTLRPC_DROP_REQ_OPC			0x513
+#define OBD_FAIL_PTLRPC_FINISH_REPLAY			0x514
+#define OBD_FAIL_PTLRPC_CLIENT_BULK_CB2			0x515
+#define OBD_FAIL_PTLRPC_DELAY_IMP_FULL			0x516
+#define OBD_FAIL_PTLRPC_CANCEL_RESEND			0x517
+#define OBD_FAIL_PTLRPC_DROP_BULK			0x51a
+#define OBD_FAIL_PTLRPC_LONG_REQ_UNLINK			0x51b
+#define OBD_FAIL_PTLRPC_LONG_BOTH_UNLINK		0x51c
+
+#define OBD_FAIL_OBD_PING_NET				0x600
+#define OBD_FAIL_OBD_LOG_CANCEL_NET			0x601
+#define OBD_FAIL_OBD_LOGD_NET				0x602
 /*	OBD_FAIL_OBD_QC_CALLBACK_NET     0x603 obsolete since 2.4 */
-#define OBD_FAIL_OBD_DQACQ	       0x604
-#define OBD_FAIL_OBD_LLOG_SETUP	  0x605
-#define OBD_FAIL_OBD_LOG_CANCEL_REP      0x606
-#define OBD_FAIL_OBD_IDX_READ_NET	0x607
-#define OBD_FAIL_OBD_IDX_READ_BREAK	 0x608
-#define OBD_FAIL_OBD_NO_LRU		 0x609
-#define OBD_FAIL_OBDCLASS_MODULE_LOAD	 0x60a
-
-#define OBD_FAIL_TGT_REPLY_NET	   0x700
-#define OBD_FAIL_TGT_CONN_RACE	   0x701
-#define OBD_FAIL_TGT_FORCE_RECONNECT     0x702
-#define OBD_FAIL_TGT_DELAY_CONNECT       0x703
-#define OBD_FAIL_TGT_DELAY_RECONNECT     0x704
-#define OBD_FAIL_TGT_DELAY_PRECREATE     0x705
-#define OBD_FAIL_TGT_TOOMANY_THREADS     0x706
-#define OBD_FAIL_TGT_REPLAY_DROP	 0x707
-#define OBD_FAIL_TGT_FAKE_EXP	    0x708
-#define OBD_FAIL_TGT_REPLAY_DELAY	0x709
-#define OBD_FAIL_TGT_LAST_REPLAY	 0x710
-#define OBD_FAIL_TGT_CLIENT_ADD	  0x711
-#define OBD_FAIL_TGT_RCVG_FLAG	   0x712
-#define OBD_FAIL_TGT_DELAY_CONDITIONAL	 0x713
-
-#define OBD_FAIL_MDC_REVALIDATE_PAUSE    0x800
-#define OBD_FAIL_MDC_ENQUEUE_PAUSE       0x801
-#define OBD_FAIL_MDC_OLD_EXT_FLAGS       0x802
-#define OBD_FAIL_MDC_GETATTR_ENQUEUE     0x803
-#define OBD_FAIL_MDC_RPCS_SEM		 0x804
-#define OBD_FAIL_MDC_LIGHTWEIGHT	 0x805
-#define OBD_FAIL_MDC_CLOSE		 0x806
-
-#define OBD_FAIL_MGS		     0x900
-#define OBD_FAIL_MGS_ALL_REQUEST_NET     0x901
-#define OBD_FAIL_MGS_ALL_REPLY_NET       0x902
-#define OBD_FAIL_MGC_PAUSE_PROCESS_LOG   0x903
-#define OBD_FAIL_MGS_PAUSE_REQ	   0x904
-#define OBD_FAIL_MGS_PAUSE_TARGET_REG    0x905
-#define OBD_FAIL_MGS_CONNECT_NET	 0x906
-#define OBD_FAIL_MGS_DISCONNECT_NET	 0x907
-#define OBD_FAIL_MGS_SET_INFO_NET	 0x908
-#define OBD_FAIL_MGS_EXCEPTION_NET	 0x909
-#define OBD_FAIL_MGS_TARGET_REG_NET	 0x90a
-#define OBD_FAIL_MGS_TARGET_DEL_NET	 0x90b
-#define OBD_FAIL_MGS_CONFIG_READ_NET	 0x90c
+#define OBD_FAIL_OBD_DQACQ				0x604
+#define OBD_FAIL_OBD_LLOG_SETUP				0x605
+#define OBD_FAIL_OBD_LOG_CANCEL_REP			0x606
+#define OBD_FAIL_OBD_IDX_READ_NET			0x607
+#define OBD_FAIL_OBD_IDX_READ_BREAK			0x608
+#define OBD_FAIL_OBD_NO_LRU				0x609
+#define OBD_FAIL_OBDCLASS_MODULE_LOAD			0x60a
+
+#define OBD_FAIL_TGT_REPLY_NET				0x700
+#define OBD_FAIL_TGT_CONN_RACE				0x701
+#define OBD_FAIL_TGT_FORCE_RECONNECT			0x702
+#define OBD_FAIL_TGT_DELAY_CONNECT			0x703
+#define OBD_FAIL_TGT_DELAY_RECONNECT			0x704
+#define OBD_FAIL_TGT_DELAY_PRECREATE			0x705
+#define OBD_FAIL_TGT_TOOMANY_THREADS			0x706
+#define OBD_FAIL_TGT_REPLAY_DROP			0x707
+#define OBD_FAIL_TGT_FAKE_EXP				0x708
+#define OBD_FAIL_TGT_REPLAY_DELAY			0x709
+#define OBD_FAIL_TGT_LAST_REPLAY			0x710
+#define OBD_FAIL_TGT_CLIENT_ADD				0x711
+#define OBD_FAIL_TGT_RCVG_FLAG				0x712
+#define OBD_FAIL_TGT_DELAY_CONDITIONAL			0x713
+
+#define OBD_FAIL_MDC_REVALIDATE_PAUSE			0x800
+#define OBD_FAIL_MDC_ENQUEUE_PAUSE			0x801
+#define OBD_FAIL_MDC_OLD_EXT_FLAGS			0x802
+#define OBD_FAIL_MDC_GETATTR_ENQUEUE			0x803
+#define OBD_FAIL_MDC_RPCS_SEM				0x804
+#define OBD_FAIL_MDC_LIGHTWEIGHT			0x805
+#define OBD_FAIL_MDC_CLOSE				0x806
+
+#define OBD_FAIL_MGS					0x900
+#define OBD_FAIL_MGS_ALL_REQUEST_NET			0x901
+#define OBD_FAIL_MGS_ALL_REPLY_NET			0x902
+#define OBD_FAIL_MGC_PAUSE_PROCESS_LOG			0x903
+#define OBD_FAIL_MGS_PAUSE_REQ				0x904
+#define OBD_FAIL_MGS_PAUSE_TARGET_REG			0x905
+#define OBD_FAIL_MGS_CONNECT_NET			0x906
+#define OBD_FAIL_MGS_DISCONNECT_NET			0x907
+#define OBD_FAIL_MGS_SET_INFO_NET			0x908
+#define OBD_FAIL_MGS_EXCEPTION_NET			0x909
+#define OBD_FAIL_MGS_TARGET_REG_NET			0x90a
+#define OBD_FAIL_MGS_TARGET_DEL_NET			0x90b
+#define OBD_FAIL_MGS_CONFIG_READ_NET			0x90c
 
 #define OBD_FAIL_QUOTA_DQACQ_NET			0xA01
-#define OBD_FAIL_QUOTA_EDQUOT	    0xA02
-#define OBD_FAIL_QUOTA_DELAY_REINT       0xA03
-#define OBD_FAIL_QUOTA_RECOVERABLE_ERR   0xA04
-
-#define OBD_FAIL_LPROC_REMOVE	    0xB00
-
-#define OBD_FAIL_SEQ		     0x1000
-#define OBD_FAIL_SEQ_QUERY_NET	   0x1001
-#define OBD_FAIL_SEQ_EXHAUST		 0x1002
-
-#define OBD_FAIL_FLD		     0x1100
-#define OBD_FAIL_FLD_QUERY_NET	   0x1101
-#define OBD_FAIL_FLD_READ_NET		0x1102
-
-#define OBD_FAIL_SEC_CTX		 0x1200
-#define OBD_FAIL_SEC_CTX_INIT_NET	0x1201
-#define OBD_FAIL_SEC_CTX_INIT_CONT_NET   0x1202
-#define OBD_FAIL_SEC_CTX_FINI_NET	0x1203
-#define OBD_FAIL_SEC_CTX_HDL_PAUSE       0x1204
-
-#define OBD_FAIL_LLOG			       0x1300
-#define OBD_FAIL_LLOG_ORIGIN_CONNECT_NET	    0x1301
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_CREATE_NET      0x1302
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_DESTROY_NET     0x1303
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_READ_HEADER_NET 0x1304
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_NEXT_BLOCK_NET  0x1305
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_PREV_BLOCK_NET  0x1306
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_WRITE_REC_NET   0x1307
-#define OBD_FAIL_LLOG_ORIGIN_HANDLE_CLOSE_NET       0x1308
-#define OBD_FAIL_LLOG_CATINFO_NET		   0x1309
-#define OBD_FAIL_MDS_SYNC_CAPA_SL		   0x1310
-#define OBD_FAIL_SEQ_ALLOC			  0x1311
-
-#define OBD_FAIL_LLITE			      0x1400
-#define OBD_FAIL_LLITE_FAULT_TRUNC_RACE	     0x1401
-#define OBD_FAIL_LOCK_STATE_WAIT_INTR	       0x1402
-#define OBD_FAIL_LOV_INIT			    0x1403
-#define OBD_FAIL_GLIMPSE_DELAY			    0x1404
-#define OBD_FAIL_LLITE_XATTR_ENOMEM		    0x1405
-#define OBD_FAIL_MAKE_LOVEA_HOLE		    0x1406
-#define OBD_FAIL_LLITE_LOST_LAYOUT		    0x1407
-#define OBD_FAIL_GETATTR_DELAY			    0x1409
-#define OBD_FAIL_LLITE_CREATE_NODE_PAUSE	    0x140c
-#define OBD_FAIL_LLITE_IMUTEX_SEC		    0x140e
-#define OBD_FAIL_LLITE_IMUTEX_NOSEC		    0x140f
-
-#define OBD_FAIL_FID_INDIR	0x1501
-#define OBD_FAIL_FID_INLMA	0x1502
-#define OBD_FAIL_FID_IGIF	0x1504
-#define OBD_FAIL_FID_LOOKUP	0x1505
-#define OBD_FAIL_FID_NOLMA	0x1506
+#define OBD_FAIL_QUOTA_EDQUOT				0xA02
+#define OBD_FAIL_QUOTA_DELAY_REINT			0xA03
+#define OBD_FAIL_QUOTA_RECOVERABLE_ERR			0xA04
+
+#define OBD_FAIL_LPROC_REMOVE				0xB00
+
+#define OBD_FAIL_SEQ					0x1000
+#define OBD_FAIL_SEQ_QUERY_NET				0x1001
+#define OBD_FAIL_SEQ_EXHAUST				0x1002
+
+#define OBD_FAIL_FLD					0x1100
+#define OBD_FAIL_FLD_QUERY_NET				0x1101
+#define OBD_FAIL_FLD_READ_NET				0x1102
+
+#define OBD_FAIL_SEC_CTX				0x1200
+#define OBD_FAIL_SEC_CTX_INIT_NET			0x1201
+#define OBD_FAIL_SEC_CTX_INIT_CONT_NET			0x1202
+#define OBD_FAIL_SEC_CTX_FINI_NET			0x1203
+#define OBD_FAIL_SEC_CTX_HDL_PAUSE			0x1204
+
+#define OBD_FAIL_LLOG					0x1300
+#define OBD_FAIL_LLOG_ORIGIN_CONNECT_NET		0x1301
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_CREATE_NET		0x1302
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_DESTROY_NET		0x1303
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_READ_HEADER_NET	0x1304
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_NEXT_BLOCK_NET	0x1305
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_PREV_BLOCK_NET	0x1306
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_WRITE_REC_NET	0x1307
+#define OBD_FAIL_LLOG_ORIGIN_HANDLE_CLOSE_NET		0x1308
+#define OBD_FAIL_LLOG_CATINFO_NET			0x1309
+#define OBD_FAIL_MDS_SYNC_CAPA_SL			0x1310
+#define OBD_FAIL_SEQ_ALLOC				0x1311
+
+#define OBD_FAIL_LLITE					0x1400
+#define OBD_FAIL_LLITE_FAULT_TRUNC_RACE			0x1401
+#define OBD_FAIL_LOCK_STATE_WAIT_INTR			0x1402
+#define OBD_FAIL_LOV_INIT				0x1403
+#define OBD_FAIL_GLIMPSE_DELAY				0x1404
+#define OBD_FAIL_LLITE_XATTR_ENOMEM			0x1405
+#define OBD_FAIL_MAKE_LOVEA_HOLE			0x1406
+#define OBD_FAIL_LLITE_LOST_LAYOUT			0x1407
+#define OBD_FAIL_GETATTR_DELAY				0x1409
+#define OBD_FAIL_LLITE_CREATE_NODE_PAUSE		0x140c
+#define OBD_FAIL_LLITE_IMUTEX_SEC			0x140e
+#define OBD_FAIL_LLITE_IMUTEX_NOSEC			0x140f
+
+#define OBD_FAIL_FID_INDIR				0x1501
+#define OBD_FAIL_FID_INLMA				0x1502
+#define OBD_FAIL_FID_IGIF				0x1504
+#define OBD_FAIL_FID_LOOKUP				0x1505
+#define OBD_FAIL_FID_NOLMA				0x1506
 
 /* LFSCK */
-#define OBD_FAIL_LFSCK_DELAY1		0x1600
-#define OBD_FAIL_LFSCK_DELAY2		0x1601
-#define OBD_FAIL_LFSCK_DELAY3		0x1602
-#define OBD_FAIL_LFSCK_LINKEA_CRASH	0x1603
-#define OBD_FAIL_LFSCK_LINKEA_MORE	0x1604
-#define OBD_FAIL_LFSCK_LINKEA_MORE2	0x1605
-#define OBD_FAIL_LFSCK_FATAL1		0x1608
-#define OBD_FAIL_LFSCK_FATAL2		0x1609
-#define OBD_FAIL_LFSCK_CRASH		0x160a
-#define OBD_FAIL_LFSCK_NO_AUTO		0x160b
-#define OBD_FAIL_LFSCK_NO_DOUBLESCAN	0x160c
-#define OBD_FAIL_LFSCK_INVALID_PFID	0x1619
-#define OBD_FAIL_LFSCK_BAD_NAME_HASH	0x1628
+#define OBD_FAIL_LFSCK_DELAY1				0x1600
+#define OBD_FAIL_LFSCK_DELAY2				0x1601
+#define OBD_FAIL_LFSCK_DELAY3				0x1602
+#define OBD_FAIL_LFSCK_LINKEA_CRASH			0x1603
+#define OBD_FAIL_LFSCK_LINKEA_MORE			0x1604
+#define OBD_FAIL_LFSCK_LINKEA_MORE2			0x1605
+#define OBD_FAIL_LFSCK_FATAL1				0x1608
+#define OBD_FAIL_LFSCK_FATAL2				0x1609
+#define OBD_FAIL_LFSCK_CRASH				0x160a
+#define OBD_FAIL_LFSCK_NO_AUTO				0x160b
+#define OBD_FAIL_LFSCK_NO_DOUBLESCAN			0x160c
+#define OBD_FAIL_LFSCK_INVALID_PFID			0x1619
+#define OBD_FAIL_LFSCK_BAD_NAME_HASH			0x1628
 
 /* UPDATE */
-#define OBD_FAIL_UPDATE_OBJ_NET			0x1700
-#define OBD_FAIL_UPDATE_OBJ_NET_REP		0x1701
+#define OBD_FAIL_UPDATE_OBJ_NET				0x1700
+#define OBD_FAIL_UPDATE_OBJ_NET_REP			0x1701
 
 /* LMV */
-#define OBD_FAIL_UNKNOWN_LMV_STRIPE		0x1901
+#define OBD_FAIL_UNKNOWN_LMV_STRIPE			0x1901
 
 /* Assign references to moved code to reduce code changes */
-#define OBD_FAIL_PRECHECK(id)		   CFS_FAIL_PRECHECK(id)
-#define OBD_FAIL_CHECK(id)		      CFS_FAIL_CHECK(id)
-#define OBD_FAIL_CHECK_VALUE(id, value)	 CFS_FAIL_CHECK_VALUE(id, value)
-#define OBD_FAIL_CHECK_ORSET(id, value)	 CFS_FAIL_CHECK_ORSET(id, value)
-#define OBD_FAIL_CHECK_RESET(id, value)	 CFS_FAIL_CHECK_RESET(id, value)
+#define OBD_FAIL_PRECHECK(id)			CFS_FAIL_PRECHECK(id)
+#define OBD_FAIL_CHECK(id)			CFS_FAIL_CHECK(id)
+#define OBD_FAIL_CHECK_VALUE(id, value)		CFS_FAIL_CHECK_VALUE(id, value)
+#define OBD_FAIL_CHECK_ORSET(id, value)		CFS_FAIL_CHECK_ORSET(id, value)
+#define OBD_FAIL_CHECK_RESET(id, value)		CFS_FAIL_CHECK_RESET(id, value)
 #define OBD_FAIL_RETURN(id, ret)		CFS_FAIL_RETURN(id, ret)
-#define OBD_FAIL_TIMEOUT(id, secs)	      CFS_FAIL_TIMEOUT(id, secs)
-#define OBD_FAIL_TIMEOUT_MS(id, ms)	     CFS_FAIL_TIMEOUT_MS(id, ms)
-#define OBD_FAIL_TIMEOUT_ORSET(id, value, secs) CFS_FAIL_TIMEOUT_ORSET(id, value, secs)
-#define OBD_RACE(id)			    CFS_RACE(id)
-#define OBD_FAIL_ONCE			   CFS_FAIL_ONCE
-#define OBD_FAILED			      CFS_FAILED
+#define OBD_FAIL_TIMEOUT(id, secs)		CFS_FAIL_TIMEOUT(id, secs)
+#define OBD_FAIL_TIMEOUT_MS(id, ms)		CFS_FAIL_TIMEOUT_MS(id, ms)
+#define OBD_FAIL_TIMEOUT_ORSET(id, value, secs)	CFS_FAIL_TIMEOUT_ORSET(id, value, secs)
+#define OBD_RACE(id)				CFS_RACE(id)
+#define OBD_FAIL_ONCE				CFS_FAIL_ONCE
+#define OBD_FAILED				CFS_FAILED
 
 #ifdef CONFIG_DEBUG_SLAB
 #define POISON(ptr, c, s) do {} while (0)
@@ -495,22 +495,22 @@
 #endif
 
 #ifdef POISON_BULK
-#define POISON_PAGE(page, val) do {		  \
-	memset(kmap(page), val, PAGE_SIZE); \
-	kunmap(page);				  \
+#define POISON_PAGE(page, val) do {			\
+	memset(kmap(page), val, PAGE_SIZE);		\
+	kunmap(page);					\
 } while (0)
 #else
 #define POISON_PAGE(page, val) do { } while (0)
 #endif
 
-#define OBD_FREE_RCU(ptr, size, handle)					      \
-do {									      \
-	struct portals_handle *__h = (handle);				      \
-									      \
-	__h->h_cookie = (unsigned long)(ptr);				      \
-	__h->h_size = (size);						      \
-	call_rcu(&__h->h_rcu, class_handle_free_cb);			      \
-	POISON_PTR(ptr);						      \
+#define OBD_FREE_RCU(ptr, size, handle)			\
+do {							\
+	struct portals_handle *__h = (handle);		\
+							\
+	__h->h_cookie = (unsigned long)(ptr);		\
+	__h->h_size = (size);				\
+	call_rcu(&__h->h_rcu, class_handle_free_cb);	\
+	POISON_PTR(ptr);				\
 } while (0)
 
 #define KEY_IS(str)					\
diff --git a/drivers/staging/lustre/lustre/include/seq_range.h b/drivers/staging/lustre/lustre/include/seq_range.h
index 9450da72..884d4d4 100644
--- a/drivers/staging/lustre/lustre/include/seq_range.h
+++ b/drivers/staging/lustre/lustre/include/seq_range.h
@@ -176,7 +176,7 @@ static inline int lu_seq_range_compare_loc(const struct lu_seq_range *r1,
 					   const struct lu_seq_range *r2)
 {
 	return r1->lsr_index != r2->lsr_index ||
-		r1->lsr_flags != r2->lsr_flags;
+	       r1->lsr_flags != r2->lsr_flags;
 }
 
 #if !defined(__REQ_LAYOUT_USER__)
-- 
1.8.3.1


* [lustre-devel] [PATCH 23/26] libcfs: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (21 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 22/26] lustre: last " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 24/26] lnet: " James Simmons
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The libcfs code is very messy and difficult to read. Remove excess
white space and properly align data structures so they are easy on
the eyes.

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../lustre/include/linux/libcfs/libcfs_debug.h     |  74 ++++++------
 .../lustre/include/linux/libcfs/libcfs_fail.h      |   8 +-
 .../lustre/include/linux/libcfs/libcfs_private.h   |  56 ++++-----
 .../lustre/include/linux/libcfs/libcfs_string.h    |  10 +-
 drivers/staging/lustre/lnet/libcfs/debug.c         |  22 ++--
 drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c    |   8 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_string.c |   4 +-
 .../lustre/lnet/libcfs/linux-crypto-adler.c        |   2 +-
 drivers/staging/lustre/lnet/libcfs/linux-crypto.c  |   1 -
 drivers/staging/lustre/lnet/libcfs/module.c        | 128 ++++++++++-----------
 drivers/staging/lustre/lnet/libcfs/tracefile.c     |   5 +-
 drivers/staging/lustre/lnet/libcfs/tracefile.h     |  12 +-
 12 files changed, 164 insertions(+), 166 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_debug.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_debug.h
index 27a3b12..3fd6ad7 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_debug.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_debug.h
@@ -67,28 +67,28 @@
 /* Enable debug-checks on stack size - except on x86_64 */
 #if !defined(__x86_64__)
 # ifdef __ia64__
-#  define CDEBUG_STACK() (THREAD_SIZE -				 \
+#  define CDEBUG_STACK() (THREAD_SIZE -					\
 			  ((unsigned long)__builtin_dwarf_cfa() &       \
 			   (THREAD_SIZE - 1)))
 # else
-#  define CDEBUG_STACK() (THREAD_SIZE -				 \
+#  define CDEBUG_STACK() (THREAD_SIZE -					\
 			  ((unsigned long)__builtin_frame_address(0) &  \
 			   (THREAD_SIZE - 1)))
 # endif /* __ia64__ */
 
-#define __CHECK_STACK(msgdata, mask, cdls)			      \
-do {								    \
-	if (unlikely(CDEBUG_STACK() > libcfs_stack)) {		  \
+#define __CHECK_STACK(msgdata, mask, cdls)				\
+do {									\
+	if (unlikely(CDEBUG_STACK() > libcfs_stack)) {			\
 		LIBCFS_DEBUG_MSG_DATA_INIT(msgdata, D_WARNING, NULL);   \
-		libcfs_stack = CDEBUG_STACK();			  \
-		libcfs_debug_msg(msgdata,			       \
-				 "maximum lustre stack %lu\n",	  \
-				 CDEBUG_STACK());		       \
-		(msgdata)->msg_mask = mask;			     \
-		(msgdata)->msg_cdls = cdls;			     \
-		dump_stack();					   \
+		libcfs_stack = CDEBUG_STACK();				\
+		libcfs_debug_msg(msgdata,				\
+				 "maximum lustre stack %lu\n",		\
+				 CDEBUG_STACK());			\
+		(msgdata)->msg_mask = mask;				\
+		(msgdata)->msg_cdls = cdls;				\
+		dump_stack();						\
 	      /*panic("LBUG");*/					\
-	}							       \
+	}								\
 } while (0)
 #define CFS_CHECK_STACK(msgdata, mask, cdls)  __CHECK_STACK(msgdata, mask, cdls)
 #else /* __x86_64__ */
@@ -104,37 +104,37 @@
 #define CDEBUG_DEFAULT_MIN_DELAY ((HZ + 1) / 2) /* jiffies */
 #define CDEBUG_DEFAULT_BACKOFF   2
 struct cfs_debug_limit_state {
-	unsigned long   cdls_next;
-	unsigned int cdls_delay;
-	int	     cdls_count;
+	unsigned long			cdls_next;
+	unsigned int			cdls_delay;
+	int				cdls_count;
 };
 
 struct libcfs_debug_msg_data {
-	const char *msg_file;
-	const char *msg_fn;
-	int	    msg_subsys;
-	int	    msg_line;
-	int	    msg_mask;
-	struct cfs_debug_limit_state *msg_cdls;
+	const char		       *msg_file;
+	const char		       *msg_fn;
+	int				msg_subsys;
+	int				msg_line;
+	int				msg_mask;
+	struct cfs_debug_limit_state   *msg_cdls;
 };
 
-#define LIBCFS_DEBUG_MSG_DATA_INIT(data, mask, cdls)		\
-do {								\
-	(data)->msg_subsys = DEBUG_SUBSYSTEM;			\
-	(data)->msg_file   = __FILE__;				\
-	(data)->msg_fn     = __func__;				\
-	(data)->msg_line   = __LINE__;				\
-	(data)->msg_cdls   = (cdls);				\
-	(data)->msg_mask   = (mask);				\
+#define LIBCFS_DEBUG_MSG_DATA_INIT(data, mask, cdls)			\
+do {									\
+	(data)->msg_subsys = DEBUG_SUBSYSTEM;				\
+	(data)->msg_file   = __FILE__;					\
+	(data)->msg_fn     = __func__;					\
+	(data)->msg_line   = __LINE__;					\
+	(data)->msg_cdls   = (cdls);					\
+	(data)->msg_mask   = (mask);					\
 } while (0)
 
-#define LIBCFS_DEBUG_MSG_DATA_DECL(dataname, mask, cdls)	\
-	static struct libcfs_debug_msg_data dataname = {	\
-	       .msg_subsys = DEBUG_SUBSYSTEM,			\
-	       .msg_file   = __FILE__,				\
-	       .msg_fn     = __func__,				\
-	       .msg_line   = __LINE__,				\
-	       .msg_cdls   = (cdls)	 };			\
+#define LIBCFS_DEBUG_MSG_DATA_DECL(dataname, mask, cdls)		\
+	static struct libcfs_debug_msg_data dataname = {		\
+	       .msg_subsys = DEBUG_SUBSYSTEM,				\
+	       .msg_file   = __FILE__,					\
+	       .msg_fn     = __func__,					\
+	       .msg_line   = __LINE__,					\
+	       .msg_cdls   = (cdls)	 };				\
 	dataname.msg_mask   = (mask)
 
 /**
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_fail.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_fail.h
index 8074e39..4a41978 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_fail.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_fail.h
@@ -54,14 +54,14 @@ enum {
 };
 
 /* Failure injection control */
-#define CFS_FAIL_MASK_SYS    0x0000FF00
-#define CFS_FAIL_MASK_LOC   (0x000000FF | CFS_FAIL_MASK_SYS)
+#define CFS_FAIL_MASK_SYS	0x0000FF00
+#define CFS_FAIL_MASK_LOC	(0x000000FF | CFS_FAIL_MASK_SYS)
 
-#define CFS_FAILED_BIT       30
+#define CFS_FAILED_BIT		30
 /* CFS_FAILED is 0x40000000 */
 #define CFS_FAILED		BIT(CFS_FAILED_BIT)
 
-#define CFS_FAIL_ONCE_BIT    31
+#define CFS_FAIL_ONCE_BIT	31
 /* CFS_FAIL_ONCE is 0x80000000 */
 #define CFS_FAIL_ONCE		BIT(CFS_FAIL_ONCE_BIT)
 
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_private.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_private.h
index 491d597..d525e4f 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_private.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_private.h
@@ -69,9 +69,9 @@
 
 void __noreturn lbug_with_loc(struct libcfs_debug_msg_data *msg);
 
-#define LBUG()							  \
-do {								    \
-	LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_EMERG, NULL);	     \
+#define LBUG()								\
+do {									\
+	LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_EMERG, NULL);		\
 	lbug_with_loc(&msgdata);					\
 } while (0)
 
@@ -86,7 +86,7 @@
 	kmalloc_node(size, flags | __GFP_ZERO,				\
 		     cfs_cpt_spread_node(lnet_cpt_table(), cpt))
 
-#define kvmalloc_cpt(size, flags, cpt) \
+#define kvmalloc_cpt(size, flags, cpt)					\
 	kvmalloc_node(size, flags,					\
 		      cfs_cpt_spread_node(lnet_cpt_table(), cpt))
 
@@ -138,50 +138,50 @@
 	LASSERTF(atomic_read(a) >= v, "value: %d\n", atomic_read((a)))
 
 /** assert value of @a is great than @v1 and little than @v2 */
-#define LASSERT_ATOMIC_GT_LT(a, v1, v2)			 \
-do {							    \
-	int __v = atomic_read(a);			   \
+#define LASSERT_ATOMIC_GT_LT(a, v1, v2)				\
+do {								\
+	int __v = atomic_read(a);				\
 	LASSERTF(__v > v1 && __v < v2, "value: %d\n", __v);     \
 } while (0)
 
 /** assert value of @a is great than @v1 and little/equal to @v2 */
-#define LASSERT_ATOMIC_GT_LE(a, v1, v2)			 \
-do {							    \
-	int __v = atomic_read(a);			   \
+#define LASSERT_ATOMIC_GT_LE(a, v1, v2)				\
+do {								\
+	int __v = atomic_read(a);				\
 	LASSERTF(__v > v1 && __v <= v2, "value: %d\n", __v);    \
 } while (0)
 
 /** assert value of @a is great/equal to @v1 and little than @v2 */
-#define LASSERT_ATOMIC_GE_LT(a, v1, v2)			 \
-do {							    \
-	int __v = atomic_read(a);			   \
+#define LASSERT_ATOMIC_GE_LT(a, v1, v2)				\
+do {								\
+	int __v = atomic_read(a);				\
 	LASSERTF(__v >= v1 && __v < v2, "value: %d\n", __v);    \
 } while (0)
 
 /** assert value of @a is great/equal to @v1 and little/equal to @v2 */
-#define LASSERT_ATOMIC_GE_LE(a, v1, v2)			 \
-do {							    \
-	int __v = atomic_read(a);			   \
+#define LASSERT_ATOMIC_GE_LE(a, v1, v2)				\
+do {								\
+	int __v = atomic_read(a);				\
 	LASSERTF(__v >= v1 && __v <= v2, "value: %d\n", __v);   \
 } while (0)
 
 #else /* !LASSERT_ATOMIC_ENABLED */
 
-#define LASSERT_ATOMIC_EQ(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_NE(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_LT(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_LE(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_GT(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_GE(a, v)		 do {} while (0)
-#define LASSERT_ATOMIC_GT_LT(a, v1, v2)	 do {} while (0)
-#define LASSERT_ATOMIC_GT_LE(a, v1, v2)	 do {} while (0)
-#define LASSERT_ATOMIC_GE_LT(a, v1, v2)	 do {} while (0)
-#define LASSERT_ATOMIC_GE_LE(a, v1, v2)	 do {} while (0)
+#define LASSERT_ATOMIC_EQ(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_NE(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_LT(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_LE(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_GT(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_GE(a, v)		do {} while (0)
+#define LASSERT_ATOMIC_GT_LT(a, v1, v2)	do {} while (0)
+#define LASSERT_ATOMIC_GT_LE(a, v1, v2)	do {} while (0)
+#define LASSERT_ATOMIC_GE_LT(a, v1, v2)	do {} while (0)
+#define LASSERT_ATOMIC_GE_LE(a, v1, v2)	do {} while (0)
 
 #endif /* LASSERT_ATOMIC_ENABLED */
 
-#define LASSERT_ATOMIC_ZERO(a)		  LASSERT_ATOMIC_EQ(a, 0)
-#define LASSERT_ATOMIC_POS(a)		   LASSERT_ATOMIC_GT(a, 0)
+#define LASSERT_ATOMIC_ZERO(a)		LASSERT_ATOMIC_EQ(a, 0)
+#define LASSERT_ATOMIC_POS(a)		LASSERT_ATOMIC_GT(a, 0)
 
 /* implication */
 #define ergo(a, b) (!(a) || (b))
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_string.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_string.h
index 3117708..f2ac9dc 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_string.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_string.h
@@ -53,8 +53,8 @@ int cfs_str2mask(const char *str, const char *(*bit2str)(int bit),
  * Structure to represent NULL-less strings.
  */
 struct cfs_lstr {
-	char		*ls_str;
-	int		ls_len;
+	char			*ls_str;
+	int			ls_len;
 };
 
 /*
@@ -65,9 +65,9 @@ struct cfs_range_expr {
 	 * Link to cfs_expr_list::el_exprs.
 	 */
 	struct list_head	re_link;
-	u32		re_lo;
-	u32		re_hi;
-	u32		re_stride;
+	u32			re_lo;
+	u32			re_hi;
+	u32			re_stride;
 };
 
 struct cfs_expr_list {
diff --git a/drivers/staging/lustre/lnet/libcfs/debug.c b/drivers/staging/lustre/lnet/libcfs/debug.c
index f954436..b7f0c73 100644
--- a/drivers/staging/lustre/lnet/libcfs/debug.c
+++ b/drivers/staging/lustre/lnet/libcfs/debug.c
@@ -85,8 +85,8 @@ static int libcfs_param_debug_mb_set(const char *val,
  * debug_mb parameter type with corresponding methods to handle this case
  */
 static const struct kernel_param_ops param_ops_debug_mb = {
-	.set = libcfs_param_debug_mb_set,
-	.get = param_get_uint,
+	.set		= libcfs_param_debug_mb_set,
+	.get		= param_get_uint,
 };
 
 #define param_check_debug_mb(name, p) \
@@ -143,8 +143,8 @@ static int param_set_console_max_delay(const char *val,
 }
 
 static const struct kernel_param_ops param_ops_console_max_delay = {
-	.set = param_set_console_max_delay,
-	.get = param_get_delay,
+	.set		= param_set_console_max_delay,
+	.get		= param_get_delay,
 };
 
 #define param_check_console_max_delay(name, p) \
@@ -161,8 +161,8 @@ static int param_set_console_min_delay(const char *val,
 }
 
 static const struct kernel_param_ops param_ops_console_min_delay = {
-	.set = param_set_console_min_delay,
-	.get = param_get_delay,
+	.set		= param_set_console_min_delay,
+	.get		= param_get_delay,
 };
 
 #define param_check_console_min_delay(name, p) \
@@ -195,8 +195,8 @@ static int param_set_uintpos(const char *val, const struct kernel_param *kp)
 }
 
 static const struct kernel_param_ops param_ops_uintpos = {
-	.set = param_set_uintpos,
-	.get = param_get_uint,
+	.set		= param_set_uintpos,
+	.get		= param_get_uint,
 };
 
 #define param_check_uintpos(name, p) \
@@ -499,9 +499,9 @@ static int panic_notifier(struct notifier_block *self, unsigned long unused1,
 }
 
 static struct notifier_block libcfs_panic_notifier = {
-	.notifier_call	= panic_notifier,
-	.next		= NULL,
-	.priority	= 10000,
+	.notifier_call		= panic_notifier,
+	.next			= NULL,
+	.priority		= 10000,
 };
 
 static void libcfs_register_panic_notifier(void)
diff --git a/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c b/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c
index a384a73..262469f 100644
--- a/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c
+++ b/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c
@@ -76,7 +76,7 @@ struct cfs_cpt_table {
 };
 
 /** Global CPU partition table */
-struct cfs_cpt_table   *cfs_cpt_tab __read_mostly;
+struct cfs_cpt_table *cfs_cpt_tab __read_mostly;
 EXPORT_SYMBOL(cfs_cpt_tab);
 
 /**
@@ -86,7 +86,7 @@ struct cfs_cpt_table {
  *  1 : disable multiple partitions
  * >1 : specify number of partitions
  */
-static int	cpu_npartitions;
+static int cpu_npartitions;
 module_param(cpu_npartitions, int, 0444);
 MODULE_PARM_DESC(cpu_npartitions, "# of CPU partitions");
 
@@ -103,7 +103,7 @@ struct cfs_cpt_table {
  *
  * NB: If user specified cpu_pattern, cpu_npartitions will be ignored
  */
-static char	*cpu_pattern = "N";
+static char *cpu_pattern = "N";
 module_param(cpu_pattern, charp, 0444);
 MODULE_PARM_DESC(cpu_pattern, "CPU partitions pattern");
 
@@ -167,7 +167,7 @@ struct cfs_cpt_table *cfs_cpt_table_alloc(unsigned int ncpt)
 						    GFP_KERNEL);
 		if (!part->cpt_distance) {
 			kfree(part->cpt_nodemask);
-		failed_setting_one_part:
+failed_setting_one_part:
 			free_cpumask_var(part->cpt_cpumask);
 			goto failed_setting_ctb_parts;
 		}
diff --git a/drivers/staging/lustre/lnet/libcfs/libcfs_string.c b/drivers/staging/lustre/lnet/libcfs/libcfs_string.c
index e1fb126..5fb8524 100644
--- a/drivers/staging/lustre/lnet/libcfs/libcfs_string.c
+++ b/drivers/staging/lustre/lnet/libcfs/libcfs_string.c
@@ -314,11 +314,11 @@ char *cfs_firststr(char *str, size_t size)
 		}
 	}
 
- out:
+out:
 	*expr = re;
 	return 0;
 
- failed:
+failed:
 	kfree(re);
 	return -EINVAL;
 }
diff --git a/drivers/staging/lustre/lnet/libcfs/linux-crypto-adler.c b/drivers/staging/lustre/lnet/libcfs/linux-crypto-adler.c
index db81ed52..d3da7a22 100644
--- a/drivers/staging/lustre/lnet/libcfs/linux-crypto-adler.c
+++ b/drivers/staging/lustre/lnet/libcfs/linux-crypto-adler.c
@@ -104,7 +104,7 @@ static int adler32_digest(struct shash_desc *desc, const u8 *data,
 			  unsigned int len, u8 *out)
 {
 	return __adler32_finup(crypto_shash_ctx(desc->tfm), data, len,
-				    out);
+			       out);
 }
 
 static struct shash_alg alg = {
diff --git a/drivers/staging/lustre/lnet/libcfs/linux-crypto.c b/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
index b206e3c..a0b1377 100644
--- a/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
+++ b/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
@@ -205,7 +205,6 @@ struct ahash_request *
 	const struct cfs_crypto_hash_type *type;
 
 	err = cfs_crypto_hash_alloc(hash_alg, &type, &req, key, key_len);
-
 	if (err)
 		return ERR_PTR(err);
 	return req;
diff --git a/drivers/staging/lustre/lnet/libcfs/module.c b/drivers/staging/lustre/lnet/libcfs/module.c
index 1de83b1..dd9a953 100644
--- a/drivers/staging/lustre/lnet/libcfs/module.c
+++ b/drivers/staging/lustre/lnet/libcfs/module.c
@@ -60,8 +60,8 @@
 #include "tracefile.h"
 
 struct lnet_debugfs_symlink_def {
-	const char *name;
-	const char *target;
+	const char	*name;
+	const char	*target;
 };
 
 static struct dentry *lnet_debugfs_root;
@@ -281,14 +281,14 @@ static int libcfs_ioctl(unsigned long cmd, void __user *uparam)
 }
 
 static const struct file_operations libcfs_fops = {
-	.owner		= THIS_MODULE,
-	.unlocked_ioctl	= libcfs_psdev_ioctl,
+	.owner			= THIS_MODULE,
+	.unlocked_ioctl		= libcfs_psdev_ioctl,
 };
 
 static struct miscdevice libcfs_dev = {
-	.minor = MISC_DYNAMIC_MINOR,
-	.name = "lnet",
-	.fops = &libcfs_fops,
+	.minor			= MISC_DYNAMIC_MINOR,
+	.name			= "lnet",
+	.fops			= &libcfs_fops,
 };
 
 static int libcfs_dev_registered;
@@ -423,7 +423,7 @@ static int proc_cpt_table(struct ctl_table *table, int write,
 	}
 
 	rc = cfs_trace_copyout_string(buffer, nob, buf + pos, NULL);
- out:
+out:
 	kfree(buf);
 	return rc;
 }
@@ -472,84 +472,84 @@ static int proc_cpt_distance(struct ctl_table *table, int write,
 
 static struct ctl_table lnet_table[] = {
 	{
-		.procname = "debug",
-		.data     = &libcfs_debug,
-		.maxlen   = sizeof(int),
-		.mode     = 0644,
-		.proc_handler = &proc_dobitmasks,
+		.procname	= "debug",
+		.data		= &libcfs_debug,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dobitmasks,
 	},
 	{
-		.procname = "subsystem_debug",
-		.data     = &libcfs_subsystem_debug,
-		.maxlen   = sizeof(int),
-		.mode     = 0644,
-		.proc_handler = &proc_dobitmasks,
+		.procname	= "subsystem_debug",
+		.data		= &libcfs_subsystem_debug,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dobitmasks,
 	},
 	{
-		.procname = "printk",
-		.data     = &libcfs_printk,
-		.maxlen   = sizeof(int),
-		.mode     = 0644,
-		.proc_handler = &proc_dobitmasks,
+		.procname	= "printk",
+		.data		= &libcfs_printk,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dobitmasks,
 	},
 	{
-		.procname = "cpu_partition_table",
-		.maxlen   = 128,
-		.mode     = 0444,
-		.proc_handler = &proc_cpt_table,
+		.procname	= "cpu_partition_table",
+		.maxlen		= 128,
+		.mode		= 0444,
+		.proc_handler	= &proc_cpt_table,
 	},
 	{
-		.procname = "cpu_partition_distance",
-		.maxlen	  = 128,
-		.mode	  = 0444,
-		.proc_handler = &proc_cpt_distance,
+		.procname	= "cpu_partition_distance",
+		.maxlen		= 128,
+		.mode		= 0444,
+		.proc_handler	= &proc_cpt_distance,
 	},
 	{
-		.procname = "debug_log_upcall",
-		.data     = lnet_debug_log_upcall,
-		.maxlen   = sizeof(lnet_debug_log_upcall),
-		.mode     = 0644,
-		.proc_handler = &proc_dostring,
+		.procname	= "debug_log_upcall",
+		.data		= lnet_debug_log_upcall,
+		.maxlen		= sizeof(lnet_debug_log_upcall),
+		.mode		= 0644,
+		.proc_handler	= &proc_dostring,
 	},
 	{
-		.procname = "catastrophe",
-		.data     = &libcfs_catastrophe,
-		.maxlen   = sizeof(int),
-		.mode     = 0444,
-		.proc_handler = &proc_dointvec,
+		.procname	= "catastrophe",
+		.data		= &libcfs_catastrophe,
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= &proc_dointvec,
 	},
 	{
-		.procname = "dump_kernel",
-		.maxlen   = 256,
-		.mode     = 0200,
-		.proc_handler = &proc_dump_kernel,
+		.procname	= "dump_kernel",
+		.maxlen		= 256,
+		.mode		= 0200,
+		.proc_handler	= &proc_dump_kernel,
 	},
 	{
-		.procname = "daemon_file",
-		.mode     = 0644,
-		.maxlen   = 256,
-		.proc_handler = &proc_daemon_file,
+		.procname	= "daemon_file",
+		.mode		= 0644,
+		.maxlen		= 256,
+		.proc_handler	= &proc_daemon_file,
 	},
 	{
-		.procname = "force_lbug",
-		.data     = NULL,
-		.maxlen   = 0,
-		.mode     = 0200,
-		.proc_handler = &libcfs_force_lbug
+		.procname	= "force_lbug",
+		.data		= NULL,
+		.maxlen		= 0,
+		.mode		= 0200,
+		.proc_handler	= &libcfs_force_lbug
 	},
 	{
-		.procname = "fail_loc",
-		.data     = &cfs_fail_loc,
-		.maxlen   = sizeof(cfs_fail_loc),
-		.mode     = 0644,
-		.proc_handler = &proc_fail_loc
+		.procname	= "fail_loc",
+		.data		= &cfs_fail_loc,
+		.maxlen		= sizeof(cfs_fail_loc),
+		.mode		= 0644,
+		.proc_handler	= &proc_fail_loc
 	},
 	{
-		.procname = "fail_val",
-		.data     = &cfs_fail_val,
-		.maxlen   = sizeof(int),
-		.mode     = 0644,
-		.proc_handler = &proc_dointvec
+		.procname	= "fail_val",
+		.data		= &cfs_fail_val,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec
 	},
 	{
 		.procname	= "fail_err",
diff --git a/drivers/staging/lustre/lnet/libcfs/tracefile.c b/drivers/staging/lustre/lnet/libcfs/tracefile.c
index 22b1dd0..40440ce 100644
--- a/drivers/staging/lustre/lnet/libcfs/tracefile.c
+++ b/drivers/staging/lustre/lnet/libcfs/tracefile.c
@@ -331,7 +331,6 @@ static struct cfs_trace_page *cfs_trace_get_tage(struct cfs_trace_cpu_data *tcd,
 	 * XXX nikita: do NOT call portals_debug_msg() (CDEBUG/ENTRY/EXIT)
 	 * from here: this will lead to infinite recursion.
 	 */
-
 	if (len > PAGE_SIZE) {
 		pr_err("cowardly refusing to write %lu bytes in a page\n", len);
 		return NULL;
@@ -489,7 +488,7 @@ int libcfs_debug_vmsg2(struct libcfs_debug_msg_data *msgdata,
 		}
 
 		string_buf = (char *)page_address(tage->page) +
-					tage->used + known_size;
+			     tage->used + known_size;
 
 		max_nob = PAGE_SIZE - tage->used - known_size;
 		if (max_nob <= 0) {
@@ -746,7 +745,7 @@ static void put_pages_back(struct page_collection *pc)
 		put_pages_back_on_all_cpus(pc);
 }
 
-/* Add pages to a per-cpu debug daemon ringbuffer.  This buffer makes sure that
+/* Add pages to a per-cpu debug daemon ringbuffer. This buffer makes sure that
  * we have a good amount of data at all times for dumping during an LBUG, even
  * if we have been steadily writing (and otherwise discarding) pages via the
  * debug daemon.
diff --git a/drivers/staging/lustre/lnet/libcfs/tracefile.h b/drivers/staging/lustre/lnet/libcfs/tracefile.h
index 2134549..71a031d1 100644
--- a/drivers/staging/lustre/lnet/libcfs/tracefile.h
+++ b/drivers/staging/lustre/lnet/libcfs/tracefile.h
@@ -165,12 +165,12 @@ void cfs_trace_assertion_failed(const char *str,
 	}								\
 } while (0)
 
-#define __LASSERT_TAGE_INVARIANT(tage)			\
-do {							\
-	__LASSERT(tage);				\
-	__LASSERT(tage->page);				\
-	__LASSERT(tage->used <= PAGE_SIZE);		\
-	__LASSERT(page_count(tage->page) > 0);		\
+#define __LASSERT_TAGE_INVARIANT(tage)					\
+do {									\
+	__LASSERT(tage);						\
+	__LASSERT(tage->page);						\
+	__LASSERT(tage->used <= PAGE_SIZE);				\
+	__LASSERT(page_count(tage->page) > 0);				\
 } while (0)
 
 #endif /* __LIBCFS_TRACEFILE_H__ */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 24/26] lnet: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (22 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 23/26] libcfs: cleanup white spaces James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-02-04  3:13   ` NeilBrown
  2019-01-31 17:19 ` [lustre-devel] [PATCH 25/26] socklnd: " James Simmons
                   ` (2 subsequent siblings)
  26 siblings, 1 reply; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The LNet core code is very messy and difficult to read. Remove
excess white space and properly align data structures so they
are easy on the eyes.
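
For illustration only (this structure is hypothetical and not part of the
patch), the alignment convention applied throughout is tab-aligned member
names within a declaration, with the pointer '*' kept adjacent to the name
so the names share a column:

	struct example_stats {
		unsigned long		es_enq_seq;	/* member names share a column */
		unsigned int		es_size;
		void		       *es_buffer;	/* '*' hugs the name */
		int		      **es_refs;	/* extra '*' shifts it left */
	};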

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/include/linux/lnet/api.h    |  32 ++--
 .../staging/lustre/include/linux/lnet/lib-lnet.h   |   3 +-
 .../staging/lustre/include/linux/lnet/lib-types.h  | 192 ++++++++++-----------
 drivers/staging/lustre/lnet/lnet/acceptor.c        |  16 +-
 drivers/staging/lustre/lnet/lnet/api-ni.c          |  89 +++++-----
 drivers/staging/lustre/lnet/lnet/config.c          |  38 ++--
 drivers/staging/lustre/lnet/lnet/lib-eq.c          |   2 +-
 drivers/staging/lustre/lnet/lnet/lib-move.c        |  84 +++++----
 drivers/staging/lustre/lnet/lnet/lib-msg.c         |  72 ++++----
 drivers/staging/lustre/lnet/lnet/lib-ptl.c         |  22 +--
 drivers/staging/lustre/lnet/lnet/lib-socket.c      |  11 +-
 drivers/staging/lustre/lnet/lnet/module.c          |   3 +-
 drivers/staging/lustre/lnet/lnet/net_fault.c       |   1 -
 drivers/staging/lustre/lnet/lnet/nidstrings.c      |  40 ++---
 drivers/staging/lustre/lnet/lnet/peer.c            |  37 ++--
 drivers/staging/lustre/lnet/lnet/router.c          |  26 +--
 drivers/staging/lustre/lnet/lnet/router_proc.c     |  61 ++++---
 17 files changed, 361 insertions(+), 368 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/lnet/api.h b/drivers/staging/lustre/include/linux/lnet/api.h
index 7c30475..8d37509 100644
--- a/drivers/staging/lustre/include/linux/lnet/api.h
+++ b/drivers/staging/lustre/include/linux/lnet/api.h
@@ -94,18 +94,18 @@
  * and LNetMEInsert(), and removed from its list by LNetMEUnlink().
  * @{
  */
-int LNetMEAttach(unsigned int      portal,
+int LNetMEAttach(unsigned int portal,
 		 struct lnet_process_id match_id_in,
-		 u64		   match_bits_in,
-		 u64		   ignore_bits_in,
+		 u64 match_bits_in,
+		 u64 ignore_bits_in,
 		 enum lnet_unlink unlink_in,
 		 enum lnet_ins_pos pos_in,
 		 struct lnet_handle_me *handle_out);
 
 int LNetMEInsert(struct lnet_handle_me current_in,
 		 struct lnet_process_id match_id_in,
-		 u64		   match_bits_in,
-		 u64		   ignore_bits_in,
+		 u64 match_bits_in,
+		 u64 ignore_bits_in,
 		 enum lnet_unlink unlink_in,
 		 enum lnet_ins_pos position_in,
 		 struct lnet_handle_me *handle_out);
@@ -161,18 +161,18 @@ int LNetMDBind(struct lnet_md md_in,
  * on multiple EQs.
  * @{
  */
-int LNetEQAlloc(unsigned int       count_in,
+int LNetEQAlloc(unsigned int count_in,
 		lnet_eq_handler_t  handler,
 		struct lnet_handle_eq *handle_out);
 
 int LNetEQFree(struct lnet_handle_eq eventq_in);
 
 int LNetEQPoll(struct lnet_handle_eq *eventqs_in,
-	       int		 neq_in,
-	       signed long	 timeout,
-	       int		 interruptible,
+	       int neq_in,
+	       signed long timeout,
+	       int interruptible,
 	       struct lnet_event *event_out,
-	       int		*which_eq_out);
+	       int *which_eq_out);
 /** @} lnet_eq */
 
 /** \defgroup lnet_data Data movement operations
@@ -181,21 +181,21 @@ int LNetEQPoll(struct lnet_handle_eq *eventqs_in,
  * and LNetGet().
  * @{
  */
-int LNetPut(lnet_nid_t	      self,
+int LNetPut(lnet_nid_t self,
 	    struct lnet_handle_md md_in,
 	    enum lnet_ack_req ack_req_in,
 	    struct lnet_process_id target_in,
-	    unsigned int      portal_in,
+	    unsigned int portal_in,
 	    u64 match_bits_in,
-	    unsigned int      offset_in,
+	    unsigned int offset_in,
 	    u64	hdr_data_in);
 
-int LNetGet(lnet_nid_t	      self,
+int LNetGet(lnet_nid_t self,
 	    struct lnet_handle_md md_in,
 	    struct lnet_process_id target_in,
-	    unsigned int      portal_in,
+	    unsigned int portal_in,
 	    u64	match_bits_in,
-	    unsigned int      offset_in);
+	    unsigned int offset_in);
 /** @} lnet_data */
 
 /** \defgroup lnet_misc Miscellaneous operations.
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
index 5c3f5e3..fb5d074 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
@@ -205,7 +205,6 @@ static inline int lnet_md_unlinkable(struct lnet_libmd *md)
 	}
 
 	md = kzalloc(size, GFP_NOFS);
-
 	if (md) {
 		/* Set here in case of early free */
 		md->md_options = umd->options;
@@ -467,7 +466,7 @@ int lnet_get_peer_list(u32 *countp, u32 *sizep,
 
 void lnet_router_debugfs_init(void);
 void lnet_router_debugfs_fini(void);
-int  lnet_rtrpools_alloc(int im_a_router);
+int lnet_rtrpools_alloc(int im_a_router);
 void lnet_destroy_rtrbuf(struct lnet_rtrbuf *rb, int npages);
 int lnet_rtrpools_adjust(int tiny, int small, int large);
 int lnet_rtrpools_enable(void);
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-types.h b/drivers/staging/lustre/include/linux/lnet/lib-types.h
index 0646f07..33c7aaf 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-types.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-types.h
@@ -88,18 +88,18 @@ struct lnet_msg {
 	/* ready for pending on RX delay list */
 	unsigned int		msg_rx_ready_delay:1;
 
-	unsigned int	msg_vmflush:1;		/* VM trying to free memory */
-	unsigned int	msg_target_is_router:1; /* sending to a router */
-	unsigned int	msg_routing:1;		/* being forwarded */
-	unsigned int	msg_ack:1;		/* ack on finalize (PUT) */
-	unsigned int	msg_sending:1;		/* outgoing message */
-	unsigned int	msg_receiving:1;	/* being received */
-	unsigned int	msg_txcredit:1;		/* taken an NI send credit */
-	unsigned int	msg_peertxcredit:1;	/* taken a peer send credit */
-	unsigned int	msg_rtrcredit:1;	/* taken a global router credit */
-	unsigned int	msg_peerrtrcredit:1;	/* taken a peer router credit */
-	unsigned int	msg_onactivelist:1;	/* on the activelist */
-	unsigned int	msg_rdma_get:1;
+	unsigned int		msg_vmflush:1;		/* VM trying to free memory */
+	unsigned int		msg_target_is_router:1; /* sending to a router */
+	unsigned int		msg_routing:1;		/* being forwarded */
+	unsigned int		msg_ack:1;		/* ack on finalize (PUT) */
+	unsigned int		msg_sending:1;		/* outgoing message */
+	unsigned int		msg_receiving:1;	/* being received */
+	unsigned int		msg_txcredit:1;		/* taken an NI send credit */
+	unsigned int		msg_peertxcredit:1;	/* taken a peer send credit */
+	unsigned int		msg_rtrcredit:1;	/* taken a global router credit */
+	unsigned int		msg_peerrtrcredit:1;	/* taken a peer router credit */
+	unsigned int		msg_onactivelist:1;	/* on the activelist */
+	unsigned int		msg_rdma_get:1;
 
 	struct lnet_peer_ni	*msg_txpeer;	 /* peer I'm sending to */
 	struct lnet_peer_ni	*msg_rxpeer;	 /* peer I received from */
@@ -130,14 +130,14 @@ struct lnet_libhandle {
 	((type *)((char *)(ptr) - (char *)(&((type *)0)->member)))
 
 struct lnet_eq {
-	struct list_head	  eq_list;
-	struct lnet_libhandle	  eq_lh;
-	unsigned long		  eq_enq_seq;
-	unsigned long		  eq_deq_seq;
-	unsigned int		  eq_size;
-	lnet_eq_handler_t	  eq_callback;
-	struct lnet_event	 *eq_events;
-	int			**eq_refs;	/* percpt refcount for EQ */
+	struct list_head	eq_list;
+	struct lnet_libhandle	eq_lh;
+	unsigned long		eq_enq_seq;
+	unsigned long		eq_deq_seq;
+	unsigned int		eq_size;
+	lnet_eq_handler_t	eq_callback;
+	struct lnet_event      *eq_events;
+	int		      **eq_refs;	/* percpt refcount for EQ */
 };
 
 struct lnet_me {
@@ -218,7 +218,7 @@ struct lnet_lnd {
 	 */
 
 	/*
-	 * Start sending a preformatted message.  'private' is NULL for PUT and
+	 * Start sending a preformatted message. 'private' is NULL for PUT and
 	 * GET messages; otherwise this is a response to an incoming message
 	 * and 'private' is the 'private' passed to lnet_parse().  Return
 	 * non-zero for immediate failure, otherwise complete later with
@@ -267,7 +267,7 @@ struct lnet_tx_queue {
 
 enum lnet_ni_state {
 	/* set when NI block is allocated */
-	LNET_NI_STATE_INIT = 0,
+	LNET_NI_STATE_INIT	= 0,
 	/* set when NI is started successfully */
 	LNET_NI_STATE_ACTIVE,
 	/* set when LND notifies NI failed */
@@ -279,23 +279,23 @@ enum lnet_ni_state {
 };
 
 enum lnet_stats_type {
-	LNET_STATS_TYPE_SEND = 0,
+	LNET_STATS_TYPE_SEND	= 0,
 	LNET_STATS_TYPE_RECV,
 	LNET_STATS_TYPE_DROP
 };
 
 struct lnet_comm_count {
-	atomic_t co_get_count;
-	atomic_t co_put_count;
-	atomic_t co_reply_count;
-	atomic_t co_ack_count;
-	atomic_t co_hello_count;
+	atomic_t		co_get_count;
+	atomic_t		co_put_count;
+	atomic_t		co_reply_count;
+	atomic_t		co_ack_count;
+	atomic_t		co_hello_count;
 };
 
 struct lnet_element_stats {
-	struct lnet_comm_count el_send_stats;
-	struct lnet_comm_count el_recv_stats;
-	struct lnet_comm_count el_drop_stats;
+	struct lnet_comm_count	el_send_stats;
+	struct lnet_comm_count	el_recv_stats;
+	struct lnet_comm_count	el_drop_stats;
 };
 
 struct lnet_net {
@@ -376,7 +376,7 @@ struct lnet_ni {
 	struct lnet_lnd_tunables ni_lnd_tunables;
 
 	/* lnd tunables set explicitly */
-	bool ni_lnd_tunables_set;
+	bool			ni_lnd_tunables_set;
 
 	/* NI statistics */
 	struct lnet_element_stats ni_stats;
@@ -391,9 +391,9 @@ struct lnet_ni {
 	 * equivalent interfaces to use
 	 * This is an array because socklnd bonding can still be configured
 	 */
-	char			 *ni_interfaces[LNET_INTERFACES_NUM];
+	char			*ni_interfaces[LNET_INTERFACES_NUM];
 	/* original net namespace */
-	struct net		 *ni_net_ns;
+	struct net		*ni_net_ns;
 };
 
 #define LNET_PROTO_PING_MATCHBITS	0x8000000000000000LL
@@ -434,9 +434,9 @@ struct lnet_rc_data {
 
 struct lnet_peer_ni {
 	/* chain on lpn_peer_nis */
-	struct list_head	lpni_peer_nis;
+	struct list_head	 lpni_peer_nis;
 	/* chain on remote peer list */
-	struct list_head	lpni_on_remote_peer_ni_list;
+	struct list_head	 lpni_on_remote_peer_ni_list;
 	/* chain on peer hash */
 	struct list_head	 lpni_hashlist;
 	/* messages blocking for tx credits */
@@ -448,7 +448,7 @@ struct lnet_peer_ni {
 	/* statistics kept on each peer NI */
 	struct lnet_element_stats lpni_stats;
 	/* spin lock protecting credits and lpni_txq / lpni_rtrq */
-	spinlock_t		lpni_lock;
+	spinlock_t		 lpni_lock;
 	/* # tx credits available */
 	int			 lpni_txcredits;
 	struct lnet_peer_net	*lpni_peer_net;
@@ -491,26 +491,26 @@ struct lnet_peer_ni {
 	/* CPT this peer attached on */
 	int			 lpni_cpt;
 	/* state flags -- protected by lpni_lock */
-	unsigned int		lpni_state;
+	unsigned int		 lpni_state;
 	/* # refs from lnet_route::lr_gateway */
 	int			 lpni_rtr_refcount;
 	/* sequence number used to round robin over peer nis within a net */
-	u32			lpni_seq;
+	u32			 lpni_seq;
 	/* sequence number used to round robin over gateways */
-	u32			lpni_gw_seq;
+	u32			 lpni_gw_seq;
 	/* health flag */
-	bool			lpni_healthy;
+	bool			 lpni_healthy;
 	/* returned RC ping features. Protected with lpni_lock */
 	unsigned int		 lpni_ping_feats;
 	/* routers on this peer */
 	struct list_head	 lpni_routes;
 	/* preferred local nids: if only one, use lpni_pref.nid */
 	union lpni_pref {
-		lnet_nid_t	nid;
+		lnet_nid_t	 nid;
 		lnet_nid_t	*nids;
 	} lpni_pref;
 	/* number of preferred NIDs in lnpi_pref_nids */
-	u32			lpni_pref_nnids;
+	u32			 lpni_pref_nnids;
 	/* router checker state */
 	struct lnet_rc_data	*lpni_rcd;
 };
@@ -676,9 +676,9 @@ struct lnet_peer_table {
 	/* # peers extant */
 	atomic_t		 pt_number;
 	/* peers */
-	struct list_head	pt_peer_list;
+	struct list_head	 pt_peer_list;
 	/* # peers */
-	int			pt_peers;
+	int			 pt_peers;
 	/* # zombies to go to deathrow (and not there yet) */
 	int			 pt_zombies;
 	/* zombie peers_ni */
@@ -704,7 +704,7 @@ struct lnet_route {
 	/* chain on gateway */
 	struct list_head	lr_gwlist;
 	/* router node */
-	struct lnet_peer_ni	*lr_gateway;
+	struct lnet_peer_ni    *lr_gateway;
 	/* remote network number */
 	u32			lr_net;
 	/* sequence for round-robin */
@@ -754,9 +754,9 @@ struct lnet_rtrbufpool {
 };
 
 struct lnet_rtrbuf {
-	struct list_head	 rb_list;	/* chain on rbp_bufs */
-	struct lnet_rtrbufpool	*rb_pool;	/* owning pool */
-	struct bio_vec		 rb_kiov[0];	/* the buffer space */
+	struct list_head	rb_list;	/* chain on rbp_bufs */
+	struct lnet_rtrbufpool *rb_pool;	/* owning pool */
+	struct bio_vec		rb_kiov[0];	/* the buffer space */
 };
 
 #define LNET_PEER_HASHSIZE	503	/* prime! */
@@ -904,58 +904,58 @@ enum lnet_state {
 
 struct lnet {
 	/* CPU partition table of LNet */
-	struct cfs_cpt_table		 *ln_cpt_table;
+	struct cfs_cpt_table	       *ln_cpt_table;
 	/* number of CPTs in ln_cpt_table */
-	unsigned int			  ln_cpt_number;
-	unsigned int			  ln_cpt_bits;
+	unsigned int			ln_cpt_number;
+	unsigned int			ln_cpt_bits;
 
 	/* protect LNet resources (ME/MD/EQ) */
-	struct cfs_percpt_lock		 *ln_res_lock;
+	struct cfs_percpt_lock	       *ln_res_lock;
 	/* # portals */
-	int				  ln_nportals;
+	int				ln_nportals;
 	/* the vector of portals */
-	struct lnet_portal		**ln_portals;
+	struct lnet_portal	      **ln_portals;
 	/* percpt ME containers */
-	struct lnet_res_container	**ln_me_containers;
+	struct lnet_res_container     **ln_me_containers;
 	/* percpt MD container */
-	struct lnet_res_container	**ln_md_containers;
+	struct lnet_res_container     **ln_md_containers;
 
 	/* Event Queue container */
-	struct lnet_res_container	  ln_eq_container;
-	wait_queue_head_t		  ln_eq_waitq;
-	spinlock_t			  ln_eq_wait_lock;
-	unsigned int			  ln_remote_nets_hbits;
+	struct lnet_res_container	ln_eq_container;
+	wait_queue_head_t		ln_eq_waitq;
+	spinlock_t			ln_eq_wait_lock;
+	unsigned int			ln_remote_nets_hbits;
 
 	/* protect NI, peer table, credits, routers, rtrbuf... */
-	struct cfs_percpt_lock		 *ln_net_lock;
+	struct cfs_percpt_lock	       *ln_net_lock;
 	/* percpt message containers for active/finalizing/freed message */
-	struct lnet_msg_container	**ln_msg_containers;
-	struct lnet_counters		**ln_counters;
-	struct lnet_peer_table		**ln_peer_tables;
+	struct lnet_msg_container     **ln_msg_containers;
+	struct lnet_counters	      **ln_counters;
+	struct lnet_peer_table	      **ln_peer_tables;
 	/* list of peer nis not on a local network */
 	struct list_head		ln_remote_peer_ni_list;
 	/* failure simulation */
-	struct list_head		  ln_test_peers;
-	struct list_head		  ln_drop_rules;
-	struct list_head		  ln_delay_rules;
+	struct list_head		ln_test_peers;
+	struct list_head		ln_drop_rules;
+	struct list_head		ln_delay_rules;
 
 	/* LND instances */
 	struct list_head		ln_nets;
 	/* network zombie list */
 	struct list_head		ln_net_zombie;
 	/* the loopback NI */
-	struct lnet_ni			*ln_loni;
+	struct lnet_ni		       *ln_loni;
 
 	/* remote networks with routes to them */
-	struct list_head		 *ln_remote_nets_hash;
+	struct list_head	       *ln_remote_nets_hash;
 	/* validity stamp */
-	u64				  ln_remote_nets_version;
+	u64				ln_remote_nets_version;
 	/* list of all known routers */
-	struct list_head		  ln_routers;
+	struct list_head		ln_routers;
 	/* validity stamp */
-	u64				  ln_routers_version;
+	u64				ln_routers_version;
 	/* percpt router buffer pools */
-	struct lnet_rtrbufpool		**ln_rtrpools;
+	struct lnet_rtrbufpool	      **ln_rtrpools;
 
 	/*
 	 * Ping target / Push source
@@ -964,9 +964,9 @@ struct lnet {
 	 * ln_ping_target is protected against concurrent updates by
 	 * ln_api_mutex.
 	 */
-	struct lnet_handle_md		  ln_ping_target_md;
-	struct lnet_handle_eq		  ln_ping_target_eq;
-	struct lnet_ping_buffer		 *ln_ping_target;
+	struct lnet_handle_md		ln_ping_target_md;
+	struct lnet_handle_eq		ln_ping_target_eq;
+	struct lnet_ping_buffer	       *ln_ping_target;
 	atomic_t			ln_ping_target_seqno;
 
 	/*
@@ -979,7 +979,7 @@ struct lnet {
 	 */
 	struct lnet_handle_eq		ln_push_target_eq;
 	struct lnet_handle_md		ln_push_target_md;
-	struct lnet_ping_buffer		*ln_push_target;
+	struct lnet_ping_buffer	       *ln_push_target;
 	int				ln_push_target_nnis;
 
 	/* discovery event queue handle */
@@ -996,35 +996,35 @@ struct lnet {
 	int				ln_dc_state;
 
 	/* router checker startup/shutdown state */
-	enum lnet_rc_state		  ln_rc_state;
+	enum lnet_rc_state		ln_rc_state;
 	/* router checker's event queue */
-	struct lnet_handle_eq		  ln_rc_eqh;
+	struct lnet_handle_eq		ln_rc_eqh;
 	/* rcd still pending on net */
-	struct list_head		  ln_rcd_deathrow;
+	struct list_head		ln_rcd_deathrow;
 	/* rcd ready for free */
-	struct list_head		  ln_rcd_zombie;
+	struct list_head		ln_rcd_zombie;
 	/* serialise startup/shutdown */
-	struct completion		  ln_rc_signal;
+	struct completion		ln_rc_signal;
 
-	struct mutex			  ln_api_mutex;
-	struct mutex			  ln_lnd_mutex;
-	struct mutex			  ln_delay_mutex;
+	struct mutex			ln_api_mutex;
+	struct mutex			ln_lnd_mutex;
+	struct mutex			ln_delay_mutex;
 	/* Have I called LNetNIInit myself? */
-	int				  ln_niinit_self;
+	int				ln_niinit_self;
 	/* LNetNIInit/LNetNIFini counter */
-	int				  ln_refcount;
+	int				ln_refcount;
 	/* SHUTDOWN/RUNNING/STOPPING */
-	enum lnet_state			  ln_state;
+	enum lnet_state			ln_state;
 
-	int				  ln_routing;	/* am I a router? */
-	lnet_pid_t			  ln_pid;	/* requested pid */
+	int				ln_routing;	/* am I a router? */
+	lnet_pid_t			ln_pid;		/* requested pid */
 	/* uniquely identifies this ni in this epoch */
-	u64				  ln_interface_cookie;
+	u64				ln_interface_cookie;
 	/* registered LNDs */
-	struct list_head		  ln_lnds;
+	struct list_head		ln_lnds;
 
 	/* test protocol compatibility flags */
-	int				  ln_testprotocompat;
+	int				ln_testprotocompat;
 
 	/*
 	 * 0 - load the NIs from the mod params
@@ -1032,14 +1032,14 @@ struct lnet {
 	 * Reverse logic to ensure that other calls to LNetNIInit
 	 * need no change
 	 */
-	bool				  ln_nis_from_mod_params;
+	bool				ln_nis_from_mod_params;
 
 	/*
 	 * waitq for router checker.  As long as there are no routes in
 	 * the list, the router checker will sleep on this queue.  when
 	 * routes are added the thread will wake up
 	 */
-	wait_queue_head_t		  ln_rc_waitq;
+	wait_queue_head_t		ln_rc_waitq;
 
 };
 
diff --git a/drivers/staging/lustre/lnet/lnet/acceptor.c b/drivers/staging/lustre/lnet/lnet/acceptor.c
index aa28a9f..83ab3b1 100644
--- a/drivers/staging/lustre/lnet/lnet/acceptor.c
+++ b/drivers/staging/lustre/lnet/lnet/acceptor.c
@@ -36,9 +36,9 @@
 #include <net/sock.h>
 #include <linux/lnet/lib-lnet.h>
 
-static int   accept_port    = 988;
-static int   accept_backlog = 127;
-static int   accept_timeout = 5;
+static int accept_port = 988;
+static int accept_backlog = 127;
+static int accept_timeout = 5;
 
 static struct {
 	int			pta_shutdown;
@@ -167,9 +167,9 @@
 
 		BUILD_BUG_ON(LNET_PROTO_ACCEPTOR_VERSION != 1);
 
-		cr.acr_magic   = LNET_PROTO_ACCEPTOR_MAGIC;
+		cr.acr_magic = LNET_PROTO_ACCEPTOR_MAGIC;
 		cr.acr_version = LNET_PROTO_ACCEPTOR_VERSION;
-		cr.acr_nid     = peer_nid;
+		cr.acr_nid = peer_nid;
 
 		if (the_lnet.ln_testprotocompat) {
 			/* single-shot proto check */
@@ -196,9 +196,9 @@
 	rc = -EADDRINUSE;
 	goto failed;
 
- failed_sock:
+failed_sock:
 	sock_release(sock);
- failed:
+failed:
 	lnet_connect_console_error(rc, peer_nid, peer_ip, peer_port);
 	return rc;
 }
@@ -297,7 +297,7 @@
 		__swab64s(&cr.acr_nid);
 
 	ni = lnet_nid2ni_addref(cr.acr_nid);
-	if (!ni ||	       /* no matching net */
+	if (!ni ||			/* no matching net */
 	    ni->ni_nid != cr.acr_nid) { /* right NET, wrong NID! */
 		if (ni)
 			lnet_ni_decref(ni);
diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c
index be77e10..64b8bef9 100644
--- a/drivers/staging/lustre/lnet/lnet/api-ni.c
+++ b/drivers/staging/lustre/lnet/lnet/api-ni.c
@@ -47,7 +47,7 @@
  * before module init completes. The mutex needs to be ready for use then.
  */
 struct lnet the_lnet = {
-	.ln_api_mutex = __MUTEX_INITIALIZER(the_lnet.ln_api_mutex),
+	.ln_api_mutex		= __MUTEX_INITIALIZER(the_lnet.ln_api_mutex),
 };		/* THE state of the network */
 EXPORT_SYMBOL(the_lnet);
 
@@ -281,7 +281,7 @@ static int lnet_discover(struct lnet_process_id id, u32 force,
 
 	return 0;
 
- failed:
+failed:
 	lnet_destroy_locks();
 	return -ENOMEM;
 }
@@ -476,17 +476,17 @@ static void lnet_assert_wire_constants(void)
 	lnet_net_lock(LNET_LOCK_EX);
 
 	cfs_percpt_for_each(ctr, i, the_lnet.ln_counters) {
-		counters->msgs_max     += ctr->msgs_max;
-		counters->msgs_alloc   += ctr->msgs_alloc;
-		counters->errors       += ctr->errors;
-		counters->send_count   += ctr->send_count;
-		counters->recv_count   += ctr->recv_count;
-		counters->route_count  += ctr->route_count;
-		counters->drop_count   += ctr->drop_count;
-		counters->send_length  += ctr->send_length;
-		counters->recv_length  += ctr->recv_length;
+		counters->msgs_max += ctr->msgs_max;
+		counters->msgs_alloc += ctr->msgs_alloc;
+		counters->errors += ctr->errors;
+		counters->send_count += ctr->send_count;
+		counters->recv_count += ctr->recv_count;
+		counters->route_count += ctr->route_count;
+		counters->drop_count += ctr->drop_count;
+		counters->send_length += ctr->send_length;
+		counters->recv_length += ctr->recv_length;
 		counters->route_length += ctr->route_length;
-		counters->drop_length  += ctr->drop_length;
+		counters->drop_length += ctr->drop_length;
 	}
 	lnet_net_unlock(LNET_LOCK_EX);
 }
@@ -755,7 +755,7 @@ struct lnet_libhandle *
 
 	return 0;
 
- failed:
+failed:
 	lnet_unprepare();
 	return rc;
 }
@@ -942,7 +942,7 @@ struct lnet_net *
 	return false;
 }
 
-struct lnet_ni  *
+struct lnet_ni *
 lnet_nid2ni_locked(lnet_nid_t nid, int cpt)
 {
 	struct lnet_net *net;
@@ -1146,8 +1146,10 @@ struct lnet_ping_buffer *
 		       struct lnet_handle_md *ping_mdh,
 		       int ni_count, bool set_eq)
 {
-	struct lnet_process_id id = { .nid = LNET_NID_ANY,
-				      .pid = LNET_PID_ANY };
+	struct lnet_process_id id = {
+		.nid = LNET_NID_ANY,
+		.pid = LNET_PID_ANY
+	};
 	struct lnet_handle_me me_handle;
 	struct lnet_md md = { NULL };
 	int rc, rc2;
@@ -1244,7 +1246,7 @@ struct lnet_ping_buffer *
 
 			lnet_ni_lock(ni);
 			ns->ns_status = ni->ni_status ?
-					 ni->ni_status->ns_status :
+					ni->ni_status->ns_status :
 						LNET_NI_STATUS_UP;
 			ni->ni_status = ns;
 			lnet_ni_unlock(ni);
@@ -1322,7 +1324,10 @@ struct lnet_ping_buffer *
 /* Resize the push target. */
 int lnet_push_target_resize(void)
 {
-	struct lnet_process_id id = { LNET_NID_ANY, LNET_PID_ANY };
+	struct lnet_process_id id = {
+		.nid	= LNET_NID_ANY,
+		.pid	= LNET_PID_ANY
+	};
 	struct lnet_md md = { NULL };
 	struct lnet_handle_me meh;
 	struct lnet_handle_md mdh;
@@ -1353,13 +1358,13 @@ int lnet_push_target_resize(void)
 	}
 
 	/* initialize md content */
-	md.start     = &pbuf->pb_info;
-	md.length    = LNET_PING_INFO_SIZE(nnis);
+	md.start = &pbuf->pb_info;
+	md.length = LNET_PING_INFO_SIZE(nnis);
 	md.threshold = LNET_MD_THRESH_INF;
-	md.max_size  = 0;
-	md.options   = LNET_MD_OP_PUT | LNET_MD_TRUNCATE |
-		       LNET_MD_MANAGE_REMOTE;
-	md.user_ptr  = pbuf;
+	md.max_size = 0;
+	md.options = LNET_MD_OP_PUT | LNET_MD_TRUNCATE |
+		     LNET_MD_MANAGE_REMOTE;
+	md.user_ptr = pbuf;
 	md.eq_handle = the_lnet.ln_push_target_eq;
 
 	rc = LNetMDAttach(meh, md, LNET_RETAIN, &mdh);
@@ -1428,7 +1433,6 @@ static int lnet_push_target_init(void)
 	the_lnet.ln_push_target_nnis = LNET_INTERFACES_MIN;
 
 	rc = lnet_push_target_resize();
-
 	if (rc) {
 		LNetEQFree(the_lnet.ln_push_target_eq);
 		LNetInvalidateEQHandle(&the_lnet.ln_push_target_eq);
@@ -1723,10 +1727,10 @@ static void lnet_push_target_fini(void)
 
 	CDEBUG(D_LNI, "Added LNI %s [%d/%d/%d/%d]\n",
 	       libcfs_nid2str(ni->ni_nid),
-		ni->ni_net->net_tunables.lct_peer_tx_credits,
+	       ni->ni_net->net_tunables.lct_peer_tx_credits,
 	       lnet_ni_tq_credits(ni) * LNET_CPT_NUMBER,
 	       ni->ni_net->net_tunables.lct_peer_rtr_credits,
-		ni->ni_net->net_tunables.lct_peer_timeout);
+	       ni->ni_net->net_tunables.lct_peer_timeout);
 
 	return 0;
 failed0:
@@ -1932,7 +1936,6 @@ static void lnet_push_target_fini(void)
 		list_del_init(&net->net_list);
 
 		rc = lnet_startup_lndnet(net, NULL);
-
 		if (rc < 0)
 			goto failed;
 
@@ -1963,8 +1966,8 @@ int lnet_lib_init(void)
 	lnet_assert_wire_constants();
 
 	/* refer to global cfs_cpt_tab for now */
-	the_lnet.ln_cpt_table	= cfs_cpt_tab;
-	the_lnet.ln_cpt_number	= cfs_cpt_number(cfs_cpt_tab);
+	the_lnet.ln_cpt_table = cfs_cpt_tab;
+	the_lnet.ln_cpt_number = cfs_cpt_number(cfs_cpt_tab);
 
 	LASSERT(the_lnet.ln_cpt_number > 0);
 	if (the_lnet.ln_cpt_number > LNET_CPT_MAX) {
@@ -2409,7 +2412,7 @@ struct lnet_ni *
 	if (!prev) {
 		if (!net)
 			net = list_entry(the_lnet.ln_nets.next, struct lnet_net,
-					net_list);
+					 net_list);
 		ni = list_entry(net->net_ni_list.next, struct lnet_ni,
 				ni_netlist);
 
@@ -2455,7 +2458,6 @@ struct lnet_ni *
 	cpt = lnet_net_lock_current();
 
 	ni = lnet_get_ni_idx_locked(idx);
-
 	if (ni) {
 		rc = 0;
 		lnet_ni_lock(ni);
@@ -2483,7 +2485,6 @@ struct lnet_ni *
 	cpt = lnet_net_lock_current();
 
 	ni = lnet_get_ni_idx_locked(cfg_ni->lic_idx);
-
 	if (ni) {
 		rc = 0;
 		lnet_ni_lock(ni);
@@ -2705,7 +2706,7 @@ int lnet_dyn_del_ni(struct lnet_ioctl_config_ni *conf)
 	struct lnet_ni *ni;
 	u32 net_id = LNET_NIDNET(conf->lic_nid);
 	struct lnet_ping_buffer *pbuf;
-	struct lnet_handle_md  ping_mdh;
+	struct lnet_handle_md ping_mdh;
 	int rc;
 	int net_count;
 	u32 addr;
@@ -2912,7 +2913,7 @@ u32 lnet_get_dlc_seq_locked(void)
 {
 	struct libcfs_ioctl_data *data = arg;
 	struct lnet_ioctl_config_data *config;
-	struct lnet_process_id id = {0};
+	struct lnet_process_id id = { 0 };
 	struct lnet_ni *ni;
 	int rc;
 
@@ -3357,7 +3358,7 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 	int which;
 	int unlinked = 0;
 	int replied = 0;
-	const signed long a_long_time = 60*HZ;
+	const signed long a_long_time = 60 * HZ;
 	struct lnet_ping_buffer *pbuf;
 	struct lnet_process_id tmpid;
 	int i;
@@ -3384,12 +3385,12 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 	}
 
 	/* initialize md content */
-	md.start     = &pbuf->pb_info;
-	md.length    = LNET_PING_INFO_SIZE(n_ids);
+	md.start = &pbuf->pb_info;
+	md.length = LNET_PING_INFO_SIZE(n_ids);
 	md.threshold = 2; /* GET/REPLY */
-	md.max_size  = 0;
-	md.options   = LNET_MD_TRUNCATE;
-	md.user_ptr  = NULL;
+	md.max_size = 0;
+	md.options = LNET_MD_TRUNCATE;
+	md.user_ptr = NULL;
 	md.eq_handle = eqh;
 
 	rc = LNetMDBind(md, LNET_UNLINK, &mdh);
@@ -3401,7 +3402,6 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 	rc = LNetGet(LNET_NID_ANY, mdh, id,
 		     LNET_RESERVED_PORTAL,
 		     LNET_PROTO_PING_MATCHBITS, 0);
-
 	if (rc) {
 		/* Don't CERROR; this could be deliberate! */
 		rc2 = LNetMDUnlink(mdh);
@@ -3414,7 +3414,6 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 
 	do {
 		/* MUST block for unlink to complete */
-
 		rc2 = LNetEQPoll(&eqh, 1, timeout, !unlinked,
 				 &event, &which);
 
@@ -3510,13 +3509,13 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
 	}
 	rc = pbuf->pb_info.pi_nnis;
 
- fail_free_eq:
+fail_free_eq:
 	rc2 = LNetEQFree(eqh);
 	if (rc2)
 		CERROR("rc2 %d\n", rc2);
 	LASSERT(!rc2);
 
- fail_ping_buffer_decref:
+fail_ping_buffer_decref:
 	lnet_ping_buffer_decref(pbuf);
 	return rc;
 }
diff --git a/drivers/staging/lustre/lnet/lnet/config.c b/drivers/staging/lustre/lnet/lnet/config.c
index 16c42bf..ecf656b 100644
--- a/drivers/staging/lustre/lnet/lnet/config.c
+++ b/drivers/staging/lustre/lnet/lnet/config.c
@@ -38,15 +38,15 @@
 #include <linux/lnet/lib-lnet.h>
 #include <linux/inetdevice.h>
 
-struct lnet_text_buf {	    /* tmp struct for parsing routes */
-	struct list_head ltb_list;	/* stash on lists */
-	int ltb_size;	/* allocated size */
-	char ltb_text[0];     /* text buffer */
+struct lnet_text_buf {		/* tmp struct for parsing routes */
+	struct list_head	ltb_list;	/* stash on lists */
+	int			ltb_size;	/* allocated size */
+	char			ltb_text[0];	/* text buffer */
 };
 
-static int lnet_tbnob;			/* track text buf allocation */
-#define LNET_MAX_TEXTBUF_NOB     (64 << 10)	/* bound allocation */
-#define LNET_SINGLE_TEXTBUF_NOB  (4 << 10)
+static int lnet_tbnob;		/* track text buf allocation */
+#define LNET_MAX_TEXTBUF_NOB	(64 << 10)	/* bound allocation */
+#define LNET_SINGLE_TEXTBUF_NOB	(4 << 10)
 
 #define SPACESTR " \t\v\r\n"
 #define DELIMITERS ":()[]"
@@ -126,6 +126,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
 lnet_ni_unique_ni(char *iface_list[LNET_INTERFACES_NUM], char *iface)
 {
 	int i;
+
 	for (i = 0; i < LNET_INTERFACES_NUM; i++) {
 		if (iface_list[i] &&
 		    strncmp(iface_list[i], iface, strlen(iface)) == 0)
@@ -554,7 +555,7 @@ struct lnet_ni *
 		goto failed;
 
 	return ni;
- failed:
+failed:
 	lnet_ni_free(ni);
 	return NULL;
 }
@@ -743,9 +744,9 @@ struct lnet_ni *
 					goto failed_syntax;
 				}
 				rc = cfs_expr_list_parse(elstr,
-							nistr - elstr + 1,
-							0, LNET_CPT_NUMBER - 1,
-							&ni_el);
+							 nistr - elstr + 1,
+							 0, LNET_CPT_NUMBER - 1,
+							 &ni_el);
 				if (rc != 0) {
 					str = elstr;
 					goto failed_syntax;
@@ -812,9 +813,9 @@ struct lnet_ni *
 	kfree(tokens);
 	return nnets;
 
- failed_syntax:
+failed_syntax:
 	lnet_syntax("networks", networks, (int)(str - tokens), strlen(str));
- failed:
+failed:
 	/* free the net list and all the nis on each net */
 	while (!list_empty(netlist)) {
 		net = list_entry(netlist->next, struct lnet_net, net_list);
@@ -1038,7 +1039,7 @@ struct lnet_ni *
 	list_splice(&pending, tbs->prev);
 	return 1;
 
- failed:
+failed:
 	lnet_free_text_bufs(&pending);
 	return -EINVAL;
 }
@@ -1093,7 +1094,6 @@ struct lnet_ni *
 {
 	/* static scratch buffer OK (single threaded) */
 	static char cmd[LNET_SINGLE_TEXTBUF_NOB];
-
 	struct list_head nets;
 	struct list_head gateways;
 	struct list_head *tmp1;
@@ -1226,9 +1226,9 @@ struct lnet_ni *
 	myrc = 0;
 	goto out;
 
- token_error:
+token_error:
 	lnet_syntax("routes", cmd, (int)(token - str), strlen(token));
- out:
+out:
 	lnet_free_text_bufs(&nets);
 	lnet_free_text_bufs(&gateways);
 	return myrc;
@@ -1298,7 +1298,6 @@ struct lnet_ni *
 lnet_match_network_tokens(char *net_entry, u32 *ipaddrs, int nip)
 {
 	static char tokens[LNET_SINGLE_TEXTBUF_NOB];
-
 	int matched = 0;
 	int ntokens = 0;
 	int len;
@@ -1451,7 +1450,6 @@ struct lnet_ni *
 {
 	static char networks[LNET_SINGLE_TEXTBUF_NOB];
 	static char source[LNET_SINGLE_TEXTBUF_NOB];
-
 	struct list_head raw_entries;
 	struct list_head matched_nets;
 	struct list_head current_nets;
@@ -1549,7 +1547,7 @@ struct lnet_ni *
 		count++;
 	}
 
- out:
+out:
 	lnet_free_text_bufs(&raw_entries);
 	lnet_free_text_bufs(&matched_nets);
 	lnet_free_text_bufs(&current_nets);
diff --git a/drivers/staging/lustre/lnet/lnet/lib-eq.c b/drivers/staging/lustre/lnet/lnet/lib-eq.c
index f085388..f500b49 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-eq.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-eq.c
@@ -198,7 +198,7 @@
 	lnet_res_lh_invalidate(&eq->eq_lh);
 	list_del(&eq->eq_list);
 	kfree(eq);
- out:
+out:
 	lnet_eq_wait_unlock();
 	lnet_res_unlock(LNET_LOCK_EX);
 
diff --git a/drivers/staging/lustre/lnet/lnet/lib-move.c b/drivers/staging/lustre/lnet/lnet/lib-move.c
index 639f67ed..92c6a34 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-move.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-move.c
@@ -489,7 +489,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 
 		if (mlen) {
 			niov = msg->msg_niov;
-			iov  = msg->msg_iov;
+			iov = msg->msg_iov;
 			kiov = msg->msg_kiov;
 
 			LASSERT(niov > 0);
@@ -541,12 +541,12 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 		lnet_setpayloadbuffer(msg);
 
 	memset(&msg->msg_hdr, 0, sizeof(msg->msg_hdr));
-	msg->msg_hdr.type	   = cpu_to_le32(type);
+	msg->msg_hdr.type = cpu_to_le32(type);
 	/* dest_nid will be overwritten by lnet_select_pathway() */
-	msg->msg_hdr.dest_nid       = cpu_to_le64(target.nid);
-	msg->msg_hdr.dest_pid       = cpu_to_le32(target.pid);
+	msg->msg_hdr.dest_nid = cpu_to_le64(target.nid);
+	msg->msg_hdr.dest_pid = cpu_to_le32(target.pid);
 	/* src_nid will be set later */
-	msg->msg_hdr.src_pid	= cpu_to_le32(the_lnet.ln_pid);
+	msg->msg_hdr.src_pid = cpu_to_le32(the_lnet.ln_pid);
 	msg->msg_hdr.payload_length = cpu_to_le32(len);
 }
 
@@ -635,7 +635,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	}
 
 	deadline = lp->lpni_last_alive +
-		lp->lpni_net->net_tunables.lct_peer_timeout;
+		   lp->lpni_net->net_tunables.lct_peer_timeout;
 	alive = deadline > now;
 
 	/* Update obsolete lpni_alive except for routers assumed to be dead
@@ -911,7 +911,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 {
 	struct lnet_peer_ni *txpeer = msg->msg_txpeer;
 	struct lnet_msg *msg2;
-	struct lnet_ni	*txni = msg->msg_txni;
+	struct lnet_ni *txni = msg->msg_txni;
 
 	if (msg->msg_txcredit) {
 		struct lnet_ni *ni = msg->msg_txni;
@@ -1044,7 +1044,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 lnet_return_rx_credits_locked(struct lnet_msg *msg)
 {
 	struct lnet_peer_ni *rxpeer = msg->msg_rxpeer;
-	struct lnet_ni	*rxni = msg->msg_rxni;
+	struct lnet_ni *rxni = msg->msg_rxni;
 	struct lnet_msg *msg2;
 
 	if (msg->msg_rtrcredit) {
@@ -1796,7 +1796,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	/* if we still can't find a peer ni then we can't reach it */
 	if (!best_lpni) {
 		u32 net_id = peer_net ? peer_net->lpn_net_id :
-			LNET_NIDNET(dst_nid);
+					LNET_NIDNET(dst_nid);
 
 		lnet_net_unlock(cpt);
 		LCONSOLE_WARN("no peer_ni found on peer net %s\n",
@@ -1912,7 +1912,6 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	}
 
 	rc = lnet_post_send_locked(msg, 0);
-
 	if (!rc)
 		CDEBUG(D_NET, "TRACE: %s(%s:%s) -> %s(%s:%s) : %s\n",
 		       libcfs_nid2str(msg->msg_hdr.src_nid),
@@ -1931,8 +1930,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 int
 lnet_send(lnet_nid_t src_nid, struct lnet_msg *msg, lnet_nid_t rtr_nid)
 {
-	lnet_nid_t		dst_nid = msg->msg_target.nid;
-	int			rc;
+	lnet_nid_t dst_nid = msg->msg_target.nid;
+	int rc;
 
 	/*
 	 * NB: rtr_nid is set to LNET_NID_ANY for all current use-cases,
@@ -2008,19 +2007,19 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	le32_to_cpus(&hdr->msg.put.offset);
 
 	/* Primary peer NID. */
-	info.mi_id.nid	= msg->msg_initiator;
-	info.mi_id.pid	= hdr->src_pid;
-	info.mi_opc	= LNET_MD_OP_PUT;
-	info.mi_portal	= hdr->msg.put.ptl_index;
+	info.mi_id.nid = msg->msg_initiator;
+	info.mi_id.pid = hdr->src_pid;
+	info.mi_opc = LNET_MD_OP_PUT;
+	info.mi_portal = hdr->msg.put.ptl_index;
 	info.mi_rlength	= hdr->payload_length;
 	info.mi_roffset	= hdr->msg.put.offset;
-	info.mi_mbits	= hdr->msg.put.match_bits;
-	info.mi_cpt	= lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
+	info.mi_mbits = hdr->msg.put.match_bits;
+	info.mi_cpt = lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
 
 	msg->msg_rx_ready_delay = !ni->ni_net->net_lnd->lnd_eager_recv;
 	ready_delay = msg->msg_rx_ready_delay;
 
- again:
+again:
 	rc = lnet_ptl_match_md(&info, msg);
 	switch (rc) {
 	default:
@@ -2069,17 +2068,17 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	le32_to_cpus(&hdr->msg.get.sink_length);
 	le32_to_cpus(&hdr->msg.get.src_offset);
 
-	source_id.nid   = hdr->src_nid;
-	source_id.pid   = hdr->src_pid;
+	source_id.nid = hdr->src_nid;
+	source_id.pid = hdr->src_pid;
 	/* Primary peer NID */
-	info.mi_id.nid  = msg->msg_initiator;
-	info.mi_id.pid  = hdr->src_pid;
-	info.mi_opc     = LNET_MD_OP_GET;
-	info.mi_portal  = hdr->msg.get.ptl_index;
+	info.mi_id.nid = msg->msg_initiator;
+	info.mi_id.pid = hdr->src_pid;
+	info.mi_opc = LNET_MD_OP_GET;
+	info.mi_portal = hdr->msg.get.ptl_index;
 	info.mi_rlength = hdr->msg.get.sink_length;
 	info.mi_roffset = hdr->msg.get.src_offset;
-	info.mi_mbits   = hdr->msg.get.match_bits;
-	info.mi_cpt	= lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
+	info.mi_mbits = hdr->msg.get.match_bits;
+	info.mi_cpt = lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
 
 	rc = lnet_ptl_match_md(&info, msg);
 	if (rc == LNET_MATCHMD_DROP) {
@@ -2128,7 +2127,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 {
 	void *private = msg->msg_private;
 	struct lnet_hdr *hdr = &msg->msg_hdr;
-	struct lnet_process_id src = {0};
+	struct lnet_process_id src = { 0 };
 	struct lnet_libmd *md;
 	int rlength;
 	int mlength;
@@ -2192,7 +2191,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 lnet_parse_ack(struct lnet_ni *ni, struct lnet_msg *msg)
 {
 	struct lnet_hdr *hdr = &msg->msg_hdr;
-	struct lnet_process_id src = {0};
+	struct lnet_process_id src = { 0 };
 	struct lnet_libmd *md;
 	int cpt;
 
@@ -2316,8 +2315,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 void
 lnet_print_hdr(struct lnet_hdr *hdr)
 {
-	struct lnet_process_id src = {0};
-	struct lnet_process_id dst = {0};
+	struct lnet_process_id src = { 0 };
+	struct lnet_process_id dst = { 0 };
 	char *type_str = lnet_msgtyp2str(hdr->type);
 
 	src.nid = hdr->src_nid;
@@ -2533,17 +2532,16 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 	/* for building message event */
 	msg->msg_from = from_nid;
 	if (!for_me) {
-		msg->msg_target.pid	= dest_pid;
-		msg->msg_target.nid	= dest_nid;
-		msg->msg_routing	= 1;
-
+		msg->msg_target.pid = dest_pid;
+		msg->msg_target.nid = dest_nid;
+		msg->msg_routing = 1;
 	} else {
 		/* convert common msg->hdr fields to host byteorder */
-		msg->msg_hdr.type	= type;
-		msg->msg_hdr.src_nid	= src_nid;
+		msg->msg_hdr.type = type;
+		msg->msg_hdr.src_nid = src_nid;
 		le32_to_cpus(&msg->msg_hdr.src_pid);
-		msg->msg_hdr.dest_nid	= dest_nid;
-		msg->msg_hdr.dest_pid	= dest_pid;
+		msg->msg_hdr.dest_nid = dest_nid;
+		msg->msg_hdr.dest_pid = dest_pid;
 		msg->msg_hdr.payload_length = payload_length;
 	}
 
@@ -2609,11 +2607,11 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 		goto free_drop;
 	return 0;
 
- free_drop:
+free_drop:
 	LASSERT(!msg->msg_md);
 	lnet_finalize(msg, rc);
 
- drop:
+drop:
 	lnet_drop_message(ni, cpt, private, payload_length, type);
 	return 0;
 }
@@ -2623,7 +2621,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
 lnet_drop_delayed_msg_list(struct list_head *head, char *reason)
 {
 	while (!list_empty(head)) {
-		struct lnet_process_id id = {0};
+		struct lnet_process_id id = { 0 };
 		struct lnet_msg *msg;
 
 		msg = list_entry(head->next, struct lnet_msg, msg_list);
@@ -2887,7 +2885,7 @@ struct lnet_msg *
 
 	return msg;
 
- drop:
+drop:
 	cpt = lnet_cpt_of_nid(peer_id.nid, ni);
 
 	lnet_net_lock(cpt);
diff --git a/drivers/staging/lustre/lnet/lnet/lib-msg.c b/drivers/staging/lustre/lnet/lnet/lib-msg.c
index 7f58cfe..b9e9257 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-msg.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-msg.c
@@ -44,9 +44,9 @@
 {
 	memset(ev, 0, sizeof(*ev));
 
-	ev->status   = 0;
+	ev->status = 0;
 	ev->unlinked = 1;
-	ev->type     = LNET_EVENT_UNLINK;
+	ev->type = LNET_EVENT_UNLINK;
 	lnet_md_deconstruct(md, &ev->md);
 	lnet_md2handle(&ev->md_handle, md);
 }
@@ -58,7 +58,7 @@
 lnet_build_msg_event(struct lnet_msg *msg, enum lnet_event_kind ev_type)
 {
 	struct lnet_hdr *hdr = &msg->msg_hdr;
-	struct lnet_event *ev  = &msg->msg_ev;
+	struct lnet_event *ev = &msg->msg_ev;
 
 	LASSERT(!msg->msg_routing);
 
@@ -67,27 +67,27 @@
 
 	if (ev_type == LNET_EVENT_SEND) {
 		/* event for active message */
-		ev->target.nid    = le64_to_cpu(hdr->dest_nid);
-		ev->target.pid    = le32_to_cpu(hdr->dest_pid);
+		ev->target.nid = le64_to_cpu(hdr->dest_nid);
+		ev->target.pid = le32_to_cpu(hdr->dest_pid);
 		ev->initiator.nid = LNET_NID_ANY;
 		ev->initiator.pid = the_lnet.ln_pid;
-		ev->source.nid	  = LNET_NID_ANY;
-		ev->source.pid    = the_lnet.ln_pid;
-		ev->sender        = LNET_NID_ANY;
+		ev->source.nid = LNET_NID_ANY;
+		ev->source.pid = the_lnet.ln_pid;
+		ev->sender = LNET_NID_ANY;
 	} else {
 		/* event for passive message */
-		ev->target.pid    = hdr->dest_pid;
-		ev->target.nid    = hdr->dest_nid;
+		ev->target.pid = hdr->dest_pid;
+		ev->target.nid = hdr->dest_nid;
 		ev->initiator.pid = hdr->src_pid;
 		/* Multi-Rail: resolve src_nid to "primary" peer NID */
 		ev->initiator.nid = msg->msg_initiator;
 		/* Multi-Rail: track source NID. */
-		ev->source.pid	  = hdr->src_pid;
-		ev->source.nid	  = hdr->src_nid;
-		ev->rlength       = hdr->payload_length;
-		ev->sender        = msg->msg_from;
-		ev->mlength       = msg->msg_wanted;
-		ev->offset        = msg->msg_offset;
+		ev->source.pid = hdr->src_pid;
+		ev->source.nid = hdr->src_nid;
+		ev->rlength = hdr->payload_length;
+		ev->sender = msg->msg_from;
+		ev->mlength = msg->msg_wanted;
+		ev->offset = msg->msg_offset;
 	}
 
 	switch (ev_type) {
@@ -95,20 +95,20 @@
 		LBUG();
 
 	case LNET_EVENT_PUT: /* passive PUT */
-		ev->pt_index   = hdr->msg.put.ptl_index;
+		ev->pt_index = hdr->msg.put.ptl_index;
 		ev->match_bits = hdr->msg.put.match_bits;
-		ev->hdr_data   = hdr->msg.put.hdr_data;
+		ev->hdr_data = hdr->msg.put.hdr_data;
 		return;
 
 	case LNET_EVENT_GET: /* passive GET */
-		ev->pt_index   = hdr->msg.get.ptl_index;
+		ev->pt_index = hdr->msg.get.ptl_index;
 		ev->match_bits = hdr->msg.get.match_bits;
-		ev->hdr_data   = 0;
+		ev->hdr_data = 0;
 		return;
 
 	case LNET_EVENT_ACK: /* ACK */
 		ev->match_bits = hdr->msg.ack.match_bits;
-		ev->mlength    = hdr->msg.ack.mlength;
+		ev->mlength = hdr->msg.ack.mlength;
 		return;
 
 	case LNET_EVENT_REPLY: /* REPLY */
@@ -116,21 +116,21 @@
 
 	case LNET_EVENT_SEND: /* active message */
 		if (msg->msg_type == LNET_MSG_PUT) {
-			ev->pt_index   = le32_to_cpu(hdr->msg.put.ptl_index);
+			ev->pt_index = le32_to_cpu(hdr->msg.put.ptl_index);
 			ev->match_bits = le64_to_cpu(hdr->msg.put.match_bits);
-			ev->offset     = le32_to_cpu(hdr->msg.put.offset);
-			ev->mlength    =
-			ev->rlength    = le32_to_cpu(hdr->payload_length);
-			ev->hdr_data   = le64_to_cpu(hdr->msg.put.hdr_data);
+			ev->offset = le32_to_cpu(hdr->msg.put.offset);
+			ev->mlength =
+			ev->rlength = le32_to_cpu(hdr->payload_length);
+			ev->hdr_data = le64_to_cpu(hdr->msg.put.hdr_data);
 
 		} else {
 			LASSERT(msg->msg_type == LNET_MSG_GET);
-			ev->pt_index   = le32_to_cpu(hdr->msg.get.ptl_index);
+			ev->pt_index = le32_to_cpu(hdr->msg.get.ptl_index);
 			ev->match_bits = le64_to_cpu(hdr->msg.get.match_bits);
-			ev->mlength    =
-			ev->rlength    = le32_to_cpu(hdr->msg.get.sink_length);
-			ev->offset     = le32_to_cpu(hdr->msg.get.src_offset);
-			ev->hdr_data   = 0;
+			ev->mlength =
+			ev->rlength = le32_to_cpu(hdr->msg.get.sink_length);
+			ev->offset = le32_to_cpu(hdr->msg.get.src_offset);
+			ev->hdr_data = 0;
 		}
 		return;
 	}
@@ -140,7 +140,7 @@
 lnet_msg_commit(struct lnet_msg *msg, int cpt)
 {
 	struct lnet_msg_container *container = the_lnet.ln_msg_containers[cpt];
-	struct lnet_counters *counters  = the_lnet.ln_counters[cpt];
+	struct lnet_counters *counters = the_lnet.ln_counters[cpt];
 
 	/* routed message can be committed for both receiving and sending */
 	LASSERT(!msg->msg_tx_committed);
@@ -172,7 +172,7 @@
 static void
 lnet_msg_decommit_tx(struct lnet_msg *msg, int status)
 {
-	struct lnet_counters	*counters;
+	struct lnet_counters *counters;
 	struct lnet_event *ev = &msg->msg_ev;
 
 	LASSERT(msg->msg_tx_committed);
@@ -294,7 +294,7 @@
 	if (ev->type == LNET_EVENT_PUT || ev->type == LNET_EVENT_REPLY)
 		counters->recv_length += msg->msg_wanted;
 
- out:
+out:
 	lnet_return_rx_credits_locked(msg);
 	msg->msg_rx_committed = 0;
 }
@@ -375,7 +375,7 @@
 
 	unlink = lnet_md_unlinkable(md);
 	if (md->md_eq) {
-		msg->msg_ev.status   = status;
+		msg->msg_ev.status = status;
 		msg->msg_ev.unlinked = unlink;
 		lnet_eq_enqueue_event(md->md_eq, &msg->msg_ev);
 	}
@@ -488,7 +488,7 @@
 		lnet_res_unlock(cpt);
 	}
 
- again:
+again:
 	rc = 0;
 	if (!msg->msg_tx_committed && !msg->msg_rx_committed) {
 		/* not committed to network yet */
diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
index fa391ee..ea232c7 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
@@ -74,7 +74,7 @@
 
 	return 1;
 
- match:
+match:
 	if ((lnet_ptl_is_unique(ptl) && !unique) ||
 	    (lnet_ptl_is_wildcard(ptl) && unique))
 		return 0;
@@ -387,7 +387,7 @@ struct list_head *
 		head = &mtable->mt_mhash[LNET_MT_HASH_IGNORE];
 	else
 		head = lnet_mt_match_head(mtable, info->mi_id, info->mi_mbits);
- again:
+again:
 	/* NB: only wildcard portal needs to return LNET_MATCHMD_EXHAUSTED */
 	if (lnet_ptl_is_wildcard(the_lnet.ln_portals[mtable->mt_portal]))
 		exhausted = LNET_MATCHMD_EXHAUSTED;
@@ -634,9 +634,9 @@ struct list_head *
 		       info->mi_mbits, info->mi_roffset, info->mi_rlength);
 	}
 	goto out0;
- out1:
+out1:
 	lnet_res_unlock(mtable->mt_cpt);
- out0:
+out0:
 	/* EXHAUSTED bit is only meaningful for internal functions */
 	return rc & ~LNET_MATCHMD_EXHAUSTED;
 }
@@ -678,7 +678,7 @@ struct list_head *
 
 	lnet_ptl_lock(ptl);
 	head = &ptl->ptl_msg_stealing;
- again:
+again:
 	list_for_each_entry_safe(msg, tmp, head, msg_list) {
 		struct lnet_match_info info;
 		struct lnet_hdr *hdr;
@@ -688,13 +688,13 @@ struct list_head *
 
 		hdr = &msg->msg_hdr;
 		/* Multi-Rail: Primary peer NID */
-		info.mi_id.nid  = msg->msg_initiator;
-		info.mi_id.pid  = hdr->src_pid;
-		info.mi_opc     = LNET_MD_OP_PUT;
-		info.mi_portal  = hdr->msg.put.ptl_index;
+		info.mi_id.nid = msg->msg_initiator;
+		info.mi_id.pid = hdr->src_pid;
+		info.mi_opc = LNET_MD_OP_PUT;
+		info.mi_portal = hdr->msg.put.ptl_index;
 		info.mi_rlength = hdr->payload_length;
 		info.mi_roffset = hdr->msg.put.offset;
-		info.mi_mbits   = hdr->msg.put.match_bits;
+		info.mi_mbits = hdr->msg.put.match_bits;
 
 		rc = lnet_try_match_md(md, &info, msg);
 
@@ -824,7 +824,7 @@ struct list_head *
 	}
 
 	return 0;
- failed:
+failed:
 	lnet_ptl_cleanup(ptl);
 	return -ENOMEM;
 }
diff --git a/drivers/staging/lustre/lnet/lnet/lib-socket.c b/drivers/staging/lustre/lnet/lnet/lib-socket.c
index cff3d1e..095f9f5 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-socket.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-socket.c
@@ -50,8 +50,11 @@
 	long jiffies_left = timeout * msecs_to_jiffies(MSEC_PER_SEC);
 	unsigned long then;
 	struct timeval tv;
-	struct kvec  iov = { .iov_base = buffer, .iov_len  = nob };
-	struct msghdr msg = {NULL,};
+	struct kvec iov = {
+		.iov_base = buffer,
+		.iov_len = nob
+	};
+	struct msghdr msg = { NULL, };
 
 	LASSERT(nob > 0);
 	/*
@@ -102,9 +105,9 @@
 	long jiffies_left = timeout * msecs_to_jiffies(MSEC_PER_SEC);
 	unsigned long then;
 	struct timeval tv;
-	struct kvec  iov = {
+	struct kvec iov = {
 		.iov_base = buffer,
-		.iov_len  = nob
+		.iov_len = nob
 	};
 	struct msghdr msg = {
 		.msg_flags = 0
diff --git a/drivers/staging/lustre/lnet/lnet/module.c b/drivers/staging/lustre/lnet/lnet/module.c
index 4c08c74..f306569 100644
--- a/drivers/staging/lustre/lnet/lnet/module.c
+++ b/drivers/staging/lustre/lnet/lnet/module.c
@@ -52,7 +52,6 @@
 
 	if (!the_lnet.ln_niinit_self) {
 		rc = try_module_get(THIS_MODULE);
-
 		if (rc != 1)
 			goto out;
 
@@ -229,7 +228,7 @@
 }
 
 static struct notifier_block lnet_ioctl_handler = {
-	.notifier_call = lnet_ioctl,
+	.notifier_call		= lnet_ioctl,
 };
 
 static int __init lnet_init(void)
diff --git a/drivers/staging/lustre/lnet/lnet/net_fault.c b/drivers/staging/lustre/lnet/lnet/net_fault.c
index e2c7468..4234ce1 100644
--- a/drivers/staging/lustre/lnet/lnet/net_fault.c
+++ b/drivers/staging/lustre/lnet/lnet/net_fault.c
@@ -614,7 +614,6 @@ struct delay_daemon_data {
 			rc = lnet_parse_local(ni, msg);
 			if (!rc)
 				continue;
-
 		} else {
 			lnet_net_lock(cpt);
 			rc = lnet_parse_forward_locked(ni, msg);
diff --git a/drivers/staging/lustre/lnet/lnet/nidstrings.c b/drivers/staging/lustre/lnet/lnet/nidstrings.c
index 0f2b75e..8f3d87c 100644
--- a/drivers/staging/lustre/lnet/lnet/nidstrings.c
+++ b/drivers/staging/lustre/lnet/lnet/nidstrings.c
@@ -60,8 +60,8 @@
  * between getting its string and using it.
  */
 
-static char      libcfs_nidstrings[LNET_NIDSTR_COUNT][LNET_NIDSTR_SIZE];
-static int       libcfs_nidstring_idx;
+static char libcfs_nidstrings[LNET_NIDSTR_COUNT][LNET_NIDSTR_SIZE];
+static int libcfs_nidstring_idx;
 
 static DEFINE_SPINLOCK(libcfs_nidstring_lock);
 
@@ -117,23 +117,23 @@ struct nidrange {
 	 * Link to list of this structures which is built on nid range
 	 * list parsing.
 	 */
-	struct list_head nr_link;
+	struct list_head	nr_link;
 	/**
 	 * List head for addrrange::ar_link.
 	 */
-	struct list_head nr_addrranges;
+	struct list_head	nr_addrranges;
 	/**
 	 * Flag indicating that *@<net> is found.
 	 */
-	int nr_all;
+	int			nr_all;
 	/**
 	 * Pointer to corresponding element of libcfs_netstrfns.
 	 */
-	struct netstrfns *nr_netstrfns;
+	struct netstrfns	*nr_netstrfns;
 	/**
 	 * Number of network. E.g. 5 if \<net\> is "elan5".
 	 */
-	int nr_netnum;
+	int			nr_netnum;
 };
 
 /**
@@ -143,11 +143,11 @@ struct addrrange {
 	/**
 	 * Link to nidrange::nr_addrranges.
 	 */
-	struct list_head ar_link;
+	struct list_head	ar_link;
 	/**
 	 * List head for cfs_expr_list::el_list.
 	 */
-	struct list_head ar_numaddr_ranges;
+	struct list_head	ar_numaddr_ranges;
 };
 
 /**
@@ -471,8 +471,8 @@ static void cfs_ip_ar_min_max(struct addrrange *ar, u32 *min_nid,
 	struct cfs_expr_list *el;
 	struct cfs_range_expr *re;
 	u32 tmp_ip_addr = 0;
-	unsigned int min_ip[4] = {0};
-	unsigned int max_ip[4] = {0};
+	unsigned int min_ip[4] = { 0 };
+	unsigned int max_ip[4] = { 0 };
 	int re_count = 0;
 
 	list_for_each_entry(el, &ar->ar_numaddr_ranges, el_link) {
@@ -794,11 +794,11 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
 static int
 libcfs_ip_str2addr(const char *str, int nob, u32 *addr)
 {
-	unsigned int	a;
-	unsigned int	b;
-	unsigned int	c;
-	unsigned int	d;
-	int		n = nob; /* XscanfX */
+	unsigned int a;
+	unsigned int b;
+	unsigned int c;
+	unsigned int d;
+	int n = nob; /* XscanfX */
 
 	/* numeric IP? */
 	if (sscanf(str, "%u.%u.%u.%u%n", &a, &b, &c, &d, &n) >= 4 &&
@@ -897,7 +897,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
 static int
 libcfs_num_str2addr(const char *str, int nob, u32 *addr)
 {
-	int     n;
+	int n;
 
 	n = nob;
 	if (sscanf(str, "0x%x%n", addr, &n) >= 1 && n == nob)
@@ -926,7 +926,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
 libcfs_num_parse(char *str, int len, struct list_head *list)
 {
 	struct cfs_expr_list *el;
-	int	rc;
+	int rc;
 
 	rc = cfs_expr_list_parse(str, len, 0, MAX_NUMERIC_VALUE, &el);
 	if (!rc)
@@ -1049,7 +1049,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
 static struct netstrfns *
 libcfs_name2netstrfns(const char *name)
 {
-	int    i;
+	int i;
 
 	for (i = 0; i < libcfs_nnetstrfns; i++)
 		if (!strcmp(libcfs_netstrfns[i].nf_name, name))
@@ -1194,7 +1194,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
 u32
 libcfs_str2net(const char *str)
 {
-	u32  net;
+	u32 net;
 
 	if (libcfs_str2net_internal(str, &net))
 		return net;
diff --git a/drivers/staging/lustre/lnet/lnet/peer.c b/drivers/staging/lustre/lnet/lnet/peer.c
index d807dd4..dfe1f3d 100644
--- a/drivers/staging/lustre/lnet/lnet/peer.c
+++ b/drivers/staging/lustre/lnet/lnet/peer.c
@@ -586,8 +586,8 @@ void lnet_peer_uninit(void)
 static struct lnet_peer_ni *
 lnet_get_peer_ni_locked(struct lnet_peer_table *ptable, lnet_nid_t nid)
 {
-	struct list_head	*peers;
-	struct lnet_peer_ni	*lp;
+	struct list_head *peers;
+	struct lnet_peer_ni *lp;
 
 	LASSERT(the_lnet.ln_state == LNET_STATE_RUNNING);
 
@@ -1069,6 +1069,7 @@ struct lnet_peer_net *
 lnet_peer_get_net_locked(struct lnet_peer *peer, u32 net_id)
 {
 	struct lnet_peer_net *peer_net;
+
 	list_for_each_entry(peer_net, &peer->lp_peer_nets, lpn_peer_nets) {
 		if (peer_net->lpn_net_id == net_id)
 			return peer_net;
@@ -1088,9 +1089,9 @@ struct lnet_peer_net *
  */
 static int
 lnet_peer_attach_peer_ni(struct lnet_peer *lp,
-				struct lnet_peer_net *lpn,
-				struct lnet_peer_ni *lpni,
-				unsigned int flags)
+			 struct lnet_peer_net *lpn,
+			 struct lnet_peer_ni *lpni,
+			 unsigned int flags)
 {
 	struct lnet_peer_table *ptable;
 
@@ -2686,12 +2687,12 @@ static int lnet_peer_send_ping(struct lnet_peer *lp)
 	}
 
 	/* initialize md content */
-	md.start     = &pbuf->pb_info;
-	md.length    = LNET_PING_INFO_SIZE(nnis);
+	md.start = &pbuf->pb_info;
+	md.length = LNET_PING_INFO_SIZE(nnis);
 	md.threshold = 2; /* GET/REPLY */
-	md.max_size  = 0;
-	md.options   = LNET_MD_TRUNCATE;
-	md.user_ptr  = lp;
+	md.max_size = 0;
+	md.options = LNET_MD_TRUNCATE;
+	md.user_ptr = lp;
 	md.eq_handle = the_lnet.ln_dc_eqh;
 
 	rc = LNetMDBind(md, LNET_UNLINK, &lp->lp_ping_mdh);
@@ -2715,7 +2716,6 @@ static int lnet_peer_send_ping(struct lnet_peer *lp)
 	rc = LNetGet(LNET_NID_ANY, lp->lp_ping_mdh, id,
 		     LNET_RESERVED_PORTAL,
 		     LNET_PROTO_PING_MATCHBITS, 0);
-
 	if (rc)
 		goto fail_unlink_md;
 
@@ -2792,13 +2792,13 @@ static int lnet_peer_send_push(struct lnet_peer *lp)
 	lnet_net_unlock(cpt);
 
 	/* Push source MD */
-	md.start     = &pbuf->pb_info;
-	md.length    = LNET_PING_INFO_SIZE(pbuf->pb_nnis);
+	md.start = &pbuf->pb_info;
+	md.length = LNET_PING_INFO_SIZE(pbuf->pb_nnis);
 	md.threshold = 2; /* Put/Ack */
-	md.max_size  = 0;
-	md.options   = 0;
+	md.max_size = 0;
+	md.options = 0;
 	md.eq_handle = the_lnet.ln_dc_eqh;
-	md.user_ptr  = lp;
+	md.user_ptr = lp;
 
 	rc = LNetMDBind(md, LNET_UNLINK, &lp->lp_push_mdh);
 	if (rc) {
@@ -2821,7 +2821,6 @@ static int lnet_peer_send_push(struct lnet_peer *lp)
 	rc = LNetPut(LNET_NID_ANY, lp->lp_push_mdh,
 		     LNET_ACK_REQ, id, LNET_RESERVED_PORTAL,
 		     LNET_PROTO_PING_MATCHBITS, 0, 0);
-
 	if (rc)
 		goto fail_unlink;
 
@@ -3315,8 +3314,8 @@ int lnet_get_peer_info(struct lnet_ioctl_peer_cfg *cfg, void __user *bulk)
 		goto out;
 	}
 
-	size = sizeof(nid) + sizeof(*lpni_info) + sizeof(*lpni_stats)
-		+ sizeof(*lpni_msg_stats);
+	size = sizeof(nid) + sizeof(*lpni_info) + sizeof(*lpni_stats) +
+	       sizeof(*lpni_msg_stats);
 	size *= lp->lp_nnis;
 	if (size > cfg->prcfg_size) {
 		cfg->prcfg_size = size;
diff --git a/drivers/staging/lustre/lnet/lnet/router.c b/drivers/staging/lustre/lnet/lnet/router.c
index 22c88ec..463b123 100644
--- a/drivers/staging/lustre/lnet/lnet/router.c
+++ b/drivers/staging/lustre/lnet/lnet/router.c
@@ -172,7 +172,7 @@
 		notifylnd = lp->lpni_notifylnd;
 
 		lp->lpni_notifylnd = 0;
-		lp->lpni_notify    = 0;
+		lp->lpni_notify = 0;
 
 		if (notifylnd && ni->ni_net->net_lnd->lnd_notify) {
 			spin_unlock(&lp->lpni_lock);
@@ -274,6 +274,7 @@ static void lnet_shuffle_seed(void)
 	 * the NID for this node gives the most entropy in the low bits */
 	while ((ni = lnet_get_next_ni_locked(NULL, ni))) {
 		u32 lnd_type, seed;
+
 		lnd_type = LNET_NETTYP(LNET_NIDNET(ni->ni_nid));
 		if (lnd_type != LOLND) {
 			seed = (LNET_NIDADDR(ni->ni_nid) | lnd_type);
@@ -386,7 +387,6 @@ static void lnet_shuffle_seed(void)
 	/* Search for a duplicate route (it's a NOOP if it is) */
 	add_route = 1;
 	list_for_each_entry(route2, &rnet2->lrn_routes, lr_list) {
-
 		if (route2->lr_gateway == route->lr_gateway) {
 			add_route = 0;
 			break;
@@ -501,7 +501,7 @@ static void lnet_shuffle_seed(void)
 	else
 		rn_list = lnet_net2rnethash(net);
 
- again:
+again:
 	list_for_each_entry(rnet, rn_list, lrn_list) {
 		if (!(net == LNET_NIDNET(LNET_NID_ANY) ||
 		      net == rnet->lrn_net))
@@ -601,10 +601,10 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 		list_for_each_entry(rnet, rn_list, lrn_list) {
 			list_for_each_entry(route, &rnet->lrn_routes, lr_list) {
 				if (!idx--) {
-					*net      = rnet->lrn_net;
-					*hops     = route->lr_hops;
+					*net = rnet->lrn_net;
+					*hops = route->lr_hops;
 					*priority = route->lr_priority;
-					*gateway  = route->lr_gateway->lpni_nid;
+					*gateway = route->lr_gateway->lpni_nid;
 					*alive = lnet_is_route_alive(route);
 					lnet_net_unlock(cpt);
 					return 0;
@@ -648,7 +648,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 	struct lnet_ping_buffer *pbuf = rcd->rcd_pingbuffer;
 	struct lnet_peer_ni *gw = rcd->rcd_gateway;
 	struct lnet_route *rte;
-	int			nnis;
+	int nnis;
 
 	if (!gw->lpni_alive || !pbuf)
 		return;
@@ -799,7 +799,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 	if (avoid_asym_router_failure && !event->status)
 		lnet_parse_rc_info(rcd);
 
- out:
+out:
 	lnet_net_unlock(lp->lpni_cpt);
 }
 
@@ -1069,14 +1069,14 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 		id.pid = LNET_PID_LUSTRE;
 		CDEBUG(D_NET, "Check: %s\n", libcfs_id2str(id));
 
-		rtr->lpni_ping_notsent   = 1;
+		rtr->lpni_ping_notsent = 1;
 		rtr->lpni_ping_timestamp = now;
 
 		mdh = rcd->rcd_mdh;
 
 		if (!rtr->lpni_ping_deadline) {
 			rtr->lpni_ping_deadline = ktime_get_seconds() +
-						router_ping_timeout;
+						  router_ping_timeout;
 		}
 
 		lnet_net_unlock(rtr->lpni_cpt);
@@ -1652,7 +1652,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 
 	return 0;
 
- failed:
+failed:
 	lnet_rtrpools_free(0);
 	return rc;
 }
@@ -1797,8 +1797,8 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
 		return -EINVAL;
 	}
 
-	if (ni && !alive &&	     /* LND telling me she's down */
-	    !auto_down) {		       /* auto-down disabled */
+	if (ni && !alive &&	/* LND telling me she's down */
+	    !auto_down) {	/* auto-down disabled */
 		CDEBUG(D_NET, "Auto-down disabled\n");
 		return 0;
 	}
diff --git a/drivers/staging/lustre/lnet/lnet/router_proc.c b/drivers/staging/lustre/lnet/lnet/router_proc.c
index e8cc70f..94ef441 100644
--- a/drivers/staging/lustre/lnet/lnet/router_proc.c
+++ b/drivers/staging/lustre/lnet/lnet/router_proc.c
@@ -66,8 +66,8 @@
 #define LNET_PROC_HOFF_GET(pos)				\
 	(int)((pos) & LNET_PROC_HOFF_MASK)
 
-#define LNET_PROC_POS_MAKE(cpt, ver, hash, off)		\
-	(((((loff_t)(cpt)) & LNET_PROC_CPT_MASK) << LNET_PROC_VPOS_BITS) |   \
+#define LNET_PROC_POS_MAKE(cpt, ver, hash, off)				    \
+	(((((loff_t)(cpt)) & LNET_PROC_CPT_MASK) << LNET_PROC_VPOS_BITS) |  \
 	((((loff_t)(ver)) & LNET_PROC_VER_MASK) << LNET_PROC_HPOS_BITS) |   \
 	((((loff_t)(hash)) & LNET_PROC_HASH_MASK) << LNET_PROC_HOFF_BITS) | \
 	((off) & LNET_PROC_HOFF_MASK))
@@ -91,7 +91,6 @@ static int proc_lnet_stats(struct ctl_table *table, int write,
 	}
 
 	/* read */
-
 	ctrs = kzalloc(sizeof(*ctrs), GFP_NOFS);
 	if (!ctrs)
 		return -ENOMEM;
@@ -395,8 +394,8 @@ static int proc_lnet_peers(struct ctl_table *table, int write,
 	struct lnet_peer_table *ptable;
 	char *tmpstr = NULL;
 	char *s;
-	int cpt  = LNET_PROC_CPT_GET(*ppos);
-	int ver  = LNET_PROC_VER_GET(*ppos);
+	int cpt = LNET_PROC_CPT_GET(*ppos);
+	int ver = LNET_PROC_VER_GET(*ppos);
 	int hash = LNET_PROC_HASH_GET(*ppos);
 	int hoff = LNET_PROC_HOFF_GET(*ppos);
 	int rc = 0;
@@ -456,7 +455,7 @@ static int proc_lnet_peers(struct ctl_table *table, int write,
 		struct lnet_peer_ni *peer;
 		struct list_head *p;
 		int skip;
- again:
+again:
 		p = NULL;
 		peer = NULL;
 		skip = hoff - 1;
@@ -630,7 +629,7 @@ static int proc_lnet_buffers(struct ctl_table *table, int write,
 		lnet_net_unlock(LNET_LOCK_EX);
 	}
 
- out:
+out:
 	len = s - tmpstr;
 
 	if (pos >= min_t(int, len, strlen(tmpstr)))
@@ -787,9 +786,9 @@ static int proc_lnet_nis(struct ctl_table *table, int write,
 }
 
 struct lnet_portal_rotors {
-	int pr_value;
-	const char *pr_name;
-	const char *pr_desc;
+	int		 pr_value;
+	const char	*pr_name;
+	const char	*pr_desc;
 };
 
 static struct lnet_portal_rotors	portal_rotors[] = {
@@ -890,39 +889,39 @@ static int proc_lnet_portal_rotor(struct ctl_table *table, int write,
 	 * to go via /proc for portability.
 	 */
 	{
-		.procname     = "stats",
-		.mode         = 0644,
-		.proc_handler = &proc_lnet_stats,
+		.procname	= "stats",
+		.mode		= 0644,
+		.proc_handler	= &proc_lnet_stats,
 	},
 	{
-		.procname     = "routes",
-		.mode         = 0444,
-		.proc_handler = &proc_lnet_routes,
+		.procname	= "routes",
+		.mode		= 0444,
+		.proc_handler	= &proc_lnet_routes,
 	},
 	{
-		.procname     = "routers",
-		.mode         = 0444,
-		.proc_handler = &proc_lnet_routers,
+		.procname	= "routers",
+		.mode		= 0444,
+		.proc_handler	= &proc_lnet_routers,
 	},
 	{
-		.procname     = "peers",
-		.mode         = 0644,
-		.proc_handler = &proc_lnet_peers,
+		.procname	= "peers",
+		.mode		= 0644,
+		.proc_handler	= &proc_lnet_peers,
 	},
 	{
-		.procname     = "buffers",
-		.mode         = 0444,
-		.proc_handler = &proc_lnet_buffers,
+		.procname	= "buffers",
+		.mode		= 0444,
+		.proc_handler	= &proc_lnet_buffers,
 	},
 	{
-		.procname     = "nis",
-		.mode         = 0644,
-		.proc_handler = &proc_lnet_nis,
+		.procname	= "nis",
+		.mode		= 0644,
+		.proc_handler	= &proc_lnet_nis,
 	},
 	{
-		.procname     = "portal_rotor",
-		.mode         = 0644,
-		.proc_handler = &proc_lnet_portal_rotor,
+		.procname	= "portal_rotor",
+		.mode		= 0644,
+		.proc_handler	= &proc_lnet_portal_rotor,
 	},
 	{
 	}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 25/26] socklnd: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (23 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 24/26] lnet: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-01-31 17:19 ` [lustre-devel] [PATCH 26/26] o2iblnd: " James Simmons
  2019-02-04  8:44 ` [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes Andreas Dilger
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The ksocklnd code is very messy and difficult to read. Remove
excess white space and properly align the data structure fields
so they are easier to read; a short illustration of the target
layout follows.
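
As a rough sketch of the convention this patch converges on (a
hypothetical struct for illustration only, not one taken from the
socklnd code), fields are tab-indented so that types, names and
trailing comments line up in single columns:

	#include <linux/types.h>
	#include <linux/list.h>
	#include <linux/atomic.h>

	/* Hypothetical example only; it demonstrates the alignment
	 * style, not a structure that exists in this tree.
	 */
	struct example_peer {
		struct list_head	ep_list;	/* chain on global list */
		atomic_t		ep_refcount;	/* # users */
		int			ep_error;	/* errno on last close */
		u32			ep_ipaddr;	/* peer IP address */
	};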

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/include/linux/lnet/socklnd.h    |   6 +-
 .../staging/lustre/lnet/klnds/socklnd/socklnd.c    |  71 +--
 .../staging/lustre/lnet/klnds/socklnd/socklnd.h    | 548 ++++++++++-----------
 .../staging/lustre/lnet/klnds/socklnd/socklnd_cb.c |  48 +-
 .../lustre/lnet/klnds/socklnd/socklnd_lib.c        |  16 +-
 .../lustre/lnet/klnds/socklnd/socklnd_modparams.c  |  54 +-
 .../lustre/lnet/klnds/socklnd/socklnd_proto.c      |  79 ++-
 7 files changed, 408 insertions(+), 414 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/lnet/socklnd.h b/drivers/staging/lustre/include/linux/lnet/socklnd.h
index 20fa221d..ca814af 100644
--- a/drivers/staging/lustre/include/linux/lnet/socklnd.h
+++ b/drivers/staging/lustre/include/linux/lnet/socklnd.h
@@ -64,9 +64,9 @@ struct ksock_lnet_msg {
 } __packed;
 
 struct ksock_msg {
-	u32	ksm_type;		/* type of socklnd message */
-	u32	ksm_csum;		/* checksum if != 0 */
-	u64	ksm_zc_cookies[2];	/* Zero-Copy request/ACK cookie */
+	u32		ksm_type;		/* type of socklnd message */
+	u32		ksm_csum;		/* checksum if != 0 */
+	u64		ksm_zc_cookies[2];	/* Zero-Copy request/ACK cookie */
 	union {
 		struct ksock_lnet_msg lnetmsg; /* lnet message, it's empty if
 						* it's NOOP
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
index f048f0a..785f76c 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
@@ -309,7 +309,7 @@ struct ksock_peer *
 			}
 		}
 	}
- out:
+out:
 	read_unlock(&ksocknal_data.ksnd_global_lock);
 	return rc;
 }
@@ -713,8 +713,8 @@ struct ksock_peer *
 ksocknal_match_peerip(struct ksock_interface *iface, u32 *ips, int nips)
 {
 	int best_netmatch = 0;
-	int best_xor      = 0;
-	int best	  = -1;
+	int best_xor = 0;
+	int best = -1;
 	int this_xor;
 	int this_netmatch;
 	int i;
@@ -944,7 +944,8 @@ struct ksock_peer *
 			best_iface = iface;
 			best_netmatch = this_netmatch;
 			best_nroutes = iface->ksni_nroutes;
-		next_iface:;
+next_iface:
+			;
 		}
 
 		if (!best_iface)
@@ -955,7 +956,8 @@ struct ksock_peer *
 
 		ksocknal_add_route_locked(peer_ni, newroute);
 		newroute = NULL;
-	next_ipaddr:;
+next_ipaddr:
+		;
 	}
 
 	write_unlock_bh(global_lock);
@@ -982,7 +984,7 @@ struct ksock_peer *
 	}
 
 	lnet_ni_addref(ni);
-	cr->ksncr_ni   = ni;
+	cr->ksncr_ni = ni;
 	cr->ksncr_sock = sock;
 
 	spin_lock_bh(&ksocknal_data.ksnd_connd_lock);
@@ -1215,7 +1217,6 @@ struct ksock_peer *
 	 */
 	if (conn->ksnc_ipaddr != conn->ksnc_myipaddr) {
 		list_for_each_entry(conn2, &peer_ni->ksnp_conns, ksnc_list) {
-
 			if (conn2->ksnc_ipaddr != conn->ksnc_ipaddr ||
 			    conn2->ksnc_myipaddr != conn->ksnc_myipaddr ||
 			    conn2->ksnc_type != conn->ksnc_type)
@@ -1249,7 +1250,7 @@ struct ksock_peer *
 
 	/*
 	 * Search for a route corresponding to the new connection and
-	 * create an association.  This allows incoming connections created
+	 * create an association. This allows incoming connections created
 	 * by routes in my peer_ni to match my own route entries so I don't
 	 * continually create duplicate routes.
 	 */
@@ -1371,7 +1372,7 @@ struct ksock_peer *
 	ksocknal_conn_decref(conn);
 	return rc;
 
- failed_2:
+failed_2:
 	if (!peer_ni->ksnp_closing &&
 	    list_empty(&peer_ni->ksnp_conns) &&
 	    list_empty(&peer_ni->ksnp_routes)) {
@@ -1457,7 +1458,7 @@ struct ksock_peer *
 				goto conn2_found;
 		}
 		route->ksnr_connected &= ~(1 << conn->ksnc_type);
-	conn2_found:
+conn2_found:
 		conn->ksnc_route = NULL;
 
 		ksocknal_route_decref(route);     /* drop conn's ref on route */
@@ -2121,7 +2122,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 
 	switch (cmd) {
 	case IOC_LIBCFS_GET_INTERFACE: {
-		struct ksock_net       *net = ni->ni_data;
+		struct ksock_net *net = ni->ni_data;
 		struct ksock_interface *iface;
 
 		read_lock(&ksocknal_data.ksnd_global_lock);
@@ -2164,8 +2165,8 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 		if (rc)
 			return rc;
 
-		data->ioc_nid    = id.nid;
-		data->ioc_count  = share_count;
+		data->ioc_nid  = id.nid;
+		data->ioc_count = share_count;
 		data->ioc_u32[0] = ip;
 		data->ioc_u32[1] = port;
 		data->ioc_u32[2] = myip;
@@ -2178,14 +2179,14 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 		id.nid = data->ioc_nid;
 		id.pid = LNET_PID_LUSTRE;
 		return ksocknal_add_peer(ni, id,
-					  data->ioc_u32[0], /* IP */
-					  data->ioc_u32[1]); /* port */
+					 data->ioc_u32[0], /* IP */
+					 data->ioc_u32[1]); /* port */
 
 	case IOC_LIBCFS_DEL_PEER:
 		id.nid = data->ioc_nid;
 		id.pid = LNET_PID_ANY;
 		return ksocknal_del_peer(ni, id,
-					  data->ioc_u32[0]); /* IP */
+					 data->ioc_u32[0]); /* IP */
 
 	case IOC_LIBCFS_GET_CONN: {
 		int txmem;
@@ -2199,9 +2200,9 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 
 		ksocknal_lib_get_conn_tunables(conn, &txmem, &rxmem, &nagle);
 
-		data->ioc_count  = txmem;
-		data->ioc_nid    = conn->ksnc_peer->ksnp_id.nid;
-		data->ioc_flags  = nagle;
+		data->ioc_count = txmem;
+		data->ioc_nid = conn->ksnc_peer->ksnp_id.nid;
+		data->ioc_flags = nagle;
 		data->ioc_u32[0] = conn->ksnc_ipaddr;
 		data->ioc_u32[1] = conn->ksnc_port;
 		data->ioc_u32[2] = conn->ksnc_myipaddr;
@@ -2217,7 +2218,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 		id.nid = data->ioc_nid;
 		id.pid = LNET_PID_ANY;
 		return ksocknal_close_matching_conns(id,
-						      data->ioc_u32[0]);
+						     data->ioc_u32[0]);
 
 	case IOC_LIBCFS_REGISTER_MYNID:
 		/* Ignore if this is a noop */
@@ -2449,8 +2450,8 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 		}
 	}
 
-	ksocknal_data.ksnd_connd_starting       = 0;
-	ksocknal_data.ksnd_connd_failed_stamp   = 0;
+	ksocknal_data.ksnd_connd_starting = 0;
+	ksocknal_data.ksnd_connd_failed_stamp = 0;
 	ksocknal_data.ksnd_connd_starting_stamp = ktime_get_real_seconds();
 	/*
 	 * must have at least 2 connds to remain responsive to accepts while
@@ -2495,7 +2496,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 
 	return 0;
 
- failed:
+failed:
 	ksocknal_base_shutdown();
 	return -ENETDOWN;
 }
@@ -2512,7 +2513,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 		list_for_each_entry(peer_ni, &ksocknal_data.ksnd_peers[i],
 				    ksnp_list) {
 			struct ksock_route *route;
-			struct ksock_conn  *conn;
+			struct ksock_conn *conn;
 
 			if (peer_ni->ksnp_ni != ni)
 				continue;
@@ -2555,7 +2556,7 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 {
 	struct ksock_net *net = ni->ni_data;
 	int i;
-	struct lnet_process_id anyid = {0};
+	struct lnet_process_id anyid = { 0 };
 
 	anyid.nid = LNET_NID_ANY;
 	anyid.pid = LNET_PID_ANY;
@@ -2846,9 +2847,9 @@ static int ksocknal_push(struct lnet_ni *ni, struct lnet_process_id id)
 
 	return 0;
 
- fail_1:
+fail_1:
 	kfree(net);
- fail_0:
+fail_0:
 	if (!ksocknal_data.ksnd_nnets)
 		ksocknal_base_shutdown();
 
@@ -2869,15 +2870,15 @@ static int __init ksocklnd_init(void)
 	BUILD_BUG_ON(SOCKLND_CONN_ACK != SOCKLND_CONN_BULK_IN);
 
 	/* initialize the_ksocklnd */
-	the_ksocklnd.lnd_type     = SOCKLND;
-	the_ksocklnd.lnd_startup  = ksocknal_startup;
+	the_ksocklnd.lnd_type = SOCKLND;
+	the_ksocklnd.lnd_startup = ksocknal_startup;
 	the_ksocklnd.lnd_shutdown = ksocknal_shutdown;
-	the_ksocklnd.lnd_ctl      = ksocknal_ctl;
-	the_ksocklnd.lnd_send     = ksocknal_send;
-	the_ksocklnd.lnd_recv     = ksocknal_recv;
-	the_ksocklnd.lnd_notify   = ksocknal_notify;
-	the_ksocklnd.lnd_query    = ksocknal_query;
-	the_ksocklnd.lnd_accept   = ksocknal_accept;
+	the_ksocklnd.lnd_ctl = ksocknal_ctl;
+	the_ksocklnd.lnd_send = ksocknal_send;
+	the_ksocklnd.lnd_recv = ksocknal_recv;
+	the_ksocklnd.lnd_notify = ksocknal_notify;
+	the_ksocklnd.lnd_query = ksocknal_query;
+	the_ksocklnd.lnd_accept = ksocknal_accept;
 
 	rc = ksocknal_tunables_init();
 	if (rc)
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
index a390381..ce1f9e7 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
@@ -69,36 +69,36 @@
  * no risk if we're not running on a CONFIG_HIGHMEM platform.
  */
 #ifdef CONFIG_HIGHMEM
-# define SOCKNAL_RISK_KMAP_DEADLOCK  0
+# define SOCKNAL_RISK_KMAP_DEADLOCK	0
 #else
-# define SOCKNAL_RISK_KMAP_DEADLOCK  1
+# define SOCKNAL_RISK_KMAP_DEADLOCK	1
 #endif
 
 struct ksock_sched_info;
 
 struct ksock_sched {				/* per scheduler state */
-	spinlock_t              kss_lock;       /* serialise */
-	struct list_head        kss_rx_conns;   /* conn waiting to be read */
-	struct list_head        kss_tx_conns;   /* conn waiting to be written */
-	struct list_head        kss_zombie_noop_txs; /* zombie noop tx list */
-	wait_queue_head_t       kss_waitq;	/* where scheduler sleeps */
-	int                     kss_nconns;     /* # connections assigned to
+	spinlock_t		kss_lock;	/* serialise */
+	struct list_head	kss_rx_conns;	/* conn waiting to be read */
+	struct list_head	kss_tx_conns;	/* conn waiting to be written */
+	struct list_head	kss_zombie_noop_txs; /* zombie noop tx list */
+	wait_queue_head_t	kss_waitq;	/* where scheduler sleeps */
+	int			kss_nconns;	/* # connections assigned to
 						 * this scheduler
 						 */
-	struct ksock_sched_info *kss_info;	/* owner of it */
+	struct ksock_sched_info	*kss_info;	/* owner of it */
 };
 
 struct ksock_sched_info {
-	int                     ksi_nthreads_max; /* max allowed threads */
-	int                     ksi_nthreads;     /* number of threads */
-	int                     ksi_cpt;          /* CPT id */
+	int			ksi_nthreads_max; /* max allowed threads */
+	int			ksi_nthreads;	  /* number of threads */
+	int			ksi_cpt;	  /* CPT id */
 	struct ksock_sched	*ksi_scheds;	  /* array of schedulers */
 };
 
-#define KSOCK_CPT_SHIFT           16
-#define KSOCK_THREAD_ID(cpt, sid) (((cpt) << KSOCK_CPT_SHIFT) | (sid))
-#define KSOCK_THREAD_CPT(id)      ((id) >> KSOCK_CPT_SHIFT)
-#define KSOCK_THREAD_SID(id)      ((id) & ((1UL << KSOCK_CPT_SHIFT) - 1))
+#define KSOCK_CPT_SHIFT			16
+#define KSOCK_THREAD_ID(cpt, sid)	(((cpt) << KSOCK_CPT_SHIFT) | (sid))
+#define KSOCK_THREAD_CPT(id)		((id) >> KSOCK_CPT_SHIFT)
+#define KSOCK_THREAD_SID(id)		((id) & ((1UL << KSOCK_CPT_SHIFT) - 1))
 
 struct ksock_interface {			/* in-use interface */
 	u32		ksni_ipaddr;		/* interface's IP address */
@@ -109,149 +109,149 @@ struct ksock_interface {			/* in-use interface */
 };
 
 struct ksock_tunables {
-	int          *ksnd_timeout;            /* "stuck" socket timeout
-						* (seconds)
-						*/
-	int          *ksnd_nscheds;            /* # scheduler threads in each
-						* pool while starting
-						*/
-	int          *ksnd_nconnds;            /* # connection daemons */
-	int          *ksnd_nconnds_max;        /* max # connection daemons */
-	int          *ksnd_min_reconnectms;    /* first connection retry after
-						* (ms)...
-						*/
-	int          *ksnd_max_reconnectms;    /* ...exponentially increasing to
-						* this
-						*/
-	int          *ksnd_eager_ack;          /* make TCP ack eagerly? */
-	int          *ksnd_typed_conns;        /* drive sockets by type? */
-	int          *ksnd_min_bulk;           /* smallest "large" message */
-	int          *ksnd_tx_buffer_size;     /* socket tx buffer size */
-	int          *ksnd_rx_buffer_size;     /* socket rx buffer size */
-	int          *ksnd_nagle;              /* enable NAGLE? */
-	int          *ksnd_round_robin;        /* round robin for multiple
-						* interfaces
-						*/
-	int          *ksnd_keepalive;          /* # secs for sending keepalive
-						* NOOP
-						*/
-	int          *ksnd_keepalive_idle;     /* # idle secs before 1st probe
-						*/
-	int          *ksnd_keepalive_count;    /* # probes */
-	int          *ksnd_keepalive_intvl;    /* time between probes */
-	int          *ksnd_credits;            /* # concurrent sends */
-	int          *ksnd_peertxcredits;      /* # concurrent sends to 1 peer
-						*/
-	int          *ksnd_peerrtrcredits;     /* # per-peer_ni router buffer
-						* credits
-						*/
-	int          *ksnd_peertimeout;        /* seconds to consider
-						* peer_ni dead
-						*/
-	int          *ksnd_enable_csum;        /* enable check sum */
-	int          *ksnd_inject_csum_error;  /* set non-zero to inject
-						* checksum error
-						*/
-	int          *ksnd_nonblk_zcack;       /* always send zc-ack on
-						* non-blocking connection
-						*/
-	unsigned int *ksnd_zc_min_payload;     /* minimum zero copy payload
-						* size
-						*/
-	int          *ksnd_zc_recv;            /* enable ZC receive (for
-						* Chelsio TOE)
-						*/
-	int          *ksnd_zc_recv_min_nfrags; /* minimum # of fragments to
-						* enable ZC receive
-						*/
+	int		*ksnd_timeout;		/* "stuck" socket timeout
+						 * (seconds)
+						 */
+	int		*ksnd_nscheds;		/* # scheduler threads in each
+						 * pool while starting
+						 */
+	int		*ksnd_nconnds;		/* # connection daemons */
+	int		*ksnd_nconnds_max;	/* max # connection daemons */
+	int		*ksnd_min_reconnectms;	/* first connection retry after
+						 * (ms)...
+						 */
+	int		*ksnd_max_reconnectms;	/* ...exponentially increasing to
+						 * this
+						 */
+	int		*ksnd_eager_ack;	/* make TCP ack eagerly? */
+	int		*ksnd_typed_conns;	/* drive sockets by type? */
+	int		*ksnd_min_bulk;		/* smallest "large" message */
+	int		*ksnd_tx_buffer_size;	/* socket tx buffer size */
+	int		*ksnd_rx_buffer_size;	/* socket rx buffer size */
+	int		*ksnd_nagle;		/* enable NAGLE? */
+	int		*ksnd_round_robin;	/* round robin for multiple
+						 * interfaces
+						 */
+	int		*ksnd_keepalive;	/* # secs for sending keepalive
+						 * NOOP
+						 */
+	int		*ksnd_keepalive_idle;	/* # idle secs before 1st probe
+						 */
+	int		*ksnd_keepalive_count;	/* # probes */
+	int		*ksnd_keepalive_intvl;	/* time between probes */
+	int		*ksnd_credits;		/* # concurrent sends */
+	int		*ksnd_peertxcredits;	/* # concurrent sends to 1 peer
+						 */
+	int		*ksnd_peerrtrcredits;	/* # per-peer_ni router buffer
+						 * credits
+						 */
+	int		*ksnd_peertimeout;	/* seconds to consider
+						 * peer_ni dead
+						 */
+	int		*ksnd_enable_csum;	/* enable check sum */
+	int		*ksnd_inject_csum_error;/* set non-zero to inject
+						 * checksum error
+						 */
+	int		*ksnd_nonblk_zcack;	/* always send zc-ack on
+						 * non-blocking connection
+						 */
+	unsigned int	*ksnd_zc_min_payload;	/* minimum zero copy payload
+						 * size
+						 */
+	int		*ksnd_zc_recv;		/* enable ZC receive (for
+						 * Chelsio TOE)
+						 */
+	int		*ksnd_zc_recv_min_nfrags; /* minimum # of fragments to
+						   * enable ZC receive
+						   */
 };
 
 struct ksock_net {
-	u64		  ksnn_incarnation;	/* my epoch */
-	spinlock_t	  ksnn_lock;		/* serialise */
-	struct list_head	  ksnn_list;		/* chain on global list */
-	int		  ksnn_npeers;		/* # peers */
-	int		  ksnn_shutdown;	/* shutting down? */
-	int		  ksnn_ninterfaces;	/* IP interfaces */
-	struct ksock_interface ksnn_interfaces[LNET_INTERFACES_NUM];
+	u64			ksnn_incarnation;	/* my epoch */
+	spinlock_t		ksnn_lock;		/* serialise */
+	struct list_head	ksnn_list;		/* chain on global list */
+	int			ksnn_npeers;		/* # peers */
+	int			ksnn_shutdown;		/* shutting down? */
+	int			ksnn_ninterfaces;	/* IP interfaces */
+	struct ksock_interface	ksnn_interfaces[LNET_INTERFACES_NUM];
 };
 
 /** connd timeout */
-#define SOCKNAL_CONND_TIMEOUT  120
+#define SOCKNAL_CONND_TIMEOUT	120
 /** reserved thread for accepting & creating new connd */
-#define SOCKNAL_CONND_RESV     1
+#define SOCKNAL_CONND_RESV	1
 
 struct ksock_nal_data {
-	int                     ksnd_init;              /* initialisation state
+	int			ksnd_init;		/* initialisation state
 							 */
-	int                     ksnd_nnets;             /* # networks set up */
-	struct list_head        ksnd_nets;              /* list of nets */
-	rwlock_t                ksnd_global_lock;       /* stabilize
+	int			ksnd_nnets;		/* # networks set up */
+	struct list_head	ksnd_nets;		/* list of nets */
+	rwlock_t		ksnd_global_lock;	/* stabilize
 							 * peer_ni/conn ops
 							 */
-	struct list_head        *ksnd_peers;            /* hash table of all my
+	struct list_head	*ksnd_peers;		/* hash table of all my
 							 * known peers
 							 */
-	int                     ksnd_peer_hash_size;    /* size of ksnd_peers */
+	int			ksnd_peer_hash_size;	/* size of ksnd_peers */
 
-	int                     ksnd_nthreads;          /* # live threads */
-	int                     ksnd_shuttingdown;      /* tell threads to exit
+	int			ksnd_nthreads;		/* # live threads */
+	int			ksnd_shuttingdown;	/* tell threads to exit
 							 */
-	struct ksock_sched_info **ksnd_sched_info;      /* schedulers info */
+	struct ksock_sched_info	**ksnd_sched_info;	/* schedulers info */
 
-	atomic_t                ksnd_nactive_txs;       /* #active txs */
+	atomic_t		ksnd_nactive_txs;	/* #active txs */
 
-	struct list_head        ksnd_deathrow_conns;    /* conns to close:
+	struct list_head	ksnd_deathrow_conns;	/* conns to close:
 							 * reaper_lock
 							 */
-	struct list_head        ksnd_zombie_conns;      /* conns to free:
+	struct list_head	ksnd_zombie_conns;	/* conns to free:
 							 * reaper_lock
 							 */
-	struct list_head        ksnd_enomem_conns;      /* conns to retry:
+	struct list_head	ksnd_enomem_conns;	/* conns to retry:
 							 * reaper_lock
 							 */
-	wait_queue_head_t       ksnd_reaper_waitq;      /* reaper sleeps here */
-	time64_t		ksnd_reaper_waketime;   /* when reaper will wake
+	wait_queue_head_t	ksnd_reaper_waitq;	/* reaper sleeps here */
+	time64_t		ksnd_reaper_waketime;	/* when reaper will wake
 							 */
-	spinlock_t              ksnd_reaper_lock;       /* serialise */
+	spinlock_t		ksnd_reaper_lock;	/* serialise */
 
-	int                     ksnd_enomem_tx;         /* test ENOMEM sender */
-	int                     ksnd_stall_tx;          /* test sluggish sender
+	int			ksnd_enomem_tx;		/* test ENOMEM sender */
+	int			ksnd_stall_tx;		/* test sluggish sender
 							 */
-	int                     ksnd_stall_rx;          /* test sluggish
+	int			ksnd_stall_rx;		/* test sluggish
 							 * receiver
 							 */
-	struct list_head        ksnd_connd_connreqs;    /* incoming connection
+	struct list_head	ksnd_connd_connreqs;	/* incoming connection
 							 * requests
 							 */
-	struct list_head        ksnd_connd_routes;      /* routes waiting to be
+	struct list_head	ksnd_connd_routes;	/* routes waiting to be
 							 * connected
 							 */
-	wait_queue_head_t       ksnd_connd_waitq;       /* connds sleep here */
-	int                     ksnd_connd_connecting;  /* # connds connecting
+	wait_queue_head_t	ksnd_connd_waitq;	/* connds sleep here */
+	int			ksnd_connd_connecting;	/* # connds connecting
 							 */
-	time64_t                ksnd_connd_failed_stamp;/* time stamp of the
+	time64_t		ksnd_connd_failed_stamp;/* time stamp of the
 							 * last failed
 							 * connecting attempt
 							 */
-	time64_t                ksnd_connd_starting_stamp;/* time stamp of the
+	time64_t		ksnd_connd_starting_stamp;/* time stamp of the
 							   * last starting connd
 							   */
 	unsigned int		ksnd_connd_starting;	/* # starting connd */
 	unsigned int		ksnd_connd_running;	/* # running connd */
-	spinlock_t              ksnd_connd_lock;        /* serialise */
+	spinlock_t		ksnd_connd_lock;	/* serialise */
 
-	struct list_head        ksnd_idle_noop_txs;     /* list head for freed
+	struct list_head	ksnd_idle_noop_txs;	/* list head for freed
 							 * noop tx
 							 */
-	spinlock_t              ksnd_tx_lock;           /* serialise, g_lock
+	spinlock_t		ksnd_tx_lock;		/* serialise, g_lock
 							 * unsafe
 							 */
 };
 
-#define SOCKNAL_INIT_NOTHING 0
-#define SOCKNAL_INIT_DATA    1
-#define SOCKNAL_INIT_ALL     2
+#define SOCKNAL_INIT_NOTHING	0
+#define SOCKNAL_INIT_DATA	1
+#define SOCKNAL_INIT_ALL	2
 
 /*
  * A packet just assembled for transmission is represented by 1 or more
@@ -268,34 +268,34 @@ struct ksock_nal_data {
 struct ksock_route; /* forward ref */
 struct ksock_proto; /* forward ref */
 
-struct ksock_tx {			   /* transmit packet */
-	struct list_head  tx_list;         /* queue on conn for transmission etc
-					    */
-	struct list_head  tx_zc_list;      /* queue on peer_ni for ZC request */
-	atomic_t          tx_refcount;     /* tx reference count */
-	int               tx_nob;          /* # packet bytes */
-	int               tx_resid;        /* residual bytes */
-	int               tx_niov;         /* # packet iovec frags */
-	struct kvec       *tx_iov;         /* packet iovec frags */
-	int               tx_nkiov;        /* # packet page frags */
-	unsigned short    tx_zc_aborted;   /* aborted ZC request */
-	unsigned short    tx_zc_capable:1; /* payload is large enough for ZC */
-	unsigned short    tx_zc_checked:1; /* Have I checked if I should ZC? */
-	unsigned short    tx_nonblk:1;     /* it's a non-blocking ACK */
-	struct bio_vec	  *tx_kiov;	   /* packet page frags */
-	struct ksock_conn *tx_conn;        /* owning conn */
-	struct lnet_msg        *tx_lnetmsg;     /* lnet message for lnet_finalize()
-					    */
+struct ksock_tx {				/* transmit packet */
+	struct list_head	tx_list;	/* queue on conn for transmission etc
+						 */
+	struct list_head	tx_zc_list;	/* queue on peer_ni for ZC request */
+	atomic_t		tx_refcount;	/* tx reference count */
+	int			tx_nob;		/* # packet bytes */
+	int			tx_resid;	/* residual bytes */
+	int			tx_niov;	/* # packet iovec frags */
+	struct kvec		*tx_iov;	/* packet iovec frags */
+	int			tx_nkiov;	/* # packet page frags */
+	unsigned short		tx_zc_aborted;	/* aborted ZC request */
+	unsigned short		tx_zc_capable:1;/* payload is large enough for ZC */
+	unsigned short		tx_zc_checked:1;/* Have I checked if I should ZC? */
+	unsigned short		tx_nonblk:1;	/* it's a non-blocking ACK */
+	struct bio_vec		*tx_kiov;	/* packet page frags */
+	struct ksock_conn	*tx_conn;	/* owning conn */
+	struct lnet_msg		*tx_lnetmsg;	/* lnet message for lnet_finalize()
+						 */
 	time64_t		tx_deadline;	/* when (in secs) tx times out */
-	struct ksock_msg       tx_msg;          /* socklnd message buffer */
-	int               tx_desc_size;    /* size of this descriptor */
+	struct ksock_msg	tx_msg;		/* socklnd message buffer */
+	int			tx_desc_size;	/* size of this descriptor */
 	union {
 		struct {
-			struct kvec iov;     /* virt hdr */
-			struct bio_vec kiov[0]; /* paged payload */
+			struct kvec	iov;	/* virt hdr */
+			struct bio_vec	kiov[0];/* paged payload */
 		} paged;
 		struct {
-			struct kvec iov[1];  /* virt hdr + payload */
+			struct kvec	iov[1];	/* virt hdr + payload */
 		} virt;
 	} tx_frags;
 };
@@ -304,160 +304,160 @@ struct ksock_tx {			   /* transmit packet */
 
 /* network zero copy callback descriptor embedded in struct ksock_tx */
 
-#define SOCKNAL_RX_KSM_HEADER   1 /* reading ksock message header */
-#define SOCKNAL_RX_LNET_HEADER  2 /* reading lnet message header */
-#define SOCKNAL_RX_PARSE        3 /* Calling lnet_parse() */
-#define SOCKNAL_RX_PARSE_WAIT   4 /* waiting to be told to read the body */
-#define SOCKNAL_RX_LNET_PAYLOAD 5 /* reading lnet payload (to deliver here) */
-#define SOCKNAL_RX_SLOP         6 /* skipping body */
+#define SOCKNAL_RX_KSM_HEADER	1 /* reading ksock message header */
+#define SOCKNAL_RX_LNET_HEADER	2 /* reading lnet message header */
+#define SOCKNAL_RX_PARSE	3 /* Calling lnet_parse() */
+#define SOCKNAL_RX_PARSE_WAIT	4 /* waiting to be told to read the body */
+#define SOCKNAL_RX_LNET_PAYLOAD	5 /* reading lnet payload (to deliver here) */
+#define SOCKNAL_RX_SLOP		6 /* skipping body */
 
 struct ksock_conn {
-	struct ksock_peer  *ksnc_peer;        /* owning peer_ni */
-	struct ksock_route *ksnc_route;       /* owning route */
-	struct list_head   ksnc_list;         /* stash on peer_ni's conn list */
-	struct socket      *ksnc_sock;        /* actual socket */
-	void               *ksnc_saved_data_ready;  /* socket's original
-						     * data_ready() callback
-						     */
-	void               *ksnc_saved_write_space; /* socket's original
-						     * write_space() callback
-						     */
-	atomic_t           ksnc_conn_refcount;/* conn refcount */
-	atomic_t           ksnc_sock_refcount;/* sock refcount */
-	struct ksock_sched *ksnc_scheduler;	/* who schedules this connection
-						 */
-	u32              ksnc_myipaddr;     /* my IP */
-	u32              ksnc_ipaddr;       /* peer_ni's IP */
-	int                ksnc_port;         /* peer_ni's port */
-	signed int         ksnc_type:3;       /* type of connection, should be
-					       * signed value
-					       */
-	unsigned int       ksnc_closing:1;    /* being shut down */
-	unsigned int       ksnc_flip:1;       /* flip or not, only for V2.x */
-	unsigned int       ksnc_zc_capable:1; /* enable to ZC */
-	struct ksock_proto *ksnc_proto;       /* protocol for the connection */
+	struct ksock_peer      *ksnc_peer;		/* owning peer_ni */
+	struct ksock_route     *ksnc_route;		/* owning route */
+	struct list_head	ksnc_list;		/* stash on peer_ni's conn list */
+	struct socket	       *ksnc_sock;		/* actual socket */
+	void		       *ksnc_saved_data_ready;	/* socket's original
+							 * data_ready() callback
+							 */
+	void		       *ksnc_saved_write_space;	/* socket's original
+							 * write_space() callback
+							 */
+	atomic_t		ksnc_conn_refcount;	/* conn refcount */
+	atomic_t		ksnc_sock_refcount;	/* sock refcount */
+	struct ksock_sched     *ksnc_scheduler;		/* who schedules this connection
+							 */
+	u32			ksnc_myipaddr;		/* my IP */
+	u32			ksnc_ipaddr;		/* peer_ni's IP */
+	int			ksnc_port;		/* peer_ni's port */
+	signed int		ksnc_type:3;		/* type of connection, should be
+							 * signed value
+							 */
+	unsigned int		ksnc_closing:1;		/* being shut down */
+	unsigned int		ksnc_flip:1;		/* flip or not, only for V2.x */
+	unsigned int		ksnc_zc_capable:1;	/* enable to ZC */
+	struct ksock_proto     *ksnc_proto;		/* protocol for the connection */
 
 	/* reader */
-	struct list_head   ksnc_rx_list;      /* where I enq waiting input or a
-					       * forwarding descriptor
-					       */
-	time64_t	   ksnc_rx_deadline;  /* when (in secs) receive times
-					       * out
-					       */
-	u8               ksnc_rx_started;   /* started receiving a message */
-	u8               ksnc_rx_ready;     /* data ready to read */
-	u8               ksnc_rx_scheduled; /* being progressed */
-	u8               ksnc_rx_state;     /* what is being read */
-	int                ksnc_rx_nob_left;  /* # bytes to next hdr/body */
-	struct iov_iter    ksnc_rx_to;		/* copy destination */
-	struct kvec        ksnc_rx_iov_space[LNET_MAX_IOV]; /* space for frag descriptors */
-	u32              ksnc_rx_csum;      /* partial checksum for incoming
-					       * data
-					       */
-	void               *ksnc_cookie;      /* rx lnet_finalize passthru arg
-					       */
-	struct ksock_msg        ksnc_msg;          /* incoming message buffer:
-					       * V2.x message takes the
-					       * whole struct
-					       * V1.x message is a bare
-					       * struct lnet_hdr, it's stored in
-					       * ksnc_msg.ksm_u.lnetmsg
-					       */
+	struct list_head	ksnc_rx_list;		/* where I enq waiting input or a
+							 * forwarding descriptor
+							 */
+	time64_t		ksnc_rx_deadline;	/* when (in secs) receive times
+							 * out
+							 */
+	u8			ksnc_rx_started;	/* started receiving a message */
+	u8			ksnc_rx_ready;		/* data ready to read */
+	u8			ksnc_rx_scheduled;	/* being progressed */
+	u8			ksnc_rx_state;		/* what is being read */
+	int			ksnc_rx_nob_left;	/* # bytes to next hdr/body */
+	struct iov_iter		ksnc_rx_to;		/* copy destination */
+	struct kvec		ksnc_rx_iov_space[LNET_MAX_IOV]; /* space for frag descriptors */
+	u32			ksnc_rx_csum;		/* partial checksum for incoming
+							 * data
+							 */
+	void		       *ksnc_cookie;		/* rx lnet_finalize passthru arg
+							 */
+	struct ksock_msg	ksnc_msg;		/* incoming message buffer:
+							 * V2.x message takes the
+							 * whole struct
+							 * V1.x message is a bare
+							 * struct lnet_hdr, it's stored in
+							 * ksnc_msg.ksm_u.lnetmsg
+							 */
 	/* WRITER */
-	struct list_head   ksnc_tx_list;      /* where I enq waiting for output
-					       * space
-					       */
-	struct list_head   ksnc_tx_queue;     /* packets waiting to be sent */
-	struct ksock_tx	  *ksnc_tx_carrier;   /* next TX that can carry a LNet
-					       * message or ZC-ACK
-					       */
-	time64_t	   ksnc_tx_deadline;  /* when (in secs) tx times out
-					       */
-	int                ksnc_tx_bufnob;    /* send buffer marker */
-	atomic_t           ksnc_tx_nob;       /* # bytes queued */
-	int		   ksnc_tx_ready;     /* write space */
-	int		   ksnc_tx_scheduled; /* being progressed */
-	time64_t	   ksnc_tx_last_post; /* time stamp of the last posted
-					       * TX
-					       */
+	struct list_head	ksnc_tx_list;		/* where I enq waiting for output
+							 * space
+							 */
+	struct list_head	ksnc_tx_queue;		/* packets waiting to be sent */
+	struct ksock_tx	       *ksnc_tx_carrier;	/* next TX that can carry a LNet
+							 * message or ZC-ACK
+							 */
+	time64_t		ksnc_tx_deadline;	/* when (in secs) tx times out
+							 */
+	int			ksnc_tx_bufnob;		/* send buffer marker */
+	atomic_t		ksnc_tx_nob;		/* # bytes queued */
+	int			ksnc_tx_ready;		/* write space */
+	int			ksnc_tx_scheduled;	/* being progressed */
+	time64_t		ksnc_tx_last_post;	/* time stamp of the last posted
+							 * TX
+							 */
 };
 
 struct ksock_route {
-	struct list_head  ksnr_list;           /* chain on peer_ni route list */
-	struct list_head  ksnr_connd_list;     /* chain on ksnr_connd_routes */
-	struct ksock_peer *ksnr_peer;          /* owning peer_ni */
-	atomic_t          ksnr_refcount;       /* # users */
-	time64_t	  ksnr_timeout;        /* when (in secs) reconnection
-						* can happen next
-						*/
-	time64_t	  ksnr_retry_interval; /* how long between retries */
-	u32             ksnr_myipaddr;       /* my IP */
-	u32             ksnr_ipaddr;         /* IP address to connect to */
-	int               ksnr_port;           /* port to connect to */
-	unsigned int      ksnr_scheduled:1;    /* scheduled for attention */
-	unsigned int      ksnr_connecting:1;   /* connection establishment in
-						* progress
-						*/
-	unsigned int      ksnr_connected:4;    /* connections established by
-						* type
-						*/
-	unsigned int      ksnr_deleted:1;      /* been removed from peer_ni? */
-	unsigned int      ksnr_share_count;    /* created explicitly? */
-	int               ksnr_conn_count;     /* # conns established by this
-						* route
-						*/
+	struct list_head	ksnr_list;		/* chain on peer_ni route list */
+	struct list_head	ksnr_connd_list;	/* chain on ksnr_connd_routes */
+	struct ksock_peer      *ksnr_peer;		/* owning peer_ni */
+	atomic_t		ksnr_refcount;		/* # users */
+	time64_t		ksnr_timeout;		/* when (in secs) reconnection
+							 * can happen next
+							 */
+	time64_t		ksnr_retry_interval;	/* how long between retries */
+	u32			ksnr_myipaddr;		/* my IP */
+	u32			ksnr_ipaddr;		/* IP address to connect to */
+	int			ksnr_port;		/* port to connect to */
+	unsigned int		ksnr_scheduled:1;	/* scheduled for attention */
+	unsigned int		ksnr_connecting:1;	/* connection establishment in
+							 * progress
+							 */
+	unsigned int		ksnr_connected:4;	/* connections established by
+							 * type
+							 */
+	unsigned int		ksnr_deleted:1;		/* been removed from peer_ni? */
+	unsigned int		ksnr_share_count;	/* created explicitly? */
+	int			ksnr_conn_count;	/* # conns established by this
+							 * route
+							 */
 };
 
-#define SOCKNAL_KEEPALIVE_PING 1 /* cookie for keepalive ping */
+#define SOCKNAL_KEEPALIVE_PING	1	/* cookie for keepalive ping */
 
 struct ksock_peer {
-	struct list_head   ksnp_list;         /* stash on global peer_ni list */
-	time64_t	   ksnp_last_alive;     /* when (in seconds) I was last
-						 * alive
-						 */
-	struct lnet_process_id  ksnp_id;	/* who's on the other end(s) */
-	atomic_t           ksnp_refcount;       /* # users */
-	int                ksnp_sharecount;     /* lconf usage counter */
-	int                ksnp_closing;        /* being closed */
-	int                ksnp_accepting;      /* # passive connections pending
-						 */
-	int                ksnp_error;          /* errno on closing last conn */
-	u64              ksnp_zc_next_cookie; /* ZC completion cookie */
-	u64              ksnp_incarnation;    /* latest known peer_ni
-						 * incarnation
-						 */
-	struct ksock_proto *ksnp_proto;         /* latest known peer_ni
-						 * protocol
-						 */
-	struct list_head   ksnp_conns;          /* all active connections */
-	struct list_head   ksnp_routes;         /* routes */
-	struct list_head   ksnp_tx_queue;       /* waiting packets */
-	spinlock_t         ksnp_lock;           /* serialize, g_lock unsafe */
-	struct list_head   ksnp_zc_req_list;    /* zero copy requests wait for
-						 * ACK
-						 */
-	time64_t	   ksnp_send_keepalive; /* time to send keepalive */
-	struct lnet_ni	   *ksnp_ni;		/* which network */
-	int                ksnp_n_passive_ips;  /* # of... */
+	struct list_head	ksnp_list;		/* stash on global peer_ni list */
+	time64_t		ksnp_last_alive;	/* when (in seconds) I was last
+							 * alive
+							 */
+	struct lnet_process_id	ksnp_id;		/* who's on the other end(s) */
+	atomic_t		ksnp_refcount;		/* # users */
+	int			ksnp_sharecount;	/* lconf usage counter */
+	int			ksnp_closing;		/* being closed */
+	int			ksnp_accepting;		/* # passive connections pending
+							 */
+	int			ksnp_error;		/* errno on closing last conn */
+	u64			ksnp_zc_next_cookie;	/* ZC completion cookie */
+	u64			ksnp_incarnation;	/* latest known peer_ni
+							 * incarnation
+							 */
+	struct ksock_proto     *ksnp_proto;		/* latest known peer_ni
+							 * protocol
+							 */
+	struct list_head	ksnp_conns;		/* all active connections */
+	struct list_head	ksnp_routes;		/* routes */
+	struct list_head	ksnp_tx_queue;		/* waiting packets */
+	spinlock_t		ksnp_lock;		/* serialize, g_lock unsafe */
+	struct list_head	ksnp_zc_req_list;	/* zero copy requests wait for
+							 * ACK
+							 */
+	time64_t		ksnp_send_keepalive;	/* time to send keepalive */
+	struct lnet_ni	       *ksnp_ni;		/* which network */
+	int			ksnp_n_passive_ips;	/* # of... */
 
 	/* preferred local interfaces */
-	u32              ksnp_passive_ips[LNET_INTERFACES_NUM];
+	u32			ksnp_passive_ips[LNET_INTERFACES_NUM];
 };
 
 struct ksock_connreq {
-	struct list_head ksncr_list;  /* stash on ksnd_connd_connreqs */
-	struct lnet_ni	 *ksncr_ni;	/* chosen NI */
-	struct socket    *ksncr_sock; /* accepted socket */
+	struct list_head	ksncr_list;	/* stash on ksnd_connd_connreqs */
+	struct lnet_ni	       *ksncr_ni;	/* chosen NI */
+	struct socket	       *ksncr_sock;	/* accepted socket */
 };
 
 extern struct ksock_nal_data ksocknal_data;
 extern struct ksock_tunables ksocknal_tunables;
 
-#define SOCKNAL_MATCH_NO  0 /* TX can't match type of connection */
-#define SOCKNAL_MATCH_YES 1 /* TX matches type of connection */
-#define SOCKNAL_MATCH_MAY 2 /* TX can be sent on the connection, but not
-			     * preferred
-			     */
+#define SOCKNAL_MATCH_NO	0 /* TX can't match type of connection */
+#define SOCKNAL_MATCH_YES	1 /* TX matches type of connection */
+#define SOCKNAL_MATCH_MAY	2 /* TX can be sent on the connection, but not
+				   * preferred
+				   */
 
 struct ksock_proto {
 	/* version number of protocol */
@@ -501,12 +501,12 @@ struct ksock_proto {
 extern struct ksock_proto ksocknal_protocol_v2x;
 extern struct ksock_proto ksocknal_protocol_v3x;
 
-#define KSOCK_PROTO_V1_MAJOR LNET_PROTO_TCP_VERSION_MAJOR
-#define KSOCK_PROTO_V1_MINOR LNET_PROTO_TCP_VERSION_MINOR
-#define KSOCK_PROTO_V1       KSOCK_PROTO_V1_MAJOR
+#define KSOCK_PROTO_V1_MAJOR	LNET_PROTO_TCP_VERSION_MAJOR
+#define KSOCK_PROTO_V1_MINOR	LNET_PROTO_TCP_VERSION_MINOR
+#define KSOCK_PROTO_V1		KSOCK_PROTO_V1_MAJOR
 
 #ifndef CPU_MASK_NONE
-#define CPU_MASK_NONE   0UL
+#define CPU_MASK_NONE		0UL
 #endif
 
 static inline int
@@ -646,15 +646,15 @@ int ksocknal_create_conn(struct lnet_ni *ni, struct ksock_route *route,
 void ksocknal_close_conn_locked(struct ksock_conn *conn, int why);
 void ksocknal_terminate_conn(struct ksock_conn *conn);
 void ksocknal_destroy_conn(struct ksock_conn *conn);
-int  ksocknal_close_peer_conns_locked(struct ksock_peer *peer_ni,
-				      u32 ipaddr, int why);
+int ksocknal_close_peer_conns_locked(struct ksock_peer *peer_ni,
+				     u32 ipaddr, int why);
 int ksocknal_close_conn_and_siblings(struct ksock_conn *conn, int why);
 int ksocknal_close_matching_conns(struct lnet_process_id id, u32 ipaddr);
 struct ksock_conn *ksocknal_find_conn_locked(struct ksock_peer *peer_ni,
 					     struct ksock_tx *tx, int nonblk);
 
-int  ksocknal_launch_packet(struct lnet_ni *ni, struct ksock_tx *tx,
-			    struct lnet_process_id id);
+int ksocknal_launch_packet(struct lnet_ni *ni, struct ksock_tx *tx,
+			   struct lnet_process_id id);
 struct ksock_tx *ksocknal_alloc_tx(int type, int size);
 void ksocknal_free_tx(struct ksock_tx *tx);
 struct ksock_tx *ksocknal_alloc_tx_noop(u64 cookie, int nonblk);
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
index dd4fb69..8e20f43 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
@@ -56,7 +56,7 @@ struct ksock_tx *
 	tx->tx_zc_aborted = 0;
 	tx->tx_zc_capable = 0;
 	tx->tx_zc_checked = 0;
-	tx->tx_desc_size  = size;
+	tx->tx_desc_size = size;
 
 	atomic_inc(&ksocknal_data.ksnd_nactive_txs);
 
@@ -74,13 +74,13 @@ struct ksock_tx *
 		return NULL;
 	}
 
-	tx->tx_conn    = NULL;
+	tx->tx_conn = NULL;
 	tx->tx_lnetmsg = NULL;
-	tx->tx_kiov    = NULL;
-	tx->tx_nkiov   = 0;
-	tx->tx_iov     = tx->tx_frags.virt.iov;
-	tx->tx_niov    = 1;
-	tx->tx_nonblk  = nonblk;
+	tx->tx_kiov = NULL;
+	tx->tx_nkiov = 0;
+	tx->tx_iov = tx->tx_frags.virt.iov;
+	tx->tx_niov = 1;
+	tx->tx_nonblk = nonblk;
 
 	tx->tx_msg.ksm_csum = 0;
 	tx->tx_msg.ksm_type = KSOCK_MSG_NOOP;
@@ -228,7 +228,6 @@ struct ksock_tx *
 		}
 
 		if (rc <= 0) { /* Didn't write anything? */
-
 			if (!rc) /* some stacks return 0 instead of -EAGAIN */
 				rc = -EAGAIN;
 
@@ -260,7 +259,6 @@ struct ksock_tx *
 	 * status inside ksocknal_lib_recv
 	 */
 	rc = ksocknal_lib_recv(conn);
-
 	if (rc <= 0)
 		return rc;
 
@@ -316,7 +314,6 @@ struct ksock_tx *
 		}
 
 		/* Completed a fragment */
-
 		if (!iov_iter_count(&conn->ksnc_rx_to)) {
 			rc = 1;
 			break;
@@ -521,7 +518,6 @@ struct ksock_tx *
 ksocknal_launch_connection_locked(struct ksock_route *route)
 {
 	/* called holding write lock on ksnd_global_lock */
-
 	LASSERT(!route->ksnr_scheduled);
 	LASSERT(!route->ksnr_connecting);
 	LASSERT(ksocknal_route_mask() & ~route->ksnr_connected);
@@ -588,7 +584,7 @@ struct ksock_conn *
 			    (tnob == nob && *ksocknal_tunables.ksnd_round_robin &&
 			     typed->ksnc_tx_last_post > c->ksnc_tx_last_post)) {
 				typed = c;
-				tnob  = nob;
+				tnob = nob;
 			}
 			break;
 
@@ -760,7 +756,6 @@ struct ksock_route *
 	struct ksock_route *route;
 
 	list_for_each_entry(route, &peer_ni->ksnp_routes, ksnr_list) {
-
 		LASSERT(!route->ksnr_connecting || route->ksnr_scheduled);
 
 		if (route->ksnr_scheduled)
@@ -978,7 +973,6 @@ struct ksock_route *
 {
 	static char ksocknal_slop_buffer[4096];
 	struct kvec *kvec = conn->ksnc_rx_iov_space;
-
 	int nob;
 	unsigned int niov;
 	int skipped;
@@ -1001,8 +995,8 @@ struct ksock_route *
 			kvec->iov_base = &conn->ksnc_msg;
 			kvec->iov_len = offsetof(struct ksock_msg, ksm_u);
 			conn->ksnc_rx_nob_left = offsetof(struct ksock_msg, ksm_u);
-			iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec,
-					1, offsetof(struct ksock_msg, ksm_u));
+			iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec, 1,
+				      offsetof(struct ksock_msg, ksm_u));
 			break;
 
 		case KSOCK_PROTO_V1:
@@ -1011,8 +1005,8 @@ struct ksock_route *
 			kvec->iov_base = &conn->ksnc_msg.ksm_u.lnetmsg;
 			kvec->iov_len = sizeof(struct lnet_hdr);
 			conn->ksnc_rx_nob_left = sizeof(struct lnet_hdr);
-			iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec,
-					1, sizeof(struct lnet_hdr));
+			iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec, 1,
+				      sizeof(struct lnet_hdr));
 			break;
 
 		default:
@@ -1035,7 +1029,7 @@ struct ksock_route *
 		nob = min_t(int, nob_to_skip, sizeof(ksocknal_slop_buffer));
 
 		kvec[niov].iov_base = ksocknal_slop_buffer;
-		kvec[niov].iov_len  = nob;
+		kvec[niov].iov_len = nob;
 		niov++;
 		skipped += nob;
 		nob_to_skip -= nob;
@@ -1063,7 +1057,7 @@ struct ksock_route *
 		conn->ksnc_rx_state == SOCKNAL_RX_LNET_PAYLOAD ||
 		conn->ksnc_rx_state == SOCKNAL_RX_LNET_HEADER ||
 		conn->ksnc_rx_state == SOCKNAL_RX_SLOP);
- again:
+again:
 	if (iov_iter_count(&conn->ksnc_rx_to)) {
 		rc = ksocknal_receive(conn);
 
@@ -1157,8 +1151,8 @@ struct ksock_route *
 		kvec->iov_base = &conn->ksnc_msg.ksm_u.lnetmsg;
 		kvec->iov_len = sizeof(struct ksock_lnet_msg);
 
-		iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec,
-				1, sizeof(struct ksock_lnet_msg));
+		iov_iter_kvec(&conn->ksnc_rx_to, READ, kvec, 1,
+			      sizeof(struct ksock_lnet_msg));
 
 		goto again;     /* read lnet header now */
 
@@ -1295,8 +1289,8 @@ struct ksock_route *
 	spin_lock_bh(&sched->kss_lock);
 
 	rc = !ksocknal_data.ksnd_shuttingdown &&
-	      list_empty(&sched->kss_rx_conns) &&
-	      list_empty(&sched->kss_tx_conns);
+	     list_empty(&sched->kss_rx_conns) &&
+	     list_empty(&sched->kss_tx_conns);
 
 	spin_unlock_bh(&sched->kss_lock);
 	return rc;
@@ -1419,7 +1413,6 @@ int ksocknal_scheduler(void *arg)
 			}
 
 			rc = ksocknal_process_transmit(conn, tx);
-
 			if (rc == -ENOMEM || rc == -EAGAIN) {
 				/*
 				 * Incomplete send: replace tx on HEAD of
@@ -1879,7 +1872,7 @@ void ksocknal_write_callback(struct ksock_conn *conn)
 	write_unlock_bh(&ksocknal_data.ksnd_global_lock);
 	return retry_later;
 
- failed:
+failed:
 	write_lock_bh(&ksocknal_data.ksnd_global_lock);
 
 	route->ksnr_scheduled = 0;
@@ -2026,7 +2019,6 @@ void ksocknal_write_callback(struct ksock_conn *conn)
 		return 0;
 
 	/* no creating in past 120 seconds */
-
 	return ksocknal_data.ksnd_connd_running >
 	       ksocknal_data.ksnd_connd_connecting + SOCKNAL_CONND_RESV;
 }
@@ -2341,7 +2333,7 @@ void ksocknal_write_callback(struct ksock_conn *conn)
 	struct ksock_conn *conn;
 	struct ksock_tx *tx;
 
- again:
+again:
 	/*
 	 * NB. We expect to have a look at all the peers and not find any
 	 * connections to time out, so we just use a shared lock while we
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
index 565c50c..a190869 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c
@@ -75,14 +75,14 @@
 int
 ksocknal_lib_send_iov(struct ksock_conn *conn, struct ksock_tx *tx)
 {
-	struct msghdr msg = {.msg_flags = MSG_DONTWAIT};
+	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
 	struct socket *sock = conn->ksnc_sock;
 	int nob, i;
 
-	if (*ksocknal_tunables.ksnd_enable_csum	&& /* checksum enabled */
-	    conn->ksnc_proto == &ksocknal_protocol_v2x && /* V2.x connection  */
-	    tx->tx_nob == tx->tx_resid		 && /* frist sending    */
-	    !tx->tx_msg.ksm_csum)		     /* not checksummed  */
+	if (*ksocknal_tunables.ksnd_enable_csum	&&		/* checksum enabled */
+	    conn->ksnc_proto == &ksocknal_protocol_v2x &&	/* V2.x connection */
+	    tx->tx_nob == tx->tx_resid &&			/* first sending */
+	    !tx->tx_msg.ksm_csum)				/* not checksummed */
 		ksocknal_lib_csum_tx(tx);
 
 	for (nob = i = 0; i < tx->tx_niov; i++)
@@ -130,7 +130,7 @@
 			rc = tcp_sendpage(sk, page, offset, fragsize, msgflg);
 		}
 	} else {
-		struct msghdr msg = {.msg_flags = MSG_DONTWAIT};
+		struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
 		int i;
 
 		for (nob = i = 0; i < tx->tx_nkiov; i++)
@@ -144,6 +144,7 @@
 			      kiov, tx->tx_nkiov, nob);
 		rc = sock_sendmsg(sock, &msg);
 	}
+
 	return rc;
 }
 
@@ -166,6 +167,7 @@
 static int lustre_csum(struct kvec *v, void *context)
 {
 	struct ksock_conn *conn = context;
+
 	conn->ksnc_rx_csum = crc32_le(conn->ksnc_rx_csum,
 				      v->iov_base, v->iov_len);
 	return 0;
@@ -325,7 +327,7 @@ static int lustre_csum(struct kvec *v, void *context)
 		return rc;
 	}
 
-/* TCP_BACKOFF_* sockopt tunables unsupported in stock kernels */
+	/* TCP_BACKOFF_* sockopt tunables unsupported in stock kernels */
 
 	/* snapshot tunables */
 	keep_idle  = *ksocknal_tunables.ksnd_keepalive_idle;
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
index da59100..0c923f9 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
@@ -145,36 +145,36 @@
 int ksocknal_tunables_init(void)
 {
 	/* initialize ksocknal_tunables structure */
-	ksocknal_tunables.ksnd_timeout            = &sock_timeout;
-	ksocknal_tunables.ksnd_nscheds            = &nscheds;
-	ksocknal_tunables.ksnd_nconnds            = &nconnds;
-	ksocknal_tunables.ksnd_nconnds_max        = &nconnds_max;
-	ksocknal_tunables.ksnd_min_reconnectms    = &min_reconnectms;
-	ksocknal_tunables.ksnd_max_reconnectms    = &max_reconnectms;
-	ksocknal_tunables.ksnd_eager_ack          = &eager_ack;
-	ksocknal_tunables.ksnd_typed_conns        = &typed_conns;
-	ksocknal_tunables.ksnd_min_bulk           = &min_bulk;
-	ksocknal_tunables.ksnd_tx_buffer_size     = &tx_buffer_size;
-	ksocknal_tunables.ksnd_rx_buffer_size     = &rx_buffer_size;
-	ksocknal_tunables.ksnd_nagle              = &nagle;
-	ksocknal_tunables.ksnd_round_robin        = &round_robin;
-	ksocknal_tunables.ksnd_keepalive          = &keepalive;
-	ksocknal_tunables.ksnd_keepalive_idle     = &keepalive_idle;
-	ksocknal_tunables.ksnd_keepalive_count    = &keepalive_count;
-	ksocknal_tunables.ksnd_keepalive_intvl    = &keepalive_intvl;
-	ksocknal_tunables.ksnd_credits            = &credits;
-	ksocknal_tunables.ksnd_peertxcredits      = &peer_credits;
-	ksocknal_tunables.ksnd_peerrtrcredits     = &peer_buffer_credits;
-	ksocknal_tunables.ksnd_peertimeout        = &peer_timeout;
-	ksocknal_tunables.ksnd_enable_csum        = &enable_csum;
-	ksocknal_tunables.ksnd_inject_csum_error  = &inject_csum_error;
-	ksocknal_tunables.ksnd_nonblk_zcack       = &nonblk_zcack;
-	ksocknal_tunables.ksnd_zc_min_payload     = &zc_min_payload;
-	ksocknal_tunables.ksnd_zc_recv            = &zc_recv;
+	ksocknal_tunables.ksnd_timeout = &sock_timeout;
+	ksocknal_tunables.ksnd_nscheds = &nscheds;
+	ksocknal_tunables.ksnd_nconnds = &nconnds;
+	ksocknal_tunables.ksnd_nconnds_max = &nconnds_max;
+	ksocknal_tunables.ksnd_min_reconnectms = &min_reconnectms;
+	ksocknal_tunables.ksnd_max_reconnectms = &max_reconnectms;
+	ksocknal_tunables.ksnd_eager_ack = &eager_ack;
+	ksocknal_tunables.ksnd_typed_conns = &typed_conns;
+	ksocknal_tunables.ksnd_min_bulk = &min_bulk;
+	ksocknal_tunables.ksnd_tx_buffer_size = &tx_buffer_size;
+	ksocknal_tunables.ksnd_rx_buffer_size = &rx_buffer_size;
+	ksocknal_tunables.ksnd_nagle = &nagle;
+	ksocknal_tunables.ksnd_round_robin = &round_robin;
+	ksocknal_tunables.ksnd_keepalive = &keepalive;
+	ksocknal_tunables.ksnd_keepalive_idle = &keepalive_idle;
+	ksocknal_tunables.ksnd_keepalive_count = &keepalive_count;
+	ksocknal_tunables.ksnd_keepalive_intvl = &keepalive_intvl;
+	ksocknal_tunables.ksnd_credits = &credits;
+	ksocknal_tunables.ksnd_peertxcredits = &peer_credits;
+	ksocknal_tunables.ksnd_peerrtrcredits = &peer_buffer_credits;
+	ksocknal_tunables.ksnd_peertimeout = &peer_timeout;
+	ksocknal_tunables.ksnd_enable_csum = &enable_csum;
+	ksocknal_tunables.ksnd_inject_csum_error = &inject_csum_error;
+	ksocknal_tunables.ksnd_nonblk_zcack = &nonblk_zcack;
+	ksocknal_tunables.ksnd_zc_min_payload = &zc_min_payload;
+	ksocknal_tunables.ksnd_zc_recv = &zc_recv;
 	ksocknal_tunables.ksnd_zc_recv_min_nfrags = &zc_recv_min_nfrags;
 
 #if SOCKNAL_VERSION_DEBUG
-	ksocknal_tunables.ksnd_protocol           = &protocol;
+	ksocknal_tunables.ksnd_protocol = &protocol;
 #endif
 
 	if (*ksocknal_tunables.ksnd_zc_min_payload < (2 << 10))
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
index 91bed59..c694fec 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
@@ -116,7 +116,7 @@
 static struct ksock_tx *
 ksocknal_queue_tx_msg_v2(struct ksock_conn *conn, struct ksock_tx *tx_msg)
 {
-	struct ksock_tx *tx  = conn->ksnc_tx_carrier;
+	struct ksock_tx *tx = conn->ksnc_tx_carrier;
 
 	/*
 	 * Enqueue tx_msg:
@@ -220,7 +220,7 @@
 	/* takes two or more cookies already */
 
 	if (tx->tx_msg.ksm_zc_cookies[0] > tx->tx_msg.ksm_zc_cookies[1]) {
-		u64   tmp = 0;
+		u64 tmp = 0;
 
 		/* two separated cookies: (a+2, a) or (a+1, a) */
 		LASSERT(tx->tx_msg.ksm_zc_cookies[0] -
@@ -479,7 +479,7 @@
 	 * Re-organize V2.x message header to V1.x (struct lnet_hdr)
 	 * header and send out
 	 */
-	hmv->magic         = cpu_to_le32(LNET_PROTO_TCP_MAGIC);
+	hmv->magic = cpu_to_le32(LNET_PROTO_TCP_MAGIC);
 	hmv->version_major = cpu_to_le16(KSOCK_PROTO_V1_MAJOR);
 	hmv->version_minor = cpu_to_le16(KSOCK_PROTO_V1_MINOR);
 
@@ -537,7 +537,7 @@
 	struct socket *sock = conn->ksnc_sock;
 	int rc;
 
-	hello->kshm_magic   = LNET_PROTO_MAGIC;
+	hello->kshm_magic = LNET_PROTO_MAGIC;
 	hello->kshm_version = conn->ksnc_proto->pro_version;
 
 	if (the_lnet.ln_testprotocompat) {
@@ -607,12 +607,11 @@
 		goto out;
 	}
 
-	hello->kshm_src_nid         = le64_to_cpu(hdr->src_nid);
-	hello->kshm_src_pid         = le32_to_cpu(hdr->src_pid);
+	hello->kshm_src_nid = le64_to_cpu(hdr->src_nid);
+	hello->kshm_src_pid = le32_to_cpu(hdr->src_pid);
 	hello->kshm_src_incarnation = le64_to_cpu(hdr->msg.hello.incarnation);
-	hello->kshm_ctype           = le32_to_cpu(hdr->msg.hello.type);
-	hello->kshm_nips            = le32_to_cpu(hdr->payload_length) /
-						  sizeof(u32);
+	hello->kshm_ctype = le32_to_cpu(hdr->msg.hello.type);
+	hello->kshm_nips = le32_to_cpu(hdr->payload_length) / sizeof(u32);
 
 	if (hello->kshm_nips > LNET_INTERFACES_NUM) {
 		CERROR("Bad nips %d from ip %pI4h\n",
@@ -724,7 +723,7 @@
 	LASSERT(tx->tx_lnetmsg);
 
 	tx->tx_iov[0].iov_base = &tx->tx_lnetmsg->msg_hdr;
-	tx->tx_iov[0].iov_len  = sizeof(struct lnet_hdr);
+	tx->tx_iov[0].iov_len = sizeof(struct lnet_hdr);
 
 	tx->tx_nob = tx->tx_lnetmsg->msg_len + sizeof(struct lnet_hdr);
 	tx->tx_resid = tx->tx_lnetmsg->msg_len + sizeof(struct lnet_hdr);
@@ -771,40 +770,40 @@
 }
 
 struct ksock_proto ksocknal_protocol_v1x = {
-	.pro_version        = KSOCK_PROTO_V1,
-	.pro_send_hello     = ksocknal_send_hello_v1,
-	.pro_recv_hello     = ksocknal_recv_hello_v1,
-	.pro_pack           = ksocknal_pack_msg_v1,
-	.pro_unpack         = ksocknal_unpack_msg_v1,
-	.pro_queue_tx_msg   = ksocknal_queue_tx_msg_v1,
-	.pro_handle_zcreq   = NULL,
-	.pro_handle_zcack   = NULL,
-	.pro_queue_tx_zcack = NULL,
-	.pro_match_tx       = ksocknal_match_tx
+	.pro_version		= KSOCK_PROTO_V1,
+	.pro_send_hello		= ksocknal_send_hello_v1,
+	.pro_recv_hello		= ksocknal_recv_hello_v1,
+	.pro_pack		= ksocknal_pack_msg_v1,
+	.pro_unpack		= ksocknal_unpack_msg_v1,
+	.pro_queue_tx_msg	= ksocknal_queue_tx_msg_v1,
+	.pro_handle_zcreq	= NULL,
+	.pro_handle_zcack	= NULL,
+	.pro_queue_tx_zcack	= NULL,
+	.pro_match_tx		= ksocknal_match_tx
 };
 
 struct ksock_proto ksocknal_protocol_v2x = {
-	.pro_version        = KSOCK_PROTO_V2,
-	.pro_send_hello     = ksocknal_send_hello_v2,
-	.pro_recv_hello     = ksocknal_recv_hello_v2,
-	.pro_pack           = ksocknal_pack_msg_v2,
-	.pro_unpack         = ksocknal_unpack_msg_v2,
-	.pro_queue_tx_msg   = ksocknal_queue_tx_msg_v2,
-	.pro_queue_tx_zcack = ksocknal_queue_tx_zcack_v2,
-	.pro_handle_zcreq   = ksocknal_handle_zcreq,
-	.pro_handle_zcack   = ksocknal_handle_zcack,
-	.pro_match_tx       = ksocknal_match_tx
+	.pro_version		= KSOCK_PROTO_V2,
+	.pro_send_hello		= ksocknal_send_hello_v2,
+	.pro_recv_hello		= ksocknal_recv_hello_v2,
+	.pro_pack		= ksocknal_pack_msg_v2,
+	.pro_unpack		= ksocknal_unpack_msg_v2,
+	.pro_queue_tx_msg	= ksocknal_queue_tx_msg_v2,
+	.pro_queue_tx_zcack	= ksocknal_queue_tx_zcack_v2,
+	.pro_handle_zcreq	= ksocknal_handle_zcreq,
+	.pro_handle_zcack	= ksocknal_handle_zcack,
+	.pro_match_tx		= ksocknal_match_tx
 };
 
 struct ksock_proto ksocknal_protocol_v3x = {
-	.pro_version        = KSOCK_PROTO_V3,
-	.pro_send_hello     = ksocknal_send_hello_v2,
-	.pro_recv_hello     = ksocknal_recv_hello_v2,
-	.pro_pack           = ksocknal_pack_msg_v2,
-	.pro_unpack         = ksocknal_unpack_msg_v2,
-	.pro_queue_tx_msg   = ksocknal_queue_tx_msg_v2,
-	.pro_queue_tx_zcack = ksocknal_queue_tx_zcack_v3,
-	.pro_handle_zcreq   = ksocknal_handle_zcreq,
-	.pro_handle_zcack   = ksocknal_handle_zcack,
-	.pro_match_tx       = ksocknal_match_tx_v3
+	.pro_version		= KSOCK_PROTO_V3,
+	.pro_send_hello		= ksocknal_send_hello_v2,
+	.pro_recv_hello		= ksocknal_recv_hello_v2,
+	.pro_pack		= ksocknal_pack_msg_v2,
+	.pro_unpack		= ksocknal_unpack_msg_v2,
+	.pro_queue_tx_msg	= ksocknal_queue_tx_msg_v2,
+	.pro_queue_tx_zcack	= ksocknal_queue_tx_zcack_v3,
+	.pro_handle_zcreq	= ksocknal_handle_zcreq,
+	.pro_handle_zcack	= ksocknal_handle_zcack,
+	.pro_match_tx		= ksocknal_match_tx_v3
 };
-- 
1.8.3.1

* [lustre-devel] [PATCH 26/26] o2iblnd: cleanup white spaces
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (24 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 25/26] socklnd: " James Simmons
@ 2019-01-31 17:19 ` James Simmons
  2019-02-04  8:44 ` [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes Andreas Dilger
  26 siblings, 0 replies; 30+ messages in thread
From: James Simmons @ 2019-01-31 17:19 UTC (permalink / raw)
  To: lustre-devel

The o2iblnd code is very messy and difficult to read. Remove
excess white space and properly align the fields in its data
structures so they are easy on the eyes.
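
As an illustration of the convention applied throughout (the
struct below is a made-up example, not code from this patch),
member names and their trailing comments are tab-aligned into
columns instead of being padded with runs of spaces:

	/* before: space padding, columns drift as names change */
	struct kib_example {
		struct list_head   ibe_list;   /* chain on global list */
		int                ibe_count;  /* # of users */
	};

	/* after: tab-aligned members and comments */
	struct kib_example {
		struct list_head	ibe_list;	/* chain on global list */
		int			ibe_count;	/* # of users */
	};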

Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c    |  79 +--
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    | 612 +++++++++++----------
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c |  99 ++--
 .../lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c  |  22 +-
 4 files changed, 407 insertions(+), 405 deletions(-)

diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
index 1a6bc45..74b21fe2 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
@@ -183,15 +183,15 @@ void kiblnd_pack_msg(struct lnet_ni *ni, struct kib_msg *msg, int version,
 	 * CAVEAT EMPTOR! all message fields not set here should have been
 	 * initialised previously.
 	 */
-	msg->ibm_magic    = IBLND_MSG_MAGIC;
-	msg->ibm_version  = version;
+	msg->ibm_magic = IBLND_MSG_MAGIC;
+	msg->ibm_version = version;
 	/*   ibm_type */
-	msg->ibm_credits  = credits;
+	msg->ibm_credits = credits;
 	/*   ibm_nob */
-	msg->ibm_cksum    = 0;
-	msg->ibm_srcnid   = ni->ni_nid;
+	msg->ibm_cksum = 0;
+	msg->ibm_srcnid = ni->ni_nid;
 	msg->ibm_srcstamp = net->ibn_incarnation;
-	msg->ibm_dstnid   = dstnid;
+	msg->ibm_dstnid = dstnid;
 	msg->ibm_dststamp = dststamp;
 
 	if (*kiblnd_tunables.kib_cksum) {
@@ -260,7 +260,7 @@ int kiblnd_unpack_msg(struct kib_msg *msg, int nob)
 		msg->ibm_version = version;
 		BUILD_BUG_ON(sizeof(msg->ibm_type) != 1);
 		BUILD_BUG_ON(sizeof(msg->ibm_credits) != 1);
-		msg->ibm_nob     = msg_nob;
+		msg->ibm_nob = msg_nob;
 		__swab64s(&msg->ibm_srcnid);
 		__swab64s(&msg->ibm_srcstamp);
 		__swab64s(&msg->ibm_dstnid);
@@ -903,12 +903,12 @@ struct kib_conn *kiblnd_create_conn(struct kib_peer_ni *peer_ni,
 	atomic_inc(&net->ibn_nconns);
 	return conn;
 
- failed_2:
+failed_2:
 	kiblnd_destroy_conn(conn);
 	kfree(conn);
- failed_1:
+failed_1:
 	kfree(init_qp_attr);
- failed_0:
+failed_0:
 	return NULL;
 }
 
@@ -1004,7 +1004,7 @@ int kiblnd_close_stale_conns_locked(struct kib_peer_ni *peer_ni,
 	list_for_each_safe(ctmp, cnxt, &peer_ni->ibp_conns) {
 		conn = list_entry(ctmp, struct kib_conn, ibc_list);
 
-		if (conn->ibc_version     == version &&
+		if (conn->ibc_version == version &&
 		    conn->ibc_incarnation == incarnation)
 			continue;
 
@@ -1077,7 +1077,7 @@ static int kiblnd_ctl(struct lnet_ni *ni, unsigned int cmd, void *arg)
 
 		rc = kiblnd_get_peer_info(ni, data->ioc_count,
 					  &nid, &count);
-		data->ioc_nid   = nid;
+		data->ioc_nid = nid;
 		data->ioc_count = count;
 		break;
 	}
@@ -1414,15 +1414,16 @@ static void kiblnd_destroy_fmr_pool_list(struct list_head *head)
 static int kiblnd_alloc_fmr_pool(struct kib_fmr_poolset *fps, struct kib_fmr_pool *fpo)
 {
 	struct ib_fmr_pool_param param = {
-		.max_pages_per_fmr = LNET_MAX_IOV,
-		.page_shift        = PAGE_SHIFT,
-		.access            = (IB_ACCESS_LOCAL_WRITE |
-				      IB_ACCESS_REMOTE_WRITE),
-		.pool_size         = fps->fps_pool_size,
-		.dirty_watermark   = fps->fps_flush_trigger,
-		.flush_function    = NULL,
-		.flush_arg         = NULL,
-		.cache             = !!fps->fps_cache };
+		.max_pages_per_fmr	= LNET_MAX_IOV,
+		.page_shift		= PAGE_SHIFT,
+		.access			= (IB_ACCESS_LOCAL_WRITE |
+					   IB_ACCESS_REMOTE_WRITE),
+		.pool_size		= fps->fps_pool_size,
+		.dirty_watermark	= fps->fps_flush_trigger,
+		.flush_function		= NULL,
+		.flush_arg		= NULL,
+		.cache			= !!fps->fps_cache
+	};
 	int rc = 0;
 
 	fpo->fmr.fpo_fmr_pool = ib_create_fmr_pool(fpo->fpo_hdev->ibh_pd,
@@ -1696,7 +1697,7 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
 	u64 version;
 	int rc;
 
- again:
+again:
 	spin_lock(&fps->fps_lock);
 	version = fps->fps_version;
 	list_for_each_entry(fpo, &fps->fps_pool_list, fpo_list) {
@@ -1844,8 +1845,8 @@ static void kiblnd_init_pool(struct kib_poolset *ps, struct kib_pool *pool, int
 	memset(pool, 0, sizeof(*pool));
 	INIT_LIST_HEAD(&pool->po_free_list);
 	pool->po_deadline = ktime_get_seconds() + IBLND_POOL_DEADLINE;
-	pool->po_owner    = ps;
-	pool->po_size     = size;
+	pool->po_owner = ps;
+	pool->po_size = size;
 }
 
 static void kiblnd_destroy_pool_list(struct list_head *head)
@@ -1900,13 +1901,13 @@ static int kiblnd_init_poolset(struct kib_poolset *ps, int cpt,
 
 	memset(ps, 0, sizeof(*ps));
 
-	ps->ps_cpt          = cpt;
-	ps->ps_net          = net;
+	ps->ps_cpt = cpt;
+	ps->ps_net = net;
 	ps->ps_pool_create  = po_create;
 	ps->ps_pool_destroy = po_destroy;
-	ps->ps_node_init    = nd_init;
-	ps->ps_node_fini    = nd_fini;
-	ps->ps_pool_size    = size;
+	ps->ps_node_init = nd_init;
+	ps->ps_node_fini = nd_fini;
+	ps->ps_pool_size = size;
 	if (strlcpy(ps->ps_name, name, sizeof(ps->ps_name))
 	    >= sizeof(ps->ps_name))
 		return -E2BIG;
@@ -1971,7 +1972,7 @@ struct list_head *kiblnd_pool_alloc_node(struct kib_poolset *ps)
 	unsigned int trips = 0;
 	int rc;
 
- again:
+again:
 	spin_lock(&ps->ps_lock);
 	list_for_each_entry(pool, &ps->ps_pool_list, po_list) {
 		if (list_empty(&pool->po_free_list))
@@ -2286,7 +2287,7 @@ static int kiblnd_net_init_pools(struct kib_net *net, struct lnet_ni *ni,
 	}
 
 	return 0;
- failed:
+failed:
 	kiblnd_net_fini_pools(net);
 	LASSERT(rc);
 	return rc;
@@ -2302,8 +2303,8 @@ static int kiblnd_hdev_get_attr(struct kib_hca_dev *hdev)
 	 * matching that of the native system
 	 */
 	hdev->ibh_page_shift = PAGE_SHIFT;
-	hdev->ibh_page_size  = 1 << PAGE_SHIFT;
-	hdev->ibh_page_mask  = ~((u64)hdev->ibh_page_size - 1);
+	hdev->ibh_page_size = 1 << PAGE_SHIFT;
+	hdev->ibh_page_mask = ~((u64)hdev->ibh_page_size - 1);
 
 	if (hdev->ibh_ibdev->ops.alloc_fmr &&
 	    hdev->ibh_ibdev->ops.dealloc_fmr &&
@@ -2455,9 +2456,9 @@ int kiblnd_dev_failover(struct kib_dev *dev)
 	}
 
 	memset(&addr, 0, sizeof(addr));
-	addr.sin_family      = AF_INET;
+	addr.sin_family = AF_INET;
 	addr.sin_addr.s_addr = htonl(dev->ibd_ifip);
-	addr.sin_port	= htons(*kiblnd_tunables.kib_service);
+	addr.sin_port = htons(*kiblnd_tunables.kib_service);
 
 	/* Bind to failover device or port */
 	rc = rdma_bind_addr(cmid, (struct sockaddr *)&addr);
@@ -2478,8 +2479,8 @@ int kiblnd_dev_failover(struct kib_dev *dev)
 	}
 
 	atomic_set(&hdev->ibh_ref, 1);
-	hdev->ibh_dev   = dev;
-	hdev->ibh_cmid  = cmid;
+	hdev->ibh_dev = dev;
+	hdev->ibh_cmid = cmid;
 	hdev->ibh_ibdev = cmid->device;
 
 	pd = ib_alloc_pd(cmid->device, 0);
@@ -2519,7 +2520,7 @@ int kiblnd_dev_failover(struct kib_dev *dev)
 	}
 
 	write_unlock_irqrestore(&kiblnd_data.kib_global_lock, flags);
- out:
+out:
 	if (!list_empty(&zombie_tpo))
 		kiblnd_destroy_pool_list(&zombie_tpo);
 	if (!list_empty(&zombie_ppo))
@@ -2832,7 +2833,7 @@ static int kiblnd_base_startup(void)
 
 	return 0;
 
- failed:
+failed:
 	kiblnd_base_shutdown();
 	return -ENETDOWN;
 }
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
index 423bae7..2bf1228 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
@@ -97,10 +97,10 @@ struct kib_tunables {
 
 extern struct kib_tunables  kiblnd_tunables;
 
-#define IBLND_MSG_QUEUE_SIZE_V1   8 /* V1 only : # messages/RDMAs in-flight */
+#define IBLND_MSG_QUEUE_SIZE_V1	  8 /* V1 only : # messages/RDMAs in-flight */
 #define IBLND_CREDIT_HIGHWATER_V1 7 /* V1 only : when eagerly to return credits */
 
-#define IBLND_CREDITS_DEFAULT     8 /* default # of peer_ni credits */
+#define IBLND_CREDITS_DEFAULT	  8 /* default # of peer_ni credits */
 /* Max # of peer_ni credits */
 #define IBLND_CREDITS_MAX	  ((typeof(((struct kib_msg *)0)->ibm_credits)) - 1)
 
@@ -114,8 +114,8 @@ struct kib_tunables {
 							       ps, qpt)
 
 /* 2 OOB shall suffice for 1 keepalive and 1 returning credits */
-#define IBLND_OOB_CAPABLE(v)       ((v) != IBLND_MSG_VERSION_1)
-#define IBLND_OOB_MSGS(v)	   (IBLND_OOB_CAPABLE(v) ? 2 : 0)
+#define IBLND_OOB_CAPABLE(v)	((v) != IBLND_MSG_VERSION_1)
+#define IBLND_OOB_MSGS(v)	(IBLND_OOB_CAPABLE(v) ? 2 : 0)
 
 #define IBLND_MSG_SIZE		(4 << 10)	/* max size of queued messages (inc hdr) */
 #define IBLND_MAX_RDMA_FRAGS	LNET_MAX_IOV	/* max # of fragments supported */
@@ -124,9 +124,9 @@ struct kib_tunables {
 /* derived constants... */
 /* Pools (shared by connections on each CPT) */
 /* These pools can grow at runtime, so don't need give a very large value */
-#define IBLND_TX_POOL			256
-#define IBLND_FMR_POOL			256
-#define IBLND_FMR_POOL_FLUSH		192
+#define IBLND_TX_POOL		256
+#define IBLND_FMR_POOL		256
+#define IBLND_FMR_POOL_FLUSH	192
 
 #define IBLND_RX_MSGS(c)	\
 	((c->ibc_queue_depth) * 2 + IBLND_OOB_MSGS(c->ibc_version))
@@ -143,9 +143,9 @@ struct kib_tunables {
 
 /* o2iblnd can run over aliased interface */
 #ifdef IFALIASZ
-#define KIB_IFNAME_SIZE	      IFALIASZ
+#define KIB_IFNAME_SIZE		IFALIASZ
 #else
-#define KIB_IFNAME_SIZE	      256
+#define KIB_IFNAME_SIZE		256
 #endif
 
 enum kib_dev_caps {
@@ -155,44 +155,46 @@ enum kib_dev_caps {
 };
 
 struct kib_dev {
-	struct list_head   ibd_list;            /* chain on kib_devs */
-	struct list_head   ibd_fail_list;       /* chain on kib_failed_devs */
-	u32              ibd_ifip;            /* IPoIB interface IP */
+	struct list_head	ibd_list;	/* chain on kib_devs */
+	struct list_head	ibd_fail_list;	/* chain on kib_failed_devs */
+	u32			ibd_ifip;	/* IPoIB interface IP */
 
 	/* IPoIB interface name */
-	char               ibd_ifname[KIB_IFNAME_SIZE];
-	int                ibd_nnets;           /* # nets extant */
-
-	time64_t	   ibd_next_failover;
-	int                ibd_failed_failover; /* # failover failures */
-	unsigned int       ibd_failover;        /* failover in progress */
-	unsigned int ibd_can_failover; /* IPoIB interface is a bonding master */
-	struct list_head   ibd_nets;
-	struct kib_hca_dev *ibd_hdev;
+	char			ibd_ifname[KIB_IFNAME_SIZE];
+	int			ibd_nnets;	/* # nets extant */
+
+	time64_t		ibd_next_failover;
+	int			ibd_failed_failover; /* # failover failures */
+	unsigned int		ibd_failover;        /* failover in progress */
+	unsigned int		ibd_can_failover;    /* IPoIB interface is a
+						      * bonding master
+						      */
+	struct list_head	ibd_nets;
+	struct kib_hca_dev	*ibd_hdev;
 	enum kib_dev_caps	ibd_dev_caps;
 };
 
 struct kib_hca_dev {
-	struct rdma_cm_id  *ibh_cmid;           /* listener cmid */
-	struct ib_device   *ibh_ibdev;          /* IB device */
-	int                ibh_page_shift;      /* page shift of current HCA */
-	int                ibh_page_size;       /* page size of current HCA */
-	u64              ibh_page_mask;       /* page mask of current HCA */
-	int                ibh_mr_shift;        /* bits shift of max MR size */
-	u64              ibh_mr_size;         /* size of MR */
-	struct ib_pd       *ibh_pd;             /* PD */
-	struct kib_dev	   *ibh_dev;		/* owner */
-	atomic_t           ibh_ref;             /* refcount */
+	struct rdma_cm_id	*ibh_cmid;	/* listener cmid */
+	struct ib_device	*ibh_ibdev;	/* IB device */
+	int			ibh_page_shift;	/* page shift of current HCA */
+	int			ibh_page_size;	/* page size of current HCA */
+	u64			ibh_page_mask;	/* page mask of current HCA */
+	int			ibh_mr_shift;	/* bits shift of max MR size */
+	u64			ibh_mr_size;	/* size of MR */
+	struct ib_pd		*ibh_pd;	/* PD */
+	struct kib_dev		*ibh_dev;	/* owner */
+	atomic_t		ibh_ref;	/* refcount */
 };
 
 /** # of seconds to keep pool alive */
-#define IBLND_POOL_DEADLINE     300
+#define IBLND_POOL_DEADLINE	300
 /** # of seconds to retry if allocation failed */
 #define IBLND_POOL_RETRY	1
 
 struct kib_pages {
-	int                ibp_npages;          /* # pages */
-	struct page        *ibp_pages[0];       /* page array */
+	int			ibp_npages;	/* # pages */
+	struct page		*ibp_pages[0];	/* page array */
 };
 
 struct kib_pool;
@@ -206,39 +208,39 @@ typedef int  (*kib_ps_pool_create_t)(struct kib_poolset *ps,
 
 struct kib_net;
 
-#define IBLND_POOL_NAME_LEN     32
+#define IBLND_POOL_NAME_LEN	32
 
 struct kib_poolset {
-	spinlock_t            ps_lock;            /* serialize */
-	struct kib_net        *ps_net;            /* network it belongs to */
-	char                  ps_name[IBLND_POOL_NAME_LEN]; /* pool set name */
-	struct list_head      ps_pool_list;       /* list of pools */
-	struct list_head      ps_failed_pool_list;/* failed pool list */
-	time64_t	      ps_next_retry;	  /* time stamp for retry if */
-						  /* failed to allocate */
-	int                   ps_increasing;      /* is allocating new pool */
-	int                   ps_pool_size;       /* new pool size */
-	int                   ps_cpt;             /* CPT id */
-
-	kib_ps_pool_create_t  ps_pool_create;     /* create a new pool */
-	kib_ps_pool_destroy_t ps_pool_destroy;    /* destroy a pool */
-	kib_ps_node_init_t    ps_node_init; /* initialize new allocated node */
-	kib_ps_node_fini_t    ps_node_fini;       /* finalize node */
+	spinlock_t		ps_lock;	/* serialize */
+	struct kib_net		*ps_net;	/* network it belongs to */
+	char			ps_name[IBLND_POOL_NAME_LEN]; /* pool set name */
+	struct list_head	ps_pool_list;	/* list of pools */
+	struct list_head	ps_failed_pool_list;/* failed pool list */
+	time64_t		ps_next_retry;	/* time stamp for retry if */
+						/* failed to allocate */
+	int			ps_increasing;	/* is allocating new pool */
+	int			ps_pool_size;	/* new pool size */
+	int			ps_cpt;		/* CPT id */
+
+	kib_ps_pool_create_t	ps_pool_create;	 /* create a new pool */
+	kib_ps_pool_destroy_t	ps_pool_destroy; /* destroy a pool */
+	kib_ps_node_init_t	ps_node_init;	 /* initialize new allocated node */
+	kib_ps_node_fini_t	ps_node_fini;    /* finalize node */
 };
 
 struct kib_pool {
-	struct list_head      po_list;       /* chain on pool list */
-	struct list_head      po_free_list;  /* pre-allocated node */
-	struct kib_poolset	*po_owner;	/* pool_set of this pool */
+	struct list_head	po_list;	/* chain on pool list */
+	struct list_head	po_free_list;	/* pre-allocated node */
+	struct kib_poolset     *po_owner;	/* pool_set of this pool */
 	time64_t		po_deadline;	/* deadline of this pool */
-	int                   po_allocated;  /* # of elements in use */
-	int                   po_failed;     /* pool is created on failed HCA */
-	int                   po_size;       /* # of pre-allocated elements */
+	int			po_allocated;	/* # of elements in use */
+	int			po_failed;	/* pool is created on failed HCA */
+	int			po_size;	/* # of pre-allocated elements */
 };
 
 struct kib_tx_poolset {
 	struct kib_poolset	tps_poolset;		/* pool-set */
-	u64                 tps_next_tx_cookie; /* cookie of TX */
+	u64			tps_next_tx_cookie;	/* cookie of TX */
 };
 
 struct kib_tx_pool {
@@ -249,27 +251,27 @@ struct kib_tx_pool {
 };
 
 struct kib_fmr_poolset {
-	spinlock_t            fps_lock;            /* serialize */
-	struct kib_net        *fps_net;            /* IB network */
-	struct list_head      fps_pool_list;       /* FMR pool list */
-	struct list_head      fps_failed_pool_list;/* FMR pool list */
-	u64                 fps_version;         /* validity stamp */
-	int                   fps_cpt;             /* CPT id */
-	int                   fps_pool_size;
-	int                   fps_flush_trigger;
-	int		      fps_cache;
-	int                   fps_increasing;      /* is allocating new pool */
+	spinlock_t		fps_lock;		/* serialize */
+	struct kib_net	       *fps_net;		/* IB network */
+	struct list_head	fps_pool_list;		/* FMR pool list */
+	struct list_head	fps_failed_pool_list;	/* FMR pool list */
+	u64			fps_version;		/* validity stamp */
+	int			fps_cpt;		/* CPT id */
+	int			fps_pool_size;
+	int			fps_flush_trigger;
+	int			fps_cache;
+	int			fps_increasing;		/* is allocating new pool */
 	time64_t		fps_next_retry;		/* time stamp for retry
 							 * if failed to allocate
 							 */
 };
 
 struct kib_fast_reg_descriptor { /* For fast registration */
-	struct list_head		 frd_list;
-	struct ib_send_wr		 frd_inv_wr;
-	struct ib_reg_wr		 frd_fastreg_wr;
-	struct ib_mr			*frd_mr;
-	bool				 frd_valid;
+	struct list_head	frd_list;
+	struct ib_send_wr	frd_inv_wr;
+	struct ib_reg_wr	frd_fastreg_wr;
+	struct ib_mr	       *frd_mr;
+	bool			frd_valid;
 };
 
 struct kib_fmr_pool {
@@ -278,16 +280,16 @@ struct kib_fmr_pool {
 	struct kib_fmr_poolset	*fpo_owner;	/* owner of this pool */
 	union {
 		struct {
-			struct ib_fmr_pool *fpo_fmr_pool; /* IB FMR pool */
+			struct ib_fmr_pool	*fpo_fmr_pool; /* IB FMR pool */
 		} fmr;
 		struct { /* For fast registration */
-			struct list_head    fpo_pool_list;
-			int		    fpo_pool_size;
+			struct list_head	fpo_pool_list;
+			int			fpo_pool_size;
 		} fast_reg;
 	};
 	time64_t		fpo_deadline;	/* deadline of this pool */
-	int                   fpo_failed;          /* fmr pool is failed */
-	int                   fpo_map_count;       /* # of mapped FMR */
+	int			fpo_failed;	/* fmr pool is failed */
+	int			fpo_map_count;	/* # of mapped FMR */
 };
 
 struct kib_fmr {
@@ -298,13 +300,13 @@ struct kib_fmr {
 };
 
 struct kib_net {
-	struct list_head      ibn_list;       /* chain on struct kib_dev::ibd_nets */
-	u64                 ibn_incarnation;/* my epoch */
-	int                   ibn_init;       /* initialisation state */
-	int                   ibn_shutdown;   /* shutting down? */
+	struct list_head	ibn_list;	/* chain on struct kib_dev::ibd_nets */
+	u64			ibn_incarnation;/* my epoch */
+	int			ibn_init;	/* initialisation state */
+	int			ibn_shutdown;	/* shutting down? */
 
-	atomic_t              ibn_npeers;     /* # peers extant */
-	atomic_t              ibn_nconns;     /* # connections extant */
+	atomic_t		ibn_npeers;	/* # peers extant */
+	atomic_t		ibn_nconns;	/* # connections extant */
 
 	struct kib_tx_poolset	**ibn_tx_ps;	/* tx pool-set */
 	struct kib_fmr_poolset	**ibn_fmr_ps;	/* fmr pool-set */
@@ -318,27 +320,27 @@ struct kib_net {
 #define KIB_THREAD_TID(id)		((id) & ((1UL << KIB_THREAD_SHIFT) - 1))
 
 struct kib_sched_info {
-	spinlock_t         ibs_lock;     /* serialise */
-	wait_queue_head_t  ibs_waitq;    /* schedulers sleep here */
-	struct list_head   ibs_conns;    /* conns to check for rx completions */
-	int                ibs_nthreads; /* number of scheduler threads */
-	int                ibs_nthreads_max; /* max allowed scheduler threads */
-	int                ibs_cpt;      /* CPT id */
+	spinlock_t		ibs_lock;	/* serialise */
+	wait_queue_head_t	ibs_waitq;	/* schedulers sleep here */
+	struct list_head	ibs_conns;	/* conns to check for rx completions */
+	int			ibs_nthreads;	/* number of scheduler threads */
+	int			ibs_nthreads_max; /* max allowed scheduler threads */
+	int			ibs_cpt;	/* CPT id */
 };
 
 struct kib_data {
-	int               kib_init;           /* initialisation state */
-	int               kib_shutdown;       /* shut down? */
-	struct list_head  kib_devs;           /* IB devices extant */
-	struct list_head  kib_failed_devs;    /* list head of failed devices */
-	wait_queue_head_t kib_failover_waitq; /* schedulers sleep here */
-	atomic_t kib_nthreads;                /* # live threads */
-	rwlock_t kib_global_lock;    /* stabilize net/dev/peer_ni/conn ops */
-	struct list_head *kib_peers; /* hash table of all my known peers */
-	int  kib_peer_hash_size;     /* size of kib_peers */
-	void *kib_connd; /* the connd task (serialisation assertions) */
-	struct list_head kib_connd_conns;   /* connections to setup/teardown */
-	struct list_head kib_connd_zombies; /* connections with zero refcount */
+	int			kib_init;	    /* initialisation state */
+	int			kib_shutdown;       /* shut down? */
+	struct list_head	kib_devs;           /* IB devices extant */
+	struct list_head	kib_failed_devs;    /* list head of failed devices */
+	wait_queue_head_t	kib_failover_waitq; /* schedulers sleep here */
+	atomic_t		kib_nthreads;	    /* # live threads */
+	rwlock_t		kib_global_lock;    /* stabilize net/dev/peer_ni/conn ops */
+	struct list_head       *kib_peers;	    /* hash table of all my known peers */
+	int			kib_peer_hash_size; /* size of kib_peers */
+	void		       *kib_connd;	    /* the connd task (serialisation assertions) */
+	struct list_head	kib_connd_conns;    /* connections to setup/teardown */
+	struct list_head	kib_connd_zombies;  /* connections with zero refcount */
 	/* connections to reconnect */
 	struct list_head	kib_reconn_list;
 	/* peers wait for reconnection */
@@ -349,15 +351,15 @@ struct kib_data {
 	 */
 	time64_t		kib_reconn_sec;
 
-	wait_queue_head_t kib_connd_waitq;  /* connection daemon sleeps here */
-	spinlock_t kib_connd_lock;          /* serialise */
-	struct ib_qp_attr kib_error_qpa;    /* QP->ERROR */
-	struct kib_sched_info **kib_scheds; /* percpt data for schedulers */
+	wait_queue_head_t	kib_connd_waitq;    /* connection daemon sleeps here */
+	spinlock_t		kib_connd_lock;	    /* serialise */
+	struct ib_qp_attr	kib_error_qpa;	    /* QP->ERROR */
+	struct kib_sched_info **kib_scheds;	    /* percpt data for schedulers */
 };
 
-#define IBLND_INIT_NOTHING 0
-#define IBLND_INIT_DATA    1
-#define IBLND_INIT_ALL     2
+#define IBLND_INIT_NOTHING	0
+#define IBLND_INIT_DATA		1
+#define IBLND_INIT_ALL		2
 
 /************************************************************************
  * IB Wire message format.
@@ -365,62 +367,62 @@ struct kib_data {
  */
 
 struct kib_connparams {
-	u16        ibcp_queue_depth;
-	u16        ibcp_max_frags;
-	u32        ibcp_max_msg_size;
+	u16			ibcp_queue_depth;
+	u16			ibcp_max_frags;
+	u32			ibcp_max_msg_size;
 } __packed;
 
 struct kib_immediate_msg {
-	struct lnet_hdr	ibim_hdr;        /* portals header */
-	char         ibim_payload[0]; /* piggy-backed payload */
+	struct lnet_hdr		ibim_hdr;	/* portals header */
+	char			ibim_payload[0];/* piggy-backed payload */
 } __packed;
 
 struct kib_rdma_frag {
-	u32        rf_nob;          /* # bytes this frag */
-	u64        rf_addr;         /* CAVEAT EMPTOR: misaligned!! */
+	u32			rf_nob;		/* # bytes this frag */
+	u64			rf_addr;	/* CAVEAT EMPTOR: misaligned!! */
 } __packed;
 
 struct kib_rdma_desc {
-	u32           rd_key;       /* local/remote key */
-	u32           rd_nfrags;    /* # fragments */
+	u32			rd_key;		/* local/remote key */
+	u32			rd_nfrags;	/* # fragments */
 	struct kib_rdma_frag	rd_frags[0];	/* buffer frags */
 } __packed;
 
 struct kib_putreq_msg {
-	struct lnet_hdr	ibprm_hdr;    /* portals header */
-	u64           ibprm_cookie; /* opaque completion cookie */
+	struct lnet_hdr		ibprm_hdr;	/* portals header */
+	u64			ibprm_cookie;	/* opaque completion cookie */
 } __packed;
 
 struct kib_putack_msg {
-	u64           ibpam_src_cookie; /* reflected completion cookie */
-	u64           ibpam_dst_cookie; /* opaque completion cookie */
-	struct kib_rdma_desc ibpam_rd;         /* sender's sink buffer */
+	u64			ibpam_src_cookie; /* reflected completion cookie */
+	u64			ibpam_dst_cookie; /* opaque completion cookie */
+	struct kib_rdma_desc	ibpam_rd;	  /* sender's sink buffer */
 } __packed;
 
 struct kib_get_msg {
-	struct lnet_hdr ibgm_hdr;     /* portals header */
-	u64           ibgm_cookie;  /* opaque completion cookie */
-	struct kib_rdma_desc ibgm_rd;      /* rdma descriptor */
+	struct lnet_hdr		ibgm_hdr;	/* portals header */
+	u64			ibgm_cookie;	/* opaque completion cookie */
+	struct kib_rdma_desc	ibgm_rd;	/* rdma descriptor */
 } __packed;
 
 struct kib_completion_msg {
-	u64           ibcm_cookie;  /* opaque completion cookie */
-	s32           ibcm_status;  /* < 0 failure: >= 0 length */
+	u64			ibcm_cookie;	/* opaque completion cookie */
+	s32			ibcm_status;	/* < 0 failure: >= 0 length */
 } __packed;
 
 struct kib_msg {
 	/* First 2 fields fixed FOR ALL TIME */
-	u32           ibm_magic;    /* I'm an ibnal message */
-	u16           ibm_version;  /* this is my version number */
-
-	u8            ibm_type;     /* msg type */
-	u8            ibm_credits;  /* returned credits */
-	u32           ibm_nob;      /* # bytes in whole message */
-	u32           ibm_cksum;    /* checksum (0 == no checksum) */
-	u64           ibm_srcnid;   /* sender's NID */
-	u64           ibm_srcstamp; /* sender's incarnation */
-	u64           ibm_dstnid;   /* destination's NID */
-	u64           ibm_dststamp; /* destination's incarnation */
+	u32			ibm_magic;	/* I'm an ibnal message */
+	u16			ibm_version;	/* this is my version number */
+
+	u8			ibm_type;	/* msg type */
+	u8			ibm_credits;	/* returned credits */
+	u32			ibm_nob;	/* # bytes in whole message */
+	u32			ibm_cksum;	/* checksum (0 == no checksum) */
+	u64			ibm_srcnid;	/* sender's NID */
+	u64			ibm_srcstamp;	/* sender's incarnation */
+	u64			ibm_dstnid;	/* destination's NID */
+	u64			ibm_dststamp;	/* destination's incarnation */
 
 	union {
 		struct kib_connparams		connparams;
@@ -432,161 +434,161 @@ struct kib_msg {
 	} __packed ibm_u;
 } __packed;
 
-#define IBLND_MSG_MAGIC     LNET_PROTO_IB_MAGIC /* unique magic */
+#define IBLND_MSG_MAGIC		LNET_PROTO_IB_MAGIC /* unique magic */
 
-#define IBLND_MSG_VERSION_1 0x11
-#define IBLND_MSG_VERSION_2 0x12
-#define IBLND_MSG_VERSION   IBLND_MSG_VERSION_2
+#define IBLND_MSG_VERSION_1	0x11
+#define IBLND_MSG_VERSION_2	0x12
+#define IBLND_MSG_VERSION	IBLND_MSG_VERSION_2
 
-#define IBLND_MSG_CONNREQ   0xc0	/* connection request */
-#define IBLND_MSG_CONNACK   0xc1	/* connection acknowledge */
-#define IBLND_MSG_NOOP      0xd0	/* nothing (just credits) */
-#define IBLND_MSG_IMMEDIATE 0xd1	/* immediate */
-#define IBLND_MSG_PUT_REQ   0xd2	/* putreq (src->sink) */
-#define IBLND_MSG_PUT_NAK   0xd3	/* completion (sink->src) */
-#define IBLND_MSG_PUT_ACK   0xd4	/* putack (sink->src) */
-#define IBLND_MSG_PUT_DONE  0xd5	/* completion (src->sink) */
-#define IBLND_MSG_GET_REQ   0xd6	/* getreq (sink->src) */
-#define IBLND_MSG_GET_DONE  0xd7	/* completion (src->sink: all OK) */
+#define IBLND_MSG_CONNREQ	0xc0	/* connection request */
+#define IBLND_MSG_CONNACK	0xc1	/* connection acknowledge */
+#define IBLND_MSG_NOOP		0xd0	/* nothing (just credits) */
+#define IBLND_MSG_IMMEDIATE	0xd1	/* immediate */
+#define IBLND_MSG_PUT_REQ	0xd2	/* putreq (src->sink) */
+#define IBLND_MSG_PUT_NAK	0xd3	/* completion (sink->src) */
+#define IBLND_MSG_PUT_ACK	0xd4	/* putack (sink->src) */
+#define IBLND_MSG_PUT_DONE	0xd5	/* completion (src->sink) */
+#define IBLND_MSG_GET_REQ	0xd6	/* getreq (sink->src) */
+#define IBLND_MSG_GET_DONE	0xd7	/* completion (src->sink: all OK) */
 
 struct kib_rej {
-	u32            ibr_magic;       /* sender's magic */
-	u16            ibr_version;     /* sender's version */
-	u8             ibr_why;         /* reject reason */
-	u8             ibr_padding;     /* padding */
-	u64            ibr_incarnation; /* incarnation of peer_ni */
-	struct kib_connparams ibr_cp;          /* connection parameters */
+	u32			ibr_magic;	/* sender's magic */
+	u16			ibr_version;	/* sender's version */
+	u8			ibr_why;	/* reject reason */
+	u8			ibr_padding;	/* padding */
+	u64			ibr_incarnation;/* incarnation of peer_ni */
+	struct kib_connparams	ibr_cp;		/* connection parameters */
 } __packed;
 
 /* connection rejection reasons */
-#define IBLND_REJECT_CONN_RACE      1 /* You lost connection race */
-#define IBLND_REJECT_NO_RESOURCES   2 /* Out of memory/conns etc */
-#define IBLND_REJECT_FATAL          3 /* Anything else */
-#define IBLND_REJECT_CONN_UNCOMPAT  4 /* incompatible version peer_ni */
-#define IBLND_REJECT_CONN_STALE     5 /* stale peer_ni */
+#define IBLND_REJECT_CONN_RACE		1 /* You lost connection race */
+#define IBLND_REJECT_NO_RESOURCES	2 /* Out of memory/conns etc */
+#define IBLND_REJECT_FATAL		3 /* Anything else */
+#define IBLND_REJECT_CONN_UNCOMPAT	4 /* incompatible version peer_ni */
+#define IBLND_REJECT_CONN_STALE		5 /* stale peer_ni */
 /* peer_ni's rdma frags doesn't match mine */
-#define IBLND_REJECT_RDMA_FRAGS	    6
+#define IBLND_REJECT_RDMA_FRAGS		6
 /* peer_ni's msg queue size doesn't match mine */
-#define IBLND_REJECT_MSG_QUEUE_SIZE 7
-#define IBLND_REJECT_INVALID_SRV_ID 8
+#define IBLND_REJECT_MSG_QUEUE_SIZE	7
+#define IBLND_REJECT_INVALID_SRV_ID	8
 
 /***********************************************************************/
 
 struct kib_rx {					/* receive message */
-	struct list_head       rx_list;       /* queue for attention */
-	struct kib_conn        *rx_conn;      /* owning conn */
-	int                    rx_nob; /* # bytes received (-1 while posted) */
-	enum ib_wc_status      rx_status;     /* completion status */
-	struct kib_msg		*rx_msg;	/* message buffer (host vaddr) */
-	u64                  rx_msgaddr;    /* message buffer (I/O addr) */
-	DEFINE_DMA_UNMAP_ADDR(rx_msgunmap);  /* for dma_unmap_single() */
-	struct ib_recv_wr      rx_wrq;        /* receive work item... */
-	struct ib_sge          rx_sge;        /* ...and its memory */
+	struct list_head	rx_list;	/* queue for attention */
+	struct kib_conn        *rx_conn;	/* owning conn */
+	int			rx_nob;		/* # bytes received (-1 while posted) */
+	enum ib_wc_status	rx_status;	/* completion status */
+	struct kib_msg	       *rx_msg;		/* message buffer (host vaddr) */
+	u64			rx_msgaddr;	/* message buffer (I/O addr) */
+	DEFINE_DMA_UNMAP_ADDR(rx_msgunmap);	/* for dma_unmap_single() */
+	struct ib_recv_wr	rx_wrq;		/* receive work item... */
+	struct ib_sge		rx_sge;		/* ...and its memory */
 };
 
-#define IBLND_POSTRX_DONT_POST    0 /* don't post */
-#define IBLND_POSTRX_NO_CREDIT    1 /* post: no credits */
-#define IBLND_POSTRX_PEER_CREDIT  2 /* post: give peer_ni back 1 credit */
-#define IBLND_POSTRX_RSRVD_CREDIT 3 /* post: give self back 1 reserved credit */
+#define IBLND_POSTRX_DONT_POST		0	/* don't post */
+#define IBLND_POSTRX_NO_CREDIT		1	/* post: no credits */
+#define IBLND_POSTRX_PEER_CREDIT	2	/* post: give peer_ni back 1 credit */
+#define IBLND_POSTRX_RSRVD_CREDIT	3	/* post: give self back 1 reserved credit */
 
 struct kib_tx {					/* transmit message */
-	struct list_head      tx_list; /* queue on idle_txs ibc_tx_queue etc. */
-	struct kib_tx_pool	*tx_pool;	/* pool I'm from */
-	struct kib_conn       *tx_conn;       /* owning conn */
-	short                 tx_sending;     /* # tx callbacks outstanding */
-	short                 tx_queued;      /* queued for sending */
-	short                 tx_waiting;     /* waiting for peer_ni */
-	int                   tx_status;      /* LNET completion status */
+	struct list_head	tx_list;	/* queue on idle_txs ibc_tx_queue etc. */
+	struct kib_tx_pool     *tx_pool;	/* pool I'm from */
+	struct kib_conn	       *tx_conn;	/* owning conn */
+	short			tx_sending;	/* # tx callbacks outstanding */
+	short			tx_queued;	/* queued for sending */
+	short			tx_waiting;	/* waiting for peer_ni */
+	int			tx_status;	/* LNET completion status */
 	ktime_t			tx_deadline;	/* completion deadline */
-	u64                 tx_cookie;      /* completion cookie */
-	struct lnet_msg		*tx_lntmsg[2];	/* lnet msgs to finalize on completion */
-	struct kib_msg	      *tx_msg;        /* message buffer (host vaddr) */
-	u64                 tx_msgaddr;     /* message buffer (I/O addr) */
-	DEFINE_DMA_UNMAP_ADDR(tx_msgunmap);  /* for dma_unmap_single() */
+	u64			tx_cookie;	/* completion cookie */
+	struct lnet_msg	       *tx_lntmsg[2];	/* lnet msgs to finalize on completion */
+	struct kib_msg	       *tx_msg;		/* message buffer (host vaddr) */
+	u64			tx_msgaddr;	/* message buffer (I/O addr) */
+	DEFINE_DMA_UNMAP_ADDR(tx_msgunmap);	/* for dma_unmap_single() */
 	/** sge for tx_msgaddr */
 	struct ib_sge		tx_msgsge;
-	int                   tx_nwrq;        /* # send work items */
+	int			tx_nwrq;	/* # send work items */
 	/* # used scatter/gather elements */
 	int			tx_nsge;
-	struct ib_rdma_wr     *tx_wrq;        /* send work items... */
-	struct ib_sge         *tx_sge;        /* ...and their memory */
-	struct kib_rdma_desc  *tx_rd;         /* rdma descriptor */
-	int                   tx_nfrags;      /* # entries in... */
-	struct scatterlist    *tx_frags;      /* dma_map_sg descriptor */
-	u64                 *tx_pages;      /* rdma phys page addrs */
+	struct ib_rdma_wr      *tx_wrq;		/* send work items... */
+	struct ib_sge	       *tx_sge;		/* ...and their memory */
+	struct kib_rdma_desc   *tx_rd;		/* rdma descriptor */
+	int			tx_nfrags;	/* # entries in... */
+	struct scatterlist     *tx_frags;	/* dma_map_sg descriptor */
+	u64		       *tx_pages;	/* rdma phys page addrs */
 	/* gaps in fragments */
 	bool			tx_gaps;
 	struct kib_fmr		tx_fmr;		/* FMR */
-	int                   tx_dmadir;      /* dma direction */
+	int			tx_dmadir;	/* dma direction */
 };
 
 struct kib_connvars {
-	struct kib_msg cv_msg; /* connection-in-progress variables */
+	struct kib_msg		cv_msg;		/* connection-in-progress variables */
 };
 
 struct kib_conn {
-	struct kib_sched_info *ibc_sched;      /* scheduler information */
-	struct kib_peer_ni       *ibc_peer;       /* owning peer_ni */
-	struct kib_hca_dev         *ibc_hdev;       /* HCA bound on */
-	struct list_head ibc_list;            /* stash on peer_ni's conn list */
-	struct list_head      ibc_sched_list;  /* schedule for attention */
-	u16                 ibc_version;     /* version of connection */
+	struct kib_sched_info  *ibc_sched;	/* scheduler information */
+	struct kib_peer_ni     *ibc_peer;	/* owning peer_ni */
+	struct kib_hca_dev     *ibc_hdev;	/* HCA bound on */
+	struct list_head	ibc_list;	/* stash on peer_ni's conn list */
+	struct list_head	ibc_sched_list;	/* schedule for attention */
+	u16			ibc_version;	/* version of connection */
 	/* reconnect later */
 	u16			ibc_reconnect:1;
-	u64                 ibc_incarnation; /* which instance of the peer_ni */
-	atomic_t              ibc_refcount;    /* # users */
-	int                   ibc_state;       /* what's happening */
-	int                   ibc_nsends_posted; /* # uncompleted sends */
-	int                   ibc_noops_posted;  /* # uncompleted NOOPs */
-	int                   ibc_credits;     /* # credits I have */
-	int                   ibc_outstanding_credits; /* # credits to return */
-	int                   ibc_reserved_credits; /* # ACK/DONE msg credits */
-	int                   ibc_comms_error; /* set on comms error */
+	u64			ibc_incarnation;/* which instance of the peer_ni */
+	atomic_t		ibc_refcount;	/* # users */
+	int			ibc_state;	/* what's happening */
+	int			ibc_nsends_posted;	/* # uncompleted sends */
+	int			ibc_noops_posted;	/* # uncompleted NOOPs */
+	int			ibc_credits;     /* # credits I have */
+	int			ibc_outstanding_credits; /* # credits to return */
+	int			ibc_reserved_credits; /* # ACK/DONE msg credits */
+	int			ibc_comms_error; /* set on comms error */
 	/* connections queue depth */
-	u16		      ibc_queue_depth;
+	u16			ibc_queue_depth;
 	/* connections max frags */
-	u16		      ibc_max_frags;
-	unsigned int          ibc_nrx:16;      /* receive buffers owned */
-	unsigned int          ibc_scheduled:1; /* scheduled for attention */
-	unsigned int          ibc_ready:1;     /* CQ callback fired */
+	u16			ibc_max_frags;
+	unsigned int		ibc_nrx:16;	/* receive buffers owned */
+	unsigned int		ibc_scheduled:1;/* scheduled for attention */
+	unsigned int		ibc_ready:1;	/* CQ callback fired */
 	ktime_t			ibc_last_send;	/* time of last send */
-	struct list_head      ibc_connd_list;  /* link chain for */
-					       /* kiblnd_check_conns only */
-	struct list_head ibc_early_rxs; /* rxs completed before ESTABLISHED */
-	struct list_head ibc_tx_noops;         /* IBLND_MSG_NOOPs for */
-					       /* IBLND_MSG_VERSION_1 */
-	struct list_head ibc_tx_queue;         /* sends that need a credit */
-	struct list_head ibc_tx_queue_nocred;  /* sends that don't need a */
-					       /* credit */
-	struct list_head ibc_tx_queue_rsrvd;   /* sends that need to */
-					       /* reserve an ACK/DONE msg */
-	struct list_head ibc_active_txs; /* active tx awaiting completion */
-	spinlock_t            ibc_lock;        /* serialise */
-	struct kib_rx              *ibc_rxs;        /* the rx descs */
-	struct kib_pages           *ibc_rx_pages;   /* premapped rx msg pages */
-
-	struct rdma_cm_id     *ibc_cmid;       /* CM id */
-	struct ib_cq          *ibc_cq;         /* completion queue */
+	struct list_head	ibc_connd_list;	/* link chain for */
+						/* kiblnd_check_conns only */
+	struct list_head	ibc_early_rxs;	/* rxs completed before ESTABLISHED */
+	struct list_head	ibc_tx_noops;	/* IBLND_MSG_NOOPs for */
+						/* IBLND_MSG_VERSION_1 */
+	struct list_head	ibc_tx_queue;	/* sends that need a credit */
+	struct list_head	ibc_tx_queue_nocred;  /* sends that don't need a */
+						      /* credit */
+	struct list_head	ibc_tx_queue_rsrvd;   /* sends that need to */
+						      /* reserve an ACK/DONE msg */
+	struct list_head	ibc_active_txs;	/* active tx awaiting completion */
+	spinlock_t		ibc_lock;	/* serialise */
+	struct kib_rx		*ibc_rxs;	/* the rx descs */
+	struct kib_pages	*ibc_rx_pages;	/* premapped rx msg pages */
+
+	struct rdma_cm_id	*ibc_cmid;	/* CM id */
+	struct ib_cq		*ibc_cq;	/* completion queue */
 
 	struct kib_connvars	*ibc_connvars;	/* in-progress connection state */
 };
 
-#define IBLND_CONN_INIT           0	 /* being initialised */
-#define IBLND_CONN_ACTIVE_CONNECT 1	 /* active sending req */
-#define IBLND_CONN_PASSIVE_WAIT   2	 /* passive waiting for rtu */
-#define IBLND_CONN_ESTABLISHED    3	 /* connection established */
-#define IBLND_CONN_CLOSING        4	 /* being closed */
-#define IBLND_CONN_DISCONNECTED   5	 /* disconnected */
+#define IBLND_CONN_INIT			0	 /* being initialised */
+#define IBLND_CONN_ACTIVE_CONNECT	1	 /* active sending req */
+#define IBLND_CONN_PASSIVE_WAIT		2	 /* passive waiting for rtu */
+#define IBLND_CONN_ESTABLISHED		3	 /* connection established */
+#define IBLND_CONN_CLOSING		4	 /* being closed */
+#define IBLND_CONN_DISCONNECTED		5	 /* disconnected */
 
 struct kib_peer_ni {
-	struct list_head ibp_list;        /* stash on global peer_ni list */
-	lnet_nid_t       ibp_nid;         /* who's on the other end(s) */
-	struct lnet_ni	*ibp_ni;         /* LNet interface */
-	struct list_head ibp_conns;       /* all active connections */
-	struct kib_conn	*ibp_next_conn;  /* next connection to send on for
-					  * round robin */
-	struct list_head ibp_tx_queue;    /* msgs waiting for a conn */
-	u64            ibp_incarnation; /* incarnation of peer_ni */
+	struct list_head	ibp_list;        /* stash on global peer_ni list */
+	lnet_nid_t		ibp_nid;         /* who's on the other end(s) */
+	struct lnet_ni		*ibp_ni;         /* LNet interface */
+	struct list_head	ibp_conns;       /* all active connections */
+	struct kib_conn		*ibp_next_conn;  /* next connection to send on for
+						  * round robin */
+	struct list_head	ibp_tx_queue;	 /* msgs waiting for a conn */
+	u64			ibp_incarnation; /* incarnation of peer_ni */
 	/* when (in seconds) I was last alive */
 	time64_t		ibp_last_alive;
 	/* # users */
@@ -604,11 +606,11 @@ struct kib_peer_ni {
 	/* # consecutive reconnection attempts to this peer_ni */
 	unsigned int		ibp_reconnected;
 	/* errno on closing this peer_ni */
-	int              ibp_error;
+	int			ibp_error;
 	/* max map_on_demand */
-	u16		 ibp_max_frags;
+	u16			ibp_max_frags;
 	/* max_peer_credits */
-	u16		 ibp_queue_depth;
+	u16			ibp_queue_depth;
 };
 
 extern struct kib_data kiblnd_data;
@@ -647,11 +649,11 @@ struct kib_peer_ni {
 	return dev->ibd_can_failover;
 }
 
-#define kiblnd_conn_addref(conn)				\
-do {							    \
-	CDEBUG(D_NET, "conn[%p] (%d)++\n",		      \
-	       (conn), atomic_read(&(conn)->ibc_refcount)); \
-	atomic_inc(&(conn)->ibc_refcount);		  \
+#define kiblnd_conn_addref(conn)					\
+do {									\
+	CDEBUG(D_NET, "conn[%p] (%d)++\n",				\
+	       (conn), atomic_read(&(conn)->ibc_refcount));		\
+	atomic_inc(&(conn)->ibc_refcount);				\
 } while (0)
 
 #define kiblnd_conn_decref(conn)					\
@@ -665,27 +667,27 @@ struct kib_peer_ni {
 		spin_lock_irqsave(&kiblnd_data.kib_connd_lock, flags);	\
 		list_add_tail(&(conn)->ibc_list,			\
 				  &kiblnd_data.kib_connd_zombies);	\
-		wake_up(&kiblnd_data.kib_connd_waitq);		\
+		wake_up(&kiblnd_data.kib_connd_waitq);			\
 		spin_unlock_irqrestore(&kiblnd_data.kib_connd_lock, flags);\
 	}								\
 } while (0)
 
-#define kiblnd_peer_addref(peer_ni)				\
-do {							    \
-	CDEBUG(D_NET, "peer_ni[%p] -> %s (%d)++\n",		\
-	       (peer_ni), libcfs_nid2str((peer_ni)->ibp_nid),	 \
-	       atomic_read(&(peer_ni)->ibp_refcount));	\
-	atomic_inc(&(peer_ni)->ibp_refcount);		  \
+#define kiblnd_peer_addref(peer_ni)					\
+do {									\
+	CDEBUG(D_NET, "peer_ni[%p] -> %s (%d)++\n",			\
+	       (peer_ni), libcfs_nid2str((peer_ni)->ibp_nid),		\
+	       atomic_read(&(peer_ni)->ibp_refcount));			\
+	atomic_inc(&(peer_ni)->ibp_refcount);				\
 } while (0)
 
-#define kiblnd_peer_decref(peer_ni)				\
-do {							    \
-	CDEBUG(D_NET, "peer_ni[%p] -> %s (%d)--\n",		\
-	       (peer_ni), libcfs_nid2str((peer_ni)->ibp_nid),	 \
-	       atomic_read(&(peer_ni)->ibp_refcount));	\
-	LASSERT_ATOMIC_POS(&(peer_ni)->ibp_refcount);	      \
-	if (atomic_dec_and_test(&(peer_ni)->ibp_refcount))     \
-		kiblnd_destroy_peer(peer_ni);		      \
+#define kiblnd_peer_decref(peer_ni)					\
+do {									\
+	CDEBUG(D_NET, "peer_ni[%p] -> %s (%d)--\n",			\
+	       (peer_ni), libcfs_nid2str((peer_ni)->ibp_nid),		\
+	       atomic_read(&(peer_ni)->ibp_refcount));			\
+	LASSERT_ATOMIC_POS(&(peer_ni)->ibp_refcount);			\
+	if (atomic_dec_and_test(&(peer_ni)->ibp_refcount))		\
+		kiblnd_destroy_peer(peer_ni);				\
 } while (0)
 
 static inline bool
@@ -812,12 +814,12 @@ struct kib_peer_ni {
 /* CAVEAT EMPTOR: We rely on descriptor alignment to allow us to use the */
 /* lowest bits of the work request id to stash the work item type. */
 
-#define IBLND_WID_INVAL	0
-#define IBLND_WID_TX	1
-#define IBLND_WID_RX	2
-#define IBLND_WID_RDMA	3
-#define IBLND_WID_MR	4
-#define IBLND_WID_MASK	7UL
+#define IBLND_WID_INVAL		0
+#define IBLND_WID_TX		1
+#define IBLND_WID_RX		2
+#define IBLND_WID_RDMA		3
+#define IBLND_WID_MR		4
+#define IBLND_WID_MASK		7UL
 
 static inline u64
 kiblnd_ptr2wreqid(void *ptr, int type)
@@ -852,14 +854,14 @@ struct kib_peer_ni {
 kiblnd_init_msg(struct kib_msg *msg, int type, int body_nob)
 {
 	msg->ibm_type = type;
-	msg->ibm_nob  = offsetof(struct kib_msg, ibm_u) + body_nob;
+	msg->ibm_nob = offsetof(struct kib_msg, ibm_u) + body_nob;
 }
 
 static inline int
 kiblnd_rd_size(struct kib_rdma_desc *rd)
 {
-	int   i;
-	int   size;
+	int i;
+	int size;
 
 	for (i = size = 0; i < rd->rd_nfrags; i++)
 		size += rd->rd_frags[i].rf_nob;
@@ -890,7 +892,7 @@ struct kib_peer_ni {
 {
 	if (nob < rd->rd_frags[index].rf_nob) {
 		rd->rd_frags[index].rf_addr += nob;
-		rd->rd_frags[index].rf_nob  -= nob;
+		rd->rd_frags[index].rf_nob -= nob;
 	} else {
 		index++;
 	}
@@ -929,8 +931,8 @@ static inline void kiblnd_dma_unmap_single(struct ib_device *dev,
 	ib_dma_unmap_single(dev, addr, size, direction);
 }
 
-#define KIBLND_UNMAP_ADDR_SET(p, m, a)  do {} while (0)
-#define KIBLND_UNMAP_ADDR(p, m, a)      (a)
+#define KIBLND_UNMAP_ADDR_SET(p, m, a)	do {} while (0)
+#define KIBLND_UNMAP_ADDR(p, m, a)	(a)
 
 static inline int kiblnd_dma_map_sg(struct ib_device *dev,
 				    struct scatterlist *sg, int nents,
@@ -962,34 +964,34 @@ static inline unsigned int kiblnd_sg_dma_len(struct ib_device *dev,
 /* right because OFED1.2 defines it as const, to use it we have to add */
 /* (void *) cast to overcome "const" */
 
-#define KIBLND_CONN_PARAM(e)     ((e)->param.conn.private_data)
-#define KIBLND_CONN_PARAM_LEN(e) ((e)->param.conn.private_data_len)
+#define KIBLND_CONN_PARAM(e)		((e)->param.conn.private_data)
+#define KIBLND_CONN_PARAM_LEN(e)	((e)->param.conn.private_data_len)
 
 void kiblnd_map_rx_descs(struct kib_conn *conn);
 void kiblnd_unmap_rx_descs(struct kib_conn *conn);
 void kiblnd_pool_free_node(struct kib_pool *pool, struct list_head *node);
 struct list_head *kiblnd_pool_alloc_node(struct kib_poolset *ps);
 
-int  kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
-			 struct kib_rdma_desc *rd, u32 nob, u64 iov,
-			 struct kib_fmr *fmr);
+int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
+			struct kib_rdma_desc *rd, u32 nob, u64 iov,
+			struct kib_fmr *fmr);
 void kiblnd_fmr_pool_unmap(struct kib_fmr *fmr, int status);
 
 int kiblnd_tunables_setup(struct lnet_ni *ni);
 void kiblnd_tunables_init(void);
 
-int  kiblnd_connd(void *arg);
-int  kiblnd_scheduler(void *arg);
-int  kiblnd_thread_start(int (*fn)(void *arg), void *arg, char *name);
-int  kiblnd_failover_thread(void *arg);
+int kiblnd_connd(void *arg);
+int kiblnd_scheduler(void *arg);
+int kiblnd_thread_start(int (*fn)(void *arg), void *arg, char *name);
+int kiblnd_failover_thread(void *arg);
 
-int  kiblnd_alloc_pages(struct kib_pages **pp, int cpt, int npages);
+int kiblnd_alloc_pages(struct kib_pages **pp, int cpt, int npages);
 
-int  kiblnd_cm_callback(struct rdma_cm_id *cmid,
-			struct rdma_cm_event *event);
-int  kiblnd_translate_mtu(int value);
+int kiblnd_cm_callback(struct rdma_cm_id *cmid,
+		       struct rdma_cm_event *event);
+int kiblnd_translate_mtu(int value);
 
-int  kiblnd_dev_failover(struct kib_dev *dev);
+int kiblnd_dev_failover(struct kib_dev *dev);
 int kiblnd_create_peer(struct lnet_ni *ni, struct kib_peer_ni **peerp,
 		       lnet_nid_t nid);
 void kiblnd_destroy_peer(struct kib_peer_ni *peer_ni);
@@ -997,9 +999,9 @@ int kiblnd_create_peer(struct lnet_ni *ni, struct kib_peer_ni **peerp,
 void kiblnd_destroy_dev(struct kib_dev *dev);
 void kiblnd_unlink_peer_locked(struct kib_peer_ni *peer_ni);
 struct kib_peer_ni *kiblnd_find_peer_locked(struct lnet_ni *ni, lnet_nid_t nid);
-int  kiblnd_close_stale_conns_locked(struct kib_peer_ni *peer_ni,
-				     int version, u64 incarnation);
-int  kiblnd_close_peer_conns_locked(struct kib_peer_ni *peer_ni, int why);
+int kiblnd_close_stale_conns_locked(struct kib_peer_ni *peer_ni,
+				    int version, u64 incarnation);
+int kiblnd_close_peer_conns_locked(struct kib_peer_ni *peer_ni, int why);
 
 struct kib_conn *kiblnd_create_conn(struct kib_peer_ni *peer_ni,
 				    struct rdma_cm_id *cmid,
@@ -1017,8 +1019,8 @@ struct kib_conn *kiblnd_create_conn(struct kib_peer_ni *peer_ni,
 
 void kiblnd_pack_msg(struct lnet_ni *ni, struct kib_msg *msg, int version,
 		     int credits, lnet_nid_t dstnid, u64 dststamp);
-int  kiblnd_unpack_msg(struct kib_msg *msg, int nob);
-int  kiblnd_post_rx(struct kib_rx *rx, int credit);
+int kiblnd_unpack_msg(struct kib_msg *msg, int nob);
+int kiblnd_post_rx(struct kib_rx *rx, int credit);
 
 int kiblnd_send(struct lnet_ni *ni, void *private, struct lnet_msg *lntmsg);
 int kiblnd_recv(struct lnet_ni *ni, void *private, struct lnet_msg *lntmsg,
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
index 48f2814..ad17260 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
@@ -167,14 +167,14 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 		credit == IBLND_POSTRX_PEER_CREDIT ||
 		credit == IBLND_POSTRX_RSRVD_CREDIT);
 
-	rx->rx_sge.lkey   = conn->ibc_hdev->ibh_pd->local_dma_lkey;
-	rx->rx_sge.addr   = rx->rx_msgaddr;
+	rx->rx_sge.lkey = conn->ibc_hdev->ibh_pd->local_dma_lkey;
+	rx->rx_sge.addr = rx->rx_msgaddr;
 	rx->rx_sge.length = IBLND_MSG_SIZE;
 
-	rx->rx_wrq.next    = NULL;
+	rx->rx_wrq.next = NULL;
 	rx->rx_wrq.sg_list = &rx->rx_sge;
 	rx->rx_wrq.num_sge = 1;
-	rx->rx_wrq.wr_id   = kiblnd_ptr2wreqid(rx, IBLND_WID_RX);
+	rx->rx_wrq.wr_id = kiblnd_ptr2wreqid(rx, IBLND_WID_RX);
 
 	LASSERT(conn->ibc_state >= IBLND_CONN_INIT);
 	LASSERT(rx->rx_nob >= 0);	      /* not posted */
@@ -528,10 +528,10 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
 	kiblnd_handle_rx(rx);
 	return;
 
- failed:
+failed:
 	CDEBUG(D_NET, "rx %p conn %p\n", rx, conn);
 	kiblnd_close_conn(conn, err);
- ignore:
+ignore:
 	kiblnd_drop_rx(rx);		     /* Don't re-post rx. */
 }
 
@@ -1068,17 +1068,17 @@ static int kiblnd_map_tx(struct lnet_ni *ni, struct kib_tx *tx,
 
 	kiblnd_init_msg(tx->tx_msg, type, body_nob);
 
-	sge->lkey   = hdev->ibh_pd->local_dma_lkey;
-	sge->addr   = tx->tx_msgaddr;
+	sge->lkey = hdev->ibh_pd->local_dma_lkey;
+	sge->addr = tx->tx_msgaddr;
 	sge->length = nob;
 
 	memset(wrq, 0, sizeof(*wrq));
 
-	wrq->wr.next       = NULL;
-	wrq->wr.wr_id      = kiblnd_ptr2wreqid(tx, IBLND_WID_TX);
-	wrq->wr.sg_list    = sge;
-	wrq->wr.num_sge    = 1;
-	wrq->wr.opcode     = IB_WR_SEND;
+	wrq->wr.next = NULL;
+	wrq->wr.wr_id = kiblnd_ptr2wreqid(tx, IBLND_WID_TX);
+	wrq->wr.sg_list = sge;
+	wrq->wr.num_sge = 1;
+	wrq->wr.opcode = IB_WR_SEND;
 	wrq->wr.send_flags = IB_SEND_SIGNALED;
 
 	tx->tx_nwrq++;
@@ -1133,8 +1133,8 @@ static int kiblnd_map_tx(struct lnet_ni *ni, struct kib_tx *tx,
 			       (u32)resid);
 
 		sge = &tx->tx_sge[tx->tx_nsge];
-		sge->addr   = kiblnd_rd_frag_addr(srcrd, srcidx);
-		sge->lkey   = kiblnd_rd_frag_key(srcrd, srcidx);
+		sge->addr = kiblnd_rd_frag_addr(srcrd, srcidx);
+		sge->lkey = kiblnd_rd_frag_key(srcrd, srcidx);
 		sge->length = sge_nob;
 
 		if (wrq_sge == 0) {
@@ -1329,12 +1329,12 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 
 	return;
 
- failed2:
+failed2:
 	kiblnd_peer_connect_failed(peer_ni, 1, rc);
 	kiblnd_peer_decref(peer_ni);	       /* cmid's ref */
 	rdma_destroy_id(cmid);
 	return;
- failed:
+failed:
 	kiblnd_peer_connect_failed(peer_ni, 1, rc);
 }
 
@@ -1397,7 +1397,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	rwlock_t *g_lock = &kiblnd_data.kib_global_lock;
 	unsigned long flags;
 	int rc;
-	int		   i;
+	int i;
 	struct lnet_ioctl_config_o2iblnd_tunables *tunables;
 
 	/*
@@ -1529,7 +1529,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	unsigned int payload_nob = lntmsg->msg_len;
 	struct iov_iter from;
 	struct kib_msg *ibmsg;
-	struct kib_rdma_desc  *rd;
+	struct kib_rdma_desc *rd;
 	struct kib_tx *tx;
 	int nob;
 	int rc;
@@ -1747,9 +1747,9 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	kiblnd_queue_tx(tx, rx->rx_conn);
 	return;
 
- failed_1:
+failed_1:
 	kiblnd_tx_done(tx);
- failed_0:
+failed_0:
 	lnet_finalize(lntmsg, -EIO);
 }
 
@@ -1797,7 +1797,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 
 	case IBLND_MSG_PUT_REQ: {
 		u64 ibprm_cookie = rxmsg->ibm_u.putreq.ibprm_cookie;
-		struct kib_msg	*txmsg;
+		struct kib_msg *txmsg;
 		struct kib_rdma_desc *rd;
 
 		if (!iov_iter_count(to)) {
@@ -2193,15 +2193,15 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 		peer_ni->ibp_accepting--;
 
 	if (!peer_ni->ibp_version) {
-		peer_ni->ibp_version     = conn->ibc_version;
+		peer_ni->ibp_version = conn->ibc_version;
 		peer_ni->ibp_incarnation = conn->ibc_incarnation;
 	}
 
-	if (peer_ni->ibp_version     != conn->ibc_version ||
+	if (peer_ni->ibp_version != conn->ibc_version ||
 	    peer_ni->ibp_incarnation != conn->ibc_incarnation) {
 		kiblnd_close_stale_conns_locked(peer_ni, conn->ibc_version,
 						conn->ibc_incarnation);
-		peer_ni->ibp_version     = conn->ibc_version;
+		peer_ni->ibp_version = conn->ibc_version;
 		peer_ni->ibp_incarnation = conn->ibc_incarnation;
 	}
 
@@ -2431,13 +2431,13 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	peer2 = kiblnd_find_peer_locked(ni, nid);
 	if (peer2) {
 		if (!peer2->ibp_version) {
-			peer2->ibp_version     = version;
+			peer2->ibp_version = version;
 			peer2->ibp_incarnation = reqmsg->ibm_srcstamp;
 		}
 
 		/* not the guy I've talked with */
 		if (peer2->ibp_incarnation != reqmsg->ibm_srcstamp ||
-		    peer2->ibp_version     != version) {
+		    peer2->ibp_version != version) {
 			kiblnd_close_peer_conns_locked(peer2, -ESTALE);
 
 			if (kiblnd_peer_active(peer2)) {
@@ -2506,8 +2506,8 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 		LASSERT(!peer_ni->ibp_version &&
 			!peer_ni->ibp_incarnation);
 
-		peer_ni->ibp_accepting   = 1;
-		peer_ni->ibp_version     = version;
+		peer_ni->ibp_accepting = 1;
+		peer_ni->ibp_version = version;
 		peer_ni->ibp_incarnation = reqmsg->ibm_srcstamp;
 
 		/* I have a ref on ni that prevents it being shutdown */
@@ -2532,8 +2532,8 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	 * conn now "owns" cmid, so I return success from here on to ensure the
 	 * CM callback doesn't destroy cmid.
 	 */
-	conn->ibc_incarnation      = reqmsg->ibm_srcstamp;
-	conn->ibc_credits          = conn->ibc_queue_depth;
+	conn->ibc_incarnation = reqmsg->ibm_srcstamp;
+	conn->ibc_credits = conn->ibc_queue_depth;
 	conn->ibc_reserved_credits = conn->ibc_queue_depth;
 	LASSERT(conn->ibc_credits + conn->ibc_reserved_credits +
 		IBLND_OOB_MSGS(version) <= IBLND_RX_MSGS(conn));
@@ -2564,7 +2564,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	if (rc) {
 		CERROR("Can't accept %s: %d\n", libcfs_nid2str(nid), rc);
 		rej.ibr_version = version;
-		rej.ibr_why     = IBLND_REJECT_FATAL;
+		rej.ibr_why = IBLND_REJECT_FATAL;
 
 		kiblnd_reject(cmid, &rej);
 		kiblnd_connreq_done(conn, rc);
@@ -2574,14 +2574,14 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	lnet_ni_decref(ni);
 	return 0;
 
- failed:
+failed:
 	if (ni) {
 		rej.ibr_cp.ibcp_queue_depth = kiblnd_msg_queue_size(version, ni);
 		rej.ibr_cp.ibcp_max_frags = IBLND_MAX_RDMA_FRAGS;
 		lnet_ni_decref(ni);
 	}
 
-	rej.ibr_version             = version;
+	rej.ibr_version = version;
 	kiblnd_reject(cmid, &rej);
 
 	return -ECONNREFUSED;
@@ -2789,7 +2789,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 				break;
 			}
 
-			if (rej->ibr_why     == IBLND_REJECT_FATAL &&
+			if (rej->ibr_why == IBLND_REJECT_FATAL &&
 			    rej->ibr_version == IBLND_MSG_VERSION_1) {
 				CDEBUG(D_NET,
 				       "rejected by old version peer_ni %s: %x\n",
@@ -2927,7 +2927,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 	kiblnd_connreq_done(conn, 0);
 	return;
 
- failed:
+failed:
 	/*
 	 * NB My QP has already established itself, so I handle anything going
 	 * wrong here by setting ibc_comms_error.
@@ -2985,12 +2985,12 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 
 	memset(&cp, 0, sizeof(cp));
 	cp.private_data	= msg;
-	cp.private_data_len    = msg->ibm_nob;
+	cp.private_data_len = msg->ibm_nob;
 	cp.responder_resources = 0;	     /* No atomic ops or RDMA reads */
-	cp.initiator_depth     = 0;
-	cp.flow_control        = 1;
-	cp.retry_count         = *kiblnd_tunables.kib_retry_count;
-	cp.rnr_retry_count     = *kiblnd_tunables.kib_rnr_retry_count;
+	cp.initiator_depth = 0;
+	cp.flow_control = 1;
+	cp.retry_count = *kiblnd_tunables.kib_retry_count;
+	cp.rnr_retry_count = *kiblnd_tunables.kib_rnr_retry_count;
 
 	LASSERT(cmid->context == (void *)conn);
 	LASSERT(conn->ibc_cmid == cmid);
@@ -3217,11 +3217,11 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 static int
 kiblnd_conn_timed_out_locked(struct kib_conn *conn)
 {
-	return  kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue) ||
-		kiblnd_check_txs_locked(conn, &conn->ibc_tx_noops) ||
-		kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue_rsrvd) ||
-		kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue_nocred) ||
-		kiblnd_check_txs_locked(conn, &conn->ibc_active_txs);
+	return kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue) ||
+	       kiblnd_check_txs_locked(conn, &conn->ibc_tx_noops) ||
+	       kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue_rsrvd) ||
+	       kiblnd_check_txs_locked(conn, &conn->ibc_tx_queue_nocred) ||
+	       kiblnd_check_txs_locked(conn, &conn->ibc_active_txs);
 }
 
 static void
@@ -3561,9 +3561,9 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 {
 	/*
 	 * NB I'm not allowed to schedule this conn once its refcount has
-	 * reached 0.  Since fundamentally I'm racing with scheduler threads
+	 * reached 0. Since fundamentally I'm racing with scheduler threads
 	 * consuming my CQ I could be called after all completions have
-	 * occurred.  But in this case, !ibc_nrx && !ibc_nsends_posted
+	 * occurred. But in this case, !ibc_nrx && !ibc_nsends_posted
 	 * and this CQ is about to be destroyed so I NOOP.
 	 */
 	struct kib_conn *conn = arg;
@@ -3793,8 +3793,7 @@ static int kiblnd_resolve_addr(struct rdma_cm_id *cmid,
 		add_wait_queue(&kiblnd_data.kib_failover_waitq, &wait);
 		write_unlock_irqrestore(glock, flags);
 
-		rc = schedule_timeout(long_sleep ? 10 * HZ :
-						   HZ);
+		rc = schedule_timeout(long_sleep ? 10 * HZ : HZ);
 		remove_wait_queue(&kiblnd_data.kib_failover_waitq, &wait);
 		write_lock_irqsave(glock, flags);
 
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c
index 47e8a60..9fb1357 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c
@@ -183,17 +183,17 @@
 MODULE_PARM_DESC(wrq_sge, "# scatter/gather element per work request");
 
 struct kib_tunables kiblnd_tunables = {
-	.kib_dev_failover      = &dev_failover,
-	.kib_service           = &service,
-	.kib_cksum             = &cksum,
-	.kib_timeout           = &timeout,
-	.kib_keepalive         = &keepalive,
-	.kib_default_ipif      = &ipif_name,
-	.kib_retry_count       = &retry_count,
-	.kib_rnr_retry_count   = &rnr_retry_count,
-	.kib_ib_mtu            = &ib_mtu,
-	.kib_require_priv_port = &require_privileged_port,
-	.kib_use_priv_port     = &use_privileged_port,
+	.kib_dev_failover	= &dev_failover,
+	.kib_service		= &service,
+	.kib_cksum		= &cksum,
+	.kib_timeout		= &timeout,
+	.kib_keepalive		= &keepalive,
+	.kib_default_ipif	= &ipif_name,
+	.kib_retry_count	= &retry_count,
+	.kib_rnr_retry_count	= &rnr_retry_count,
+	.kib_ib_mtu		= &ib_mtu,
+	.kib_require_priv_port	= &require_privileged_port,
+	.kib_use_priv_port	= &use_privileged_port,
 	.kib_nscheds		= &nscheds,
 	.kib_wrq_sge		= &wrq_sge,
 	.kib_use_fastreg_gaps	= &use_fastreg_gaps,
-- 
1.8.3.1

* [lustre-devel] [PATCH 24/26] lnet: cleanup white spaces
  2019-01-31 17:19 ` [lustre-devel] [PATCH 24/26] lnet: " James Simmons
@ 2019-02-04  3:13   ` NeilBrown
  0 siblings, 0 replies; 30+ messages in thread
From: NeilBrown @ 2019-02-04  3:13 UTC (permalink / raw)
  To: lustre-devel

On Thu, Jan 31 2019, James Simmons wrote:

> The LNet core code is very messy and difficult to read. Remove
> excess white space and properly align data structures so they
> are easy on the eyes.
>
> Signed-off-by: James Simmons <jsimmons@infradead.org>
> ---

> @@ -218,7 +218,7 @@ struct lnet_lnd {
>  	 */
>  
>  	/*
> -	 * Start sending a preformatted message.  'private' is NULL for PUT and
> +	 * Start sending a preformatted message.x  'private' is NULL for PUT and
>  	 * GET messages; otherwise this is a response to an incoming message
>  	 * and 'private' is the 'private' passed to lnet_parse().  Return
>  	 * non-zero for immediate failure, otherwise complete later with

I suspect you just added that 'x' to see if anyone really read your patch.
I did.  I found it.  I've removed it.

:-)

NeilBrown


> @@ -267,7 +267,7 @@ struct lnet_tx_queue {
>  
>  enum lnet_ni_state {
>  	/* set when NI block is allocated */
> -	LNET_NI_STATE_INIT = 0,
> +	LNET_NI_STATE_INIT	= 0,
>  	/* set when NI is started successfully */
>  	LNET_NI_STATE_ACTIVE,
>  	/* set when LND notifies NI failed */
> @@ -279,23 +279,23 @@ enum lnet_ni_state {
>  };
>  
>  enum lnet_stats_type {
> -	LNET_STATS_TYPE_SEND = 0,
> +	LNET_STATS_TYPE_SEND	= 0,
>  	LNET_STATS_TYPE_RECV,
>  	LNET_STATS_TYPE_DROP
>  };
>  
>  struct lnet_comm_count {
> -	atomic_t co_get_count;
> -	atomic_t co_put_count;
> -	atomic_t co_reply_count;
> -	atomic_t co_ack_count;
> -	atomic_t co_hello_count;
> +	atomic_t		co_get_count;
> +	atomic_t		co_put_count;
> +	atomic_t		co_reply_count;
> +	atomic_t		co_ack_count;
> +	atomic_t		co_hello_count;
>  };
>  
>  struct lnet_element_stats {
> -	struct lnet_comm_count el_send_stats;
> -	struct lnet_comm_count el_recv_stats;
> -	struct lnet_comm_count el_drop_stats;
> +	struct lnet_comm_count	el_send_stats;
> +	struct lnet_comm_count	el_recv_stats;
> +	struct lnet_comm_count	el_drop_stats;
>  };
>  
>  struct lnet_net {
> @@ -376,7 +376,7 @@ struct lnet_ni {
>  	struct lnet_lnd_tunables ni_lnd_tunables;
>  
>  	/* lnd tunables set explicitly */
> -	bool ni_lnd_tunables_set;
> +	bool			ni_lnd_tunables_set;
>  
>  	/* NI statistics */
>  	struct lnet_element_stats ni_stats;
> @@ -391,9 +391,9 @@ struct lnet_ni {
>  	 * equivalent interfaces to use
>  	 * This is an array because socklnd bonding can still be configured
>  	 */
> -	char			 *ni_interfaces[LNET_INTERFACES_NUM];
> +	char			*ni_interfaces[LNET_INTERFACES_NUM];
>  	/* original net namespace */
> -	struct net		 *ni_net_ns;
> +	struct net		*ni_net_ns;
>  };
>  
>  #define LNET_PROTO_PING_MATCHBITS	0x8000000000000000LL
> @@ -434,9 +434,9 @@ struct lnet_rc_data {
>  
>  struct lnet_peer_ni {
>  	/* chain on lpn_peer_nis */
> -	struct list_head	lpni_peer_nis;
> +	struct list_head	 lpni_peer_nis;
>  	/* chain on remote peer list */
> -	struct list_head	lpni_on_remote_peer_ni_list;
> +	struct list_head	 lpni_on_remote_peer_ni_list;
>  	/* chain on peer hash */
>  	struct list_head	 lpni_hashlist;
>  	/* messages blocking for tx credits */
> @@ -448,7 +448,7 @@ struct lnet_peer_ni {
>  	/* statistics kept on each peer NI */
>  	struct lnet_element_stats lpni_stats;
>  	/* spin lock protecting credits and lpni_txq / lpni_rtrq */
> -	spinlock_t		lpni_lock;
> +	spinlock_t		 lpni_lock;
>  	/* # tx credits available */
>  	int			 lpni_txcredits;
>  	struct lnet_peer_net	*lpni_peer_net;
> @@ -491,26 +491,26 @@ struct lnet_peer_ni {
>  	/* CPT this peer attached on */
>  	int			 lpni_cpt;
>  	/* state flags -- protected by lpni_lock */
> -	unsigned int		lpni_state;
> +	unsigned int		 lpni_state;
>  	/* # refs from lnet_route::lr_gateway */
>  	int			 lpni_rtr_refcount;
>  	/* sequence number used to round robin over peer nis within a net */
> -	u32			lpni_seq;
> +	u32			 lpni_seq;
>  	/* sequence number used to round robin over gateways */
> -	u32			lpni_gw_seq;
> +	u32			 lpni_gw_seq;
>  	/* health flag */
> -	bool			lpni_healthy;
> +	bool			 lpni_healthy;
>  	/* returned RC ping features. Protected with lpni_lock */
>  	unsigned int		 lpni_ping_feats;
>  	/* routers on this peer */
>  	struct list_head	 lpni_routes;
>  	/* preferred local nids: if only one, use lpni_pref.nid */
>  	union lpni_pref {
> -		lnet_nid_t	nid;
> +		lnet_nid_t	 nid;
>  		lnet_nid_t	*nids;
>  	} lpni_pref;
>  	/* number of preferred NIDs in lnpi_pref_nids */
> -	u32			lpni_pref_nnids;
> +	u32			 lpni_pref_nnids;
>  	/* router checker state */
>  	struct lnet_rc_data	*lpni_rcd;
>  };
> @@ -676,9 +676,9 @@ struct lnet_peer_table {
>  	/* # peers extant */
>  	atomic_t		 pt_number;
>  	/* peers */
> -	struct list_head	pt_peer_list;
> +	struct list_head	 pt_peer_list;
>  	/* # peers */
> -	int			pt_peers;
> +	int			 pt_peers;
>  	/* # zombies to go to deathrow (and not there yet) */
>  	int			 pt_zombies;
>  	/* zombie peers_ni */
> @@ -704,7 +704,7 @@ struct lnet_route {
>  	/* chain on gateway */
>  	struct list_head	lr_gwlist;
>  	/* router node */
> -	struct lnet_peer_ni	*lr_gateway;
> +	struct lnet_peer_ni    *lr_gateway;
>  	/* remote network number */
>  	u32			lr_net;
>  	/* sequence for round-robin */
> @@ -754,9 +754,9 @@ struct lnet_rtrbufpool {
>  };
>  
>  struct lnet_rtrbuf {
> -	struct list_head	 rb_list;	/* chain on rbp_bufs */
> -	struct lnet_rtrbufpool	*rb_pool;	/* owning pool */
> -	struct bio_vec		 rb_kiov[0];	/* the buffer space */
> +	struct list_head	rb_list;	/* chain on rbp_bufs */
> +	struct lnet_rtrbufpool *rb_pool;	/* owning pool */
> +	struct bio_vec		rb_kiov[0];	/* the buffer space */
>  };
>  
>  #define LNET_PEER_HASHSIZE	503	/* prime! */
> @@ -904,58 +904,58 @@ enum lnet_state {
>  
>  struct lnet {
>  	/* CPU partition table of LNet */
> -	struct cfs_cpt_table		 *ln_cpt_table;
> +	struct cfs_cpt_table	       *ln_cpt_table;
>  	/* number of CPTs in ln_cpt_table */
> -	unsigned int			  ln_cpt_number;
> -	unsigned int			  ln_cpt_bits;
> +	unsigned int			ln_cpt_number;
> +	unsigned int			ln_cpt_bits;
>  
>  	/* protect LNet resources (ME/MD/EQ) */
> -	struct cfs_percpt_lock		 *ln_res_lock;
> +	struct cfs_percpt_lock	       *ln_res_lock;
>  	/* # portals */
> -	int				  ln_nportals;
> +	int				ln_nportals;
>  	/* the vector of portals */
> -	struct lnet_portal		**ln_portals;
> +	struct lnet_portal	      **ln_portals;
>  	/* percpt ME containers */
> -	struct lnet_res_container	**ln_me_containers;
> +	struct lnet_res_container     **ln_me_containers;
>  	/* percpt MD container */
> -	struct lnet_res_container	**ln_md_containers;
> +	struct lnet_res_container     **ln_md_containers;
>  
>  	/* Event Queue container */
> -	struct lnet_res_container	  ln_eq_container;
> -	wait_queue_head_t		  ln_eq_waitq;
> -	spinlock_t			  ln_eq_wait_lock;
> -	unsigned int			  ln_remote_nets_hbits;
> +	struct lnet_res_container	ln_eq_container;
> +	wait_queue_head_t		ln_eq_waitq;
> +	spinlock_t			ln_eq_wait_lock;
> +	unsigned int			ln_remote_nets_hbits;
>  
>  	/* protect NI, peer table, credits, routers, rtrbuf... */
> -	struct cfs_percpt_lock		 *ln_net_lock;
> +	struct cfs_percpt_lock	       *ln_net_lock;
>  	/* percpt message containers for active/finalizing/freed message */
> -	struct lnet_msg_container	**ln_msg_containers;
> -	struct lnet_counters		**ln_counters;
> -	struct lnet_peer_table		**ln_peer_tables;
> +	struct lnet_msg_container     **ln_msg_containers;
> +	struct lnet_counters	      **ln_counters;
> +	struct lnet_peer_table	      **ln_peer_tables;
>  	/* list of peer nis not on a local network */
>  	struct list_head		ln_remote_peer_ni_list;
>  	/* failure simulation */
> -	struct list_head		  ln_test_peers;
> -	struct list_head		  ln_drop_rules;
> -	struct list_head		  ln_delay_rules;
> +	struct list_head		ln_test_peers;
> +	struct list_head		ln_drop_rules;
> +	struct list_head		ln_delay_rules;
>  
>  	/* LND instances */
>  	struct list_head		ln_nets;
>  	/* network zombie list */
>  	struct list_head		ln_net_zombie;
>  	/* the loopback NI */
> -	struct lnet_ni			*ln_loni;
> +	struct lnet_ni		       *ln_loni;
>  
>  	/* remote networks with routes to them */
> -	struct list_head		 *ln_remote_nets_hash;
> +	struct list_head	       *ln_remote_nets_hash;
>  	/* validity stamp */
> -	u64				  ln_remote_nets_version;
> +	u64				ln_remote_nets_version;
>  	/* list of all known routers */
> -	struct list_head		  ln_routers;
> +	struct list_head		ln_routers;
>  	/* validity stamp */
> -	u64				  ln_routers_version;
> +	u64				ln_routers_version;
>  	/* percpt router buffer pools */
> -	struct lnet_rtrbufpool		**ln_rtrpools;
> +	struct lnet_rtrbufpool	      **ln_rtrpools;
>  
>  	/*
>  	 * Ping target / Push source
> @@ -964,9 +964,9 @@ struct lnet {
>  	 * ln_ping_target is protected against concurrent updates by
>  	 * ln_api_mutex.
>  	 */
> -	struct lnet_handle_md		  ln_ping_target_md;
> -	struct lnet_handle_eq		  ln_ping_target_eq;
> -	struct lnet_ping_buffer		 *ln_ping_target;
> +	struct lnet_handle_md		ln_ping_target_md;
> +	struct lnet_handle_eq		ln_ping_target_eq;
> +	struct lnet_ping_buffer	       *ln_ping_target;
>  	atomic_t			ln_ping_target_seqno;
>  
>  	/*
> @@ -979,7 +979,7 @@ struct lnet {
>  	 */
>  	struct lnet_handle_eq		ln_push_target_eq;
>  	struct lnet_handle_md		ln_push_target_md;
> -	struct lnet_ping_buffer		*ln_push_target;
> +	struct lnet_ping_buffer	       *ln_push_target;
>  	int				ln_push_target_nnis;
>  
>  	/* discovery event queue handle */
> @@ -996,35 +996,35 @@ struct lnet {
>  	int				ln_dc_state;
>  
>  	/* router checker startup/shutdown state */
> -	enum lnet_rc_state		  ln_rc_state;
> +	enum lnet_rc_state		ln_rc_state;
>  	/* router checker's event queue */
> -	struct lnet_handle_eq		  ln_rc_eqh;
> +	struct lnet_handle_eq		ln_rc_eqh;
>  	/* rcd still pending on net */
> -	struct list_head		  ln_rcd_deathrow;
> +	struct list_head		ln_rcd_deathrow;
>  	/* rcd ready for free */
> -	struct list_head		  ln_rcd_zombie;
> +	struct list_head		ln_rcd_zombie;
>  	/* serialise startup/shutdown */
> -	struct completion		  ln_rc_signal;
> +	struct completion		ln_rc_signal;
>  
> -	struct mutex			  ln_api_mutex;
> -	struct mutex			  ln_lnd_mutex;
> -	struct mutex			  ln_delay_mutex;
> +	struct mutex			ln_api_mutex;
> +	struct mutex			ln_lnd_mutex;
> +	struct mutex			ln_delay_mutex;
>  	/* Have I called LNetNIInit myself? */
> -	int				  ln_niinit_self;
> +	int				ln_niinit_self;
>  	/* LNetNIInit/LNetNIFini counter */
> -	int				  ln_refcount;
> +	int				ln_refcount;
>  	/* SHUTDOWN/RUNNING/STOPPING */
> -	enum lnet_state			  ln_state;
> +	enum lnet_state			ln_state;
>  
> -	int				  ln_routing;	/* am I a router? */
> -	lnet_pid_t			  ln_pid;	/* requested pid */
> +	int				ln_routing;	/* am I a router? */
> +	lnet_pid_t			ln_pid;		/* requested pid */
>  	/* uniquely identifies this ni in this epoch */
> -	u64				  ln_interface_cookie;
> +	u64				ln_interface_cookie;
>  	/* registered LNDs */
> -	struct list_head		  ln_lnds;
> +	struct list_head		ln_lnds;
>  
>  	/* test protocol compatibility flags */
> -	int				  ln_testprotocompat;
> +	int				ln_testprotocompat;
>  
>  	/*
>  	 * 0 - load the NIs from the mod params
> @@ -1032,14 +1032,14 @@ struct lnet {
>  	 * Reverse logic to ensure that other calls to LNetNIInit
>  	 * need no change
>  	 */
> -	bool				  ln_nis_from_mod_params;
> +	bool				ln_nis_from_mod_params;
>  
>  	/*
>  	 * waitq for router checker.  As long as there are no routes in
>  	 * the list, the router checker will sleep on this queue.  when
>  	 * routes are added the thread will wake up
>  	 */
> -	wait_queue_head_t		  ln_rc_waitq;
> +	wait_queue_head_t		ln_rc_waitq;
>  
>  };
>  
> diff --git a/drivers/staging/lustre/lnet/lnet/acceptor.c b/drivers/staging/lustre/lnet/lnet/acceptor.c
> index aa28a9f..83ab3b1 100644
> --- a/drivers/staging/lustre/lnet/lnet/acceptor.c
> +++ b/drivers/staging/lustre/lnet/lnet/acceptor.c
> @@ -36,9 +36,9 @@
>  #include <net/sock.h>
>  #include <linux/lnet/lib-lnet.h>
>  
> -static int   accept_port    = 988;
> -static int   accept_backlog = 127;
> -static int   accept_timeout = 5;
> +static int accept_port = 988;
> +static int accept_backlog = 127;
> +static int accept_timeout = 5;
>  
>  static struct {
>  	int			pta_shutdown;
> @@ -167,9 +167,9 @@
>  
>  		BUILD_BUG_ON(LNET_PROTO_ACCEPTOR_VERSION != 1);
>  
> -		cr.acr_magic   = LNET_PROTO_ACCEPTOR_MAGIC;
> +		cr.acr_magic = LNET_PROTO_ACCEPTOR_MAGIC;
>  		cr.acr_version = LNET_PROTO_ACCEPTOR_VERSION;
> -		cr.acr_nid     = peer_nid;
> +		cr.acr_nid = peer_nid;
>  
>  		if (the_lnet.ln_testprotocompat) {
>  			/* single-shot proto check */
> @@ -196,9 +196,9 @@
>  	rc = -EADDRINUSE;
>  	goto failed;
>  
> - failed_sock:
> +failed_sock:
>  	sock_release(sock);
> - failed:
> +failed:
>  	lnet_connect_console_error(rc, peer_nid, peer_ip, peer_port);
>  	return rc;
>  }
> @@ -297,7 +297,7 @@
>  		__swab64s(&cr.acr_nid);
>  
>  	ni = lnet_nid2ni_addref(cr.acr_nid);
> -	if (!ni ||	       /* no matching net */
> +	if (!ni ||			/* no matching net */
>  	    ni->ni_nid != cr.acr_nid) { /* right NET, wrong NID! */
>  		if (ni)
>  			lnet_ni_decref(ni);
> diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c
> index be77e10..64b8bef9 100644
> --- a/drivers/staging/lustre/lnet/lnet/api-ni.c
> +++ b/drivers/staging/lustre/lnet/lnet/api-ni.c
> @@ -47,7 +47,7 @@
>   * before module init completes. The mutex needs to be ready for use then.
>   */
>  struct lnet the_lnet = {
> -	.ln_api_mutex = __MUTEX_INITIALIZER(the_lnet.ln_api_mutex),
> +	.ln_api_mutex		= __MUTEX_INITIALIZER(the_lnet.ln_api_mutex),
>  };		/* THE state of the network */
>  EXPORT_SYMBOL(the_lnet);
>  
> @@ -281,7 +281,7 @@ static int lnet_discover(struct lnet_process_id id, u32 force,
>  
>  	return 0;
>  
> - failed:
> +failed:
>  	lnet_destroy_locks();
>  	return -ENOMEM;
>  }
> @@ -476,17 +476,17 @@ static void lnet_assert_wire_constants(void)
>  	lnet_net_lock(LNET_LOCK_EX);
>  
>  	cfs_percpt_for_each(ctr, i, the_lnet.ln_counters) {
> -		counters->msgs_max     += ctr->msgs_max;
> -		counters->msgs_alloc   += ctr->msgs_alloc;
> -		counters->errors       += ctr->errors;
> -		counters->send_count   += ctr->send_count;
> -		counters->recv_count   += ctr->recv_count;
> -		counters->route_count  += ctr->route_count;
> -		counters->drop_count   += ctr->drop_count;
> -		counters->send_length  += ctr->send_length;
> -		counters->recv_length  += ctr->recv_length;
> +		counters->msgs_max += ctr->msgs_max;
> +		counters->msgs_alloc += ctr->msgs_alloc;
> +		counters->errors += ctr->errors;
> +		counters->send_count += ctr->send_count;
> +		counters->recv_count += ctr->recv_count;
> +		counters->route_count += ctr->route_count;
> +		counters->drop_count += ctr->drop_count;
> +		counters->send_length += ctr->send_length;
> +		counters->recv_length += ctr->recv_length;
>  		counters->route_length += ctr->route_length;
> -		counters->drop_length  += ctr->drop_length;
> +		counters->drop_length += ctr->drop_length;
>  	}
>  	lnet_net_unlock(LNET_LOCK_EX);
>  }
> @@ -755,7 +755,7 @@ struct lnet_libhandle *
>  
>  	return 0;
>  
> - failed:
> +failed:
>  	lnet_unprepare();
>  	return rc;
>  }
> @@ -942,7 +942,7 @@ struct lnet_net *
>  	return false;
>  }
>  
> -struct lnet_ni  *
> +struct lnet_ni *
>  lnet_nid2ni_locked(lnet_nid_t nid, int cpt)
>  {
>  	struct lnet_net *net;
> @@ -1146,8 +1146,10 @@ struct lnet_ping_buffer *
>  		       struct lnet_handle_md *ping_mdh,
>  		       int ni_count, bool set_eq)
>  {
> -	struct lnet_process_id id = { .nid = LNET_NID_ANY,
> -				      .pid = LNET_PID_ANY };
> +	struct lnet_process_id id = {
> +		.nid = LNET_NID_ANY,
> +		.pid = LNET_PID_ANY
> +	};
>  	struct lnet_handle_me me_handle;
>  	struct lnet_md md = { NULL };
>  	int rc, rc2;
> @@ -1244,7 +1246,7 @@ struct lnet_ping_buffer *
>  
>  			lnet_ni_lock(ni);
>  			ns->ns_status = ni->ni_status ?
> -					 ni->ni_status->ns_status :
> +					ni->ni_status->ns_status :
>  						LNET_NI_STATUS_UP;
>  			ni->ni_status = ns;
>  			lnet_ni_unlock(ni);
> @@ -1322,7 +1324,10 @@ struct lnet_ping_buffer *
>  /* Resize the push target. */
>  int lnet_push_target_resize(void)
>  {
> -	struct lnet_process_id id = { LNET_NID_ANY, LNET_PID_ANY };
> +	struct lnet_process_id id = {
> +		.nid	= LNET_NID_ANY,
> +		.pid	= LNET_PID_ANY
> +	};
>  	struct lnet_md md = { NULL };
>  	struct lnet_handle_me meh;
>  	struct lnet_handle_md mdh;
> @@ -1353,13 +1358,13 @@ int lnet_push_target_resize(void)
>  	}
>  
>  	/* initialize md content */
> -	md.start     = &pbuf->pb_info;
> -	md.length    = LNET_PING_INFO_SIZE(nnis);
> +	md.start = &pbuf->pb_info;
> +	md.length = LNET_PING_INFO_SIZE(nnis);
>  	md.threshold = LNET_MD_THRESH_INF;
> -	md.max_size  = 0;
> -	md.options   = LNET_MD_OP_PUT | LNET_MD_TRUNCATE |
> -		       LNET_MD_MANAGE_REMOTE;
> -	md.user_ptr  = pbuf;
> +	md.max_size = 0;
> +	md.options = LNET_MD_OP_PUT | LNET_MD_TRUNCATE |
> +		     LNET_MD_MANAGE_REMOTE;
> +	md.user_ptr = pbuf;
>  	md.eq_handle = the_lnet.ln_push_target_eq;
>  
>  	rc = LNetMDAttach(meh, md, LNET_RETAIN, &mdh);
> @@ -1428,7 +1433,6 @@ static int lnet_push_target_init(void)
>  	the_lnet.ln_push_target_nnis = LNET_INTERFACES_MIN;
>  
>  	rc = lnet_push_target_resize();
> -
>  	if (rc) {
>  		LNetEQFree(the_lnet.ln_push_target_eq);
>  		LNetInvalidateEQHandle(&the_lnet.ln_push_target_eq);
> @@ -1723,10 +1727,10 @@ static void lnet_push_target_fini(void)
>  
>  	CDEBUG(D_LNI, "Added LNI %s [%d/%d/%d/%d]\n",
>  	       libcfs_nid2str(ni->ni_nid),
> -		ni->ni_net->net_tunables.lct_peer_tx_credits,
> +	       ni->ni_net->net_tunables.lct_peer_tx_credits,
>  	       lnet_ni_tq_credits(ni) * LNET_CPT_NUMBER,
>  	       ni->ni_net->net_tunables.lct_peer_rtr_credits,
> -		ni->ni_net->net_tunables.lct_peer_timeout);
> +	       ni->ni_net->net_tunables.lct_peer_timeout);
>  
>  	return 0;
>  failed0:
> @@ -1932,7 +1936,6 @@ static void lnet_push_target_fini(void)
>  		list_del_init(&net->net_list);
>  
>  		rc = lnet_startup_lndnet(net, NULL);
> -
>  		if (rc < 0)
>  			goto failed;
>  
> @@ -1963,8 +1966,8 @@ int lnet_lib_init(void)
>  	lnet_assert_wire_constants();
>  
>  	/* refer to global cfs_cpt_tab for now */
> -	the_lnet.ln_cpt_table	= cfs_cpt_tab;
> -	the_lnet.ln_cpt_number	= cfs_cpt_number(cfs_cpt_tab);
> +	the_lnet.ln_cpt_table = cfs_cpt_tab;
> +	the_lnet.ln_cpt_number = cfs_cpt_number(cfs_cpt_tab);
>  
>  	LASSERT(the_lnet.ln_cpt_number > 0);
>  	if (the_lnet.ln_cpt_number > LNET_CPT_MAX) {
> @@ -2409,7 +2412,7 @@ struct lnet_ni *
>  	if (!prev) {
>  		if (!net)
>  			net = list_entry(the_lnet.ln_nets.next, struct lnet_net,
> -					net_list);
> +					 net_list);
>  		ni = list_entry(net->net_ni_list.next, struct lnet_ni,
>  				ni_netlist);
>  
> @@ -2455,7 +2458,6 @@ struct lnet_ni *
>  	cpt = lnet_net_lock_current();
>  
>  	ni = lnet_get_ni_idx_locked(idx);
> -
>  	if (ni) {
>  		rc = 0;
>  		lnet_ni_lock(ni);
> @@ -2483,7 +2485,6 @@ struct lnet_ni *
>  	cpt = lnet_net_lock_current();
>  
>  	ni = lnet_get_ni_idx_locked(cfg_ni->lic_idx);
> -
>  	if (ni) {
>  		rc = 0;
>  		lnet_ni_lock(ni);
> @@ -2705,7 +2706,7 @@ int lnet_dyn_del_ni(struct lnet_ioctl_config_ni *conf)
>  	struct lnet_ni *ni;
>  	u32 net_id = LNET_NIDNET(conf->lic_nid);
>  	struct lnet_ping_buffer *pbuf;
> -	struct lnet_handle_md  ping_mdh;
> +	struct lnet_handle_md ping_mdh;
>  	int rc;
>  	int net_count;
>  	u32 addr;
> @@ -2912,7 +2913,7 @@ u32 lnet_get_dlc_seq_locked(void)
>  {
>  	struct libcfs_ioctl_data *data = arg;
>  	struct lnet_ioctl_config_data *config;
> -	struct lnet_process_id id = {0};
> +	struct lnet_process_id id = { 0 };
>  	struct lnet_ni *ni;
>  	int rc;
>  
> @@ -3357,7 +3358,7 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
>  	int which;
>  	int unlinked = 0;
>  	int replied = 0;
> -	const signed long a_long_time = 60*HZ;
> +	const signed long a_long_time = 60 * HZ;
>  	struct lnet_ping_buffer *pbuf;
>  	struct lnet_process_id tmpid;
>  	int i;
> @@ -3384,12 +3385,12 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
>  	}
>  
>  	/* initialize md content */
> -	md.start     = &pbuf->pb_info;
> -	md.length    = LNET_PING_INFO_SIZE(n_ids);
> +	md.start = &pbuf->pb_info;
> +	md.length = LNET_PING_INFO_SIZE(n_ids);
>  	md.threshold = 2; /* GET/REPLY */
> -	md.max_size  = 0;
> -	md.options   = LNET_MD_TRUNCATE;
> -	md.user_ptr  = NULL;
> +	md.max_size = 0;
> +	md.options = LNET_MD_TRUNCATE;
> +	md.user_ptr = NULL;
>  	md.eq_handle = eqh;
>  
>  	rc = LNetMDBind(md, LNET_UNLINK, &mdh);
> @@ -3401,7 +3402,6 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
>  	rc = LNetGet(LNET_NID_ANY, mdh, id,
>  		     LNET_RESERVED_PORTAL,
>  		     LNET_PROTO_PING_MATCHBITS, 0);
> -
>  	if (rc) {
>  		/* Don't CERROR; this could be deliberate! */
>  		rc2 = LNetMDUnlink(mdh);
> @@ -3414,7 +3414,6 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
>  
>  	do {
>  		/* MUST block for unlink to complete */
> -
>  		rc2 = LNetEQPoll(&eqh, 1, timeout, !unlinked,
>  				 &event, &which);
>  
> @@ -3510,13 +3509,13 @@ static int lnet_ping(struct lnet_process_id id, signed long timeout,
>  	}
>  	rc = pbuf->pb_info.pi_nnis;
>  
> - fail_free_eq:
> +fail_free_eq:
>  	rc2 = LNetEQFree(eqh);
>  	if (rc2)
>  		CERROR("rc2 %d\n", rc2);
>  	LASSERT(!rc2);
>  
> - fail_ping_buffer_decref:
> +fail_ping_buffer_decref:
>  	lnet_ping_buffer_decref(pbuf);
>  	return rc;
>  }
> diff --git a/drivers/staging/lustre/lnet/lnet/config.c b/drivers/staging/lustre/lnet/lnet/config.c
> index 16c42bf..ecf656b 100644
> --- a/drivers/staging/lustre/lnet/lnet/config.c
> +++ b/drivers/staging/lustre/lnet/lnet/config.c
> @@ -38,15 +38,15 @@
>  #include <linux/lnet/lib-lnet.h>
>  #include <linux/inetdevice.h>
>  
> -struct lnet_text_buf {	    /* tmp struct for parsing routes */
> -	struct list_head ltb_list;	/* stash on lists */
> -	int ltb_size;	/* allocated size */
> -	char ltb_text[0];     /* text buffer */
> +struct lnet_text_buf {		/* tmp struct for parsing routes */
> +	struct list_head	ltb_list;	/* stash on lists */
> +	int			ltb_size;	/* allocated size */
> +	char			ltb_text[0];	/* text buffer */
>  };
>  
> -static int lnet_tbnob;			/* track text buf allocation */
> -#define LNET_MAX_TEXTBUF_NOB     (64 << 10)	/* bound allocation */
> -#define LNET_SINGLE_TEXTBUF_NOB  (4 << 10)
> +static int lnet_tbnob;		/* track text buf allocation */
> +#define LNET_MAX_TEXTBUF_NOB	(64 << 10)	/* bound allocation */
> +#define LNET_SINGLE_TEXTBUF_NOB	(4 << 10)
>  
>  #define SPACESTR " \t\v\r\n"
>  #define DELIMITERS ":()[]"
> @@ -126,6 +126,7 @@ struct lnet_text_buf {	    /* tmp struct for parsing routes */
>  lnet_ni_unique_ni(char *iface_list[LNET_INTERFACES_NUM], char *iface)
>  {
>  	int i;
> +
>  	for (i = 0; i < LNET_INTERFACES_NUM; i++) {
>  		if (iface_list[i] &&
>  		    strncmp(iface_list[i], iface, strlen(iface)) == 0)
> @@ -554,7 +555,7 @@ struct lnet_ni *
>  		goto failed;
>  
>  	return ni;
> - failed:
> +failed:
>  	lnet_ni_free(ni);
>  	return NULL;
>  }
> @@ -743,9 +744,9 @@ struct lnet_ni *
>  					goto failed_syntax;
>  				}
>  				rc = cfs_expr_list_parse(elstr,
> -							nistr - elstr + 1,
> -							0, LNET_CPT_NUMBER - 1,
> -							&ni_el);
> +							 nistr - elstr + 1,
> +							 0, LNET_CPT_NUMBER - 1,
> +							 &ni_el);
>  				if (rc != 0) {
>  					str = elstr;
>  					goto failed_syntax;
> @@ -812,9 +813,9 @@ struct lnet_ni *
>  	kfree(tokens);
>  	return nnets;
>  
> - failed_syntax:
> +failed_syntax:
>  	lnet_syntax("networks", networks, (int)(str - tokens), strlen(str));
> - failed:
> +failed:
>  	/* free the net list and all the nis on each net */
>  	while (!list_empty(netlist)) {
>  		net = list_entry(netlist->next, struct lnet_net, net_list);
> @@ -1038,7 +1039,7 @@ struct lnet_ni *
>  	list_splice(&pending, tbs->prev);
>  	return 1;
>  
> - failed:
> +failed:
>  	lnet_free_text_bufs(&pending);
>  	return -EINVAL;
>  }
> @@ -1093,7 +1094,6 @@ struct lnet_ni *
>  {
>  	/* static scratch buffer OK (single threaded) */
>  	static char cmd[LNET_SINGLE_TEXTBUF_NOB];
> -
>  	struct list_head nets;
>  	struct list_head gateways;
>  	struct list_head *tmp1;
> @@ -1226,9 +1226,9 @@ struct lnet_ni *
>  	myrc = 0;
>  	goto out;
>  
> - token_error:
> +token_error:
>  	lnet_syntax("routes", cmd, (int)(token - str), strlen(token));
> - out:
> +out:
>  	lnet_free_text_bufs(&nets);
>  	lnet_free_text_bufs(&gateways);
>  	return myrc;
> @@ -1298,7 +1298,6 @@ struct lnet_ni *
>  lnet_match_network_tokens(char *net_entry, u32 *ipaddrs, int nip)
>  {
>  	static char tokens[LNET_SINGLE_TEXTBUF_NOB];
> -
>  	int matched = 0;
>  	int ntokens = 0;
>  	int len;
> @@ -1451,7 +1450,6 @@ struct lnet_ni *
>  {
>  	static char networks[LNET_SINGLE_TEXTBUF_NOB];
>  	static char source[LNET_SINGLE_TEXTBUF_NOB];
> -
>  	struct list_head raw_entries;
>  	struct list_head matched_nets;
>  	struct list_head current_nets;
> @@ -1549,7 +1547,7 @@ struct lnet_ni *
>  		count++;
>  	}
>  
> - out:
> +out:
>  	lnet_free_text_bufs(&raw_entries);
>  	lnet_free_text_bufs(&matched_nets);
>  	lnet_free_text_bufs(&current_nets);
> diff --git a/drivers/staging/lustre/lnet/lnet/lib-eq.c b/drivers/staging/lustre/lnet/lnet/lib-eq.c
> index f085388..f500b49 100644
> --- a/drivers/staging/lustre/lnet/lnet/lib-eq.c
> +++ b/drivers/staging/lustre/lnet/lnet/lib-eq.c
> @@ -198,7 +198,7 @@
>  	lnet_res_lh_invalidate(&eq->eq_lh);
>  	list_del(&eq->eq_list);
>  	kfree(eq);
> - out:
> +out:
>  	lnet_eq_wait_unlock();
>  	lnet_res_unlock(LNET_LOCK_EX);
>  
> diff --git a/drivers/staging/lustre/lnet/lnet/lib-move.c b/drivers/staging/lustre/lnet/lnet/lib-move.c
> index 639f67ed..92c6a34 100644
> --- a/drivers/staging/lustre/lnet/lnet/lib-move.c
> +++ b/drivers/staging/lustre/lnet/lnet/lib-move.c
> @@ -489,7 +489,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  
>  		if (mlen) {
>  			niov = msg->msg_niov;
> -			iov  = msg->msg_iov;
> +			iov = msg->msg_iov;
>  			kiov = msg->msg_kiov;
>  
>  			LASSERT(niov > 0);
> @@ -541,12 +541,12 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  		lnet_setpayloadbuffer(msg);
>  
>  	memset(&msg->msg_hdr, 0, sizeof(msg->msg_hdr));
> -	msg->msg_hdr.type	   = cpu_to_le32(type);
> +	msg->msg_hdr.type = cpu_to_le32(type);
>  	/* dest_nid will be overwritten by lnet_select_pathway() */
> -	msg->msg_hdr.dest_nid       = cpu_to_le64(target.nid);
> -	msg->msg_hdr.dest_pid       = cpu_to_le32(target.pid);
> +	msg->msg_hdr.dest_nid = cpu_to_le64(target.nid);
> +	msg->msg_hdr.dest_pid = cpu_to_le32(target.pid);
>  	/* src_nid will be set later */
> -	msg->msg_hdr.src_pid	= cpu_to_le32(the_lnet.ln_pid);
> +	msg->msg_hdr.src_pid = cpu_to_le32(the_lnet.ln_pid);
>  	msg->msg_hdr.payload_length = cpu_to_le32(len);
>  }
>  
> @@ -635,7 +635,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	}
>  
>  	deadline = lp->lpni_last_alive +
> -		lp->lpni_net->net_tunables.lct_peer_timeout;
> +		   lp->lpni_net->net_tunables.lct_peer_timeout;
>  	alive = deadline > now;
>  
>  	/* Update obsolete lpni_alive except for routers assumed to be dead
> @@ -911,7 +911,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  {
>  	struct lnet_peer_ni *txpeer = msg->msg_txpeer;
>  	struct lnet_msg *msg2;
> -	struct lnet_ni	*txni = msg->msg_txni;
> +	struct lnet_ni *txni = msg->msg_txni;
>  
>  	if (msg->msg_txcredit) {
>  		struct lnet_ni *ni = msg->msg_txni;
> @@ -1044,7 +1044,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  lnet_return_rx_credits_locked(struct lnet_msg *msg)
>  {
>  	struct lnet_peer_ni *rxpeer = msg->msg_rxpeer;
> -	struct lnet_ni	*rxni = msg->msg_rxni;
> +	struct lnet_ni *rxni = msg->msg_rxni;
>  	struct lnet_msg *msg2;
>  
>  	if (msg->msg_rtrcredit) {
> @@ -1796,7 +1796,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	/* if we still can't find a peer ni then we can't reach it */
>  	if (!best_lpni) {
>  		u32 net_id = peer_net ? peer_net->lpn_net_id :
> -			LNET_NIDNET(dst_nid);
> +					LNET_NIDNET(dst_nid);
>  
>  		lnet_net_unlock(cpt);
>  		LCONSOLE_WARN("no peer_ni found on peer net %s\n",
> @@ -1912,7 +1912,6 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	}
>  
>  	rc = lnet_post_send_locked(msg, 0);
> -
>  	if (!rc)
>  		CDEBUG(D_NET, "TRACE: %s(%s:%s) -> %s(%s:%s) : %s\n",
>  		       libcfs_nid2str(msg->msg_hdr.src_nid),
> @@ -1931,8 +1930,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  int
>  lnet_send(lnet_nid_t src_nid, struct lnet_msg *msg, lnet_nid_t rtr_nid)
>  {
> -	lnet_nid_t		dst_nid = msg->msg_target.nid;
> -	int			rc;
> +	lnet_nid_t dst_nid = msg->msg_target.nid;
> +	int rc;
>  
>  	/*
>  	 * NB: rtr_nid is set to LNET_NID_ANY for all current use-cases,
> @@ -2008,19 +2007,19 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	le32_to_cpus(&hdr->msg.put.offset);
>  
>  	/* Primary peer NID. */
> -	info.mi_id.nid	= msg->msg_initiator;
> -	info.mi_id.pid	= hdr->src_pid;
> -	info.mi_opc	= LNET_MD_OP_PUT;
> -	info.mi_portal	= hdr->msg.put.ptl_index;
> +	info.mi_id.nid = msg->msg_initiator;
> +	info.mi_id.pid = hdr->src_pid;
> +	info.mi_opc = LNET_MD_OP_PUT;
> +	info.mi_portal = hdr->msg.put.ptl_index;
>  	info.mi_rlength	= hdr->payload_length;
>  	info.mi_roffset	= hdr->msg.put.offset;
> -	info.mi_mbits	= hdr->msg.put.match_bits;
> -	info.mi_cpt	= lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
> +	info.mi_mbits = hdr->msg.put.match_bits;
> +	info.mi_cpt = lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
>  
>  	msg->msg_rx_ready_delay = !ni->ni_net->net_lnd->lnd_eager_recv;
>  	ready_delay = msg->msg_rx_ready_delay;
>  
> - again:
> +again:
>  	rc = lnet_ptl_match_md(&info, msg);
>  	switch (rc) {
>  	default:
> @@ -2069,17 +2068,17 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	le32_to_cpus(&hdr->msg.get.sink_length);
>  	le32_to_cpus(&hdr->msg.get.src_offset);
>  
> -	source_id.nid   = hdr->src_nid;
> -	source_id.pid   = hdr->src_pid;
> +	source_id.nid = hdr->src_nid;
> +	source_id.pid = hdr->src_pid;
>  	/* Primary peer NID */
> -	info.mi_id.nid  = msg->msg_initiator;
> -	info.mi_id.pid  = hdr->src_pid;
> -	info.mi_opc     = LNET_MD_OP_GET;
> -	info.mi_portal  = hdr->msg.get.ptl_index;
> +	info.mi_id.nid = msg->msg_initiator;
> +	info.mi_id.pid = hdr->src_pid;
> +	info.mi_opc = LNET_MD_OP_GET;
> +	info.mi_portal = hdr->msg.get.ptl_index;
>  	info.mi_rlength = hdr->msg.get.sink_length;
>  	info.mi_roffset = hdr->msg.get.src_offset;
> -	info.mi_mbits   = hdr->msg.get.match_bits;
> -	info.mi_cpt	= lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
> +	info.mi_mbits = hdr->msg.get.match_bits;
> +	info.mi_cpt = lnet_cpt_of_nid(msg->msg_rxpeer->lpni_nid, ni);
>  
>  	rc = lnet_ptl_match_md(&info, msg);
>  	if (rc == LNET_MATCHMD_DROP) {
> @@ -2128,7 +2127,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  {
>  	void *private = msg->msg_private;
>  	struct lnet_hdr *hdr = &msg->msg_hdr;
> -	struct lnet_process_id src = {0};
> +	struct lnet_process_id src = { 0 };
>  	struct lnet_libmd *md;
>  	int rlength;
>  	int mlength;
> @@ -2192,7 +2191,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  lnet_parse_ack(struct lnet_ni *ni, struct lnet_msg *msg)
>  {
>  	struct lnet_hdr *hdr = &msg->msg_hdr;
> -	struct lnet_process_id src = {0};
> +	struct lnet_process_id src = { 0 };
>  	struct lnet_libmd *md;
>  	int cpt;
>  
> @@ -2316,8 +2315,8 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  void
>  lnet_print_hdr(struct lnet_hdr *hdr)
>  {
> -	struct lnet_process_id src = {0};
> -	struct lnet_process_id dst = {0};
> +	struct lnet_process_id src = { 0 };
> +	struct lnet_process_id dst = { 0 };
>  	char *type_str = lnet_msgtyp2str(hdr->type);
>  
>  	src.nid = hdr->src_nid;
> @@ -2533,17 +2532,16 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  	/* for building message event */
>  	msg->msg_from = from_nid;
>  	if (!for_me) {
> -		msg->msg_target.pid	= dest_pid;
> -		msg->msg_target.nid	= dest_nid;
> -		msg->msg_routing	= 1;
> -
> +		msg->msg_target.pid = dest_pid;
> +		msg->msg_target.nid = dest_nid;
> +		msg->msg_routing = 1;
>  	} else {
>  		/* convert common msg->hdr fields to host byteorder */
> -		msg->msg_hdr.type	= type;
> -		msg->msg_hdr.src_nid	= src_nid;
> +		msg->msg_hdr.type = type;
> +		msg->msg_hdr.src_nid = src_nid;
>  		le32_to_cpus(&msg->msg_hdr.src_pid);
> -		msg->msg_hdr.dest_nid	= dest_nid;
> -		msg->msg_hdr.dest_pid	= dest_pid;
> +		msg->msg_hdr.dest_nid = dest_nid;
> +		msg->msg_hdr.dest_pid = dest_pid;
>  		msg->msg_hdr.payload_length = payload_length;
>  	}
>  
> @@ -2609,11 +2607,11 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  		goto free_drop;
>  	return 0;
>  
> - free_drop:
> +free_drop:
>  	LASSERT(!msg->msg_md);
>  	lnet_finalize(msg, rc);
>  
> - drop:
> +drop:
>  	lnet_drop_message(ni, cpt, private, payload_length, type);
>  	return 0;
>  }
> @@ -2623,7 +2621,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats,
>  lnet_drop_delayed_msg_list(struct list_head *head, char *reason)
>  {
>  	while (!list_empty(head)) {
> -		struct lnet_process_id id = {0};
> +		struct lnet_process_id id = { 0 };
>  		struct lnet_msg *msg;
>  
>  		msg = list_entry(head->next, struct lnet_msg, msg_list);
> @@ -2887,7 +2885,7 @@ struct lnet_msg *
>  
>  	return msg;
>  
> - drop:
> +drop:
>  	cpt = lnet_cpt_of_nid(peer_id.nid, ni);
>  
>  	lnet_net_lock(cpt);
> diff --git a/drivers/staging/lustre/lnet/lnet/lib-msg.c b/drivers/staging/lustre/lnet/lnet/lib-msg.c
> index 7f58cfe..b9e9257 100644
> --- a/drivers/staging/lustre/lnet/lnet/lib-msg.c
> +++ b/drivers/staging/lustre/lnet/lnet/lib-msg.c
> @@ -44,9 +44,9 @@
>  {
>  	memset(ev, 0, sizeof(*ev));
>  
> -	ev->status   = 0;
> +	ev->status = 0;
>  	ev->unlinked = 1;
> -	ev->type     = LNET_EVENT_UNLINK;
> +	ev->type = LNET_EVENT_UNLINK;
>  	lnet_md_deconstruct(md, &ev->md);
>  	lnet_md2handle(&ev->md_handle, md);
>  }
> @@ -58,7 +58,7 @@
>  lnet_build_msg_event(struct lnet_msg *msg, enum lnet_event_kind ev_type)
>  {
>  	struct lnet_hdr *hdr = &msg->msg_hdr;
> -	struct lnet_event *ev  = &msg->msg_ev;
> +	struct lnet_event *ev = &msg->msg_ev;
>  
>  	LASSERT(!msg->msg_routing);
>  
> @@ -67,27 +67,27 @@
>  
>  	if (ev_type == LNET_EVENT_SEND) {
>  		/* event for active message */
> -		ev->target.nid    = le64_to_cpu(hdr->dest_nid);
> -		ev->target.pid    = le32_to_cpu(hdr->dest_pid);
> +		ev->target.nid = le64_to_cpu(hdr->dest_nid);
> +		ev->target.pid = le32_to_cpu(hdr->dest_pid);
>  		ev->initiator.nid = LNET_NID_ANY;
>  		ev->initiator.pid = the_lnet.ln_pid;
> -		ev->source.nid	  = LNET_NID_ANY;
> -		ev->source.pid    = the_lnet.ln_pid;
> -		ev->sender        = LNET_NID_ANY;
> +		ev->source.nid = LNET_NID_ANY;
> +		ev->source.pid = the_lnet.ln_pid;
> +		ev->sender = LNET_NID_ANY;
>  	} else {
>  		/* event for passive message */
> -		ev->target.pid    = hdr->dest_pid;
> -		ev->target.nid    = hdr->dest_nid;
> +		ev->target.pid = hdr->dest_pid;
> +		ev->target.nid = hdr->dest_nid;
>  		ev->initiator.pid = hdr->src_pid;
>  		/* Multi-Rail: resolve src_nid to "primary" peer NID */
>  		ev->initiator.nid = msg->msg_initiator;
>  		/* Multi-Rail: track source NID. */
> -		ev->source.pid	  = hdr->src_pid;
> -		ev->source.nid	  = hdr->src_nid;
> -		ev->rlength       = hdr->payload_length;
> -		ev->sender        = msg->msg_from;
> -		ev->mlength       = msg->msg_wanted;
> -		ev->offset        = msg->msg_offset;
> +		ev->source.pid = hdr->src_pid;
> +		ev->source.nid = hdr->src_nid;
> +		ev->rlength = hdr->payload_length;
> +		ev->sender = msg->msg_from;
> +		ev->mlength = msg->msg_wanted;
> +		ev->offset = msg->msg_offset;
>  	}
>  
>  	switch (ev_type) {
> @@ -95,20 +95,20 @@
>  		LBUG();
>  
>  	case LNET_EVENT_PUT: /* passive PUT */
> -		ev->pt_index   = hdr->msg.put.ptl_index;
> +		ev->pt_index = hdr->msg.put.ptl_index;
>  		ev->match_bits = hdr->msg.put.match_bits;
> -		ev->hdr_data   = hdr->msg.put.hdr_data;
> +		ev->hdr_data = hdr->msg.put.hdr_data;
>  		return;
>  
>  	case LNET_EVENT_GET: /* passive GET */
> -		ev->pt_index   = hdr->msg.get.ptl_index;
> +		ev->pt_index = hdr->msg.get.ptl_index;
>  		ev->match_bits = hdr->msg.get.match_bits;
> -		ev->hdr_data   = 0;
> +		ev->hdr_data = 0;
>  		return;
>  
>  	case LNET_EVENT_ACK: /* ACK */
>  		ev->match_bits = hdr->msg.ack.match_bits;
> -		ev->mlength    = hdr->msg.ack.mlength;
> +		ev->mlength = hdr->msg.ack.mlength;
>  		return;
>  
>  	case LNET_EVENT_REPLY: /* REPLY */
> @@ -116,21 +116,21 @@
>  
>  	case LNET_EVENT_SEND: /* active message */
>  		if (msg->msg_type == LNET_MSG_PUT) {
> -			ev->pt_index   = le32_to_cpu(hdr->msg.put.ptl_index);
> +			ev->pt_index = le32_to_cpu(hdr->msg.put.ptl_index);
>  			ev->match_bits = le64_to_cpu(hdr->msg.put.match_bits);
> -			ev->offset     = le32_to_cpu(hdr->msg.put.offset);
> -			ev->mlength    =
> -			ev->rlength    = le32_to_cpu(hdr->payload_length);
> -			ev->hdr_data   = le64_to_cpu(hdr->msg.put.hdr_data);
> +			ev->offset = le32_to_cpu(hdr->msg.put.offset);
> +			ev->mlength =
> +			ev->rlength = le32_to_cpu(hdr->payload_length);
> +			ev->hdr_data = le64_to_cpu(hdr->msg.put.hdr_data);
>  
>  		} else {
>  			LASSERT(msg->msg_type == LNET_MSG_GET);
> -			ev->pt_index   = le32_to_cpu(hdr->msg.get.ptl_index);
> +			ev->pt_index = le32_to_cpu(hdr->msg.get.ptl_index);
>  			ev->match_bits = le64_to_cpu(hdr->msg.get.match_bits);
> -			ev->mlength    =
> -			ev->rlength    = le32_to_cpu(hdr->msg.get.sink_length);
> -			ev->offset     = le32_to_cpu(hdr->msg.get.src_offset);
> -			ev->hdr_data   = 0;
> +			ev->mlength =
> +			ev->rlength = le32_to_cpu(hdr->msg.get.sink_length);
> +			ev->offset = le32_to_cpu(hdr->msg.get.src_offset);
> +			ev->hdr_data = 0;
>  		}
>  		return;
>  	}
> @@ -140,7 +140,7 @@
>  lnet_msg_commit(struct lnet_msg *msg, int cpt)
>  {
>  	struct lnet_msg_container *container = the_lnet.ln_msg_containers[cpt];
> -	struct lnet_counters *counters  = the_lnet.ln_counters[cpt];
> +	struct lnet_counters *counters = the_lnet.ln_counters[cpt];
>  
>  	/* routed message can be committed for both receiving and sending */
>  	LASSERT(!msg->msg_tx_committed);
> @@ -172,7 +172,7 @@
>  static void
>  lnet_msg_decommit_tx(struct lnet_msg *msg, int status)
>  {
> -	struct lnet_counters	*counters;
> +	struct lnet_counters *counters;
>  	struct lnet_event *ev = &msg->msg_ev;
>  
>  	LASSERT(msg->msg_tx_committed);
> @@ -294,7 +294,7 @@
>  	if (ev->type == LNET_EVENT_PUT || ev->type == LNET_EVENT_REPLY)
>  		counters->recv_length += msg->msg_wanted;
>  
> - out:
> +out:
>  	lnet_return_rx_credits_locked(msg);
>  	msg->msg_rx_committed = 0;
>  }
> @@ -375,7 +375,7 @@
>  
>  	unlink = lnet_md_unlinkable(md);
>  	if (md->md_eq) {
> -		msg->msg_ev.status   = status;
> +		msg->msg_ev.status = status;
>  		msg->msg_ev.unlinked = unlink;
>  		lnet_eq_enqueue_event(md->md_eq, &msg->msg_ev);
>  	}
> @@ -488,7 +488,7 @@
>  		lnet_res_unlock(cpt);
>  	}
>  
> - again:
> +again:
>  	rc = 0;
>  	if (!msg->msg_tx_committed && !msg->msg_rx_committed) {
>  		/* not committed to network yet */
> diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
> index fa391ee..ea232c7 100644
> --- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c
> +++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
> @@ -74,7 +74,7 @@
>  
>  	return 1;
>  
> - match:
> +match:
>  	if ((lnet_ptl_is_unique(ptl) && !unique) ||
>  	    (lnet_ptl_is_wildcard(ptl) && unique))
>  		return 0;
> @@ -387,7 +387,7 @@ struct list_head *
>  		head = &mtable->mt_mhash[LNET_MT_HASH_IGNORE];
>  	else
>  		head = lnet_mt_match_head(mtable, info->mi_id, info->mi_mbits);
> - again:
> +again:
>  	/* NB: only wildcard portal needs to return LNET_MATCHMD_EXHAUSTED */
>  	if (lnet_ptl_is_wildcard(the_lnet.ln_portals[mtable->mt_portal]))
>  		exhausted = LNET_MATCHMD_EXHAUSTED;
> @@ -634,9 +634,9 @@ struct list_head *
>  		       info->mi_mbits, info->mi_roffset, info->mi_rlength);
>  	}
>  	goto out0;
> - out1:
> +out1:
>  	lnet_res_unlock(mtable->mt_cpt);
> - out0:
> +out0:
>  	/* EXHAUSTED bit is only meaningful for internal functions */
>  	return rc & ~LNET_MATCHMD_EXHAUSTED;
>  }
> @@ -678,7 +678,7 @@ struct list_head *
>  
>  	lnet_ptl_lock(ptl);
>  	head = &ptl->ptl_msg_stealing;
> - again:
> +again:
>  	list_for_each_entry_safe(msg, tmp, head, msg_list) {
>  		struct lnet_match_info info;
>  		struct lnet_hdr *hdr;
> @@ -688,13 +688,13 @@ struct list_head *
>  
>  		hdr = &msg->msg_hdr;
>  		/* Multi-Rail: Primary peer NID */
> -		info.mi_id.nid  = msg->msg_initiator;
> -		info.mi_id.pid  = hdr->src_pid;
> -		info.mi_opc     = LNET_MD_OP_PUT;
> -		info.mi_portal  = hdr->msg.put.ptl_index;
> +		info.mi_id.nid = msg->msg_initiator;
> +		info.mi_id.pid = hdr->src_pid;
> +		info.mi_opc = LNET_MD_OP_PUT;
> +		info.mi_portal = hdr->msg.put.ptl_index;
>  		info.mi_rlength = hdr->payload_length;
>  		info.mi_roffset = hdr->msg.put.offset;
> -		info.mi_mbits   = hdr->msg.put.match_bits;
> +		info.mi_mbits = hdr->msg.put.match_bits;
>  
>  		rc = lnet_try_match_md(md, &info, msg);
>  
> @@ -824,7 +824,7 @@ struct list_head *
>  	}
>  
>  	return 0;
> - failed:
> +failed:
>  	lnet_ptl_cleanup(ptl);
>  	return -ENOMEM;
>  }
> diff --git a/drivers/staging/lustre/lnet/lnet/lib-socket.c b/drivers/staging/lustre/lnet/lnet/lib-socket.c
> index cff3d1e..095f9f5 100644
> --- a/drivers/staging/lustre/lnet/lnet/lib-socket.c
> +++ b/drivers/staging/lustre/lnet/lnet/lib-socket.c
> @@ -50,8 +50,11 @@
>  	long jiffies_left = timeout * msecs_to_jiffies(MSEC_PER_SEC);
>  	unsigned long then;
>  	struct timeval tv;
> -	struct kvec  iov = { .iov_base = buffer, .iov_len  = nob };
> -	struct msghdr msg = {NULL,};
> +	struct kvec iov = {
> +		.iov_base = buffer,
> +		.iov_len = nob
> +	};
> +	struct msghdr msg = { NULL, };
>  
>  	LASSERT(nob > 0);
>  	/*
> @@ -102,9 +105,9 @@
>  	long jiffies_left = timeout * msecs_to_jiffies(MSEC_PER_SEC);
>  	unsigned long then;
>  	struct timeval tv;
> -	struct kvec  iov = {
> +	struct kvec iov = {
>  		.iov_base = buffer,
> -		.iov_len  = nob
> +		.iov_len = nob
>  	};
>  	struct msghdr msg = {
>  		.msg_flags = 0
> diff --git a/drivers/staging/lustre/lnet/lnet/module.c b/drivers/staging/lustre/lnet/lnet/module.c
> index 4c08c74..f306569 100644
> --- a/drivers/staging/lustre/lnet/lnet/module.c
> +++ b/drivers/staging/lustre/lnet/lnet/module.c
> @@ -52,7 +52,6 @@
>  
>  	if (!the_lnet.ln_niinit_self) {
>  		rc = try_module_get(THIS_MODULE);
> -
>  		if (rc != 1)
>  			goto out;
>  
> @@ -229,7 +228,7 @@
>  }
>  
>  static struct notifier_block lnet_ioctl_handler = {
> -	.notifier_call = lnet_ioctl,
> +	.notifier_call		= lnet_ioctl,
>  };
>  
>  static int __init lnet_init(void)
> diff --git a/drivers/staging/lustre/lnet/lnet/net_fault.c b/drivers/staging/lustre/lnet/lnet/net_fault.c
> index e2c7468..4234ce1 100644
> --- a/drivers/staging/lustre/lnet/lnet/net_fault.c
> +++ b/drivers/staging/lustre/lnet/lnet/net_fault.c
> @@ -614,7 +614,6 @@ struct delay_daemon_data {
>  			rc = lnet_parse_local(ni, msg);
>  			if (!rc)
>  				continue;
> -
>  		} else {
>  			lnet_net_lock(cpt);
>  			rc = lnet_parse_forward_locked(ni, msg);
> diff --git a/drivers/staging/lustre/lnet/lnet/nidstrings.c b/drivers/staging/lustre/lnet/lnet/nidstrings.c
> index 0f2b75e..8f3d87c 100644
> --- a/drivers/staging/lustre/lnet/lnet/nidstrings.c
> +++ b/drivers/staging/lustre/lnet/lnet/nidstrings.c
> @@ -60,8 +60,8 @@
>   * between getting its string and using it.
>   */
>  
> -static char      libcfs_nidstrings[LNET_NIDSTR_COUNT][LNET_NIDSTR_SIZE];
> -static int       libcfs_nidstring_idx;
> +static char libcfs_nidstrings[LNET_NIDSTR_COUNT][LNET_NIDSTR_SIZE];
> +static int libcfs_nidstring_idx;
>  
>  static DEFINE_SPINLOCK(libcfs_nidstring_lock);
>  
> @@ -117,23 +117,23 @@ struct nidrange {
>  	 * Link to list of this structures which is built on nid range
>  	 * list parsing.
>  	 */
> -	struct list_head nr_link;
> +	struct list_head	nr_link;
>  	/**
>  	 * List head for addrrange::ar_link.
>  	 */
> -	struct list_head nr_addrranges;
> +	struct list_head	nr_addrranges;
>  	/**
>  	 * Flag indicating that *@<net> is found.
>  	 */
> -	int nr_all;
> +	int			nr_all;
>  	/**
>  	 * Pointer to corresponding element of libcfs_netstrfns.
>  	 */
> -	struct netstrfns *nr_netstrfns;
> +	struct netstrfns	*nr_netstrfns;
>  	/**
>  	 * Number of network. E.g. 5 if \<net\> is "elan5".
>  	 */
> -	int nr_netnum;
> +	int			nr_netnum;
>  };
>  
>  /**
> @@ -143,11 +143,11 @@ struct addrrange {
>  	/**
>  	 * Link to nidrange::nr_addrranges.
>  	 */
> -	struct list_head ar_link;
> +	struct list_head	ar_link;
>  	/**
>  	 * List head for cfs_expr_list::el_list.
>  	 */
> -	struct list_head ar_numaddr_ranges;
> +	struct list_head	ar_numaddr_ranges;
>  };
>  
>  /**
> @@ -471,8 +471,8 @@ static void cfs_ip_ar_min_max(struct addrrange *ar, u32 *min_nid,
>  	struct cfs_expr_list *el;
>  	struct cfs_range_expr *re;
>  	u32 tmp_ip_addr = 0;
> -	unsigned int min_ip[4] = {0};
> -	unsigned int max_ip[4] = {0};
> +	unsigned int min_ip[4] = { 0 };
> +	unsigned int max_ip[4] = { 0 };
>  	int re_count = 0;
>  
>  	list_for_each_entry(el, &ar->ar_numaddr_ranges, el_link) {
> @@ -794,11 +794,11 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
>  static int
>  libcfs_ip_str2addr(const char *str, int nob, u32 *addr)
>  {
> -	unsigned int	a;
> -	unsigned int	b;
> -	unsigned int	c;
> -	unsigned int	d;
> -	int		n = nob; /* XscanfX */
> +	unsigned int a;
> +	unsigned int b;
> +	unsigned int c;
> +	unsigned int d;
> +	int n = nob; /* XscanfX */
>  
>  	/* numeric IP? */
>  	if (sscanf(str, "%u.%u.%u.%u%n", &a, &b, &c, &d, &n) >= 4 &&
> @@ -897,7 +897,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
>  static int
>  libcfs_num_str2addr(const char *str, int nob, u32 *addr)
>  {
> -	int     n;
> +	int n;
>  
>  	n = nob;
>  	if (sscanf(str, "0x%x%n", addr, &n) >= 1 && n == nob)
> @@ -926,7 +926,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
>  libcfs_num_parse(char *str, int len, struct list_head *list)
>  {
>  	struct cfs_expr_list *el;
> -	int	rc;
> +	int rc;
>  
>  	rc = cfs_expr_list_parse(str, len, 0, MAX_NUMERIC_VALUE, &el);
>  	if (!rc)
> @@ -1049,7 +1049,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
>  static struct netstrfns *
>  libcfs_name2netstrfns(const char *name)
>  {
> -	int    i;
> +	int i;
>  
>  	for (i = 0; i < libcfs_nnetstrfns; i++)
>  		if (!strcmp(libcfs_netstrfns[i].nf_name, name))
> @@ -1194,7 +1194,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid,
>  u32
>  libcfs_str2net(const char *str)
>  {
> -	u32  net;
> +	u32 net;
>  
>  	if (libcfs_str2net_internal(str, &net))
>  		return net;
> diff --git a/drivers/staging/lustre/lnet/lnet/peer.c b/drivers/staging/lustre/lnet/lnet/peer.c
> index d807dd4..dfe1f3d 100644
> --- a/drivers/staging/lustre/lnet/lnet/peer.c
> +++ b/drivers/staging/lustre/lnet/lnet/peer.c
> @@ -586,8 +586,8 @@ void lnet_peer_uninit(void)
>  static struct lnet_peer_ni *
>  lnet_get_peer_ni_locked(struct lnet_peer_table *ptable, lnet_nid_t nid)
>  {
> -	struct list_head	*peers;
> -	struct lnet_peer_ni	*lp;
> +	struct list_head *peers;
> +	struct lnet_peer_ni *lp;
>  
>  	LASSERT(the_lnet.ln_state == LNET_STATE_RUNNING);
>  
> @@ -1069,6 +1069,7 @@ struct lnet_peer_net *
>  lnet_peer_get_net_locked(struct lnet_peer *peer, u32 net_id)
>  {
>  	struct lnet_peer_net *peer_net;
> +
>  	list_for_each_entry(peer_net, &peer->lp_peer_nets, lpn_peer_nets) {
>  		if (peer_net->lpn_net_id == net_id)
>  			return peer_net;
> @@ -1088,9 +1089,9 @@ struct lnet_peer_net *
>   */
>  static int
>  lnet_peer_attach_peer_ni(struct lnet_peer *lp,
> -				struct lnet_peer_net *lpn,
> -				struct lnet_peer_ni *lpni,
> -				unsigned int flags)
> +			 struct lnet_peer_net *lpn,
> +			 struct lnet_peer_ni *lpni,
> +			 unsigned int flags)
>  {
>  	struct lnet_peer_table *ptable;
>  
> @@ -2686,12 +2687,12 @@ static int lnet_peer_send_ping(struct lnet_peer *lp)
>  	}
>  
>  	/* initialize md content */
> -	md.start     = &pbuf->pb_info;
> -	md.length    = LNET_PING_INFO_SIZE(nnis);
> +	md.start = &pbuf->pb_info;
> +	md.length = LNET_PING_INFO_SIZE(nnis);
>  	md.threshold = 2; /* GET/REPLY */
> -	md.max_size  = 0;
> -	md.options   = LNET_MD_TRUNCATE;
> -	md.user_ptr  = lp;
> +	md.max_size = 0;
> +	md.options = LNET_MD_TRUNCATE;
> +	md.user_ptr = lp;
>  	md.eq_handle = the_lnet.ln_dc_eqh;
>  
>  	rc = LNetMDBind(md, LNET_UNLINK, &lp->lp_ping_mdh);
> @@ -2715,7 +2716,6 @@ static int lnet_peer_send_ping(struct lnet_peer *lp)
>  	rc = LNetGet(LNET_NID_ANY, lp->lp_ping_mdh, id,
>  		     LNET_RESERVED_PORTAL,
>  		     LNET_PROTO_PING_MATCHBITS, 0);
> -
>  	if (rc)
>  		goto fail_unlink_md;
>  
> @@ -2792,13 +2792,13 @@ static int lnet_peer_send_push(struct lnet_peer *lp)
>  	lnet_net_unlock(cpt);
>  
>  	/* Push source MD */
> -	md.start     = &pbuf->pb_info;
> -	md.length    = LNET_PING_INFO_SIZE(pbuf->pb_nnis);
> +	md.start = &pbuf->pb_info;
> +	md.length = LNET_PING_INFO_SIZE(pbuf->pb_nnis);
>  	md.threshold = 2; /* Put/Ack */
> -	md.max_size  = 0;
> -	md.options   = 0;
> +	md.max_size = 0;
> +	md.options = 0;
>  	md.eq_handle = the_lnet.ln_dc_eqh;
> -	md.user_ptr  = lp;
> +	md.user_ptr = lp;
>  
>  	rc = LNetMDBind(md, LNET_UNLINK, &lp->lp_push_mdh);
>  	if (rc) {
> @@ -2821,7 +2821,6 @@ static int lnet_peer_send_push(struct lnet_peer *lp)
>  	rc = LNetPut(LNET_NID_ANY, lp->lp_push_mdh,
>  		     LNET_ACK_REQ, id, LNET_RESERVED_PORTAL,
>  		     LNET_PROTO_PING_MATCHBITS, 0, 0);
> -
>  	if (rc)
>  		goto fail_unlink;
>  
> @@ -3315,8 +3314,8 @@ int lnet_get_peer_info(struct lnet_ioctl_peer_cfg *cfg, void __user *bulk)
>  		goto out;
>  	}
>  
> -	size = sizeof(nid) + sizeof(*lpni_info) + sizeof(*lpni_stats)
> -		+ sizeof(*lpni_msg_stats);
> +	size = sizeof(nid) + sizeof(*lpni_info) + sizeof(*lpni_stats) +
> +	       sizeof(*lpni_msg_stats);
>  	size *= lp->lp_nnis;
>  	if (size > cfg->prcfg_size) {
>  		cfg->prcfg_size = size;
> diff --git a/drivers/staging/lustre/lnet/lnet/router.c b/drivers/staging/lustre/lnet/lnet/router.c
> index 22c88ec..463b123 100644
> --- a/drivers/staging/lustre/lnet/lnet/router.c
> +++ b/drivers/staging/lustre/lnet/lnet/router.c
> @@ -172,7 +172,7 @@
>  		notifylnd = lp->lpni_notifylnd;
>  
>  		lp->lpni_notifylnd = 0;
> -		lp->lpni_notify    = 0;
> +		lp->lpni_notify = 0;
>  
>  		if (notifylnd && ni->ni_net->net_lnd->lnd_notify) {
>  			spin_unlock(&lp->lpni_lock);
> @@ -274,6 +274,7 @@ static void lnet_shuffle_seed(void)
>  	 * the NID for this node gives the most entropy in the low bits */
>  	while ((ni = lnet_get_next_ni_locked(NULL, ni))) {
>  		u32 lnd_type, seed;
> +
>  		lnd_type = LNET_NETTYP(LNET_NIDNET(ni->ni_nid));
>  		if (lnd_type != LOLND) {
>  			seed = (LNET_NIDADDR(ni->ni_nid) | lnd_type);
> @@ -386,7 +387,6 @@ static void lnet_shuffle_seed(void)
>  	/* Search for a duplicate route (it's a NOOP if it is) */
>  	add_route = 1;
>  	list_for_each_entry(route2, &rnet2->lrn_routes, lr_list) {
> -
>  		if (route2->lr_gateway == route->lr_gateway) {
>  			add_route = 0;
>  			break;
> @@ -501,7 +501,7 @@ static void lnet_shuffle_seed(void)
>  	else
>  		rn_list = lnet_net2rnethash(net);
>  
> - again:
> +again:
>  	list_for_each_entry(rnet, rn_list, lrn_list) {
>  		if (!(net == LNET_NIDNET(LNET_NID_ANY) ||
>  		      net == rnet->lrn_net))
> @@ -601,10 +601,10 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  		list_for_each_entry(rnet, rn_list, lrn_list) {
>  			list_for_each_entry(route, &rnet->lrn_routes, lr_list) {
>  				if (!idx--) {
> -					*net      = rnet->lrn_net;
> -					*hops     = route->lr_hops;
> +					*net = rnet->lrn_net;
> +					*hops = route->lr_hops;
>  					*priority = route->lr_priority;
> -					*gateway  = route->lr_gateway->lpni_nid;
> +					*gateway = route->lr_gateway->lpni_nid;
>  					*alive = lnet_is_route_alive(route);
>  					lnet_net_unlock(cpt);
>  					return 0;
> @@ -648,7 +648,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  	struct lnet_ping_buffer *pbuf = rcd->rcd_pingbuffer;
>  	struct lnet_peer_ni *gw = rcd->rcd_gateway;
>  	struct lnet_route *rte;
> -	int			nnis;
> +	int nnis;
>  
>  	if (!gw->lpni_alive || !pbuf)
>  		return;
> @@ -799,7 +799,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  	if (avoid_asym_router_failure && !event->status)
>  		lnet_parse_rc_info(rcd);
>  
> - out:
> +out:
>  	lnet_net_unlock(lp->lpni_cpt);
>  }
>  
> @@ -1069,14 +1069,14 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  		id.pid = LNET_PID_LUSTRE;
>  		CDEBUG(D_NET, "Check: %s\n", libcfs_id2str(id));
>  
> -		rtr->lpni_ping_notsent   = 1;
> +		rtr->lpni_ping_notsent = 1;
>  		rtr->lpni_ping_timestamp = now;
>  
>  		mdh = rcd->rcd_mdh;
>  
>  		if (!rtr->lpni_ping_deadline) {
>  			rtr->lpni_ping_deadline = ktime_get_seconds() +
> -						router_ping_timeout;
> +						  router_ping_timeout;
>  		}
>  
>  		lnet_net_unlock(rtr->lpni_cpt);
> @@ -1652,7 +1652,7 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  
>  	return 0;
>  
> - failed:
> +failed:
>  	lnet_rtrpools_free(0);
>  	return rc;
>  }
> @@ -1797,8 +1797,8 @@ int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
>  		return -EINVAL;
>  	}
>  
> -	if (ni && !alive &&	     /* LND telling me she's down */
> -	    !auto_down) {		       /* auto-down disabled */
> +	if (ni && !alive &&	/* LND telling me she's down */
> +	    !auto_down) {	/* auto-down disabled */
>  		CDEBUG(D_NET, "Auto-down disabled\n");
>  		return 0;
>  	}
> diff --git a/drivers/staging/lustre/lnet/lnet/router_proc.c b/drivers/staging/lustre/lnet/lnet/router_proc.c
> index e8cc70f..94ef441 100644
> --- a/drivers/staging/lustre/lnet/lnet/router_proc.c
> +++ b/drivers/staging/lustre/lnet/lnet/router_proc.c
> @@ -66,8 +66,8 @@
>  #define LNET_PROC_HOFF_GET(pos)				\
>  	(int)((pos) & LNET_PROC_HOFF_MASK)
>  
> -#define LNET_PROC_POS_MAKE(cpt, ver, hash, off)		\
> -	(((((loff_t)(cpt)) & LNET_PROC_CPT_MASK) << LNET_PROC_VPOS_BITS) |   \
> +#define LNET_PROC_POS_MAKE(cpt, ver, hash, off)				    \
> +	(((((loff_t)(cpt)) & LNET_PROC_CPT_MASK) << LNET_PROC_VPOS_BITS) |  \
>  	((((loff_t)(ver)) & LNET_PROC_VER_MASK) << LNET_PROC_HPOS_BITS) |   \
>  	((((loff_t)(hash)) & LNET_PROC_HASH_MASK) << LNET_PROC_HOFF_BITS) | \
>  	((off) & LNET_PROC_HOFF_MASK))
> @@ -91,7 +91,6 @@ static int proc_lnet_stats(struct ctl_table *table, int write,
>  	}
>  
>  	/* read */
> -
>  	ctrs = kzalloc(sizeof(*ctrs), GFP_NOFS);
>  	if (!ctrs)
>  		return -ENOMEM;
> @@ -395,8 +394,8 @@ static int proc_lnet_peers(struct ctl_table *table, int write,
>  	struct lnet_peer_table *ptable;
>  	char *tmpstr = NULL;
>  	char *s;
> -	int cpt  = LNET_PROC_CPT_GET(*ppos);
> -	int ver  = LNET_PROC_VER_GET(*ppos);
> +	int cpt = LNET_PROC_CPT_GET(*ppos);
> +	int ver = LNET_PROC_VER_GET(*ppos);
>  	int hash = LNET_PROC_HASH_GET(*ppos);
>  	int hoff = LNET_PROC_HOFF_GET(*ppos);
>  	int rc = 0;
> @@ -456,7 +455,7 @@ static int proc_lnet_peers(struct ctl_table *table, int write,
>  		struct lnet_peer_ni *peer;
>  		struct list_head *p;
>  		int skip;
> - again:
> +again:
>  		p = NULL;
>  		peer = NULL;
>  		skip = hoff - 1;
> @@ -630,7 +629,7 @@ static int proc_lnet_buffers(struct ctl_table *table, int write,
>  		lnet_net_unlock(LNET_LOCK_EX);
>  	}
>  
> - out:
> +out:
>  	len = s - tmpstr;
>  
>  	if (pos >= min_t(int, len, strlen(tmpstr)))
> @@ -787,9 +786,9 @@ static int proc_lnet_nis(struct ctl_table *table, int write,
>  }
>  
>  struct lnet_portal_rotors {
> -	int pr_value;
> -	const char *pr_name;
> -	const char *pr_desc;
> +	int		 pr_value;
> +	const char	*pr_name;
> +	const char	*pr_desc;
>  };
>  
>  static struct lnet_portal_rotors	portal_rotors[] = {
> @@ -890,39 +889,39 @@ static int proc_lnet_portal_rotor(struct ctl_table *table, int write,
>  	 * to go via /proc for portability.
>  	 */
>  	{
> -		.procname     = "stats",
> -		.mode         = 0644,
> -		.proc_handler = &proc_lnet_stats,
> +		.procname	= "stats",
> +		.mode		= 0644,
> +		.proc_handler	= &proc_lnet_stats,
>  	},
>  	{
> -		.procname     = "routes",
> -		.mode         = 0444,
> -		.proc_handler = &proc_lnet_routes,
> +		.procname	= "routes",
> +		.mode		= 0444,
> +		.proc_handler	= &proc_lnet_routes,
>  	},
>  	{
> -		.procname     = "routers",
> -		.mode         = 0444,
> -		.proc_handler = &proc_lnet_routers,
> +		.procname	= "routers",
> +		.mode		= 0444,
> +		.proc_handler	= &proc_lnet_routers,
>  	},
>  	{
> -		.procname     = "peers",
> -		.mode         = 0644,
> -		.proc_handler = &proc_lnet_peers,
> +		.procname	= "peers",
> +		.mode		= 0644,
> +		.proc_handler	= &proc_lnet_peers,
>  	},
>  	{
> -		.procname     = "buffers",
> -		.mode         = 0444,
> -		.proc_handler = &proc_lnet_buffers,
> +		.procname	= "buffers",
> +		.mode		= 0444,
> +		.proc_handler	= &proc_lnet_buffers,
>  	},
>  	{
> -		.procname     = "nis",
> -		.mode         = 0644,
> -		.proc_handler = &proc_lnet_nis,
> +		.procname	= "nis",
> +		.mode		= 0644,
> +		.proc_handler	= &proc_lnet_nis,
>  	},
>  	{
> -		.procname     = "portal_rotor",
> -		.mode         = 0644,
> -		.proc_handler = &proc_lnet_portal_rotor,
> +		.procname	= "portal_rotor",
> +		.mode		= 0644,
> +		.proc_handler	= &proc_lnet_portal_rotor,
>  	},
>  	{
>  	}
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 19/26] ptlrpc: cleanup white spaces
  2019-01-31 17:19 ` [lustre-devel] [PATCH 19/26] ptlrpc: " James Simmons
@ 2019-02-04  3:18   ` NeilBrown
  0 siblings, 0 replies; 30+ messages in thread
From: NeilBrown @ 2019-02-04  3:18 UTC (permalink / raw)
  To: lustre-devel

On Thu, Jan 31 2019, James Simmons wrote:

> The ptlrpc code is very messy and difficult to read. Remove excess
> white space and properly align data structures so they are easy on
> the eyes.
>
> Signed-off-by: James Simmons <jsimmons@infradead.org>
> ---
>  drivers/staging/lustre/lustre/ptlrpc/client.c      |  45 ++--
>  drivers/staging/lustre/lustre/ptlrpc/import.c      |   2 +-
>  drivers/staging/lustre/lustre/ptlrpc/layout.c      |   3 -
>  .../staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c    | 278 ++++++++++-----------
>  drivers/staging/lustre/lustre/ptlrpc/niobuf.c      |   7 +-
>  drivers/staging/lustre/lustre/ptlrpc/nrs.c         |   1 -
>  .../staging/lustre/lustre/ptlrpc/ptlrpc_internal.h |  14 +-
>  drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |  20 +-
>  drivers/staging/lustre/lustre/ptlrpc/recover.c     |   1 +
>  drivers/staging/lustre/lustre/ptlrpc/sec.c         |   4 +-
>  drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c    |  74 +++---
>  drivers/staging/lustre/lustre/ptlrpc/sec_config.c  |  22 +-
>  drivers/staging/lustre/lustre/ptlrpc/sec_null.c    |  34 +--
>  drivers/staging/lustre/lustre/ptlrpc/sec_plain.c   |  68 ++---
>  drivers/staging/lustre/lustre/ptlrpc/service.c     |  20 +-
>  15 files changed, 293 insertions(+), 300 deletions(-)
>
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/client.c b/drivers/staging/lustre/lustre/ptlrpc/client.c
> index f4b3875..0831810 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/client.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/client.c
> @@ -49,14 +49,14 @@
>  #include "ptlrpc_internal.h"
>  
>  const struct ptlrpc_bulk_frag_ops ptlrpc_bulk_kiov_pin_ops = {
> -	.add_kiov_frag	= ptlrpc_prep_bulk_page_pin,
> -	.release_frags	= ptlrpc_release_bulk_page_pin,
> +	.add_kiov_frag		= ptlrpc_prep_bulk_page_pin,
> +	.release_frags		= ptlrpc_release_bulk_page_pin,
>  };
>  EXPORT_SYMBOL(ptlrpc_bulk_kiov_pin_ops);
>  
>  const struct ptlrpc_bulk_frag_ops ptlrpc_bulk_kiov_nopin_ops = {
> -	.add_kiov_frag	= ptlrpc_prep_bulk_page_nopin,
> -	.release_frags	= NULL,
> +	.add_kiov_frag		= ptlrpc_prep_bulk_page_nopin,
> +	.release_frags		= NULL,
>  };
>  EXPORT_SYMBOL(ptlrpc_bulk_kiov_nopin_ops);
>  
> @@ -658,15 +658,14 @@ static void __ptlrpc_free_req_to_pool(struct ptlrpc_request *request)
>  
>  void ptlrpc_add_unreplied(struct ptlrpc_request *req)
>  {
> -	struct obd_import	*imp = req->rq_import;
> -	struct ptlrpc_request	*iter;
> +	struct obd_import *imp = req->rq_import;
> +	struct ptlrpc_request *iter;
>  
>  	assert_spin_locked(&imp->imp_lock);
>  	LASSERT(list_empty(&req->rq_unreplied_list));
>  
>  	/* unreplied list is sorted by xid in ascending order */
>  	list_for_each_entry_reverse(iter, &imp->imp_unreplied_list, rq_unreplied_list) {
> -
>  		LASSERT(req->rq_xid != iter->rq_xid);
>  		if (req->rq_xid < iter->rq_xid)
>  			continue;
> @@ -1318,10 +1317,10 @@ static int after_reply(struct ptlrpc_request *req)
>  		 * reply).  NB: no need to round up because alloc_repbuf will
>  		 * round it up
>  		 */
> -		req->rq_replen       = req->rq_nob_received;
> +		req->rq_replen = req->rq_nob_received;
>  		req->rq_nob_received = 0;
>  		spin_lock(&req->rq_lock);
> -		req->rq_resend       = 1;
> +		req->rq_resend = 1;
>  		spin_unlock(&req->rq_lock);
>  		return 0;
>  	}
> @@ -1359,7 +1358,7 @@ static int after_reply(struct ptlrpc_request *req)
>  		spin_unlock(&req->rq_lock);
>  		req->rq_nr_resend++;
>  
> -		/* Readjust the timeout for current conditions */
> +		/* Read just the timeout for current conditions */
>  		ptlrpc_at_set_req_timeout(req);
>  		/*
>  		 * delay resend to give a chance to the server to get ready.

Uhmm... I don't think this function is reading anything, not even "just
the timeout".
I think it is adjusting the timeout - which has already been done
previously.
So this time it is re-adjusting.
??
I've changed this to Re-adjust.

There is a similar comment in ptlrpc_replay_req which I've also
changed from readjust to re-adjust.

NeilBrown


> @@ -1620,7 +1619,7 @@ static inline int ptlrpc_set_producer(struct ptlrpc_request_set *set)
>  		rc = set->set_producer(set, set->set_producer_arg);
>  		if (rc == -ENOENT) {
>  			/* no more RPC to produce */
> -			set->set_producer     = NULL;
> +			set->set_producer = NULL;
>  			set->set_producer_arg = NULL;
>  			return 0;
>  		}
> @@ -1654,7 +1653,7 @@ int ptlrpc_check_set(const struct lu_env *env, struct ptlrpc_request_set *set)
>  
>  		/*
>  		 * This schedule point is mainly for the ptlrpcd caller of this
> -		 * function.  Most ptlrpc sets are not long-lived and unbounded
> +		 * function. Most ptlrpc sets are not long-lived and unbounded
>  		 * in length, but at the least the set used by the ptlrpcd is.
>  		 * Since the processing time is unbounded, we need to insert an
>  		 * explicit schedule point to make the thread well-behaved.
> @@ -2130,7 +2129,6 @@ void ptlrpc_expired_set(struct ptlrpc_request_set *set)
>  
>  	/* A timeout expired. See which reqs it applies to...  */
>  	list_for_each_entry(req, &set->set_requests, rq_set_chain) {
> -
>  		/* don't expire request waiting for context */
>  		if (req->rq_wait_ctx)
>  			continue;
> @@ -2185,7 +2183,6 @@ int ptlrpc_set_next_timeout(struct ptlrpc_request_set *set)
>  	time64_t deadline;
>  
>  	list_for_each_entry(req, &set->set_requests, rq_set_chain) {
> -
>  		/* Request in-flight? */
>  		if (!(((req->rq_phase == RQ_PHASE_RPC) && !req->rq_waiting) ||
>  		      (req->rq_phase == RQ_PHASE_BULK) ||
> @@ -2568,7 +2565,7 @@ static void ptlrpc_free_request(struct ptlrpc_request *req)
>   */
>  void ptlrpc_request_committed(struct ptlrpc_request *req, int force)
>  {
> -	struct obd_import	*imp = req->rq_import;
> +	struct obd_import *imp = req->rq_import;
>  
>  	spin_lock(&imp->imp_lock);
>  	if (list_empty(&req->rq_replay_list)) {
> @@ -2896,7 +2893,7 @@ static int ptlrpc_replay_interpret(const struct lu_env *env,
>  
>  	/* continue with recovery */
>  	rc = ptlrpc_import_recovery_state_machine(imp);
> - out:
> +out:
>  	req->rq_send_state = aa->praa_old_state;
>  
>  	if (rc != 0)
> @@ -3031,7 +3028,7 @@ void ptlrpc_abort_set(struct ptlrpc_request_set *set)
>  /**
>   * Initialize the XID for the node.  This is common among all requests on
>   * this node, and only requires the property that it is monotonically
> - * increasing.  It does not need to be sequential.  Since this is also used
> + * increasing. It does not need to be sequential.  Since this is also used
>   * as the RDMA match bits, it is important that a single client NOT have
>   * the same match bits for two different in-flight requests, hence we do
>   * NOT want to have an XID per target or similar.
> @@ -3198,12 +3195,12 @@ struct ptlrpc_work_async_args {
>  static void ptlrpcd_add_work_req(struct ptlrpc_request *req)
>  {
>  	/* re-initialize the req */
> -	req->rq_timeout		= obd_timeout;
> -	req->rq_sent		= ktime_get_real_seconds();
> -	req->rq_deadline	= req->rq_sent + req->rq_timeout;
> -	req->rq_phase		= RQ_PHASE_INTERPRET;
> -	req->rq_next_phase	= RQ_PHASE_COMPLETE;
> -	req->rq_xid		= ptlrpc_next_xid();
> +	req->rq_timeout	= obd_timeout;
> +	req->rq_sent = ktime_get_real_seconds();
> +	req->rq_deadline = req->rq_sent + req->rq_timeout;
> +	req->rq_phase = RQ_PHASE_INTERPRET;
> +	req->rq_next_phase = RQ_PHASE_COMPLETE;
> +	req->rq_xid = ptlrpc_next_xid();
>  	req->rq_import_generation = req->rq_import->imp_generation;
>  
>  	ptlrpcd_add_req(req);
> @@ -3241,7 +3238,7 @@ static int ptlrpcd_check_work(struct ptlrpc_request *req)
>  void *ptlrpcd_alloc_work(struct obd_import *imp,
>  			 int (*cb)(const struct lu_env *, void *), void *cbdata)
>  {
> -	struct ptlrpc_request	 *req = NULL;
> +	struct ptlrpc_request *req = NULL;
>  	struct ptlrpc_work_async_args *args;
>  
>  	might_sleep();
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/import.c b/drivers/staging/lustre/lustre/ptlrpc/import.c
> index 56a0b76..7bb2e06 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/import.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/import.c
> @@ -51,7 +51,7 @@
>  #include "ptlrpc_internal.h"
>  
>  struct ptlrpc_connect_async_args {
> -	 u64 pcaa_peer_committed;
> +	u64 pcaa_peer_committed;
>  	int pcaa_initial_connect;
>  };
>  
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/layout.c b/drivers/staging/lustre/lustre/ptlrpc/layout.c
> index 2848f2f..f1f7d70 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/layout.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/layout.c
> @@ -1907,9 +1907,7 @@ static void *__req_capsule_get(struct req_capsule *pill,
>  	void *value;
>  	u32 len;
>  	u32 offset;
> -
>  	void *(*getter)(struct lustre_msg *m, u32 n, u32 minlen);
> -
>  	static const char *rcl_names[RCL_NR] = {
>  		[RCL_CLIENT] = "client",
>  		[RCL_SERVER] = "server"
> @@ -2176,7 +2174,6 @@ void req_capsule_extend(struct req_capsule *pill, const struct req_format *fmt)
>  {
>  	int i;
>  	size_t j;
> -
>  	const struct req_format *old;
>  
>  	LASSERT(pill->rc_fmt);
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
> index 92e3e0f..25858b8 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
> @@ -42,115 +42,115 @@
>  #include "ptlrpc_internal.h"
>  
>  static struct ll_rpc_opcode {
> -	u32       opcode;
> -	const char *opname;
> +	u32				opcode;
> +	const char			*opname;
>  } ll_rpc_opcode_table[LUSTRE_MAX_OPCODES] = {
> -	{ OST_REPLY,	"ost_reply" },
> -	{ OST_GETATTR,      "ost_getattr" },
> -	{ OST_SETATTR,      "ost_setattr" },
> -	{ OST_READ,	 "ost_read" },
> -	{ OST_WRITE,	"ost_write" },
> -	{ OST_CREATE,       "ost_create" },
> -	{ OST_DESTROY,      "ost_destroy" },
> -	{ OST_GET_INFO,     "ost_get_info" },
> -	{ OST_CONNECT,      "ost_connect" },
> -	{ OST_DISCONNECT,   "ost_disconnect" },
> -	{ OST_PUNCH,	"ost_punch" },
> -	{ OST_OPEN,	 "ost_open" },
> -	{ OST_CLOSE,	"ost_close" },
> -	{ OST_STATFS,       "ost_statfs" },
> -	{ 14,		NULL },    /* formerly OST_SAN_READ */
> -	{ 15,		NULL },    /* formerly OST_SAN_WRITE */
> -	{ OST_SYNC,	 "ost_sync" },
> -	{ OST_SET_INFO,     "ost_set_info" },
> -	{ OST_QUOTACHECK,   "ost_quotacheck" },
> -	{ OST_QUOTACTL,     "ost_quotactl" },
> -	{ OST_QUOTA_ADJUST_QUNIT, "ost_quota_adjust_qunit" },
> -	{ OST_LADVISE,			"ost_ladvise" },
> -	{ MDS_GETATTR,      "mds_getattr" },
> -	{ MDS_GETATTR_NAME, "mds_getattr_lock" },
> -	{ MDS_CLOSE,	"mds_close" },
> -	{ MDS_REINT,	"mds_reint" },
> -	{ MDS_READPAGE,     "mds_readpage" },
> -	{ MDS_CONNECT,      "mds_connect" },
> -	{ MDS_DISCONNECT,   "mds_disconnect" },
> -	{ MDS_GET_ROOT,			"mds_get_root" },
> -	{ MDS_STATFS,       "mds_statfs" },
> -	{ MDS_PIN,	  "mds_pin" },
> -	{ MDS_UNPIN,	"mds_unpin" },
> -	{ MDS_SYNC,	 "mds_sync" },
> -	{ MDS_DONE_WRITING, "mds_done_writing" },
> -	{ MDS_SET_INFO,     "mds_set_info" },
> -	{ MDS_QUOTACHECK,   "mds_quotacheck" },
> -	{ MDS_QUOTACTL,     "mds_quotactl" },
> -	{ MDS_GETXATTR,     "mds_getxattr" },
> -	{ MDS_SETXATTR,     "mds_setxattr" },
> -	{ MDS_WRITEPAGE,    "mds_writepage" },
> -	{ MDS_IS_SUBDIR,    "mds_is_subdir" },
> -	{ MDS_GET_INFO,     "mds_get_info" },
> -	{ MDS_HSM_STATE_GET, "mds_hsm_state_get" },
> -	{ MDS_HSM_STATE_SET, "mds_hsm_state_set" },
> -	{ MDS_HSM_ACTION,   "mds_hsm_action" },
> -	{ MDS_HSM_PROGRESS, "mds_hsm_progress" },
> -	{ MDS_HSM_REQUEST,  "mds_hsm_request" },
> -	{ MDS_HSM_CT_REGISTER, "mds_hsm_ct_register" },
> -	{ MDS_HSM_CT_UNREGISTER, "mds_hsm_ct_unregister" },
> -	{ MDS_SWAP_LAYOUTS,	"mds_swap_layouts" },
> -	{ LDLM_ENQUEUE,     "ldlm_enqueue" },
> -	{ LDLM_CONVERT,     "ldlm_convert" },
> -	{ LDLM_CANCEL,      "ldlm_cancel" },
> -	{ LDLM_BL_CALLBACK, "ldlm_bl_callback" },
> -	{ LDLM_CP_CALLBACK, "ldlm_cp_callback" },
> -	{ LDLM_GL_CALLBACK, "ldlm_gl_callback" },
> -	{ LDLM_SET_INFO,    "ldlm_set_info" },
> -	{ MGS_CONNECT,      "mgs_connect" },
> -	{ MGS_DISCONNECT,   "mgs_disconnect" },
> -	{ MGS_EXCEPTION,    "mgs_exception" },
> -	{ MGS_TARGET_REG,   "mgs_target_reg" },
> -	{ MGS_TARGET_DEL,   "mgs_target_del" },
> -	{ MGS_SET_INFO,     "mgs_set_info" },
> -	{ MGS_CONFIG_READ,  "mgs_config_read" },
> -	{ OBD_PING,	 "obd_ping" },
> -	{ OBD_LOG_CANCEL,	"llog_cancel" },
> -	{ OBD_QC_CALLBACK,  "obd_quota_callback" },
> -	{ OBD_IDX_READ,	    "dt_index_read" },
> -	{ LLOG_ORIGIN_HANDLE_CREATE,	 "llog_origin_handle_open" },
> -	{ LLOG_ORIGIN_HANDLE_NEXT_BLOCK, "llog_origin_handle_next_block" },
> -	{ LLOG_ORIGIN_HANDLE_READ_HEADER, "llog_origin_handle_read_header" },
> -	{ LLOG_ORIGIN_HANDLE_WRITE_REC,  "llog_origin_handle_write_rec" },
> -	{ LLOG_ORIGIN_HANDLE_CLOSE,      "llog_origin_handle_close" },
> -	{ LLOG_ORIGIN_CONNECT,	   "llog_origin_connect" },
> -	{ LLOG_CATINFO,		  "llog_catinfo" },
> -	{ LLOG_ORIGIN_HANDLE_PREV_BLOCK, "llog_origin_handle_prev_block" },
> -	{ LLOG_ORIGIN_HANDLE_DESTROY,    "llog_origin_handle_destroy" },
> -	{ QUOTA_DQACQ,      "quota_acquire" },
> -	{ QUOTA_DQREL,      "quota_release" },
> -	{ SEQ_QUERY,	"seq_query" },
> -	{ SEC_CTX_INIT,     "sec_ctx_init" },
> -	{ SEC_CTX_INIT_CONT, "sec_ctx_init_cont" },
> -	{ SEC_CTX_FINI,     "sec_ctx_fini" },
> -	{ FLD_QUERY,	"fld_query" },
> -	{ FLD_READ,	"fld_read" },
> +	{ OST_REPLY,				"ost_reply" },
> +	{ OST_GETATTR,				"ost_getattr" },
> +	{ OST_SETATTR,				"ost_setattr" },
> +	{ OST_READ,				"ost_read" },
> +	{ OST_WRITE,				"ost_write" },
> +	{ OST_CREATE,				"ost_create" },
> +	{ OST_DESTROY,				"ost_destroy" },
> +	{ OST_GET_INFO,				"ost_get_info" },
> +	{ OST_CONNECT,				"ost_connect" },
> +	{ OST_DISCONNECT,			"ost_disconnect" },
> +	{ OST_PUNCH,				"ost_punch" },
> +	{ OST_OPEN,				"ost_open" },
> +	{ OST_CLOSE,				"ost_close" },
> +	{ OST_STATFS,				"ost_statfs" },
> +	{ 14,					NULL },	/* formerly OST_SAN_READ */
> +	{ 15,					NULL }, /* formerly OST_SAN_WRITE */
> +	{ OST_SYNC,				"ost_sync" },
> +	{ OST_SET_INFO,				"ost_set_info" },
> +	{ OST_QUOTACHECK,			"ost_quotacheck" },
> +	{ OST_QUOTACTL,				"ost_quotactl" },
> +	{ OST_QUOTA_ADJUST_QUNIT,		"ost_quota_adjust_qunit" },
> +	{ OST_LADVISE,				"ost_ladvise" },
> +	{ MDS_GETATTR,				"mds_getattr" },
> +	{ MDS_GETATTR_NAME,			"mds_getattr_lock" },
> +	{ MDS_CLOSE,				"mds_close" },
> +	{ MDS_REINT,				"mds_reint" },
> +	{ MDS_READPAGE,				"mds_readpage" },
> +	{ MDS_CONNECT,				"mds_connect" },
> +	{ MDS_DISCONNECT,			"mds_disconnect" },
> +	{ MDS_GET_ROOT,				"mds_get_root" },
> +	{ MDS_STATFS,				"mds_statfs" },
> +	{ MDS_PIN,				"mds_pin" },
> +	{ MDS_UNPIN,				"mds_unpin" },
> +	{ MDS_SYNC,				"mds_sync" },
> +	{ MDS_DONE_WRITING,			"mds_done_writing" },
> +	{ MDS_SET_INFO,				"mds_set_info" },
> +	{ MDS_QUOTACHECK,			"mds_quotacheck" },
> +	{ MDS_QUOTACTL,				"mds_quotactl" },
> +	{ MDS_GETXATTR,				"mds_getxattr" },
> +	{ MDS_SETXATTR,				"mds_setxattr" },
> +	{ MDS_WRITEPAGE,			"mds_writepage" },
> +	{ MDS_IS_SUBDIR,			"mds_is_subdir" },
> +	{ MDS_GET_INFO,				"mds_get_info" },
> +	{ MDS_HSM_STATE_GET,			"mds_hsm_state_get" },
> +	{ MDS_HSM_STATE_SET,			"mds_hsm_state_set" },
> +	{ MDS_HSM_ACTION,			"mds_hsm_action" },
> +	{ MDS_HSM_PROGRESS,			"mds_hsm_progress" },
> +	{ MDS_HSM_REQUEST,			"mds_hsm_request" },
> +	{ MDS_HSM_CT_REGISTER,			"mds_hsm_ct_register" },
> +	{ MDS_HSM_CT_UNREGISTER,		"mds_hsm_ct_unregister" },
> +	{ MDS_SWAP_LAYOUTS,			"mds_swap_layouts" },
> +	{ LDLM_ENQUEUE,				"ldlm_enqueue" },
> +	{ LDLM_CONVERT,				"ldlm_convert" },
> +	{ LDLM_CANCEL,				"ldlm_cancel" },
> +	{ LDLM_BL_CALLBACK,			"ldlm_bl_callback" },
> +	{ LDLM_CP_CALLBACK,			"ldlm_cp_callback" },
> +	{ LDLM_GL_CALLBACK,			"ldlm_gl_callback" },
> +	{ LDLM_SET_INFO,			"ldlm_set_info" },
> +	{ MGS_CONNECT,				"mgs_connect" },
> +	{ MGS_DISCONNECT,			"mgs_disconnect" },
> +	{ MGS_EXCEPTION,			"mgs_exception" },
> +	{ MGS_TARGET_REG,			"mgs_target_reg" },
> +	{ MGS_TARGET_DEL,			"mgs_target_del" },
> +	{ MGS_SET_INFO,				"mgs_set_info" },
> +	{ MGS_CONFIG_READ,			"mgs_config_read" },
> +	{ OBD_PING,				"obd_ping" },
> +	{ OBD_LOG_CANCEL,			"llog_cancel" },
> +	{ OBD_QC_CALLBACK,			"obd_quota_callback" },
> +	{ OBD_IDX_READ,				"dt_index_read" },
> +	{ LLOG_ORIGIN_HANDLE_CREATE,		 "llog_origin_handle_open" },
> +	{ LLOG_ORIGIN_HANDLE_NEXT_BLOCK,	"llog_origin_handle_next_block" },
> +	{ LLOG_ORIGIN_HANDLE_READ_HEADER,	"llog_origin_handle_read_header" },
> +	{ LLOG_ORIGIN_HANDLE_WRITE_REC,		"llog_origin_handle_write_rec" },
> +	{ LLOG_ORIGIN_HANDLE_CLOSE,		"llog_origin_handle_close" },
> +	{ LLOG_ORIGIN_CONNECT,			"llog_origin_connect" },
> +	{ LLOG_CATINFO,				"llog_catinfo" },
> +	{ LLOG_ORIGIN_HANDLE_PREV_BLOCK,	"llog_origin_handle_prev_block" },
> +	{ LLOG_ORIGIN_HANDLE_DESTROY,		"llog_origin_handle_destroy" },
> +	{ QUOTA_DQACQ,				"quota_acquire" },
> +	{ QUOTA_DQREL,				"quota_release" },
> +	{ SEQ_QUERY,				"seq_query" },
> +	{ SEC_CTX_INIT,				"sec_ctx_init" },
> +	{ SEC_CTX_INIT_CONT,			"sec_ctx_init_cont" },
> +	{ SEC_CTX_FINI,				"sec_ctx_fini" },
> +	{ FLD_QUERY,				"fld_query" },
> +	{ FLD_READ,				"fld_read" },
>  };
>  
>  static struct ll_eopcode {
> -	u32       opcode;
> -	const char *opname;
> +	u32			opcode;
> +	const char		*opname;
>  } ll_eopcode_table[EXTRA_LAST_OPC] = {
> -	{ LDLM_GLIMPSE_ENQUEUE, "ldlm_glimpse_enqueue" },
> -	{ LDLM_PLAIN_ENQUEUE,   "ldlm_plain_enqueue" },
> -	{ LDLM_EXTENT_ENQUEUE,  "ldlm_extent_enqueue" },
> -	{ LDLM_FLOCK_ENQUEUE,   "ldlm_flock_enqueue" },
> -	{ LDLM_IBITS_ENQUEUE,   "ldlm_ibits_enqueue" },
> -	{ MDS_REINT_SETATTR,    "mds_reint_setattr" },
> -	{ MDS_REINT_CREATE,     "mds_reint_create" },
> -	{ MDS_REINT_LINK,       "mds_reint_link" },
> -	{ MDS_REINT_UNLINK,     "mds_reint_unlink" },
> -	{ MDS_REINT_RENAME,     "mds_reint_rename" },
> -	{ MDS_REINT_OPEN,       "mds_reint_open" },
> -	{ MDS_REINT_SETXATTR,   "mds_reint_setxattr" },
> -	{ BRW_READ_BYTES,       "read_bytes" },
> -	{ BRW_WRITE_BYTES,      "write_bytes" },
> +	{ LDLM_GLIMPSE_ENQUEUE,			"ldlm_glimpse_enqueue" },
> +	{ LDLM_PLAIN_ENQUEUE,			"ldlm_plain_enqueue" },
> +	{ LDLM_EXTENT_ENQUEUE,			"ldlm_extent_enqueue" },
> +	{ LDLM_FLOCK_ENQUEUE,			"ldlm_flock_enqueue" },
> +	{ LDLM_IBITS_ENQUEUE,			"ldlm_ibits_enqueue" },
> +	{ MDS_REINT_SETATTR,			"mds_reint_setattr" },
> +	{ MDS_REINT_CREATE,			"mds_reint_create" },
> +	{ MDS_REINT_LINK,			"mds_reint_link" },
> +	{ MDS_REINT_UNLINK,			"mds_reint_unlink" },
> +	{ MDS_REINT_RENAME,			"mds_reint_rename" },
> +	{ MDS_REINT_OPEN,			"mds_reint_open" },
> +	{ MDS_REINT_SETXATTR,			"mds_reint_setxattr" },
> +	{ BRW_READ_BYTES,			"read_bytes" },
> +	{ BRW_WRITE_BYTES,			"write_bytes" },
>  };
>  
>  const char *ll_opcode2str(u32 opcode)
> @@ -450,13 +450,13 @@ static void nrs_policy_get_info_locked(struct ptlrpc_nrs_policy *policy,
>  
>  	memcpy(info->pi_name, policy->pol_desc->pd_name, NRS_POL_NAME_MAX);
>  
> -	info->pi_fallback    = !!(policy->pol_flags & PTLRPC_NRS_FL_FALLBACK);
> -	info->pi_state	     = policy->pol_state;
> +	info->pi_fallback = !!(policy->pol_flags & PTLRPC_NRS_FL_FALLBACK);
> +	info->pi_state = policy->pol_state;
>  	/**
>  	 * XXX: These are accessed without holding
>  	 * ptlrpc_service_part::scp_req_lock.
>  	 */
> -	info->pi_req_queued  = policy->pol_req_queued;
> +	info->pi_req_queued = policy->pol_req_queued;
>  	info->pi_req_started = policy->pol_req_started;
>  }
>  
> @@ -788,18 +788,18 @@ struct ptlrpc_srh_iterator {
>  /* convert position to sequence */
>  #define PTLRPC_REQ_POS2SEQ(svc, pos)			\
>  	((svc)->srv_cpt_bits == 0 ? (pos) :		\
> -	 ((u64)(pos) << (svc)->srv_cpt_bits) |	\
> +	 ((u64)(pos) << (svc)->srv_cpt_bits) |		\
>  	 ((u64)(pos) >> (64 - (svc)->srv_cpt_bits)))
>  
>  static void *
>  ptlrpc_lprocfs_svc_req_history_start(struct seq_file *s, loff_t *pos)
>  {
> -	struct ptlrpc_service		*svc = s->private;
> -	struct ptlrpc_service_part	*svcpt;
> -	struct ptlrpc_srh_iterator	*srhi;
> -	unsigned int			cpt;
> -	int				rc;
> -	int				i;
> +	struct ptlrpc_service *svc = s->private;
> +	struct ptlrpc_service_part *svcpt;
> +	struct ptlrpc_srh_iterator *srhi;
> +	unsigned int cpt;
> +	int rc;
> +	int i;
>  
>  	if (sizeof(loff_t) != sizeof(u64)) { /* can't support */
>  		CWARN("Failed to read request history because size of loff_t %d can't match size of u64\n",
> @@ -940,10 +940,10 @@ static int ptlrpc_lprocfs_svc_req_history_show(struct seq_file *s, void *iter)
>  ptlrpc_lprocfs_svc_req_history_open(struct inode *inode, struct file *file)
>  {
>  	static const struct seq_operations sops = {
> -		.start = ptlrpc_lprocfs_svc_req_history_start,
> -		.stop  = ptlrpc_lprocfs_svc_req_history_stop,
> -		.next  = ptlrpc_lprocfs_svc_req_history_next,
> -		.show  = ptlrpc_lprocfs_svc_req_history_show,
> +		.start	= ptlrpc_lprocfs_svc_req_history_start,
> +		.stop	= ptlrpc_lprocfs_svc_req_history_stop,
> +		.next	= ptlrpc_lprocfs_svc_req_history_next,
> +		.show	= ptlrpc_lprocfs_svc_req_history_show,
>  	};
>  	struct seq_file *seqf;
>  	int rc;
> @@ -975,9 +975,9 @@ static int ptlrpc_lprocfs_timeouts_seq_show(struct seq_file *m, void *n)
>  	}
>  
>  	ptlrpc_service_for_each_part(svcpt, i, svc) {
> -		cur	= at_get(&svcpt->scp_at_estimate);
> -		worst	= svcpt->scp_at_estimate.at_worst_ever;
> -		worstt	= svcpt->scp_at_estimate.at_worst_time;
> +		cur = at_get(&svcpt->scp_at_estimate);
> +		worst = svcpt->scp_at_estimate.at_worst_ever;
> +		worstt = svcpt->scp_at_estimate.at_worst_time;
>  		s2dhms(&ts, ktime_get_real_seconds() - worstt);
>  
>  		seq_printf(m, "%10s : cur %3u  worst %3u (at %lld, "
> @@ -1074,26 +1074,26 @@ void ptlrpc_ldebugfs_register_service(struct dentry *entry,
>  				      struct ptlrpc_service *svc)
>  {
>  	struct lprocfs_vars lproc_vars[] = {
> -		{.name       = "req_buffer_history_len",
> -		 .fops	     = &ptlrpc_lprocfs_req_history_len_fops,
> -		 .data       = svc},
> -		{.name       = "req_buffer_history_max",
> -		 .fops	     = &ptlrpc_lprocfs_req_history_max_fops,
> -		 .data       = svc},
> -		{.name       = "timeouts",
> -		 .fops	     = &ptlrpc_lprocfs_timeouts_fops,
> -		 .data       = svc},
> -		{.name       = "nrs_policies",
> -		 .fops	     = &ptlrpc_lprocfs_nrs_fops,
> -		 .data	     = svc},
> -		{NULL}
> +		{ .name		= "req_buffer_history_len",
> +		  .fops		= &ptlrpc_lprocfs_req_history_len_fops,
> +		  .data		= svc },
> +		{ .name		= "req_buffer_history_max",
> +		  .fops		= &ptlrpc_lprocfs_req_history_max_fops,
> +		  .data		= svc },
> +		{ .name		= "timeouts",
> +		  .fops		= &ptlrpc_lprocfs_timeouts_fops,
> +		  .data		= svc },
> +		{ .name		= "nrs_policies",
> +		  .fops		= &ptlrpc_lprocfs_nrs_fops,
> +		  .data		= svc },
> +		{ NULL }
>  	};
>  	static const struct file_operations req_history_fops = {
> -		.owner       = THIS_MODULE,
> -		.open	= ptlrpc_lprocfs_svc_req_history_open,
> -		.read	= seq_read,
> -		.llseek      = seq_lseek,
> -		.release     = lprocfs_seq_release,
> +		.owner		= THIS_MODULE,
> +		.open		= ptlrpc_lprocfs_svc_req_history_open,
> +		.read		= seq_read,
> +		.llseek		= seq_lseek,
> +		.release	= lprocfs_seq_release,
>  	};
>  
>  	ptlrpc_ldebugfs_register(entry, svc->srv_name,
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
> index d3044a7..ea7a7f9 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
> @@ -280,6 +280,7 @@ int ptlrpc_unregister_bulk(struct ptlrpc_request *req, int async)
>  		 * timeout lets us CWARN for visibility of sluggish LNDs
>  		 */
>  		int cnt = 0;
> +
>  		while (cnt < LONG_UNLINK &&
>  		       (rc = wait_event_idle_timeout(*wq,
>  						     !ptlrpc_client_bulk_active(req),
> @@ -685,7 +686,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
>  	 * add the network latency for our local timeout.
>  	 */
>  	request->rq_deadline = request->rq_sent + request->rq_timeout +
> -		ptlrpc_at_get_net_latency(request);
> +			       ptlrpc_at_get_net_latency(request);
>  
>  	ptlrpc_pinger_sending_on_import(imp);
>  
> @@ -705,7 +706,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
>  	if (noreply)
>  		goto out;
>  
> - cleanup_me:
> +cleanup_me:
>  	/* MEUnlink is safe; the PUT didn't even get off the ground, and
>  	 * nobody apart from the PUT's target has the right nid+XID to
>  	 * access the reply buffer.
> @@ -715,7 +716,7 @@ int ptl_send_rpc(struct ptlrpc_request *request, int noreply)
>  	/* UNLINKED callback called synchronously */
>  	LASSERT(!request->rq_receiving_reply);
>  
> - cleanup_bulk:
> +cleanup_bulk:
>  	/* We do sync unlink here as there was no real transfer here so
>  	 * the chance to have long unlink to sluggish net is smaller here.
>  	 */
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs.c b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
> index 248ba04..ef7dd5d 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/nrs.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
> @@ -118,7 +118,6 @@ static int nrs_policy_stop_locked(struct ptlrpc_nrs_policy *policy)
>  	/* Immediately make it invisible */
>  	if (nrs->nrs_policy_primary == policy) {
>  		nrs->nrs_policy_primary = NULL;
> -
>  	} else {
>  		LASSERT(nrs->nrs_policy_fallback == policy);
>  		nrs->nrs_policy_fallback = NULL;
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
> index 10c2520..5383b68 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
> +++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
> @@ -111,12 +111,12 @@ struct nrs_core {
>  	 * Protects nrs_core::nrs_policies, serializes external policy
>  	 * registration/unregistration, and NRS core lprocfs operations.
>  	 */
> -	struct mutex nrs_mutex;
> +	struct mutex		nrs_mutex;
>  	/**
>  	 * List of all policy descriptors registered with NRS core; protected
>  	 * by nrs_core::nrs_mutex.
>  	 */
> -	struct list_head nrs_policies;
> +	struct list_head	nrs_policies;
>  
>  };
>  
> @@ -251,15 +251,15 @@ struct ptlrpc_reply_state *
>  void ptlrpc_pinger_wake_up(void);
>  
>  /* sec_null.c */
> -int  sptlrpc_null_init(void);
> +int sptlrpc_null_init(void);
>  void sptlrpc_null_fini(void);
>  
>  /* sec_plain.c */
> -int  sptlrpc_plain_init(void);
> +int sptlrpc_plain_init(void);
>  void sptlrpc_plain_fini(void);
>  
>  /* sec_bulk.c */
> -int  sptlrpc_enc_pool_init(void);
> +int sptlrpc_enc_pool_init(void);
>  void sptlrpc_enc_pool_fini(void);
>  int sptlrpc_proc_enc_pool_seq_show(struct seq_file *m, void *v);
>  
> @@ -277,11 +277,11 @@ void sptlrpc_conf_choose_flavor(enum lustre_sec_part from,
>  				struct obd_uuid *target,
>  				lnet_nid_t nid,
>  				struct sptlrpc_flavor *sf);
> -int  sptlrpc_conf_init(void);
> +int sptlrpc_conf_init(void);
>  void sptlrpc_conf_fini(void);
>  
>  /* sec.c */
> -int  sptlrpc_init(void);
> +int sptlrpc_init(void);
>  void sptlrpc_fini(void);
>  
>  /* layout.c */
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> index e39c38a..f0ac296 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> @@ -69,13 +69,13 @@
>  
>  /* One of these per CPT. */
>  struct ptlrpcd {
> -	int pd_size;
> -	int pd_index;
> -	int pd_cpt;
> -	int pd_cursor;
> -	int pd_nthreads;
> -	int pd_groupsize;
> -	struct ptlrpcd_ctl pd_threads[0];
> +	int			pd_size;
> +	int			pd_index;
> +	int			pd_cpt;
> +	int			pd_cursor;
> +	int			pd_nthreads;
> +	int			pd_groupsize;
> +	struct ptlrpcd_ctl	pd_threads[0];
>  };
>  
>  /*
> @@ -171,9 +171,9 @@ void ptlrpcd_wake(struct ptlrpc_request *req)
>  static struct ptlrpcd_ctl *
>  ptlrpcd_select_pc(struct ptlrpc_request *req)
>  {
> -	struct ptlrpcd	*pd;
> -	int		cpt;
> -	int		idx;
> +	struct ptlrpcd *pd;
> +	int cpt;
> +	int idx;
>  
>  	if (req && req->rq_send_state != LUSTRE_IMP_FULL)
>  		return &ptlrpcd_rcv;
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/recover.c b/drivers/staging/lustre/lustre/ptlrpc/recover.c
> index ed769a4..af672ab 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/recover.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/recover.c
> @@ -119,6 +119,7 @@ int ptlrpc_replay_next(struct obd_import *imp, int *inflight)
>  	 */
>  	if (!req) {
>  		struct ptlrpc_request *tmp;
> +
>  		list_for_each_entry_safe(tmp, pos, &imp->imp_replay_list,
>  					 rq_replay_list) {
>  			if (tmp->rq_transno > last_transno) {
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec.c b/drivers/staging/lustre/lustre/ptlrpc/sec.c
> index 165082a..6dc7731 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/sec.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/sec.c
> @@ -171,7 +171,7 @@ u32 sptlrpc_name2flavor_base(const char *name)
>  
>  const char *sptlrpc_flavor2name_base(u32 flvr)
>  {
> -	u32   base = SPTLRPC_FLVR_BASE(flvr);
> +	u32 base = SPTLRPC_FLVR_BASE(flvr);
>  
>  	if (base == SPTLRPC_FLVR_BASE(SPTLRPC_FLVR_NULL))
>  		return "null";
> @@ -365,7 +365,7 @@ int sptlrpc_req_get_ctx(struct ptlrpc_request *req)
>  {
>  	struct obd_import *imp = req->rq_import;
>  	struct ptlrpc_sec *sec;
> -	int		rc;
> +	int rc;
>  
>  	LASSERT(!req->rq_cli_ctx);
>  	LASSERT(imp);
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
> index 93dcb6d..74cfdd8 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
> @@ -57,8 +57,8 @@
>  #define POINTERS_PER_PAGE	(PAGE_SIZE / sizeof(void *))
>  #define PAGES_PER_POOL		(POINTERS_PER_PAGE)
>  
> -#define IDLE_IDX_MAX	 (100)
> -#define IDLE_IDX_WEIGHT	 (3)
> +#define IDLE_IDX_MAX		(100)
> +#define IDLE_IDX_WEIGHT		(3)
>  
>  #define CACHE_QUIESCENT_PERIOD  (20)
>  
> @@ -66,16 +66,16 @@
>  	/*
>  	 * constants
>  	 */
> -	unsigned long    epp_max_pages;   /* maximum pages can hold, const */
> -	unsigned int     epp_max_pools;   /* number of pools, const */
> +	unsigned long		epp_max_pages;	/* maximum pages can hold, const */
> +	unsigned int		epp_max_pools;	/* number of pools, const */
>  
>  	/*
>  	 * wait queue in case of not enough free pages.
>  	 */
> -	wait_queue_head_t      epp_waitq;       /* waiting threads */
> -	unsigned int     epp_waitqlen;    /* wait queue length */
> -	unsigned long    epp_pages_short; /* # of pages wanted of in-q users */
> -	unsigned int     epp_growing:1;   /* during adding pages */
> +	wait_queue_head_t	epp_waitq;	/* waiting threads */
> +	unsigned int		epp_waitqlen;	/* wait queue length */
> +	unsigned long		epp_pages_short; /* # of pages wanted of in-q users */
> +	unsigned int		epp_growing:1;	/* during adding pages */
>  
>  	/*
>  	 * indicating how idle the pools are, from 0 to MAX_IDLE_IDX
> @@ -84,36 +84,36 @@
>  	 * is idled for a while but the idle_idx might still be low if no
>  	 * activities happened in the pools.
>  	 */
> -	unsigned long    epp_idle_idx;
> +	unsigned long		epp_idle_idx;
>  
>  	/* last shrink time due to mem tight */
> -	time64_t         epp_last_shrink;
> -	time64_t         epp_last_access;
> +	time64_t		epp_last_shrink;
> +	time64_t		epp_last_access;
>  
>  	/*
>  	 * in-pool pages bookkeeping
>  	 */
> -	spinlock_t	 epp_lock;	   /* protect following fields */
> -	unsigned long    epp_total_pages; /* total pages in pools */
> -	unsigned long    epp_free_pages;  /* current pages available */
> +	spinlock_t		epp_lock;	 /* protect following fields */
> +	unsigned long		epp_total_pages; /* total pages in pools */
> +	unsigned long		epp_free_pages;	 /* current pages available */
>  
>  	/*
>  	 * statistics
>  	 */
> -	unsigned long    epp_st_max_pages;      /* # of pages ever reached */
> -	unsigned int     epp_st_grows;	  /* # of grows */
> -	unsigned int     epp_st_grow_fails;     /* # of add pages failures */
> -	unsigned int     epp_st_shrinks;	/* # of shrinks */
> -	unsigned long    epp_st_access;	 /* # of access */
> -	unsigned long    epp_st_missings;       /* # of cache missing */
> -	unsigned long    epp_st_lowfree;	/* lowest free pages reached */
> -	unsigned int     epp_st_max_wqlen;      /* highest waitqueue length */
> -	unsigned long       epp_st_max_wait;       /* in jiffies */
> -	unsigned long	 epp_st_outofmem;	/* # of out of mem requests */
> +	unsigned long		epp_st_max_pages;	/* # of pages ever reached */
> +	unsigned int		epp_st_grows;		/* # of grows */
> +	unsigned int		epp_st_grow_fails;	/* # of add pages failures */
> +	unsigned int		epp_st_shrinks;		/* # of shrinks */
> +	unsigned long		epp_st_access;		/* # of access */
> +	unsigned long		epp_st_missings;	/* # of cache missing */
> +	unsigned long		epp_st_lowfree;		/* lowest free pages reached */
> +	unsigned int		epp_st_max_wqlen;	/* highest waitqueue length */
> +	unsigned long		epp_st_max_wait;	/* in jiffies */
> +	unsigned long		epp_st_outofmem;	/* # of out of mem requests */
>  	/*
>  	 * pointers to pools
>  	 */
> -	struct page    ***epp_pools;
> +	struct page		***epp_pools;
>  } page_pools;
>  
>  /*
> @@ -394,9 +394,9 @@ static inline void enc_pools_free(void)
>  }
>  
>  static struct shrinker pools_shrinker = {
> -	.count_objects	= enc_pools_shrink_count,
> -	.scan_objects	= enc_pools_shrink_scan,
> -	.seeks		= DEFAULT_SEEKS,
> +	.count_objects		= enc_pools_shrink_count,
> +	.scan_objects		= enc_pools_shrink_scan,
> +	.seeks			= DEFAULT_SEEKS,
>  };
>  
>  int sptlrpc_enc_pool_init(void)
> @@ -475,14 +475,14 @@ void sptlrpc_enc_pool_fini(void)
>  }
>  
>  static int cfs_hash_alg_id[] = {
> -	[BULK_HASH_ALG_NULL]	= CFS_HASH_ALG_NULL,
> -	[BULK_HASH_ALG_ADLER32]	= CFS_HASH_ALG_ADLER32,
> -	[BULK_HASH_ALG_CRC32]	= CFS_HASH_ALG_CRC32,
> -	[BULK_HASH_ALG_MD5]	= CFS_HASH_ALG_MD5,
> -	[BULK_HASH_ALG_SHA1]	= CFS_HASH_ALG_SHA1,
> -	[BULK_HASH_ALG_SHA256]	= CFS_HASH_ALG_SHA256,
> -	[BULK_HASH_ALG_SHA384]	= CFS_HASH_ALG_SHA384,
> -	[BULK_HASH_ALG_SHA512]	= CFS_HASH_ALG_SHA512,
> +	[BULK_HASH_ALG_NULL]		= CFS_HASH_ALG_NULL,
> +	[BULK_HASH_ALG_ADLER32]		= CFS_HASH_ALG_ADLER32,
> +	[BULK_HASH_ALG_CRC32]		= CFS_HASH_ALG_CRC32,
> +	[BULK_HASH_ALG_MD5]		= CFS_HASH_ALG_MD5,
> +	[BULK_HASH_ALG_SHA1]		= CFS_HASH_ALG_SHA1,
> +	[BULK_HASH_ALG_SHA256]		= CFS_HASH_ALG_SHA256,
> +	[BULK_HASH_ALG_SHA384]		= CFS_HASH_ALG_SHA384,
> +	[BULK_HASH_ALG_SHA512]		= CFS_HASH_ALG_SHA512,
>  };
>  
>  const char *sptlrpc_get_hash_name(u8 hash_alg)
> @@ -498,7 +498,7 @@ u8 sptlrpc_get_hash_alg(const char *algname)
>  int bulk_sec_desc_unpack(struct lustre_msg *msg, int offset, int swabbed)
>  {
>  	struct ptlrpc_bulk_sec_desc *bsd;
> -	int			  size = msg->lm_buflens[offset];
> +	int size = msg->lm_buflens[offset];
>  
>  	bsd = lustre_msg_buf(msg, offset, sizeof(*bsd));
>  	if (!bsd) {
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
> index 1844ada..54130ae 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
> @@ -408,19 +408,19 @@ static int sptlrpc_rule_set_choose(struct sptlrpc_rule_set *rset,
>   **********************************/
>  
>  struct sptlrpc_conf_tgt {
> -	struct list_head	      sct_list;
> -	char		    sct_name[MAX_OBD_NAME];
> -	struct sptlrpc_rule_set sct_rset;
> +	struct list_head		sct_list;
> +	char				sct_name[MAX_OBD_NAME];
> +	struct sptlrpc_rule_set		sct_rset;
>  };
>  
>  struct sptlrpc_conf {
> -	struct list_head	      sc_list;
> -	char		    sc_fsname[MTI_NAME_MAXLEN];
> -	unsigned int	    sc_modified;  /* modified during updating */
> -	unsigned int	    sc_updated:1, /* updated copy from MGS */
> -				sc_local:1;   /* local copy from target */
> -	struct sptlrpc_rule_set sc_rset;      /* fs general rules */
> -	struct list_head	      sc_tgts;      /* target-specific rules */
> +	struct list_head		sc_list;
> +	char				sc_fsname[MTI_NAME_MAXLEN];
> +	unsigned int			sc_modified;	/* modified during updating */
> +	unsigned int			sc_updated:1,	/* updated copy from MGS */
> +					sc_local:1;	/* local copy from target */
> +	struct sptlrpc_rule_set		sc_rset;	/* fs general rules */
> +	struct list_head		sc_tgts;	/* target-specific rules */
>  };
>  
>  static struct mutex sptlrpc_conf_lock;
> @@ -801,7 +801,7 @@ void sptlrpc_conf_choose_flavor(enum lustre_sec_part from,
>  	flavor_set_flags(sf, from, to, 1);
>  }
>  
> -#define SEC_ADAPT_DELAY	 (10)
> +#define SEC_ADAPT_DELAY		(10)
>  
>  /**
>   * called by client devices, notify the sptlrpc config has changed and
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
> index 6933a53..df6ef4f 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
> @@ -277,8 +277,8 @@ int null_enlarge_reqbuf(struct ptlrpc_sec *sec,
>  }
>  
>  static struct ptlrpc_svc_ctx null_svc_ctx = {
> -	.sc_refcount    = ATOMIC_INIT(1),
> -	.sc_policy      = &null_policy,
> +	.sc_refcount	= ATOMIC_INIT(1),
> +	.sc_policy	= &null_policy,
>  };
>  
>  static
> @@ -373,33 +373,33 @@ int null_authorize(struct ptlrpc_request *req)
>  
>  static struct ptlrpc_ctx_ops null_ctx_ops = {
>  	.refresh		= null_ctx_refresh,
> -	.sign		   = null_ctx_sign,
> -	.verify		 = null_ctx_verify,
> +	.sign			= null_ctx_sign,
> +	.verify			= null_ctx_verify,
>  };
>  
>  static struct ptlrpc_sec_cops null_sec_cops = {
> -	.create_sec	     = null_create_sec,
> -	.destroy_sec	    = null_destroy_sec,
> -	.lookup_ctx	     = null_lookup_ctx,
> +	.create_sec		= null_create_sec,
> +	.destroy_sec		= null_destroy_sec,
> +	.lookup_ctx		= null_lookup_ctx,
>  	.flush_ctx_cache	= null_flush_ctx_cache,
> -	.alloc_reqbuf	   = null_alloc_reqbuf,
> -	.alloc_repbuf	   = null_alloc_repbuf,
> -	.free_reqbuf	    = null_free_reqbuf,
> -	.free_repbuf	    = null_free_repbuf,
> -	.enlarge_reqbuf	 = null_enlarge_reqbuf,
> +	.alloc_reqbuf		= null_alloc_reqbuf,
> +	.alloc_repbuf		= null_alloc_repbuf,
> +	.free_reqbuf		= null_free_reqbuf,
> +	.free_repbuf		= null_free_repbuf,
> +	.enlarge_reqbuf		= null_enlarge_reqbuf,
>  };
>  
>  static struct ptlrpc_sec_sops null_sec_sops = {
> -	.accept		 = null_accept,
> -	.alloc_rs	       = null_alloc_rs,
> -	.authorize	      = null_authorize,
> +	.accept			= null_accept,
> +	.alloc_rs		= null_alloc_rs,
> +	.authorize		= null_authorize,
>  	.free_rs		= null_free_rs,
>  };
>  
>  static struct ptlrpc_sec_policy null_policy = {
> -	.sp_owner	       = THIS_MODULE,
> +	.sp_owner		= THIS_MODULE,
>  	.sp_name		= "sec.null",
> -	.sp_policy	      = SPTLRPC_POLICY_NULL,
> +	.sp_policy		= SPTLRPC_POLICY_NULL,
>  	.sp_cops		= &null_sec_cops,
>  	.sp_sops		= &null_sec_sops,
>  };
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
> index 0a31ff4..021bf7f 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
> @@ -46,9 +46,9 @@
>  #include "ptlrpc_internal.h"
>  
>  struct plain_sec {
> -	struct ptlrpc_sec       pls_base;
> -	rwlock_t	    pls_lock;
> -	struct ptlrpc_cli_ctx  *pls_ctx;
> +	struct ptlrpc_sec	 pls_base;
> +	rwlock_t		 pls_lock;
> +	struct ptlrpc_cli_ctx	*pls_ctx;
>  };
>  
>  static inline struct plain_sec *sec2plsec(struct ptlrpc_sec *sec)
> @@ -65,15 +65,15 @@ static inline struct plain_sec *sec2plsec(struct ptlrpc_sec *sec)
>  /*
>   * for simplicity, plain policy rpc use fixed layout.
>   */
> -#define PLAIN_PACK_SEGMENTS	     (4)
> +#define PLAIN_PACK_SEGMENTS	(4)
>  
> -#define PLAIN_PACK_HDR_OFF	      (0)
> -#define PLAIN_PACK_MSG_OFF	      (1)
> -#define PLAIN_PACK_USER_OFF	     (2)
> -#define PLAIN_PACK_BULK_OFF	     (3)
> +#define PLAIN_PACK_HDR_OFF	(0)
> +#define PLAIN_PACK_MSG_OFF	(1)
> +#define PLAIN_PACK_USER_OFF	(2)
> +#define PLAIN_PACK_BULK_OFF	(3)
>  
> -#define PLAIN_FL_USER		   (0x01)
> -#define PLAIN_FL_BULK		   (0x02)
> +#define PLAIN_FL_USER		(0x01)
> +#define PLAIN_FL_BULK		(0x02)
>  
>  struct plain_header {
>  	u8	    ph_ver;	    /* 0 */
> @@ -711,8 +711,8 @@ int plain_enlarge_reqbuf(struct ptlrpc_sec *sec,
>   ****************************************/
>  
>  static struct ptlrpc_svc_ctx plain_svc_ctx = {
> -	.sc_refcount    = ATOMIC_INIT(1),
> -	.sc_policy      = &plain_policy,
> +	.sc_refcount	= ATOMIC_INIT(1),
> +	.sc_policy	= &plain_policy,
>  };
>  
>  static
> @@ -961,40 +961,40 @@ int plain_svc_wrap_bulk(struct ptlrpc_request *req,
>  
>  static struct ptlrpc_ctx_ops plain_ctx_ops = {
>  	.refresh		= plain_ctx_refresh,
> -	.validate	       = plain_ctx_validate,
> -	.sign		   = plain_ctx_sign,
> -	.verify		 = plain_ctx_verify,
> -	.wrap_bulk	      = plain_cli_wrap_bulk,
> -	.unwrap_bulk	    = plain_cli_unwrap_bulk,
> +	.validate		= plain_ctx_validate,
> +	.sign			= plain_ctx_sign,
> +	.verify			= plain_ctx_verify,
> +	.wrap_bulk		= plain_cli_wrap_bulk,
> +	.unwrap_bulk		= plain_cli_unwrap_bulk,
>  };
>  
>  static struct ptlrpc_sec_cops plain_sec_cops = {
> -	.create_sec	     = plain_create_sec,
> -	.destroy_sec	    = plain_destroy_sec,
> -	.kill_sec	       = plain_kill_sec,
> -	.lookup_ctx	     = plain_lookup_ctx,
> -	.release_ctx	    = plain_release_ctx,
> +	.create_sec		= plain_create_sec,
> +	.destroy_sec		= plain_destroy_sec,
> +	.kill_sec		= plain_kill_sec,
> +	.lookup_ctx		= plain_lookup_ctx,
> +	.release_ctx		= plain_release_ctx,
>  	.flush_ctx_cache	= plain_flush_ctx_cache,
> -	.alloc_reqbuf	   = plain_alloc_reqbuf,
> -	.free_reqbuf	    = plain_free_reqbuf,
> -	.alloc_repbuf	   = plain_alloc_repbuf,
> -	.free_repbuf	    = plain_free_repbuf,
> -	.enlarge_reqbuf	 = plain_enlarge_reqbuf,
> +	.alloc_reqbuf		= plain_alloc_reqbuf,
> +	.free_reqbuf		= plain_free_reqbuf,
> +	.alloc_repbuf		= plain_alloc_repbuf,
> +	.free_repbuf		= plain_free_repbuf,
> +	.enlarge_reqbuf		= plain_enlarge_reqbuf,
>  };
>  
>  static struct ptlrpc_sec_sops plain_sec_sops = {
> -	.accept		 = plain_accept,
> -	.alloc_rs	       = plain_alloc_rs,
> -	.authorize	      = plain_authorize,
> +	.accept			= plain_accept,
> +	.alloc_rs		= plain_alloc_rs,
> +	.authorize		= plain_authorize,
>  	.free_rs		= plain_free_rs,
> -	.unwrap_bulk	    = plain_svc_unwrap_bulk,
> -	.wrap_bulk	      = plain_svc_wrap_bulk,
> +	.unwrap_bulk		= plain_svc_unwrap_bulk,
> +	.wrap_bulk		= plain_svc_wrap_bulk,
>  };
>  
>  static struct ptlrpc_sec_policy plain_policy = {
> -	.sp_owner	       = THIS_MODULE,
> +	.sp_owner		= THIS_MODULE,
>  	.sp_name		= "plain",
> -	.sp_policy	      = SPTLRPC_POLICY_PLAIN,
> +	.sp_policy		= SPTLRPC_POLICY_PLAIN,
>  	.sp_cops		= &plain_sec_cops,
>  	.sp_sops		= &plain_sec_sops,
>  };
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
> index 1030f65..5b97f2a 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/service.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
> @@ -173,7 +173,7 @@
>  	       svc->srv_name, i, svc->srv_buf_size, svcpt->scp_nrqbds_posted,
>  	       svcpt->scp_nrqbds_total, rc);
>  
> - try_post:
> +try_post:
>  	if (post && rc == 0)
>  		rc = ptlrpc_server_post_idle_rqbds(svcpt);
>  
> @@ -185,8 +185,8 @@
>  struct ptlrpc_hr_thread {
>  	int				hrt_id;		/* thread ID */
>  	spinlock_t			hrt_lock;
> -	wait_queue_head_t			hrt_waitq;
> -	struct list_head			hrt_queue;	/* RS queue */
> +	wait_queue_head_t		hrt_waitq;
> +	struct list_head		hrt_queue;	/* RS queue */
>  	struct ptlrpc_hr_partition	*hrt_partition;
>  };
>  
> @@ -212,7 +212,7 @@ struct ptlrpc_hr_service {
>  	/* CPU partition table, it's just cfs_cpt_tab for now */
>  	struct cfs_cpt_table		*hr_cpt_table;
>  	/** controller sleep waitq */
> -	wait_queue_head_t			hr_waitq;
> +	wait_queue_head_t		hr_waitq;
>  	unsigned int			hr_stopping;
>  	/** roundrobin rotor for non-affinity service */
>  	unsigned int			hr_rotor;
> @@ -236,7 +236,6 @@ struct ptlrpc_hr_service {
>  	    svcpt->scp_service->srv_cptable == ptlrpc_hr.hr_cpt_table) {
>  		/* directly match partition */
>  		hrp = ptlrpc_hr.hr_partitions[svcpt->scp_cpt];
> -
>  	} else {
>  		rotor = ptlrpc_hr.hr_rotor++;
>  		rotor %= cfs_cpt_number(ptlrpc_hr.hr_cpt_table);
> @@ -440,7 +439,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
>  		nthrs = max(tc->tc_nthrs_base,
>  			    tc->tc_nthrs_max / svc->srv_ncpts);
>  	}
> - out:
> +out:
>  	nthrs = max(nthrs, tc->tc_nthrs_init);
>  	svc->srv_nthrs_cpt_limit = nthrs;
>  	svc->srv_nthrs_cpt_init = init;
> @@ -459,7 +458,7 @@ static void ptlrpc_at_timer(struct timer_list *t)
>  ptlrpc_service_part_init(struct ptlrpc_service *svc,
>  			 struct ptlrpc_service_part *svcpt, int cpt)
>  {
> -	struct ptlrpc_at_array	*array;
> +	struct ptlrpc_at_array *array;
>  	int size;
>  	int index;
>  	int rc;
> @@ -1125,7 +1124,6 @@ static int ptlrpc_at_send_early_reply(struct ptlrpc_request *req)
>  		goto out_put;
>  
>  	rc = ptlrpc_send_reply(reqcopy, PTLRPC_REPLY_EARLY);
> -
>  	if (!rc) {
>  		/* Adjust our own deadline to what we told the client */
>  		req->rq_deadline = newdl;
> @@ -1316,7 +1314,7 @@ static void ptlrpc_server_hpreq_fini(struct ptlrpc_request *req)
>  static int ptlrpc_server_request_add(struct ptlrpc_service_part *svcpt,
>  				     struct ptlrpc_request *req)
>  {
> -	int	rc;
> +	int rc;
>  
>  	rc = ptlrpc_server_hpreq_init(svcpt, req);
>  	if (rc < 0)
> @@ -2412,7 +2410,7 @@ int ptlrpc_start_threads(struct ptlrpc_service *svc)
>  	}
>  
>  	return 0;
> - failed:
> +failed:
>  	CERROR("cannot start %s thread #%d_%d: rc %d\n",
>  	       svc->srv_thread_name, i, j, rc);
>  	ptlrpc_stop_all_threads(svc);
> @@ -2432,7 +2430,7 @@ int ptlrpc_start_thread(struct ptlrpc_service_part *svcpt, int wait)
>  	       svc->srv_name, svcpt->scp_cpt, svcpt->scp_nthrs_running,
>  	       svc->srv_nthrs_cpt_init, svc->srv_nthrs_cpt_limit);
>  
> - again:
> +again:
>  	if (unlikely(svc->srv_is_stopping))
>  		return -ESRCH;
>  
> -- 
> 1.8.3.1

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes
  2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
                   ` (25 preceding siblings ...)
  2019-01-31 17:19 ` [lustre-devel] [PATCH 26/26] o2iblnd: " James Simmons
@ 2019-02-04  8:44 ` Andreas Dilger
  26 siblings, 0 replies; 30+ messages in thread
From: Andreas Dilger @ 2019-02-04  8:44 UTC (permalink / raw)
  To: lustre-devel

On Jan 31, 2019, at 10:19, James Simmons <jsimmons@infradead.org> wrote:
> 
> 
> 192 files changed, 6805 insertions(+), 6827 deletions(-)

Reviewed-by: Andreas Dilger <adilger@whamcloud.com>

Cheers, Andreas
---
Andreas Dilger
Principal Lustre Architect
Whamcloud

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2019-02-04  8:44 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-31 17:19 [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 01/26] lnet: use kernel types for lnet core kernel code James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 02/26] lnet: use kernel types for lnet klnd " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 03/26] lnet: use kernel types for lnet selftest " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 04/26] ptlrpc: use kernel types for " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 05/26] lustre: use kernel types for lustre internal headers James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 06/26] ldlm: use kernel types for kernel code James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 07/26] obdclass: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 08/26] lustre: convert remaining code to kernel types James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 09/26] lustre: cleanup white spaces in fid and fld layer James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 10/26] ldlm: cleanup white spaces James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 11/26] llite: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 12/26] lmv: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 13/26] lov: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 14/26] mdc: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 15/26] mgc: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 16/26] obdclass: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 17/26] obdecho: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 18/26] osc: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 19/26] ptlrpc: " James Simmons
2019-02-04  3:18   ` NeilBrown
2019-01-31 17:19 ` [lustre-devel] [PATCH 20/26] lustre: first batch to cleanup white spaces in internal headers James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 21/26] lustre: second " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 22/26] lustre: last " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 23/26] libcfs: cleanup white spaces James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 24/26] lnet: " James Simmons
2019-02-04  3:13   ` NeilBrown
2019-01-31 17:19 ` [lustre-devel] [PATCH 25/26] socklnd: " James Simmons
2019-01-31 17:19 ` [lustre-devel] [PATCH 26/26] o2iblnd: " James Simmons
2019-02-04  8:44 ` [lustre-devel] [PATCH 00/26] lustre: cleanups with no code changes Andreas Dilger
