* [PATCH v3 00/24] NFS/RDMA client for next
@ 2018-12-10 16:29 Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 01/24] xprtrdma: Prevent leak of rpcrdma_rep objects Chuck Lever
                   ` (24 more replies)
  0 siblings, 25 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Hi Anna, I'd like to see these patches merged into next.

There have been several regressions related to the ->send_request
changes merged into v4.20. As a result, this series contains some
fixes and clean-ups that came out of testing and close code audit
while working on those regressions.

The soft IRQ warnings and DMAR faults that I observed with krb5
flavors on NFS/RDMA are resolved by a prototype fix that delays
the xprt_wake_pending_tasks call at disconnect. This fix is not
ready yet and thus does not appear in this series.

However, use of Kerberos seems to trigger a lot of connection loss.
The dynamic rpcrdma_req allocation patches that were in this series
last time have been dropped because they made it even worse.

"xprtrdma: Prevent leak of rpcrdma_rep objects" is included in this
series for convenience. Please apply that to v4.20-rc. Thanks!

Changes since v2:
- Rebased on v4.20-rc6 to pick up recent fixes
- Patches related to "xprtrdma: Dynamically allocate rpcrdma_reqs"
  have been dropped
- A number of documenting comments have been revised or added
- Several new trace points are introduced


Changes since v1:
- Rebased on v4.20-rc4
- Series includes the full set, not just the RDMA-related fixes
- "Plant XID..." has been improved, based on testing with rxe
- The required rxe driver fix is included for convenience
- "Fix ri_max_segs..." replaces a bogus one-line fix in v1
- The patch description for "Remove support for FMR" was updated

---

Chuck Lever (24):
      xprtrdma: Prevent leak of rpcrdma_rep objects
      IB/rxe: IB_WR_REG_MR does not capture MR's iova field
      xprtrdma: Remove support for FMR memory registration
      xprtrdma: Fix ri_max_segs and the result of ro_maxpages
      xprtrdma: Reduce max_frwr_depth
      xprtrdma: Plant XID in on-the-wire RDMA offset (FRWR)
      xprtrdma: Recognize XDRBUF_SPARSE_PAGES
      xprtrdma: Remove request_module from backchannel
      xprtrdma: Expose transport header errors
      xprtrdma: Simplify locking that protects the rl_allreqs list
      xprtrdma: Cull dprintk() call sites
      xprtrdma: Clean up of xprtrdma chunk trace points
      xprtrdma: Relocate the xprtrdma_mr_map trace points
      xprtrdma: Add trace points for calls to transport switch methods
      NFS: Make "port=" mount option optional for RDMA mounts
      SUNRPC: Remove support for kerberos_v1
      SUNRPC: Fix some kernel doc complaints
      NFS: Fix NFSv4 symbolic trace point output
      SUNRPC: Simplify defining common RPC trace events
      xprtrdma: Trace mapping, alloc, and dereg failures
      xprtrdma: Update comments in frwr_op_send
      xprtrdma: Replace outdated comment for rpcrdma_ep_post
      xprtrdma: Add documenting comment for rpcrdma_buffer_destroy
      xprtrdma: Clarify comments in rpcrdma_ia_remove


 drivers/infiniband/sw/rxe/rxe_req.c      |    1 
 fs/nfs/nfs4trace.h                       |  456 +++++++++++++++++++++---------
 fs/nfs/super.c                           |   10 +
 include/linux/sunrpc/gss_krb5.h          |   39 ---
 include/linux/sunrpc/gss_krb5_enctypes.h |    2 
 include/trace/events/rpcrdma.h           |  216 +++++++++++++-
 include/trace/events/sunrpc.h            |  172 +++++------
 net/sunrpc/Kconfig                       |    3 
 net/sunrpc/auth_gss/Makefile             |    2 
 net/sunrpc/auth_gss/gss_krb5_crypto.c    |  423 ----------------------------
 net/sunrpc/auth_gss/gss_krb5_keys.c      |   53 ---
 net/sunrpc/auth_gss/gss_krb5_mech.c      |  278 ------------------
 net/sunrpc/auth_gss/gss_krb5_seal.c      |   73 -----
 net/sunrpc/auth_gss/gss_krb5_seqnum.c    |  164 -----------
 net/sunrpc/auth_gss/gss_krb5_unseal.c    |   80 -----
 net/sunrpc/auth_gss/gss_krb5_wrap.c      |  254 -----------------
 net/sunrpc/auth_gss/gss_mech_switch.c    |    2 
 net/sunrpc/backchannel_rqst.c            |    2 
 net/sunrpc/xprtmultipath.c               |    4 
 net/sunrpc/xprtrdma/Makefile             |    3 
 net/sunrpc/xprtrdma/backchannel.c        |   25 --
 net/sunrpc/xprtrdma/fmr_ops.c            |  337 ----------------------
 net/sunrpc/xprtrdma/frwr_ops.c           |   44 ++-
 net/sunrpc/xprtrdma/rpc_rdma.c           |   47 ++-
 net/sunrpc/xprtrdma/transport.c          |   56 +---
 net/sunrpc/xprtrdma/verbs.c              |  107 +++----
 net/sunrpc/xprtrdma/xprt_rdma.h          |    9 -
 net/sunrpc/xprtsock.c                    |    2 
 28 files changed, 734 insertions(+), 2130 deletions(-)
 delete mode 100644 net/sunrpc/auth_gss/gss_krb5_seqnum.c
 delete mode 100644 net/sunrpc/xprtrdma/fmr_ops.c

--
Chuck Lever

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v3 01/24] xprtrdma: Prevent leak of rpcrdma_rep objects
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field Chuck Lever
                   ` (23 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

If a reply has been processed but the RPC is later retransmitted
anyway, the req->rl_reply field still contains the only pointer to
the old rpcrdma_rep. When the next reply comes in, the reply handler
will stomp on the rl_reply field, leaking the old rep.

A trace event is added to capture such leaks.

This problem seems to be worsened by the restructuring of the RPC
Call path in v4.20. Fully addressing this issue will require at
least a re-architecture of the disconnect logic, which is not
appropriate during -rc.
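The stomp-and-leak sequence above can be sketched with a toy model
(plain C, not the xprtrdma code itself; the toy_* names and the leak
counter are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the req->rl_reply handoff; toy_* names are invented.
 * reps_outstanding counts rep objects that have not been returned.
 */
struct toy_req { void *rl_reply; };

static int reps_outstanding;

/* Stand-in for rpcrdma_recv_buffer_put(): returns a rep to the pool. */
static void toy_recv_buffer_put(void *rep)
{
	(void)rep;
	reps_outstanding--;
}

/* Stand-in for the tail of rpcrdma_reply_handler(). Without the fix,
 * a stale rl_reply is silently overwritten and its rep is leaked.
 */
static void toy_reply_handler(struct toy_req *req, void *rep, int fixed)
{
	if (fixed && req->rl_reply)
		toy_recv_buffer_put(req->rl_reply);
	req->rl_reply = rep;
	reps_outstanding++;
}
```

Running two replies through the buggy path leaves two reps outstanding
for one request; the fixed path releases the stale rep first.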

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/trace/events/rpcrdma.h |   28 ++++++++++++++++++++++++++++
 net/sunrpc/xprtrdma/rpc_rdma.c |    4 ++++
 2 files changed, 32 insertions(+)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index b093058..602972d 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -917,6 +917,34 @@
 DEFINE_CB_EVENT(xprtrdma_cb_call);
 DEFINE_CB_EVENT(xprtrdma_cb_reply);
 
+TRACE_EVENT(xprtrdma_leaked_rep,
+	TP_PROTO(
+		const struct rpc_rqst *rqst,
+		const struct rpcrdma_rep *rep
+	),
+
+	TP_ARGS(rqst, rep),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, task_id)
+		__field(unsigned int, client_id)
+		__field(u32, xid)
+		__field(const void *, rep)
+	),
+
+	TP_fast_assign(
+		__entry->task_id = rqst->rq_task->tk_pid;
+		__entry->client_id = rqst->rq_task->tk_client->cl_clid;
+		__entry->xid = be32_to_cpu(rqst->rq_xid);
+		__entry->rep = rep;
+	),
+
+	TP_printk("task:%u@%u xid=0x%08x rep=%p",
+		__entry->task_id, __entry->client_id, __entry->xid,
+		__entry->rep
+	)
+);
+
 /**
  ** Server-side RPC/RDMA events
  **/
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 9f53e02..a2eb647 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1356,6 +1356,10 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
 	}
 
 	req = rpcr_to_rdmar(rqst);
+	if (req->rl_reply) {
+		trace_xprtrdma_leaked_rep(rqst, req->rl_reply);
+		rpcrdma_recv_buffer_put(req->rl_reply);
+	}
 	req->rl_reply = rep;
 	rep->rr_rqst = rqst;
 	clear_bit(RPCRDMA_REQ_F_PENDING, &req->rl_flags);



* [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 01/24] xprtrdma: Prevent leak of rpcrdma_rep objects Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-11 14:00   ` Christoph Hellwig
  2018-12-10 16:29 ` [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration Chuck Lever
                   ` (22 subsequent siblings)
  24 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

FRWR memory registration is done with a series of calls and WRs.
1. ULP invokes ib_dma_map_sg()
2. ULP invokes ib_map_mr_sg()
3. ULP posts an IB_WR_REG_MR on the Send queue

Step 2 generates an iova. It is permissible for ULPs to change
this iova (with certain restrictions) between steps 2 and 3.

rxe_map_mr_sg() captures the MR's iova, but when rxe later processes
the REG_MR WR, it ignores the MR's iova field. If a ULP alters the
MR's iova after step 2 but before step 3, rxe never captures that
change.

When the remote sends an RDMA Read targeting that MR, rxe looks up
the R_key, but the altered iova does not match the iova stored in
the MR, causing the RDMA Read request to fail.
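The stale-iova problem can be modeled in miniature (plain C, not the
rxe driver; the toy_* names are invented, and only the iova/rkey fields
of the real MR are represented):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: "mr" is the ULP-visible MR, "rmr" is the responder's
 * internal copy consulted when servicing remote RDMA Reads.
 */
struct toy_mr { uint64_t iova; uint32_t rkey; };

/* Step 2: ib_map_mr_sg() computes an initial iova. */
static void toy_map_mr_sg(struct toy_mr *mr)
{
	mr->iova = 0x10000;
}

/* Step 3: processing the posted IB_WR_REG_MR. Before the fix, rxe
 * copied the keys but not the (possibly updated) iova; with the fix
 * it copies the MR's iova field too.
 */
static void toy_process_reg_wr(struct toy_mr *rmr,
			       const struct toy_mr *mr, int fixed)
{
	rmr->rkey = mr->rkey;
	if (fixed)
		rmr->iova = mr->iova;	/* the one-line fix */
}
```

Altering the iova between steps 2 and 3 leaves the responder's copy
stale on the buggy path, which is exactly the mismatch that makes the
incoming RDMA Read fail.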

Reported-by: Anna Schumaker <schumaker.anna@gmail.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/infiniband/sw/rxe/rxe_req.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 6c361d7..46f62f7 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -643,6 +643,7 @@ int rxe_requester(void *arg)
 			rmr->access = wqe->wr.wr.reg.access;
 			rmr->lkey = wqe->wr.wr.reg.key;
 			rmr->rkey = wqe->wr.wr.reg.key;
+			rmr->iova = wqe->wr.wr.reg.mr->iova;
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
 		} else {



* [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 01/24] xprtrdma: Prevent leak of rpcrdma_rep objects Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-11 14:02   ` Christoph Hellwig
  2018-12-10 16:29 ` [PATCH v3 04/24] xprtrdma: Fix ri_max_segs and the result of ro_maxpages Chuck Lever
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

FMR is not supported on most recent RDMA devices. It is also less
secure than FRWR because an FMR memory registration can expose
adjacent bytes to remote reading or writing. As discussed during the
RDMA BoF at LPC 2018, it is time to remove support for FMR in the
NFS/RDMA client stack.

Note that the NFS/RDMA server side uses either local memory
registration or FRWR; FMR is not used.

There are a few Infiniband/RoCE devices in the kernel tree that do
not appear to support MEM_MGT_EXTENSIONS (FRWR), and therefore will
not support client-side NFS/RDMA after this patch. These are:

 - mthca
 - qib
 - hns (RoCE)

Users of these devices can use NFS/TCP on IPoIB instead.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/Makefile  |    3 
 net/sunrpc/xprtrdma/fmr_ops.c |  337 -----------------------------------------
 net/sunrpc/xprtrdma/verbs.c   |    6 -
 3 files changed, 1 insertion(+), 345 deletions(-)
 delete mode 100644 net/sunrpc/xprtrdma/fmr_ops.c

diff --git a/net/sunrpc/xprtrdma/Makefile b/net/sunrpc/xprtrdma/Makefile
index 8bf19e1..8ed0377 100644
--- a/net/sunrpc/xprtrdma/Makefile
+++ b/net/sunrpc/xprtrdma/Makefile
@@ -1,8 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_SUNRPC_XPRT_RDMA) += rpcrdma.o
 
-rpcrdma-y := transport.o rpc_rdma.o verbs.o \
-	fmr_ops.o frwr_ops.o \
+rpcrdma-y := transport.o rpc_rdma.o verbs.o frwr_ops.o \
 	svc_rdma.o svc_rdma_backchannel.o svc_rdma_transport.o \
 	svc_rdma_sendto.o svc_rdma_recvfrom.o svc_rdma_rw.o \
 	module.o
diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c
deleted file mode 100644
index 7f5632c..0000000
--- a/net/sunrpc/xprtrdma/fmr_ops.c
+++ /dev/null
@@ -1,337 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2015, 2017 Oracle.  All rights reserved.
- * Copyright (c) 2003-2007 Network Appliance, Inc. All rights reserved.
- */
-
-/* Lightweight memory registration using Fast Memory Regions (FMR).
- * Referred to sometimes as MTHCAFMR mode.
- *
- * FMR uses synchronous memory registration and deregistration.
- * FMR registration is known to be fast, but FMR deregistration
- * can take tens of usecs to complete.
- */
-
-/* Normal operation
- *
- * A Memory Region is prepared for RDMA READ or WRITE using the
- * ib_map_phys_fmr verb (fmr_op_map). When the RDMA operation is
- * finished, the Memory Region is unmapped using the ib_unmap_fmr
- * verb (fmr_op_unmap).
- */
-
-#include <linux/sunrpc/svc_rdma.h>
-
-#include "xprt_rdma.h"
-#include <trace/events/rpcrdma.h>
-
-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
-# define RPCDBG_FACILITY	RPCDBG_TRANS
-#endif
-
-/* Maximum scatter/gather per FMR */
-#define RPCRDMA_MAX_FMR_SGES	(64)
-
-/* Access mode of externally registered pages */
-enum {
-	RPCRDMA_FMR_ACCESS_FLAGS	= IB_ACCESS_REMOTE_WRITE |
-					  IB_ACCESS_REMOTE_READ,
-};
-
-bool
-fmr_is_supported(struct rpcrdma_ia *ia)
-{
-	if (!ia->ri_device->alloc_fmr) {
-		pr_info("rpcrdma: 'fmr' mode is not supported by device %s\n",
-			ia->ri_device->name);
-		return false;
-	}
-	return true;
-}
-
-static void
-__fmr_unmap(struct rpcrdma_mr *mr)
-{
-	LIST_HEAD(l);
-	int rc;
-
-	list_add(&mr->fmr.fm_mr->list, &l);
-	rc = ib_unmap_fmr(&l);
-	list_del(&mr->fmr.fm_mr->list);
-	if (rc)
-		pr_err("rpcrdma: final ib_unmap_fmr for %p failed %i\n",
-		       mr, rc);
-}
-
-/* Release an MR.
- */
-static void
-fmr_op_release_mr(struct rpcrdma_mr *mr)
-{
-	int rc;
-
-	kfree(mr->fmr.fm_physaddrs);
-	kfree(mr->mr_sg);
-
-	/* In case this one was left mapped, try to unmap it
-	 * to prevent dealloc_fmr from failing with EBUSY
-	 */
-	__fmr_unmap(mr);
-
-	rc = ib_dealloc_fmr(mr->fmr.fm_mr);
-	if (rc)
-		pr_err("rpcrdma: final ib_dealloc_fmr for %p returned %i\n",
-		       mr, rc);
-
-	kfree(mr);
-}
-
-/* MRs are dynamically allocated, so simply clean up and release the MR.
- * A replacement MR will subsequently be allocated on demand.
- */
-static void
-fmr_mr_recycle_worker(struct work_struct *work)
-{
-	struct rpcrdma_mr *mr = container_of(work, struct rpcrdma_mr, mr_recycle);
-	struct rpcrdma_xprt *r_xprt = mr->mr_xprt;
-
-	trace_xprtrdma_mr_recycle(mr);
-
-	trace_xprtrdma_mr_unmap(mr);
-	ib_dma_unmap_sg(r_xprt->rx_ia.ri_device,
-			mr->mr_sg, mr->mr_nents, mr->mr_dir);
-
-	spin_lock(&r_xprt->rx_buf.rb_mrlock);
-	list_del(&mr->mr_all);
-	r_xprt->rx_stats.mrs_recycled++;
-	spin_unlock(&r_xprt->rx_buf.rb_mrlock);
-	fmr_op_release_mr(mr);
-}
-
-static int
-fmr_op_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr *mr)
-{
-	static struct ib_fmr_attr fmr_attr = {
-		.max_pages	= RPCRDMA_MAX_FMR_SGES,
-		.max_maps	= 1,
-		.page_shift	= PAGE_SHIFT
-	};
-
-	mr->fmr.fm_physaddrs = kcalloc(RPCRDMA_MAX_FMR_SGES,
-				       sizeof(u64), GFP_KERNEL);
-	if (!mr->fmr.fm_physaddrs)
-		goto out_free;
-
-	mr->mr_sg = kcalloc(RPCRDMA_MAX_FMR_SGES,
-			    sizeof(*mr->mr_sg), GFP_KERNEL);
-	if (!mr->mr_sg)
-		goto out_free;
-
-	sg_init_table(mr->mr_sg, RPCRDMA_MAX_FMR_SGES);
-
-	mr->fmr.fm_mr = ib_alloc_fmr(ia->ri_pd, RPCRDMA_FMR_ACCESS_FLAGS,
-				     &fmr_attr);
-	if (IS_ERR(mr->fmr.fm_mr))
-		goto out_fmr_err;
-
-	INIT_LIST_HEAD(&mr->mr_list);
-	INIT_WORK(&mr->mr_recycle, fmr_mr_recycle_worker);
-	return 0;
-
-out_fmr_err:
-	dprintk("RPC:       %s: ib_alloc_fmr returned %ld\n", __func__,
-		PTR_ERR(mr->fmr.fm_mr));
-
-out_free:
-	kfree(mr->mr_sg);
-	kfree(mr->fmr.fm_physaddrs);
-	return -ENOMEM;
-}
-
-/* On success, sets:
- *	ep->rep_attr.cap.max_send_wr
- *	ep->rep_attr.cap.max_recv_wr
- *	cdata->max_requests
- *	ia->ri_max_segs
- */
-static int
-fmr_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep,
-	    struct rpcrdma_create_data_internal *cdata)
-{
-	int max_qp_wr;
-
-	max_qp_wr = ia->ri_device->attrs.max_qp_wr;
-	max_qp_wr -= RPCRDMA_BACKWARD_WRS;
-	max_qp_wr -= 1;
-	if (max_qp_wr < RPCRDMA_MIN_SLOT_TABLE)
-		return -ENOMEM;
-	if (cdata->max_requests > max_qp_wr)
-		cdata->max_requests = max_qp_wr;
-	ep->rep_attr.cap.max_send_wr = cdata->max_requests;
-	ep->rep_attr.cap.max_send_wr += RPCRDMA_BACKWARD_WRS;
-	ep->rep_attr.cap.max_send_wr += 1; /* for ib_drain_sq */
-	ep->rep_attr.cap.max_recv_wr = cdata->max_requests;
-	ep->rep_attr.cap.max_recv_wr += RPCRDMA_BACKWARD_WRS;
-	ep->rep_attr.cap.max_recv_wr += 1; /* for ib_drain_rq */
-
-	ia->ri_max_segs = max_t(unsigned int, 1, RPCRDMA_MAX_DATA_SEGS /
-				RPCRDMA_MAX_FMR_SGES);
-	ia->ri_max_segs += 2;	/* segments for head and tail buffers */
-	return 0;
-}
-
-/* FMR mode conveys up to 64 pages of payload per chunk segment.
- */
-static size_t
-fmr_op_maxpages(struct rpcrdma_xprt *r_xprt)
-{
-	return min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS,
-		     RPCRDMA_MAX_HDR_SEGS * RPCRDMA_MAX_FMR_SGES);
-}
-
-/* Use the ib_map_phys_fmr() verb to register a memory region
- * for remote access via RDMA READ or RDMA WRITE.
- */
-static struct rpcrdma_mr_seg *
-fmr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
-	   int nsegs, bool writing, struct rpcrdma_mr **out)
-{
-	struct rpcrdma_mr_seg *seg1 = seg;
-	int len, pageoff, i, rc;
-	struct rpcrdma_mr *mr;
-	u64 *dma_pages;
-
-	mr = rpcrdma_mr_get(r_xprt);
-	if (!mr)
-		return ERR_PTR(-EAGAIN);
-
-	pageoff = offset_in_page(seg1->mr_offset);
-	seg1->mr_offset -= pageoff;	/* start of page */
-	seg1->mr_len += pageoff;
-	len = -pageoff;
-	if (nsegs > RPCRDMA_MAX_FMR_SGES)
-		nsegs = RPCRDMA_MAX_FMR_SGES;
-	for (i = 0; i < nsegs;) {
-		if (seg->mr_page)
-			sg_set_page(&mr->mr_sg[i],
-				    seg->mr_page,
-				    seg->mr_len,
-				    offset_in_page(seg->mr_offset));
-		else
-			sg_set_buf(&mr->mr_sg[i], seg->mr_offset,
-				   seg->mr_len);
-		len += seg->mr_len;
-		++seg;
-		++i;
-		/* Check for holes */
-		if ((i < nsegs && offset_in_page(seg->mr_offset)) ||
-		    offset_in_page((seg-1)->mr_offset + (seg-1)->mr_len))
-			break;
-	}
-	mr->mr_dir = rpcrdma_data_dir(writing);
-
-	mr->mr_nents = ib_dma_map_sg(r_xprt->rx_ia.ri_device,
-				     mr->mr_sg, i, mr->mr_dir);
-	if (!mr->mr_nents)
-		goto out_dmamap_err;
-	trace_xprtrdma_mr_map(mr);
-
-	for (i = 0, dma_pages = mr->fmr.fm_physaddrs; i < mr->mr_nents; i++)
-		dma_pages[i] = sg_dma_address(&mr->mr_sg[i]);
-	rc = ib_map_phys_fmr(mr->fmr.fm_mr, dma_pages, mr->mr_nents,
-			     dma_pages[0]);
-	if (rc)
-		goto out_maperr;
-
-	mr->mr_handle = mr->fmr.fm_mr->rkey;
-	mr->mr_length = len;
-	mr->mr_offset = dma_pages[0] + pageoff;
-
-	*out = mr;
-	return seg;
-
-out_dmamap_err:
-	pr_err("rpcrdma: failed to DMA map sg %p sg_nents %d\n",
-	       mr->mr_sg, i);
-	rpcrdma_mr_put(mr);
-	return ERR_PTR(-EIO);
-
-out_maperr:
-	pr_err("rpcrdma: ib_map_phys_fmr %u@0x%llx+%i (%d) status %i\n",
-	       len, (unsigned long long)dma_pages[0],
-	       pageoff, mr->mr_nents, rc);
-	rpcrdma_mr_unmap_and_put(mr);
-	return ERR_PTR(-EIO);
-}
-
-/* Post Send WR containing the RPC Call message.
- */
-static int
-fmr_op_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req)
-{
-	return ib_post_send(ia->ri_id->qp, &req->rl_sendctx->sc_wr, NULL);
-}
-
-/* Invalidate all memory regions that were registered for "req".
- *
- * Sleeps until it is safe for the host CPU to access the
- * previously mapped memory regions.
- *
- * Caller ensures that @mrs is not empty before the call. This
- * function empties the list.
- */
-static void
-fmr_op_unmap_sync(struct rpcrdma_xprt *r_xprt, struct list_head *mrs)
-{
-	struct rpcrdma_mr *mr;
-	LIST_HEAD(unmap_list);
-	int rc;
-
-	/* ORDER: Invalidate all of the req's MRs first
-	 *
-	 * ib_unmap_fmr() is slow, so use a single call instead
-	 * of one call per mapped FMR.
-	 */
-	list_for_each_entry(mr, mrs, mr_list) {
-		dprintk("RPC:       %s: unmapping fmr %p\n",
-			__func__, &mr->fmr);
-		trace_xprtrdma_mr_localinv(mr);
-		list_add_tail(&mr->fmr.fm_mr->list, &unmap_list);
-	}
-	r_xprt->rx_stats.local_inv_needed++;
-	rc = ib_unmap_fmr(&unmap_list);
-	if (rc)
-		goto out_release;
-
-	/* ORDER: Now DMA unmap all of the req's MRs, and return
-	 * them to the free MW list.
-	 */
-	while (!list_empty(mrs)) {
-		mr = rpcrdma_mr_pop(mrs);
-		list_del(&mr->fmr.fm_mr->list);
-		rpcrdma_mr_unmap_and_put(mr);
-	}
-
-	return;
-
-out_release:
-	pr_err("rpcrdma: ib_unmap_fmr failed (%i)\n", rc);
-
-	while (!list_empty(mrs)) {
-		mr = rpcrdma_mr_pop(mrs);
-		list_del(&mr->fmr.fm_mr->list);
-		rpcrdma_mr_recycle(mr);
-	}
-}
-
-const struct rpcrdma_memreg_ops rpcrdma_fmr_memreg_ops = {
-	.ro_map				= fmr_op_map,
-	.ro_send			= fmr_op_send,
-	.ro_unmap_sync			= fmr_op_unmap_sync,
-	.ro_open			= fmr_op_open,
-	.ro_maxpages			= fmr_op_maxpages,
-	.ro_init_mr			= fmr_op_init_mr,
-	.ro_release_mr			= fmr_op_release_mr,
-	.ro_displayname			= "fmr",
-	.ro_send_w_inv_ok		= 0,
-};
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 3ddba94..a8d4def 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -434,12 +434,6 @@
 			break;
 		}
 		/*FALLTHROUGH*/
-	case RPCRDMA_MTHCAFMR:
-		if (fmr_is_supported(ia)) {
-			ia->ri_ops = &rpcrdma_fmr_memreg_ops;
-			break;
-		}
-		/*FALLTHROUGH*/
 	default:
 		pr_err("rpcrdma: Device %s does not support memreg mode %d\n",
 		       ia->ri_device->name, xprt_rdma_memreg_strategy);



* [PATCH v3 04/24] xprtrdma: Fix ri_max_segs and the result of ro_maxpages
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (2 preceding siblings ...)
  2018-12-10 16:29 ` [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth Chuck Lever
                   ` (20 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

With certain combinations of krb5i/p, MR size, and r/wsize, I/O can
fail with EMSGSIZE. This is because the calculated value of
ri_max_segs (the max number of MRs per RPC) exceeded
RPCRDMA_MAX_HDR_SEGS, which caused Read or Write list encoding to
walk off the end of the transport header.

Once that was addressed, the ro_maxpages result has to be corrected
to account for the number of MRs needed for Reply chunks, which is
2 fewer than for a normal Read or Write chunk.
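The two corrections can be sketched as plain functions (a toy model
with hypothetical constant values; the kernel's actual
RPCRDMA_MAX_DATA_SEGS and RPCRDMA_MAX_HDR_SEGS values may differ):

```c
#include <assert.h>

#define TOY_MAX_DATA_SEGS 64	/* stand-in for RPCRDMA_MAX_DATA_SEGS */
#define TOY_MAX_HDR_SEGS  16	/* stand-in for RPCRDMA_MAX_HDR_SEGS */

/* Sketch of the corrected ri_max_segs calculation: clamp to the
 * number of segments the transport header can actually encode.
 */
static unsigned int toy_max_segs(unsigned int frwr_depth)
{
	unsigned int segs = TOY_MAX_DATA_SEGS / frwr_depth;

	if (segs < 1)
		segs = 1;
	segs += 2;	/* segments for head and tail buffers */
	if (segs > TOY_MAX_HDR_SEGS)
		segs = TOY_MAX_HDR_SEGS;
	return segs;
}

/* Sketch of the corrected ro_maxpages result: 2 of the segments are
 * reserved for head and tail, so only (max_segs - 2) MRs carry payload.
 */
static unsigned int toy_maxpages(unsigned int max_segs,
				 unsigned int frwr_depth)
{
	unsigned int pages = (max_segs - 2) * frwr_depth;

	return pages < TOY_MAX_DATA_SEGS ? pages : TOY_MAX_DATA_SEGS;
}
```

With an FRWR depth of 1, the unclamped value (64 + 2 = 66) would walk
past the toy header limit of 16; the clamp keeps encoding in bounds.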

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/frwr_ops.c  |    7 +++++--
 net/sunrpc/xprtrdma/transport.c |    6 ++++--
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index fc6378cc..ae94de9 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -242,7 +242,10 @@
 
 	ia->ri_max_segs = max_t(unsigned int, 1, RPCRDMA_MAX_DATA_SEGS /
 				ia->ri_max_frwr_depth);
-	ia->ri_max_segs += 2;	/* segments for head and tail buffers */
+	/* Reply chunks require segments for head and tail buffers */
+	ia->ri_max_segs += 2;
+	if (ia->ri_max_segs > RPCRDMA_MAX_HDR_SEGS)
+		ia->ri_max_segs = RPCRDMA_MAX_HDR_SEGS;
 	return 0;
 }
 
@@ -255,7 +258,7 @@
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 
 	return min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS,
-		     RPCRDMA_MAX_HDR_SEGS * ia->ri_max_frwr_depth);
+		     (ia->ri_max_segs - 2) * ia->ri_max_frwr_depth);
 }
 
 static void
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index ae2a838..2ba8be1 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -696,8 +696,10 @@
  *	%-ENOTCONN if the caller should reconnect and call again
  *	%-EAGAIN if the caller should call again
  *	%-ENOBUFS if the caller should call again after a delay
- *	%-EIO if a permanent error occurred and the request was not
- *		sent. Do not try to send this message again.
+ *	%-EMSGSIZE if encoding ran out of buffer space. The request
+ *		was not sent. Do not try to send this message again.
+ *	%-EIO if an I/O error occurred. The request was not sent.
+ *		Do not try to send this message again.
  */
 static int
 xprt_rdma_send_request(struct rpc_rqst *rqst)



* [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (3 preceding siblings ...)
  2018-12-10 16:29 ` [PATCH v3 04/24] xprtrdma: Fix ri_max_segs and the result of ro_maxpages Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-11 14:02   ` Christoph Hellwig
  2018-12-10 16:29 ` [PATCH v3 06/24] xprtrdma: Plant XID in on-the-wire RDMA offset (FRWR) Chuck Lever
                   ` (19 subsequent siblings)
  24 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Some devices advertise a large max_fast_reg_page_list_len
capability, but perform optimally when MRs are significantly smaller
than that depth -- probably when the MR itself is no larger than a
page.

By default, the RDMA R/W core API uses max_sge_rd as the maximum
page depth for MRs. For some devices, the value of max_sge_rd is
1, which is also not optimal. Thus, when max_sge_rd is larger than
1, use that value. Otherwise use the value of the
max_fast_reg_page_list_len attribute.

I've tested this with a couple of devices, and it reproducibly
improves the throughput of large I/Os by several percent.
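The selection logic described above amounts to a small helper (a
sketch only; the parameter names mirror the ib_device_attr fields but
the function and constant are illustrative, with a hypothetical
RPCRDMA_MAX_DATA_SEGS value):

```c
#include <assert.h>

#define TOY_MAX_DATA_SEGS 64	/* stand-in for RPCRDMA_MAX_DATA_SEGS */

/* Prefer max_sge_rd when the device reports a useful value (> 1);
 * otherwise fall back to max_fast_reg_page_list_len. Either way,
 * never exceed the transport's own segment limit.
 */
static unsigned int toy_frwr_depth(unsigned int max_sge_rd,
				   unsigned int max_fast_reg_page_list_len)
{
	unsigned int depth;

	depth = (max_sge_rd > 1) ? max_sge_rd : max_fast_reg_page_list_len;
	if (depth > TOY_MAX_DATA_SEGS)
		depth = TOY_MAX_DATA_SEGS;
	return depth;
}
```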

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/frwr_ops.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index ae94de9..72c6d32 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -191,10 +191,17 @@
 	if (attrs->device_cap_flags & IB_DEVICE_SG_GAPS_REG)
 		ia->ri_mrtype = IB_MR_TYPE_SG_GAPS;
 
-	ia->ri_max_frwr_depth =
-			min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS,
-			      attrs->max_fast_reg_page_list_len);
-	dprintk("RPC:       %s: device's max FR page list len = %u\n",
+	/* Quirk: Some devices advertise a large max_fast_reg_page_list_len
+	 * capability, but perform optimally when the MRs are not larger
+	 * than a page.
+	 */
+	if (attrs->max_sge_rd > 1)
+		ia->ri_max_frwr_depth = attrs->max_sge_rd;
+	else
+		ia->ri_max_frwr_depth = attrs->max_fast_reg_page_list_len;
+	if (ia->ri_max_frwr_depth > RPCRDMA_MAX_DATA_SEGS)
+		ia->ri_max_frwr_depth = RPCRDMA_MAX_DATA_SEGS;
+	dprintk("RPC:       %s: max FR page list depth = %u\n",
 		__func__, ia->ri_max_frwr_depth);
 
 	/* Add room for frwr register and invalidate WRs.



* [PATCH v3 06/24] xprtrdma: Plant XID in on-the-wire RDMA offset (FRWR)
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (4 preceding siblings ...)
  2018-12-10 16:29 ` [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-10 16:29 ` [PATCH v3 07/24] xprtrdma: Recognize XDRBUF_SPARSE_PAGES Chuck Lever
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Place the associated RPC transaction's XID in the upper 32 bits of
each RDMA segment's rdma_offset field. There are two reasons to do
this:

- The R_key only has 8 bits that are different from registration to
  registration. The XID adds more uniqueness to each RDMA segment to
  reduce the likelihood of a software bug on the server reading from
  or writing into memory it's not supposed to.

- On-the-wire RDMA Read and Write requests do not otherwise carry
  any identifier that matches them up to an RPC. The XID in the
  upper 32 bits will act as an eye-catcher in network captures.
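The bit manipulation itself, extracted from the frwr_op_map() hunk
below into a standalone sketch (htonl() stands in for the kernel's
cpu_to_be32()):

```c
#include <assert.h>
#include <stdint.h>
#include <arpa/inet.h>	/* htonl() stands in for cpu_to_be32() */

/* Keep the low 32 bits of the registered iova and plant the
 * big-endian XID in the upper 32 bits as an eye-catcher.
 */
static uint64_t plant_xid(uint64_t iova, uint32_t xid)
{
	iova &= 0x00000000ffffffffULL;
	iova |= ((uint64_t)htonl(xid)) << 32;
	return iova;
}
```

The low half of the offset is untouched, so the segment still maps the
same registered bytes; only the high half carries the XID.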

Suggested-by: Tom Talpey <ttalpey@microsoft.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/frwr_ops.c  |    4 +++-
 net/sunrpc/xprtrdma/rpc_rdma.c  |    6 +++---
 net/sunrpc/xprtrdma/xprt_rdma.h |    2 +-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 72c6d32..e758c0d 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -347,7 +347,7 @@
  */
 static struct rpcrdma_mr_seg *
 frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
-	    int nsegs, bool writing, struct rpcrdma_mr **out)
+	    int nsegs, bool writing, u32 xid, struct rpcrdma_mr **out)
 {
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 	bool holes_ok = ia->ri_mrtype == IB_MR_TYPE_SG_GAPS;
@@ -401,6 +401,8 @@
 	if (unlikely(n != mr->mr_nents))
 		goto out_mapmr_err;
 
+	ibmr->iova &= 0x00000000ffffffff;
+	ibmr->iova |= ((u64)cpu_to_be32(xid)) << 32;
 	key = (u8)(ibmr->rkey & 0x000000FF);
 	ib_update_fast_reg_key(ibmr, ++key);
 
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index a2eb647..4cf0855 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -357,7 +357,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 
 	do {
 		seg = r_xprt->rx_ia.ri_ops->ro_map(r_xprt, seg, nsegs,
-						   false, &mr);
+						   false, rqst->rq_xid, &mr);
 		if (IS_ERR(seg))
 			return PTR_ERR(seg);
 		rpcrdma_mr_push(mr, &req->rl_registered);
@@ -415,7 +415,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 	nchunks = 0;
 	do {
 		seg = r_xprt->rx_ia.ri_ops->ro_map(r_xprt, seg, nsegs,
-						   true, &mr);
+						   true, rqst->rq_xid, &mr);
 		if (IS_ERR(seg))
 			return PTR_ERR(seg);
 		rpcrdma_mr_push(mr, &req->rl_registered);
@@ -473,7 +473,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 	nchunks = 0;
 	do {
 		seg = r_xprt->rx_ia.ri_ops->ro_map(r_xprt, seg, nsegs,
-						   true, &mr);
+						   true, rqst->rq_xid, &mr);
 		if (IS_ERR(seg))
 			return PTR_ERR(seg);
 		rpcrdma_mr_push(mr, &req->rl_registered);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index a13ccb6..2ae1ee2 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -472,7 +472,7 @@ struct rpcrdma_memreg_ops {
 	struct rpcrdma_mr_seg *
 			(*ro_map)(struct rpcrdma_xprt *,
 				  struct rpcrdma_mr_seg *, int, bool,
-				  struct rpcrdma_mr **);
+				  u32, struct rpcrdma_mr **);
 	int		(*ro_send)(struct rpcrdma_ia *ia,
 				   struct rpcrdma_req *req);
 	void		(*ro_reminv)(struct rpcrdma_rep *rep,



* [PATCH v3 07/24] xprtrdma: Recognize XDRBUF_SPARSE_PAGES
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (5 preceding siblings ...)
  2018-12-10 16:29 ` [PATCH v3 06/24] xprtrdma: Plant XID in on-the-wire RDMA offset (FRWR) Chuck Lever
@ 2018-12-10 16:29 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 08/24] xprtrdma: Remove request_module from backchannel Chuck Lever
                   ` (17 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:29 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Commit 431f6eb3570f ("SUNRPC: Add a label for RPC calls that require
allocation on receive") didn't update similar logic in rpc_rdma.c.
I don't think this is a bug, per se; this patch just adds more
careful checking for broken upper layer behavior.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 4cf0855..601dbe5 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -218,11 +218,12 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 	ppages = xdrbuf->pages + (xdrbuf->page_base >> PAGE_SHIFT);
 	page_base = offset_in_page(xdrbuf->page_base);
 	while (len) {
-		if (unlikely(!*ppages)) {
-			/* XXX: Certain upper layer operations do
-			 *	not provide receive buffer pages.
-			 */
-			*ppages = alloc_page(GFP_ATOMIC);
+		/* ACL likes to be lazy in allocating pages - ACLs
+		 * are small by default but can get huge.
+		 */
+		if (unlikely(xdrbuf->flags & XDRBUF_SPARSE_PAGES)) {
+			if (!*ppages)
+				*ppages = alloc_page(GFP_ATOMIC);
 			if (!*ppages)
 				return -ENOBUFS;
 		}



* [PATCH v3 08/24] xprtrdma: Remove request_module from backchannel
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (6 preceding siblings ...)
  2018-12-10 16:29 ` [PATCH v3 07/24] xprtrdma: Recognize XDRBUF_SPARSE_PAGES Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 09/24] xprtrdma: Expose transport header errors Chuck Lever
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Since commit ffe1f0df5862 ("rpcrdma: Merge svcrdma and xprtrdma
modules into one"), the forward and backchannel components are part
of the same kernel module. A separate request_module() call in the
backchannel code is no longer necessary.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/backchannel.c |    2 --
 1 file changed, 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index e5b367a..e31cf26 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -5,7 +5,6 @@
  * Support for backward direction RPCs on RPC/RDMA.
  */
 
-#include <linux/module.h>
 #include <linux/sunrpc/xprt.h>
 #include <linux/sunrpc/svc.h>
 #include <linux/sunrpc/svc_xprt.h>
@@ -101,7 +100,6 @@ int xprt_rdma_bc_setup(struct rpc_xprt *xprt, unsigned int reqs)
 		goto out_free;
 
 	r_xprt->rx_buf.rb_bc_srv_max_requests = reqs;
-	request_module("svcrdma");
 	trace_xprtrdma_cb_setup(r_xprt, reqs);
 	return 0;
 



* [PATCH v3 09/24] xprtrdma: Expose transport header errors
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (7 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 08/24] xprtrdma: Remove request_module from backchannel Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 10/24] xprtrdma: Simplify locking that protects the rl_allreqs list Chuck Lever
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

For better observability of parsing errors, return the error code
generated by the decoders to the upper layer consumer.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/rpc_rdma.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 601dbe5..2bb9052 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1249,7 +1249,6 @@ void rpcrdma_complete_rqst(struct rpcrdma_rep *rep)
 out_badheader:
 	trace_xprtrdma_reply_hdr(rep);
 	r_xprt->rx_stats.bad_reply_count++;
-	status = -EIO;
 	goto out;
 }
 



* [PATCH v3 10/24] xprtrdma: Simplify locking that protects the rl_allreqs list
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (8 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 09/24] xprtrdma: Expose transport header errors Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 11/24] xprtrdma: Cull dprintk() call sites Chuck Lever
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Clean up: There's little chance of contention between the use of
rb_lock and rb_reqslock, so merge the two. This avoids having to
take both in some (possibly future) cases.

Transport tear-down is already serialized, thus there is no need for
locking at all when destroying rpcrdma_reqs.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/backchannel.c |   20 +++-----------------
 net/sunrpc/xprtrdma/verbs.c       |   31 +++++++++++++++++--------------
 net/sunrpc/xprtrdma/xprt_rdma.h   |    7 +++----
 3 files changed, 23 insertions(+), 35 deletions(-)

diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index e31cf26..41d263c 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -19,29 +19,16 @@
 
 #undef RPCRDMA_BACKCHANNEL_DEBUG
 
-static void rpcrdma_bc_free_rqst(struct rpcrdma_xprt *r_xprt,
-				 struct rpc_rqst *rqst)
-{
-	struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
-	struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
-
-	spin_lock(&buf->rb_reqslock);
-	list_del(&req->rl_all);
-	spin_unlock(&buf->rb_reqslock);
-
-	rpcrdma_destroy_req(req);
-}
-
 static int rpcrdma_bc_setup_reqs(struct rpcrdma_xprt *r_xprt,
 				 unsigned int count)
 {
 	struct rpc_xprt *xprt = &r_xprt->rx_xprt;
+	struct rpcrdma_req *req;
 	struct rpc_rqst *rqst;
 	unsigned int i;
 
 	for (i = 0; i < (count << 1); i++) {
 		struct rpcrdma_regbuf *rb;
-		struct rpcrdma_req *req;
 		size_t size;
 
 		req = rpcrdma_create_req(r_xprt);
@@ -67,7 +54,7 @@ static int rpcrdma_bc_setup_reqs(struct rpcrdma_xprt *r_xprt,
 	return 0;
 
 out_fail:
-	rpcrdma_bc_free_rqst(r_xprt, rqst);
+	rpcrdma_req_destroy(req);
 	return -ENOMEM;
 }
 
@@ -225,7 +212,6 @@ int xprt_rdma_bc_send_reply(struct rpc_rqst *rqst)
  */
 void xprt_rdma_bc_destroy(struct rpc_xprt *xprt, unsigned int reqs)
 {
-	struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
 	struct rpc_rqst *rqst, *tmp;
 
 	spin_lock(&xprt->bc_pa_lock);
@@ -233,7 +219,7 @@ void xprt_rdma_bc_destroy(struct rpc_xprt *xprt, unsigned int reqs)
 		list_del(&rqst->rq_bc_pa_list);
 		spin_unlock(&xprt->bc_pa_lock);
 
-		rpcrdma_bc_free_rqst(r_xprt, rqst);
+		rpcrdma_req_destroy(rpcr_to_rdmar(rqst));
 
 		spin_lock(&xprt->bc_pa_lock);
 	}
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index a8d4def..bd6bc45 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1083,9 +1083,9 @@ struct rpcrdma_req *
 	req->rl_buffer = buffer;
 	INIT_LIST_HEAD(&req->rl_registered);
 
-	spin_lock(&buffer->rb_reqslock);
+	spin_lock(&buffer->rb_lock);
 	list_add(&req->rl_all, &buffer->rb_allreqs);
-	spin_unlock(&buffer->rb_reqslock);
+	spin_unlock(&buffer->rb_lock);
 	return req;
 }
 
@@ -1153,7 +1153,6 @@ struct rpcrdma_req *
 
 	INIT_LIST_HEAD(&buf->rb_send_bufs);
 	INIT_LIST_HEAD(&buf->rb_allreqs);
-	spin_lock_init(&buf->rb_reqslock);
 	for (i = 0; i < buf->rb_max_requests; i++) {
 		struct rpcrdma_req *req;
 
@@ -1188,9 +1187,18 @@ struct rpcrdma_req *
 	kfree(rep);
 }
 
+/**
+ * rpcrdma_req_destroy - Destroy an rpcrdma_req object
+ * @req: unused object to be destroyed
+ *
+ * This function assumes that the caller prevents concurrent device
+ * unload and transport tear-down.
+ */
 void
-rpcrdma_destroy_req(struct rpcrdma_req *req)
+rpcrdma_req_destroy(struct rpcrdma_req *req)
 {
+	list_del(&req->rl_all);
+
 	rpcrdma_free_regbuf(req->rl_recvbuf);
 	rpcrdma_free_regbuf(req->rl_sendbuf);
 	rpcrdma_free_regbuf(req->rl_rdmabuf);
@@ -1244,19 +1252,14 @@ struct rpcrdma_req *
 		rpcrdma_destroy_rep(rep);
 	}
 
-	spin_lock(&buf->rb_reqslock);
-	while (!list_empty(&buf->rb_allreqs)) {
+	while (!list_empty(&buf->rb_send_bufs)) {
 		struct rpcrdma_req *req;
 
-		req = list_first_entry(&buf->rb_allreqs,
-				       struct rpcrdma_req, rl_all);
-		list_del(&req->rl_all);
-
-		spin_unlock(&buf->rb_reqslock);
-		rpcrdma_destroy_req(req);
-		spin_lock(&buf->rb_reqslock);
+		req = list_first_entry(&buf->rb_send_bufs,
+				       struct rpcrdma_req, rl_list);
+		list_del(&req->rl_list);
+		rpcrdma_req_destroy(req);
 	}
-	spin_unlock(&buf->rb_reqslock);
 
 	rpcrdma_mrs_destroy(buf);
 }
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 2ae1ee2..e1c6235 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -401,15 +401,14 @@ struct rpcrdma_buffer {
 	spinlock_t		rb_lock;	/* protect buf lists */
 	struct list_head	rb_send_bufs;
 	struct list_head	rb_recv_bufs;
+	struct list_head	rb_allreqs;
+
 	unsigned long		rb_flags;
 	u32			rb_max_requests;
 	u32			rb_credits;	/* most recent credit grant */
 	int			rb_posted_receives;
 
 	u32			rb_bc_srv_max_requests;
-	spinlock_t		rb_reqslock;	/* protect rb_allreqs */
-	struct list_head	rb_allreqs;
-
 	u32			rb_bc_max_requests;
 
 	struct delayed_work	rb_refresh_worker;
@@ -566,7 +565,7 @@ int rpcrdma_ep_post(struct rpcrdma_ia *, struct rpcrdma_ep *,
  * Buffer calls - xprtrdma/verbs.c
  */
 struct rpcrdma_req *rpcrdma_create_req(struct rpcrdma_xprt *);
-void rpcrdma_destroy_req(struct rpcrdma_req *);
+void rpcrdma_req_destroy(struct rpcrdma_req *req);
 int rpcrdma_buffer_create(struct rpcrdma_xprt *);
 void rpcrdma_buffer_destroy(struct rpcrdma_buffer *);
 struct rpcrdma_sendctx *rpcrdma_sendctx_get_locked(struct rpcrdma_buffer *buf);



* [PATCH v3 11/24] xprtrdma: Cull dprintk() call sites
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (9 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 10/24] xprtrdma: Simplify locking that protects the rl_allreqs list Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 12/24] xprtrdma: Clean up of xprtrdma chunk trace points Chuck Lever
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Clean up: Remove dprintk() call sites that report rare or impossible
errors. Leave the few that display high-value, low-noise status
information.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/backchannel.c |    3 ---
 net/sunrpc/xprtrdma/rpc_rdma.c    |   17 ++++++++++-------
 net/sunrpc/xprtrdma/transport.c   |   34 ++++------------------------------
 net/sunrpc/xprtrdma/verbs.c       |   34 +++++-----------------------------
 4 files changed, 19 insertions(+), 69 deletions(-)

diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index 41d263c..2353a9e 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -235,9 +235,6 @@ void xprt_rdma_bc_free_rqst(struct rpc_rqst *rqst)
 	struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
 	struct rpc_xprt *xprt = rqst->rq_xprt;
 
-	dprintk("RPC:       %s: freeing rqst %p (req %p)\n",
-		__func__, rqst, req);
-
 	rpcrdma_recv_buffer_put(req->rl_reply);
 	req->rl_reply = NULL;
 
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 2bb9052..636e0f1 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1189,17 +1189,20 @@ static int decode_reply_chunk(struct xdr_stream *xdr, u32 *length)
 		p = xdr_inline_decode(xdr, 2 * sizeof(*p));
 		if (!p)
 			break;
-		dprintk("RPC: %5u: %s: server reports version error (%u-%u)\n",
-			rqst->rq_task->tk_pid, __func__,
-			be32_to_cpup(p), be32_to_cpu(*(p + 1)));
+		dprintk("RPC:       %s: server reports "
+			"version error (%u-%u), xid %08x\n", __func__,
+			be32_to_cpup(p), be32_to_cpu(*(p + 1)),
+			be32_to_cpu(rep->rr_xid));
 		break;
 	case err_chunk:
-		dprintk("RPC: %5u: %s: server reports header decoding error\n",
-			rqst->rq_task->tk_pid, __func__);
+		dprintk("RPC:       %s: server reports "
+			"header decoding error, xid %08x\n", __func__,
+			be32_to_cpu(rep->rr_xid));
 		break;
 	default:
-		dprintk("RPC: %5u: %s: server reports unrecognized error %d\n",
-			rqst->rq_task->tk_pid, __func__, be32_to_cpup(p));
+		dprintk("RPC:       %s: server reports "
+			"unrecognized error %d, xid %08x\n", __func__,
+			be32_to_cpup(p), be32_to_cpu(rep->rr_xid));
 	}
 
 	r_xprt->rx_stats.bad_reply_count++;
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 2ba8be1..5d6c3b3 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -318,17 +318,12 @@
 	struct sockaddr *sap;
 	int rc;
 
-	if (args->addrlen > sizeof(xprt->addr)) {
-		dprintk("RPC:       %s: address too large\n", __func__);
+	if (args->addrlen > sizeof(xprt->addr))
 		return ERR_PTR(-EBADF);
-	}
 
 	xprt = xprt_alloc(args->net, sizeof(struct rpcrdma_xprt), 0, 0);
-	if (xprt == NULL) {
-		dprintk("RPC:       %s: couldn't allocate rpcrdma_xprt\n",
-			__func__);
+	if (!xprt)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	/* 60 second timeout, no retries */
 	xprt->timeout = &xprt_rdma_default_timeout;
@@ -444,8 +439,6 @@
 	struct rpcrdma_ep *ep = &r_xprt->rx_ep;
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 
-	dprintk("RPC:       %s: closing xprt %p\n", __func__, xprt);
-
 	if (test_and_clear_bit(RPCRDMA_IAF_REMOVING, &ia->ri_flags)) {
 		xprt_clear_connected(xprt);
 		rpcrdma_ia_remove(ia);
@@ -846,26 +839,16 @@ void xprt_rdma_print_stats(struct rpc_xprt *xprt, struct seq_file *seq)
 
 void xprt_rdma_cleanup(void)
 {
-	int rc;
-
-	dprintk("RPCRDMA Module Removed, deregister RPC RDMA transport\n");
 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
 	if (sunrpc_table_header) {
 		unregister_sysctl_table(sunrpc_table_header);
 		sunrpc_table_header = NULL;
 	}
 #endif
-	rc = xprt_unregister_transport(&xprt_rdma);
-	if (rc)
-		dprintk("RPC:       %s: xprt_unregister returned %i\n",
-			__func__, rc);
 
+	xprt_unregister_transport(&xprt_rdma);
 	rpcrdma_destroy_wq();
-
-	rc = xprt_unregister_transport(&xprt_rdma_bc);
-	if (rc)
-		dprintk("RPC:       %s: xprt_unregister(bc) returned %i\n",
-			__func__, rc);
+	xprt_unregister_transport(&xprt_rdma_bc);
 }
 
 int xprt_rdma_init(void)
@@ -889,15 +872,6 @@ int xprt_rdma_init(void)
 		return rc;
 	}
 
-	dprintk("RPCRDMA Module Init, register RPC RDMA transport\n");
-
-	dprintk("Defaults:\n");
-	dprintk("\tSlots %d\n"
-		"\tMaxInlineRead %d\n\tMaxInlineWrite %d\n",
-		xprt_rdma_slot_table_entries,
-		xprt_rdma_max_inline_read, xprt_rdma_max_inline_write);
-	dprintk("\tPadding 0\n\tMemreg %d\n", xprt_rdma_memreg_strategy);
-
 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
 	if (!sunrpc_table_header)
 		sunrpc_table_header = register_sysctl_table(sunrpc_table);
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index bd6bc45..4afed9f 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -347,22 +347,15 @@
 
 	id = rdma_create_id(xprt->rx_xprt.xprt_net, rpcrdma_cm_event_handler,
 			    xprt, RDMA_PS_TCP, IB_QPT_RC);
-	if (IS_ERR(id)) {
-		rc = PTR_ERR(id);
-		dprintk("RPC:       %s: rdma_create_id() failed %i\n",
-			__func__, rc);
+	if (IS_ERR(id))
 		return id;
-	}
 
 	ia->ri_async_rc = -ETIMEDOUT;
 	rc = rdma_resolve_addr(id, NULL,
 			       (struct sockaddr *)&xprt->rx_xprt.addr,
 			       RDMA_RESOLVE_TIMEOUT);
-	if (rc) {
-		dprintk("RPC:       %s: rdma_resolve_addr() failed %i\n",
-			__func__, rc);
+	if (rc)
 		goto out;
-	}
 	rc = wait_for_completion_interruptible_timeout(&ia->ri_done, wtimeout);
 	if (rc < 0) {
 		trace_xprtrdma_conn_tout(xprt);
@@ -375,11 +368,8 @@
 
 	ia->ri_async_rc = -ETIMEDOUT;
 	rc = rdma_resolve_route(id, RDMA_RESOLVE_TIMEOUT);
-	if (rc) {
-		dprintk("RPC:       %s: rdma_resolve_route() failed %i\n",
-			__func__, rc);
+	if (rc)
 		goto out;
-	}
 	rc = wait_for_completion_interruptible_timeout(&ia->ri_done, wtimeout);
 	if (rc < 0) {
 		trace_xprtrdma_conn_tout(xprt);
@@ -581,8 +571,6 @@
 			     1, IB_POLL_WORKQUEUE);
 	if (IS_ERR(sendcq)) {
 		rc = PTR_ERR(sendcq);
-		dprintk("RPC:       %s: failed to create send CQ: %i\n",
-			__func__, rc);
 		goto out1;
 	}
 
@@ -591,8 +579,6 @@
 			     0, IB_POLL_WORKQUEUE);
 	if (IS_ERR(recvcq)) {
 		rc = PTR_ERR(recvcq);
-		dprintk("RPC:       %s: failed to create recv CQ: %i\n",
-			__func__, rc);
 		goto out2;
 	}
 
@@ -734,11 +720,8 @@
 	}
 
 	err = rdma_create_qp(id, ia->ri_pd, &ep->rep_attr);
-	if (err) {
-		dprintk("RPC:       %s: rdma_create_qp returned %d\n",
-			__func__, err);
+	if (err)
 		goto out_destroy;
-	}
 
 	/* Atomically replace the transport's ID and QP. */
 	rc = 0;
@@ -769,8 +752,6 @@
 		dprintk("RPC:       %s: connecting...\n", __func__);
 		rc = rdma_create_qp(ia->ri_id, ia->ri_pd, &ep->rep_attr);
 		if (rc) {
-			dprintk("RPC:       %s: rdma_create_qp failed %i\n",
-				__func__, rc);
 			rc = -ENETUNREACH;
 			goto out_noupdate;
 		}
@@ -792,11 +773,8 @@
 	rpcrdma_post_recvs(r_xprt, true);
 
 	rc = rdma_connect(ia->ri_id, &ep->rep_remote_cma);
-	if (rc) {
-		dprintk("RPC:       %s: rdma_connect() failed with %i\n",
-				__func__, rc);
+	if (rc)
 		goto out;
-	}
 
 	wait_event_interruptible(ep->rep_connect_wait, ep->rep_connected != 0);
 	if (ep->rep_connected <= 0) {
@@ -1128,8 +1106,6 @@ struct rpcrdma_req *
 out_free:
 	kfree(rep);
 out:
-	dprintk("RPC:       %s: reply buffer %d alloc failed\n",
-		__func__, rc);
 	return rc;
 }
 



* [PATCH v3 12/24] xprtrdma: Clean up of xprtrdma chunk trace points
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (10 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 11/24] xprtrdma: Cull dprintk() call sites Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 13/24] xprtrdma: Relocate the xprtrdma_mr_map " Chuck Lever
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

The chunk-related trace points capture nearly the same information
as the MR-related trace points.

Also, rename them so globbing can be used to enable or disable
these trace points more easily.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/trace/events/rpcrdma.h |   42 +++++++++++++++++++++++++---------------
 net/sunrpc/xprtrdma/rpc_rdma.c |    6 +++---
 2 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index 602972d..807669c 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -97,7 +97,6 @@
 	TP_STRUCT__entry(
 		__field(unsigned int, task_id)
 		__field(unsigned int, client_id)
-		__field(const void *, mr)
 		__field(unsigned int, pos)
 		__field(int, nents)
 		__field(u32, handle)
@@ -109,7 +108,6 @@
 	TP_fast_assign(
 		__entry->task_id = task->tk_pid;
 		__entry->client_id = task->tk_client->cl_clid;
-		__entry->mr = mr;
 		__entry->pos = pos;
 		__entry->nents = mr->mr_nents;
 		__entry->handle = mr->mr_handle;
@@ -118,8 +116,8 @@
 		__entry->nsegs = nsegs;
 	),
 
-	TP_printk("task:%u@%u mr=%p pos=%u %u@0x%016llx:0x%08x (%s)",
-		__entry->task_id, __entry->client_id, __entry->mr,
+	TP_printk("task:%u@%u pos=%u %u@0x%016llx:0x%08x (%s)",
+		__entry->task_id, __entry->client_id,
 		__entry->pos, __entry->length,
 		(unsigned long long)__entry->offset, __entry->handle,
 		__entry->nents < __entry->nsegs ? "more" : "last"
@@ -127,7 +125,7 @@
 );
 
 #define DEFINE_RDCH_EVENT(name)						\
-		DEFINE_EVENT(xprtrdma_rdch_event, name,			\
+		DEFINE_EVENT(xprtrdma_rdch_event, xprtrdma_chunk_##name,\
 				TP_PROTO(				\
 					const struct rpc_task *task,	\
 					unsigned int pos,		\
@@ -148,7 +146,6 @@
 	TP_STRUCT__entry(
 		__field(unsigned int, task_id)
 		__field(unsigned int, client_id)
-		__field(const void *, mr)
 		__field(int, nents)
 		__field(u32, handle)
 		__field(u32, length)
@@ -159,7 +156,6 @@
 	TP_fast_assign(
 		__entry->task_id = task->tk_pid;
 		__entry->client_id = task->tk_client->cl_clid;
-		__entry->mr = mr;
 		__entry->nents = mr->mr_nents;
 		__entry->handle = mr->mr_handle;
 		__entry->length = mr->mr_length;
@@ -167,8 +163,8 @@
 		__entry->nsegs = nsegs;
 	),
 
-	TP_printk("task:%u@%u mr=%p %u@0x%016llx:0x%08x (%s)",
-		__entry->task_id, __entry->client_id, __entry->mr,
+	TP_printk("task:%u@%u %u@0x%016llx:0x%08x (%s)",
+		__entry->task_id, __entry->client_id,
 		__entry->length, (unsigned long long)__entry->offset,
 		__entry->handle,
 		__entry->nents < __entry->nsegs ? "more" : "last"
@@ -176,7 +172,7 @@
 );
 
 #define DEFINE_WRCH_EVENT(name)						\
-		DEFINE_EVENT(xprtrdma_wrch_event, name,			\
+		DEFINE_EVENT(xprtrdma_wrch_event, xprtrdma_chunk_##name,\
 				TP_PROTO(				\
 					const struct rpc_task *task,	\
 					struct rpcrdma_mr *mr,		\
@@ -234,6 +230,18 @@
 				),					\
 				TP_ARGS(wc, frwr))
 
+TRACE_DEFINE_ENUM(DMA_BIDIRECTIONAL);
+TRACE_DEFINE_ENUM(DMA_TO_DEVICE);
+TRACE_DEFINE_ENUM(DMA_FROM_DEVICE);
+TRACE_DEFINE_ENUM(DMA_NONE);
+
+#define xprtrdma_show_direction(x)					\
+		__print_symbolic(x,					\
+				{ DMA_BIDIRECTIONAL, "BIDIR" },		\
+				{ DMA_TO_DEVICE, "TO_DEVICE" },		\
+				{ DMA_FROM_DEVICE, "FROM_DEVICE" },	\
+				{ DMA_NONE, "NONE" })
+
 DECLARE_EVENT_CLASS(xprtrdma_mr,
 	TP_PROTO(
 		const struct rpcrdma_mr *mr
@@ -246,6 +254,7 @@
 		__field(u32, handle)
 		__field(u32, length)
 		__field(u64, offset)
+		__field(u32, dir)
 	),
 
 	TP_fast_assign(
@@ -253,12 +262,13 @@
 		__entry->handle = mr->mr_handle;
 		__entry->length = mr->mr_length;
 		__entry->offset = mr->mr_offset;
+		__entry->dir    = mr->mr_dir;
 	),
 
-	TP_printk("mr=%p %u@0x%016llx:0x%08x",
+	TP_printk("mr=%p %u@0x%016llx:0x%08x (%s)",
 		__entry->mr, __entry->length,
-		(unsigned long long)__entry->offset,
-		__entry->handle
+		(unsigned long long)__entry->offset, __entry->handle,
+		xprtrdma_show_direction(__entry->dir)
 	)
 );
 
@@ -437,9 +447,9 @@
 
 DEFINE_RXPRT_EVENT(xprtrdma_nomrs);
 
-DEFINE_RDCH_EVENT(xprtrdma_read_chunk);
-DEFINE_WRCH_EVENT(xprtrdma_write_chunk);
-DEFINE_WRCH_EVENT(xprtrdma_reply_chunk);
+DEFINE_RDCH_EVENT(read);
+DEFINE_WRCH_EVENT(write);
+DEFINE_WRCH_EVENT(reply);
 
 TRACE_DEFINE_ENUM(rpcrdma_noch);
 TRACE_DEFINE_ENUM(rpcrdma_readch);
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 636e0f1..b89342d 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -366,7 +366,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 		if (encode_read_segment(xdr, mr, pos) < 0)
 			return -EMSGSIZE;
 
-		trace_xprtrdma_read_chunk(rqst->rq_task, pos, mr, nsegs);
+		trace_xprtrdma_chunk_read(rqst->rq_task, pos, mr, nsegs);
 		r_xprt->rx_stats.read_chunk_count++;
 		nsegs -= mr->mr_nents;
 	} while (nsegs);
@@ -424,7 +424,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 		if (encode_rdma_segment(xdr, mr) < 0)
 			return -EMSGSIZE;
 
-		trace_xprtrdma_write_chunk(rqst->rq_task, mr, nsegs);
+		trace_xprtrdma_chunk_write(rqst->rq_task, mr, nsegs);
 		r_xprt->rx_stats.write_chunk_count++;
 		r_xprt->rx_stats.total_rdma_request += mr->mr_length;
 		nchunks++;
@@ -482,7 +482,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 		if (encode_rdma_segment(xdr, mr) < 0)
 			return -EMSGSIZE;
 
-		trace_xprtrdma_reply_chunk(rqst->rq_task, mr, nsegs);
+		trace_xprtrdma_chunk_reply(rqst->rq_task, mr, nsegs);
 		r_xprt->rx_stats.reply_chunk_count++;
 		r_xprt->rx_stats.total_rdma_request += mr->mr_length;
 		nchunks++;



* [PATCH v3 13/24] xprtrdma: Relocate the xprtrdma_mr_map trace points
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (11 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 12/24] xprtrdma: Clean up of xprtrdma chunk trace points Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 14/24] xprtrdma: Add trace points for calls to transport switch methods Chuck Lever
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

The mr_map trace points were capturing information about the previous
use of the MR rather than about the segment that was just mapped.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/frwr_ops.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index e758c0d..6d6cc80 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -394,7 +394,6 @@
 	mr->mr_nents = ib_dma_map_sg(ia->ri_device, mr->mr_sg, i, mr->mr_dir);
 	if (!mr->mr_nents)
 		goto out_dmamap_err;
-	trace_xprtrdma_mr_map(mr);
 
 	ibmr = frwr->fr_mr;
 	n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE);
@@ -416,6 +415,7 @@
 	mr->mr_handle = ibmr->rkey;
 	mr->mr_length = ibmr->length;
 	mr->mr_offset = ibmr->iova;
+	trace_xprtrdma_mr_map(mr);
 
 	*out = mr;
 	return seg;



* [PATCH v3 14/24] xprtrdma: Add trace points for calls to transport switch methods
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (12 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 13/24] xprtrdma: Relocate the xprtrdma_mr_map " Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 15/24] NFS: Make "port=" mount option optional for RDMA mounts Chuck Lever
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Name them "trace_xprtrdma_op_*" so they can be easily enabled as a
group. No trace point is added where the generic layer already has
observability.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/trace/events/rpcrdma.h  |   10 ++++++----
 net/sunrpc/xprtrdma/transport.c |   18 +++++++++++-------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index 807669c..727786f 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -381,11 +381,13 @@
 DEFINE_RXPRT_EVENT(xprtrdma_conn_start);
 DEFINE_RXPRT_EVENT(xprtrdma_conn_tout);
 DEFINE_RXPRT_EVENT(xprtrdma_create);
-DEFINE_RXPRT_EVENT(xprtrdma_destroy);
+DEFINE_RXPRT_EVENT(xprtrdma_op_destroy);
 DEFINE_RXPRT_EVENT(xprtrdma_remove);
 DEFINE_RXPRT_EVENT(xprtrdma_reinsert);
 DEFINE_RXPRT_EVENT(xprtrdma_reconnect);
-DEFINE_RXPRT_EVENT(xprtrdma_inject_dsc);
+DEFINE_RXPRT_EVENT(xprtrdma_op_inject_dsc);
+DEFINE_RXPRT_EVENT(xprtrdma_op_close);
+DEFINE_RXPRT_EVENT(xprtrdma_op_connect);
 
 TRACE_EVENT(xprtrdma_qp_event,
 	TP_PROTO(
@@ -834,7 +836,7 @@
  ** Allocation/release of rpcrdma_reqs and rpcrdma_reps
  **/
 
-TRACE_EVENT(xprtrdma_allocate,
+TRACE_EVENT(xprtrdma_op_allocate,
 	TP_PROTO(
 		const struct rpc_task *task,
 		const struct rpcrdma_req *req
@@ -864,7 +866,7 @@
 	)
 );
 
-TRACE_EVENT(xprtrdma_rpc_done,
+TRACE_EVENT(xprtrdma_op_free,
 	TP_PROTO(
 		const struct rpc_task *task,
 		const struct rpcrdma_req *req
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 5d6c3b3..7e9f143 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -268,7 +268,7 @@
 {
 	struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
 
-	trace_xprtrdma_inject_dsc(r_xprt);
+	trace_xprtrdma_op_inject_dsc(r_xprt);
 	rdma_disconnect(r_xprt->rx_ia.ri_id);
 }
 
@@ -284,7 +284,7 @@
 {
 	struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
 
-	trace_xprtrdma_destroy(r_xprt);
+	trace_xprtrdma_op_destroy(r_xprt);
 
 	cancel_delayed_work_sync(&r_xprt->rx_connect_worker);
 
@@ -418,7 +418,7 @@
 out2:
 	rpcrdma_ia_close(&new_xprt->rx_ia);
 out1:
-	trace_xprtrdma_destroy(new_xprt);
+	trace_xprtrdma_op_destroy(new_xprt);
 	xprt_rdma_free_addresses(xprt);
 	xprt_free(xprt);
 	return ERR_PTR(rc);
@@ -428,7 +428,8 @@
  * xprt_rdma_close - close a transport connection
  * @xprt: transport context
  *
- * Called during transport shutdown, reconnect, or device removal.
+ * Called during autoclose or device removal.
+ *
  * Caller holds @xprt's send lock to prevent activity on this
  * transport while the connection is torn down.
  */
@@ -439,6 +440,8 @@
 	struct rpcrdma_ep *ep = &r_xprt->rx_ep;
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 
+	trace_xprtrdma_op_close(r_xprt);
+
 	if (test_and_clear_bit(RPCRDMA_IAF_REMOVING, &ia->ri_flags)) {
 		xprt_clear_connected(xprt);
 		rpcrdma_ia_remove(ia);
@@ -518,6 +521,7 @@
 {
 	struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
 
+	trace_xprtrdma_op_connect(r_xprt);
 	if (r_xprt->rx_ep.rep_connected != 0) {
 		/* Reconnect */
 		schedule_delayed_work(&r_xprt->rx_connect_worker,
@@ -652,11 +656,11 @@
 
 	rqst->rq_buffer = req->rl_sendbuf->rg_base;
 	rqst->rq_rbuffer = req->rl_recvbuf->rg_base;
-	trace_xprtrdma_allocate(task, req);
+	trace_xprtrdma_op_allocate(task, req);
 	return 0;
 
 out_fail:
-	trace_xprtrdma_allocate(task, NULL);
+	trace_xprtrdma_op_allocate(task, NULL);
 	return -ENOMEM;
 }
 
@@ -675,7 +679,7 @@
 
 	if (test_bit(RPCRDMA_REQ_F_PENDING, &req->rl_flags))
 		rpcrdma_release_rqst(r_xprt, req);
-	trace_xprtrdma_rpc_done(task, req);
+	trace_xprtrdma_op_free(task, req);
 }
 
 /**



* [PATCH v3 15/24] NFS: Make "port=" mount option optional for RDMA mounts
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (13 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 14/24] xprtrdma: Add trace points for calls to transport switch methods Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1 Chuck Lever
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Having to specify "proto=rdma,port=20049" is cumbersome.

RFC 8267 Section 6.3 requires NFSv4 clients to use "the alternative
well-known port number", which is 20049. Make the use of the well-
known port number automatic, just as it is for NFS/TCP and port
2049.

For NFSv2/3, Section 4.2 allows clients to simply choose 20049 as
the default or use rpcbind. I don't know of an NFS/RDMA server
implementation that registers its NFS/RDMA service with rpcbind,
so automatically choosing 20049 seems like the better choice. The
other widely-deployed NFS/RDMA client, Solaris, also uses 20049
as the default port.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 fs/nfs/super.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index ac4b2f0..22247c2 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -2168,7 +2168,10 @@ static int nfs_validate_text_mount_data(void *options,
 
 	if (args->version == 4) {
 #if IS_ENABLED(CONFIG_NFS_V4)
-		port = NFS_PORT;
+		if (args->nfs_server.protocol == XPRT_TRANSPORT_RDMA)
+			port = NFS_RDMA_PORT;
+		else
+			port = NFS_PORT;
 		max_namelen = NFS4_MAXNAMLEN;
 		max_pathlen = NFS4_MAXPATHLEN;
 		nfs_validate_transport_protocol(args);
@@ -2178,8 +2181,11 @@ static int nfs_validate_text_mount_data(void *options,
 #else
 		goto out_v4_not_compiled;
 #endif /* CONFIG_NFS_V4 */
-	} else
+	} else {
 		nfs_set_mount_transport_protocol(args);
+		if (args->nfs_server.protocol == XPRT_TRANSPORT_RDMA)
+			port = NFS_RDMA_PORT;
+	}
 
 	nfs_set_port(sap, &args->nfs_server.port, port);
 



* [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (14 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 15/24] NFS: Make "port=" mount option optional for RDMA mounts Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-12 21:20   ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 17/24] SUNRPC: Fix some kernel doc complaints Chuck Lever
                   ` (8 subsequent siblings)
  24 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Kerberos v1 allows the selection of encryption types that are known
to be insecure and are no longer widely deployed. There is also no
convenient facility for testing v1 or these enctypes, so this code
has essentially been untested for some time.

Note that RFC 6649 deprecates DES and Arcfour_56 in Kerberos, and
RFC 8429 (October 2018) deprecates DES3 and Arcfour.

This patch removes support for DES_CBC_RAW, DES_CBC_CRC, DES_CBC_MD4,
DES_CBC_MD5, DES3_CBC_RAW, and ARCFOUR_HMAC encryption from the Linux
kernel RPCSEC_GSS implementation.
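
After this change, only the AES enctypes (18 and 17, per the updated KRB5_SUPPORTED_ENCTYPES string) remain negotiable. The new policy can be sketched as (a hypothetical check for illustration, not the kernel's enctype table lookup):

```c
#include <stdbool.h>

/* Enctype numbers from RFC 3961/3962; the updated
 * KRB5_SUPPORTED_ENCTYPES string is "18,17". */
#define ENCTYPE_AES128_CTS_HMAC_SHA1_96 17
#define ENCTYPE_AES256_CTS_HMAC_SHA1_96 18

/* Hypothetical sketch of the post-patch support check: DES (1-3),
 * DES3 (16), and ARCFOUR_HMAC (23) are no longer accepted. */
static bool krb5_enctype_supported(int etype)
{
	return etype == ENCTYPE_AES128_CTS_HMAC_SHA1_96 ||
	       etype == ENCTYPE_AES256_CTS_HMAC_SHA1_96;
}
```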

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/linux/sunrpc/gss_krb5.h          |   39 ---
 include/linux/sunrpc/gss_krb5_enctypes.h |    2 
 net/sunrpc/Kconfig                       |    3 
 net/sunrpc/auth_gss/Makefile             |    2 
 net/sunrpc/auth_gss/gss_krb5_crypto.c    |  423 ------------------------------
 net/sunrpc/auth_gss/gss_krb5_keys.c      |   53 ----
 net/sunrpc/auth_gss/gss_krb5_mech.c      |  278 --------------------
 net/sunrpc/auth_gss/gss_krb5_seal.c      |   73 -----
 net/sunrpc/auth_gss/gss_krb5_seqnum.c    |  164 ------------
 net/sunrpc/auth_gss/gss_krb5_unseal.c    |   80 ------
 net/sunrpc/auth_gss/gss_krb5_wrap.c      |  254 ------------------
 11 files changed, 12 insertions(+), 1359 deletions(-)
 delete mode 100644 net/sunrpc/auth_gss/gss_krb5_seqnum.c

diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h
index 02c0412..57f4a49 100644
--- a/include/linux/sunrpc/gss_krb5.h
+++ b/include/linux/sunrpc/gss_krb5.h
@@ -105,7 +105,6 @@ struct krb5_ctx {
 	struct crypto_sync_skcipher *acceptor_enc_aux;
 	struct crypto_sync_skcipher *initiator_enc_aux;
 	u8			Ksess[GSS_KRB5_MAX_KEYLEN]; /* session key */
-	u8			cksum[GSS_KRB5_MAX_KEYLEN];
 	s32			endtime;
 	atomic_t		seq_send;
 	atomic64_t		seq_send64;
@@ -235,11 +234,6 @@ enum seal_alg {
 	+ GSS_KRB5_MAX_CKSUM_LEN)
 
 u32
-make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
-		struct xdr_buf *body, int body_offset, u8 *cksumkey,
-		unsigned int usage, struct xdr_netobj *cksumout);
-
-u32
 make_checksum_v2(struct krb5_ctx *, char *header, int hdrlen,
 		 struct xdr_buf *body, int body_offset, u8 *key,
 		 unsigned int usage, struct xdr_netobj *cksum);
@@ -268,25 +262,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
 	     void *iv, void *in, void *out, int length); 
 
 int
-gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *outbuf,
-		    int offset, struct page **pages);
-
-int
-gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *inbuf,
-		    int offset);
-
-s32
-krb5_make_seq_num(struct krb5_ctx *kctx,
-		struct crypto_sync_skcipher *key,
-		int direction,
-		u32 seqnum, unsigned char *cksum, unsigned char *buf);
-
-s32
-krb5_get_seq_num(struct krb5_ctx *kctx,
-	       unsigned char *cksum,
-	       unsigned char *buf, int *direction, u32 *seqnum);
-
-int
 xdr_extend_head(struct xdr_buf *buf, unsigned int base, unsigned int shiftlen);
 
 u32
@@ -297,11 +272,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
 		gfp_t gfp_mask);
 
 u32
-gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
-		       struct xdr_netobj *randombits,
-		       struct xdr_netobj *key);
-
-u32
 gss_krb5_aes_make_key(const struct gss_krb5_enctype *gk5e,
 		      struct xdr_netobj *randombits,
 		      struct xdr_netobj *key);
@@ -316,14 +286,5 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
 		     struct xdr_buf *buf, u32 *plainoffset,
 		     u32 *plainlen);
 
-int
-krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
-		       struct crypto_sync_skcipher *cipher,
-		       unsigned char *cksum);
-
-int
-krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
-		       struct crypto_sync_skcipher *cipher,
-		       s32 seqnum);
 void
 gss_krb5_make_confounder(char *p, u32 conflen);
diff --git a/include/linux/sunrpc/gss_krb5_enctypes.h b/include/linux/sunrpc/gss_krb5_enctypes.h
index ec6234e..7a8abcf 100644
--- a/include/linux/sunrpc/gss_krb5_enctypes.h
+++ b/include/linux/sunrpc/gss_krb5_enctypes.h
@@ -1,4 +1,4 @@
 /*
  * Dumb way to share this static piece of information with nfsd
  */
-#define KRB5_SUPPORTED_ENCTYPES "18,17,16,23,3,1,2"
+#define KRB5_SUPPORTED_ENCTYPES "18,17"
diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
index ac09ca8..80c8efc 100644
--- a/net/sunrpc/Kconfig
+++ b/net/sunrpc/Kconfig
@@ -18,9 +18,8 @@ config SUNRPC_SWAP
 config RPCSEC_GSS_KRB5
 	tristate "Secure RPC: Kerberos V mechanism"
 	depends on SUNRPC && CRYPTO
-	depends on CRYPTO_MD5 && CRYPTO_DES && CRYPTO_CBC && CRYPTO_CTS
+	depends on CRYPTO_MD5 && CRYPTO_CTS
 	depends on CRYPTO_ECB && CRYPTO_HMAC && CRYPTO_SHA1 && CRYPTO_AES
-	depends on CRYPTO_ARC4
 	default y
 	select SUNRPC_GSS
 	help
diff --git a/net/sunrpc/auth_gss/Makefile b/net/sunrpc/auth_gss/Makefile
index c374268..b5a65a0 100644
--- a/net/sunrpc/auth_gss/Makefile
+++ b/net/sunrpc/auth_gss/Makefile
@@ -12,4 +12,4 @@ auth_rpcgss-y := auth_gss.o gss_generic_token.o \
 obj-$(CONFIG_RPCSEC_GSS_KRB5) += rpcsec_gss_krb5.o
 
 rpcsec_gss_krb5-y := gss_krb5_mech.o gss_krb5_seal.o gss_krb5_unseal.o \
-	gss_krb5_seqnum.o gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
+	gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
index 4f43383..896dd87 100644
--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
+++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
@@ -138,230 +138,6 @@
 	return crypto_ahash_update(req);
 }
 
-static int
-arcfour_hmac_md5_usage_to_salt(unsigned int usage, u8 salt[4])
-{
-	unsigned int ms_usage;
-
-	switch (usage) {
-	case KG_USAGE_SIGN:
-		ms_usage = 15;
-		break;
-	case KG_USAGE_SEAL:
-		ms_usage = 13;
-		break;
-	default:
-		return -EINVAL;
-	}
-	salt[0] = (ms_usage >> 0) & 0xff;
-	salt[1] = (ms_usage >> 8) & 0xff;
-	salt[2] = (ms_usage >> 16) & 0xff;
-	salt[3] = (ms_usage >> 24) & 0xff;
-
-	return 0;
-}
-
-static u32
-make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
-		       struct xdr_buf *body, int body_offset, u8 *cksumkey,
-		       unsigned int usage, struct xdr_netobj *cksumout)
-{
-	struct scatterlist              sg[1];
-	int err = -1;
-	u8 *checksumdata;
-	u8 *rc4salt;
-	struct crypto_ahash *md5;
-	struct crypto_ahash *hmac_md5;
-	struct ahash_request *req;
-
-	if (cksumkey == NULL)
-		return GSS_S_FAILURE;
-
-	if (cksumout->len < kctx->gk5e->cksumlength) {
-		dprintk("%s: checksum buffer length, %u, too small for %s\n",
-			__func__, cksumout->len, kctx->gk5e->name);
-		return GSS_S_FAILURE;
-	}
-
-	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
-	if (!rc4salt)
-		return GSS_S_FAILURE;
-
-	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
-		dprintk("%s: invalid usage value %u\n", __func__, usage);
-		goto out_free_rc4salt;
-	}
-
-	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
-	if (!checksumdata)
-		goto out_free_rc4salt;
-
-	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(md5))
-		goto out_free_cksum;
-
-	hmac_md5 = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0,
-				      CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hmac_md5))
-		goto out_free_md5;
-
-	req = ahash_request_alloc(md5, GFP_NOFS);
-	if (!req)
-		goto out_free_hmac_md5;
-
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-
-	err = crypto_ahash_init(req);
-	if (err)
-		goto out;
-	sg_init_one(sg, rc4salt, 4);
-	ahash_request_set_crypt(req, sg, NULL, 4);
-	err = crypto_ahash_update(req);
-	if (err)
-		goto out;
-
-	sg_init_one(sg, header, hdrlen);
-	ahash_request_set_crypt(req, sg, NULL, hdrlen);
-	err = crypto_ahash_update(req);
-	if (err)
-		goto out;
-	err = xdr_process_buf(body, body_offset, body->len - body_offset,
-			      checksummer, req);
-	if (err)
-		goto out;
-	ahash_request_set_crypt(req, NULL, checksumdata, 0);
-	err = crypto_ahash_final(req);
-	if (err)
-		goto out;
-
-	ahash_request_free(req);
-	req = ahash_request_alloc(hmac_md5, GFP_NOFS);
-	if (!req)
-		goto out_free_hmac_md5;
-
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-
-	err = crypto_ahash_setkey(hmac_md5, cksumkey, kctx->gk5e->keylength);
-	if (err)
-		goto out;
-
-	sg_init_one(sg, checksumdata, crypto_ahash_digestsize(md5));
-	ahash_request_set_crypt(req, sg, checksumdata,
-				crypto_ahash_digestsize(md5));
-	err = crypto_ahash_digest(req);
-	if (err)
-		goto out;
-
-	memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
-	cksumout->len = kctx->gk5e->cksumlength;
-out:
-	ahash_request_free(req);
-out_free_hmac_md5:
-	crypto_free_ahash(hmac_md5);
-out_free_md5:
-	crypto_free_ahash(md5);
-out_free_cksum:
-	kfree(checksumdata);
-out_free_rc4salt:
-	kfree(rc4salt);
-	return err ? GSS_S_FAILURE : 0;
-}
-
-/*
- * checksum the plaintext data and hdrlen bytes of the token header
- * The checksum is performed over the first 8 bytes of the
- * gss token header and then over the data body
- */
-u32
-make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
-	      struct xdr_buf *body, int body_offset, u8 *cksumkey,
-	      unsigned int usage, struct xdr_netobj *cksumout)
-{
-	struct crypto_ahash *tfm;
-	struct ahash_request *req;
-	struct scatterlist              sg[1];
-	int err = -1;
-	u8 *checksumdata;
-	unsigned int checksumlen;
-
-	if (kctx->gk5e->ctype == CKSUMTYPE_HMAC_MD5_ARCFOUR)
-		return make_checksum_hmac_md5(kctx, header, hdrlen,
-					      body, body_offset,
-					      cksumkey, usage, cksumout);
-
-	if (cksumout->len < kctx->gk5e->cksumlength) {
-		dprintk("%s: checksum buffer length, %u, too small for %s\n",
-			__func__, cksumout->len, kctx->gk5e->name);
-		return GSS_S_FAILURE;
-	}
-
-	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
-	if (checksumdata == NULL)
-		return GSS_S_FAILURE;
-
-	tfm = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(tfm))
-		goto out_free_cksum;
-
-	req = ahash_request_alloc(tfm, GFP_NOFS);
-	if (!req)
-		goto out_free_ahash;
-
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-
-	checksumlen = crypto_ahash_digestsize(tfm);
-
-	if (cksumkey != NULL) {
-		err = crypto_ahash_setkey(tfm, cksumkey,
-					  kctx->gk5e->keylength);
-		if (err)
-			goto out;
-	}
-
-	err = crypto_ahash_init(req);
-	if (err)
-		goto out;
-	sg_init_one(sg, header, hdrlen);
-	ahash_request_set_crypt(req, sg, NULL, hdrlen);
-	err = crypto_ahash_update(req);
-	if (err)
-		goto out;
-	err = xdr_process_buf(body, body_offset, body->len - body_offset,
-			      checksummer, req);
-	if (err)
-		goto out;
-	ahash_request_set_crypt(req, NULL, checksumdata, 0);
-	err = crypto_ahash_final(req);
-	if (err)
-		goto out;
-
-	switch (kctx->gk5e->ctype) {
-	case CKSUMTYPE_RSA_MD5:
-		err = kctx->gk5e->encrypt(kctx->seq, NULL, checksumdata,
-					  checksumdata, checksumlen);
-		if (err)
-			goto out;
-		memcpy(cksumout->data,
-		       checksumdata + checksumlen - kctx->gk5e->cksumlength,
-		       kctx->gk5e->cksumlength);
-		break;
-	case CKSUMTYPE_HMAC_SHA1_DES3:
-		memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
-		break;
-	default:
-		BUG();
-		break;
-	}
-	cksumout->len = kctx->gk5e->cksumlength;
-out:
-	ahash_request_free(req);
-out_free_ahash:
-	crypto_free_ahash(tfm);
-out_free_cksum:
-	kfree(checksumdata);
-	return err ? GSS_S_FAILURE : 0;
-}
-
 /*
  * checksum the plaintext data and hdrlen bytes of the token header
  * Per rfc4121, sec. 4.2.4, the checksum is performed over the data
@@ -526,35 +302,6 @@ struct encryptor_desc {
 	return 0;
 }
 
-int
-gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
-		    int offset, struct page **pages)
-{
-	int ret;
-	struct encryptor_desc desc;
-	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
-
-	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
-
-	skcipher_request_set_sync_tfm(req, tfm);
-	skcipher_request_set_callback(req, 0, NULL, NULL);
-
-	memset(desc.iv, 0, sizeof(desc.iv));
-	desc.req = req;
-	desc.pos = offset;
-	desc.outbuf = buf;
-	desc.pages = pages;
-	desc.fragno = 0;
-	desc.fraglen = 0;
-
-	sg_init_table(desc.infrags, 4);
-	sg_init_table(desc.outfrags, 4);
-
-	ret = xdr_process_buf(buf, offset, buf->len - offset, encryptor, &desc);
-	skcipher_request_zero(req);
-	return ret;
-}
-
 struct decryptor_desc {
 	u8 iv[GSS_KRB5_MAX_BLOCKSIZE];
 	struct skcipher_request *req;
@@ -609,32 +356,6 @@ struct decryptor_desc {
 	return 0;
 }
 
-int
-gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
-		    int offset)
-{
-	int ret;
-	struct decryptor_desc desc;
-	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
-
-	/* XXXJBF: */
-	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
-
-	skcipher_request_set_sync_tfm(req, tfm);
-	skcipher_request_set_callback(req, 0, NULL, NULL);
-
-	memset(desc.iv, 0, sizeof(desc.iv));
-	desc.req = req;
-	desc.fragno = 0;
-	desc.fraglen = 0;
-
-	sg_init_table(desc.frags, 4);
-
-	ret = xdr_process_buf(buf, offset, buf->len - offset, decryptor, &desc);
-	skcipher_request_zero(req);
-	return ret;
-}
-
 /*
  * This function makes the assumption that it was ultimately called
  * from gss_wrap().
@@ -942,147 +663,3 @@ struct decryptor_desc {
 		ret = GSS_S_FAILURE;
 	return ret;
 }
-
-/*
- * Compute Kseq given the initial session key and the checksum.
- * Set the key of the given cipher.
- */
-int
-krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
-		       struct crypto_sync_skcipher *cipher,
-		       unsigned char *cksum)
-{
-	struct crypto_shash *hmac;
-	struct shash_desc *desc;
-	u8 Kseq[GSS_KRB5_MAX_KEYLEN];
-	u32 zeroconstant = 0;
-	int err;
-
-	dprintk("%s: entered\n", __func__);
-
-	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
-	if (IS_ERR(hmac)) {
-		dprintk("%s: error %ld, allocating hash '%s'\n",
-			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
-		return PTR_ERR(hmac);
-	}
-
-	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
-		       GFP_NOFS);
-	if (!desc) {
-		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
-			__func__, kctx->gk5e->cksum_name);
-		crypto_free_shash(hmac);
-		return -ENOMEM;
-	}
-
-	desc->tfm = hmac;
-	desc->flags = 0;
-
-	/* Compute intermediate Kseq from session key */
-	err = crypto_shash_setkey(hmac, kctx->Ksess, kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	err = crypto_shash_digest(desc, (u8 *)&zeroconstant, 4, Kseq);
-	if (err)
-		goto out_err;
-
-	/* Compute final Kseq from the checksum and intermediate Kseq */
-	err = crypto_shash_setkey(hmac, Kseq, kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	err = crypto_shash_digest(desc, cksum, 8, Kseq);
-	if (err)
-		goto out_err;
-
-	err = crypto_sync_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	err = 0;
-
-out_err:
-	kzfree(desc);
-	crypto_free_shash(hmac);
-	dprintk("%s: returning %d\n", __func__, err);
-	return err;
-}
-
-/*
- * Compute Kcrypt given the initial session key and the plaintext seqnum.
- * Set the key of cipher kctx->enc.
- */
-int
-krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
-		       struct crypto_sync_skcipher *cipher,
-		       s32 seqnum)
-{
-	struct crypto_shash *hmac;
-	struct shash_desc *desc;
-	u8 Kcrypt[GSS_KRB5_MAX_KEYLEN];
-	u8 zeroconstant[4] = {0};
-	u8 seqnumarray[4];
-	int err, i;
-
-	dprintk("%s: entered, seqnum %u\n", __func__, seqnum);
-
-	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
-	if (IS_ERR(hmac)) {
-		dprintk("%s: error %ld, allocating hash '%s'\n",
-			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
-		return PTR_ERR(hmac);
-	}
-
-	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
-		       GFP_NOFS);
-	if (!desc) {
-		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
-			__func__, kctx->gk5e->cksum_name);
-		crypto_free_shash(hmac);
-		return -ENOMEM;
-	}
-
-	desc->tfm = hmac;
-	desc->flags = 0;
-
-	/* Compute intermediate Kcrypt from session key */
-	for (i = 0; i < kctx->gk5e->keylength; i++)
-		Kcrypt[i] = kctx->Ksess[i] ^ 0xf0;
-
-	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	err = crypto_shash_digest(desc, zeroconstant, 4, Kcrypt);
-	if (err)
-		goto out_err;
-
-	/* Compute final Kcrypt from the seqnum and intermediate Kcrypt */
-	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	seqnumarray[0] = (unsigned char) ((seqnum >> 24) & 0xff);
-	seqnumarray[1] = (unsigned char) ((seqnum >> 16) & 0xff);
-	seqnumarray[2] = (unsigned char) ((seqnum >> 8) & 0xff);
-	seqnumarray[3] = (unsigned char) ((seqnum >> 0) & 0xff);
-
-	err = crypto_shash_digest(desc, seqnumarray, 4, Kcrypt);
-	if (err)
-		goto out_err;
-
-	err = crypto_sync_skcipher_setkey(cipher, Kcrypt,
-					  kctx->gk5e->keylength);
-	if (err)
-		goto out_err;
-
-	err = 0;
-
-out_err:
-	kzfree(desc);
-	crypto_free_shash(hmac);
-	dprintk("%s: returning %d\n", __func__, err);
-	return err;
-}
diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
index 550fdf1..de327ae 100644
--- a/net/sunrpc/auth_gss/gss_krb5_keys.c
+++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
@@ -242,59 +242,6 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
 	return ret;
 }
 
-#define smask(step) ((1<<step)-1)
-#define pstep(x, step) (((x)&smask(step))^(((x)>>step)&smask(step)))
-#define parity_char(x) pstep(pstep(pstep((x), 4), 2), 1)
-
-static void mit_des_fixup_key_parity(u8 key[8])
-{
-	int i;
-	for (i = 0; i < 8; i++) {
-		key[i] &= 0xfe;
-		key[i] |= 1^parity_char(key[i]);
-	}
-}
-
-/*
- * This is the des3 key derivation postprocess function
- */
-u32 gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
-			   struct xdr_netobj *randombits,
-			   struct xdr_netobj *key)
-{
-	int i;
-	u32 ret = EINVAL;
-
-	if (key->len != 24) {
-		dprintk("%s: key->len is %d\n", __func__, key->len);
-		goto err_out;
-	}
-	if (randombits->len != 21) {
-		dprintk("%s: randombits->len is %d\n",
-			__func__, randombits->len);
-		goto err_out;
-	}
-
-	/* take the seven bytes, move them around into the top 7 bits of the
-	   8 key bytes, then compute the parity bits.  Do this three times. */
-
-	for (i = 0; i < 3; i++) {
-		memcpy(key->data + i*8, randombits->data + i*7, 7);
-		key->data[i*8+7] = (((key->data[i*8]&1)<<1) |
-				    ((key->data[i*8+1]&1)<<2) |
-				    ((key->data[i*8+2]&1)<<3) |
-				    ((key->data[i*8+3]&1)<<4) |
-				    ((key->data[i*8+4]&1)<<5) |
-				    ((key->data[i*8+5]&1)<<6) |
-				    ((key->data[i*8+6]&1)<<7));
-
-		mit_des_fixup_key_parity(key->data + i*8);
-	}
-	ret = 0;
-err_out:
-	return ret;
-}
-
 /*
  * This is the aes key derivation postprocess function
  */
diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
index eab71fc..0837543 100644
--- a/net/sunrpc/auth_gss/gss_krb5_mech.c
+++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
@@ -54,69 +54,6 @@
 
 static const struct gss_krb5_enctype supported_gss_krb5_enctypes[] = {
 	/*
-	 * DES (All DES enctypes are mapped to the same gss functionality)
-	 */
-	{
-	  .etype = ENCTYPE_DES_CBC_RAW,
-	  .ctype = CKSUMTYPE_RSA_MD5,
-	  .name = "des-cbc-crc",
-	  .encrypt_name = "cbc(des)",
-	  .cksum_name = "md5",
-	  .encrypt = krb5_encrypt,
-	  .decrypt = krb5_decrypt,
-	  .mk_key = NULL,
-	  .signalg = SGN_ALG_DES_MAC_MD5,
-	  .sealalg = SEAL_ALG_DES,
-	  .keybytes = 7,
-	  .keylength = 8,
-	  .blocksize = 8,
-	  .conflen = 8,
-	  .cksumlength = 8,
-	  .keyed_cksum = 0,
-	},
-	/*
-	 * RC4-HMAC
-	 */
-	{
-	  .etype = ENCTYPE_ARCFOUR_HMAC,
-	  .ctype = CKSUMTYPE_HMAC_MD5_ARCFOUR,
-	  .name = "rc4-hmac",
-	  .encrypt_name = "ecb(arc4)",
-	  .cksum_name = "hmac(md5)",
-	  .encrypt = krb5_encrypt,
-	  .decrypt = krb5_decrypt,
-	  .mk_key = NULL,
-	  .signalg = SGN_ALG_HMAC_MD5,
-	  .sealalg = SEAL_ALG_MICROSOFT_RC4,
-	  .keybytes = 16,
-	  .keylength = 16,
-	  .blocksize = 1,
-	  .conflen = 8,
-	  .cksumlength = 8,
-	  .keyed_cksum = 1,
-	},
-	/*
-	 * 3DES
-	 */
-	{
-	  .etype = ENCTYPE_DES3_CBC_RAW,
-	  .ctype = CKSUMTYPE_HMAC_SHA1_DES3,
-	  .name = "des3-hmac-sha1",
-	  .encrypt_name = "cbc(des3_ede)",
-	  .cksum_name = "hmac(sha1)",
-	  .encrypt = krb5_encrypt,
-	  .decrypt = krb5_decrypt,
-	  .mk_key = gss_krb5_des3_make_key,
-	  .signalg = SGN_ALG_HMAC_SHA1_DES3_KD,
-	  .sealalg = SEAL_ALG_DES3KD,
-	  .keybytes = 21,
-	  .keylength = 24,
-	  .blocksize = 8,
-	  .conflen = 8,
-	  .cksumlength = 20,
-	  .keyed_cksum = 1,
-	},
-	/*
 	 * AES128
 	 */
 	{
@@ -227,15 +164,6 @@
 	if (IS_ERR(p))
 		goto out_err;
 
-	switch (alg) {
-	case ENCTYPE_DES_CBC_CRC:
-	case ENCTYPE_DES_CBC_MD4:
-	case ENCTYPE_DES_CBC_MD5:
-		/* Map all these key types to ENCTYPE_DES_CBC_RAW */
-		alg = ENCTYPE_DES_CBC_RAW;
-		break;
-	}
-
 	if (!supported_gss_krb5_enctype(alg)) {
 		printk(KERN_WARNING "gss_kerberos_mech: unsupported "
 			"encryption key algorithm %d\n", alg);
@@ -271,81 +199,6 @@
 	return p;
 }
 
-static int
-gss_import_v1_context(const void *p, const void *end, struct krb5_ctx *ctx)
-{
-	u32 seq_send;
-	int tmp;
-
-	p = simple_get_bytes(p, end, &ctx->initiate, sizeof(ctx->initiate));
-	if (IS_ERR(p))
-		goto out_err;
-
-	/* Old format supports only DES!  Any other enctype uses new format */
-	ctx->enctype = ENCTYPE_DES_CBC_RAW;
-
-	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
-	if (ctx->gk5e == NULL) {
-		p = ERR_PTR(-EINVAL);
-		goto out_err;
-	}
-
-	/* The downcall format was designed before we completely understood
-	 * the uses of the context fields; so it includes some stuff we
-	 * just give some minimal sanity-checking, and some we ignore
-	 * completely (like the next twenty bytes): */
-	if (unlikely(p + 20 > end || p + 20 < p)) {
-		p = ERR_PTR(-EFAULT);
-		goto out_err;
-	}
-	p += 20;
-	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
-	if (IS_ERR(p))
-		goto out_err;
-	if (tmp != SGN_ALG_DES_MAC_MD5) {
-		p = ERR_PTR(-ENOSYS);
-		goto out_err;
-	}
-	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
-	if (IS_ERR(p))
-		goto out_err;
-	if (tmp != SEAL_ALG_DES) {
-		p = ERR_PTR(-ENOSYS);
-		goto out_err;
-	}
-	p = simple_get_bytes(p, end, &ctx->endtime, sizeof(ctx->endtime));
-	if (IS_ERR(p))
-		goto out_err;
-	p = simple_get_bytes(p, end, &seq_send, sizeof(seq_send));
-	if (IS_ERR(p))
-		goto out_err;
-	atomic_set(&ctx->seq_send, seq_send);
-	p = simple_get_netobj(p, end, &ctx->mech_used);
-	if (IS_ERR(p))
-		goto out_err;
-	p = get_key(p, end, ctx, &ctx->enc);
-	if (IS_ERR(p))
-		goto out_err_free_mech;
-	p = get_key(p, end, ctx, &ctx->seq);
-	if (IS_ERR(p))
-		goto out_err_free_key1;
-	if (p != end) {
-		p = ERR_PTR(-EFAULT);
-		goto out_err_free_key2;
-	}
-
-	return 0;
-
-out_err_free_key2:
-	crypto_free_sync_skcipher(ctx->seq);
-out_err_free_key1:
-	crypto_free_sync_skcipher(ctx->enc);
-out_err_free_mech:
-	kfree(ctx->mech_used.data);
-out_err:
-	return PTR_ERR(p);
-}
-
 static struct crypto_sync_skcipher *
 context_v2_alloc_cipher(struct krb5_ctx *ctx, const char *cname, u8 *key)
 {
@@ -377,124 +230,6 @@
 }
 
 static int
-context_derive_keys_des3(struct krb5_ctx *ctx, gfp_t gfp_mask)
-{
-	struct xdr_netobj c, keyin, keyout;
-	u8 cdata[GSS_KRB5_K5CLENGTH];
-	u32 err;
-
-	c.len = GSS_KRB5_K5CLENGTH;
-	c.data = cdata;
-
-	keyin.data = ctx->Ksess;
-	keyin.len = ctx->gk5e->keylength;
-	keyout.len = ctx->gk5e->keylength;
-
-	/* seq uses the raw key */
-	ctx->seq = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
-					   ctx->Ksess);
-	if (ctx->seq == NULL)
-		goto out_err;
-
-	ctx->enc = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
-					   ctx->Ksess);
-	if (ctx->enc == NULL)
-		goto out_free_seq;
-
-	/* derive cksum */
-	set_cdata(cdata, KG_USAGE_SIGN, KEY_USAGE_SEED_CHECKSUM);
-	keyout.data = ctx->cksum;
-	err = krb5_derive_key(ctx->gk5e, &keyin, &keyout, &c, gfp_mask);
-	if (err) {
-		dprintk("%s: Error %d deriving cksum key\n",
-			__func__, err);
-		goto out_free_enc;
-	}
-
-	return 0;
-
-out_free_enc:
-	crypto_free_sync_skcipher(ctx->enc);
-out_free_seq:
-	crypto_free_sync_skcipher(ctx->seq);
-out_err:
-	return -EINVAL;
-}
-
-/*
- * Note that RC4 depends on deriving keys using the sequence
- * number or the checksum of a token.  Therefore, the final keys
- * cannot be calculated until the token is being constructed!
- */
-static int
-context_derive_keys_rc4(struct krb5_ctx *ctx)
-{
-	struct crypto_shash *hmac;
-	char sigkeyconstant[] = "signaturekey";
-	int slen = strlen(sigkeyconstant) + 1;	/* include null terminator */
-	struct shash_desc *desc;
-	int err;
-
-	dprintk("RPC:       %s: entered\n", __func__);
-	/*
-	 * derive cksum (aka Ksign) key
-	 */
-	hmac = crypto_alloc_shash(ctx->gk5e->cksum_name, 0, 0);
-	if (IS_ERR(hmac)) {
-		dprintk("%s: error %ld allocating hash '%s'\n",
-			__func__, PTR_ERR(hmac), ctx->gk5e->cksum_name);
-		err = PTR_ERR(hmac);
-		goto out_err;
-	}
-
-	err = crypto_shash_setkey(hmac, ctx->Ksess, ctx->gk5e->keylength);
-	if (err)
-		goto out_err_free_hmac;
-
-
-	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac), GFP_NOFS);
-	if (!desc) {
-		dprintk("%s: failed to allocate hash descriptor for '%s'\n",
-			__func__, ctx->gk5e->cksum_name);
-		err = -ENOMEM;
-		goto out_err_free_hmac;
-	}
-
-	desc->tfm = hmac;
-	desc->flags = 0;
-
-	err = crypto_shash_digest(desc, sigkeyconstant, slen, ctx->cksum);
-	kzfree(desc);
-	if (err)
-		goto out_err_free_hmac;
-	/*
-	 * allocate hash, and skciphers for data and seqnum encryption
-	 */
-	ctx->enc = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
-	if (IS_ERR(ctx->enc)) {
-		err = PTR_ERR(ctx->enc);
-		goto out_err_free_hmac;
-	}
-
-	ctx->seq = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
-	if (IS_ERR(ctx->seq)) {
-		crypto_free_sync_skcipher(ctx->enc);
-		err = PTR_ERR(ctx->seq);
-		goto out_err_free_hmac;
-	}
-
-	dprintk("RPC:       %s: returning success\n", __func__);
-
-	err = 0;
-
-out_err_free_hmac:
-	crypto_free_shash(hmac);
-out_err:
-	dprintk("RPC:       %s: returning %d\n", __func__, err);
-	return err;
-}
-
-static int
 context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask)
 {
 	struct xdr_netobj c, keyin, keyout;
@@ -635,9 +370,6 @@
 	p = simple_get_bytes(p, end, &ctx->enctype, sizeof(ctx->enctype));
 	if (IS_ERR(p))
 		goto out_err;
-	/* Map ENCTYPE_DES3_CBC_SHA1 to ENCTYPE_DES3_CBC_RAW */
-	if (ctx->enctype == ENCTYPE_DES3_CBC_SHA1)
-		ctx->enctype = ENCTYPE_DES3_CBC_RAW;
 	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
 	if (ctx->gk5e == NULL) {
 		dprintk("gss_kerberos_mech: unsupported krb5 enctype %u\n",
@@ -665,10 +397,6 @@
 	ctx->mech_used.len = gss_kerberos_mech.gm_oid.len;
 
 	switch (ctx->enctype) {
-	case ENCTYPE_DES3_CBC_RAW:
-		return context_derive_keys_des3(ctx, gfp_mask);
-	case ENCTYPE_ARCFOUR_HMAC:
-		return context_derive_keys_rc4(ctx);
 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
 		return context_derive_keys_new(ctx, gfp_mask);
@@ -694,11 +422,7 @@
 	if (ctx == NULL)
 		return -ENOMEM;
 
-	if (len == 85)
-		ret = gss_import_v1_context(p, end, ctx);
-	else
-		ret = gss_import_v2_context(p, end, ctx, gfp_mask);
-
+	ret = gss_import_v2_context(p, end, ctx, gfp_mask);
 	if (ret == 0) {
 		ctx_id->internal_ctx_id = ctx;
 		if (endtime)
diff --git a/net/sunrpc/auth_gss/gss_krb5_seal.c b/net/sunrpc/auth_gss/gss_krb5_seal.c
index 48fe4a5..feb0f2a 100644
--- a/net/sunrpc/auth_gss/gss_krb5_seal.c
+++ b/net/sunrpc/auth_gss/gss_krb5_seal.c
@@ -70,32 +70,6 @@
 #endif
 
 static void *
-setup_token(struct krb5_ctx *ctx, struct xdr_netobj *token)
-{
-	u16 *ptr;
-	void *krb5_hdr;
-	int body_size = GSS_KRB5_TOK_HDR_LEN + ctx->gk5e->cksumlength;
-
-	token->len = g_token_size(&ctx->mech_used, body_size);
-
-	ptr = (u16 *)token->data;
-	g_make_token_header(&ctx->mech_used, body_size, (unsigned char **)&ptr);
-
-	/* ptr now at start of header described in rfc 1964, section 1.2.1: */
-	krb5_hdr = ptr;
-	*ptr++ = KG_TOK_MIC_MSG;
-	/*
-	 * signalg is stored as if it were converted from LE to host endian, even
-	 * though it's an opaque pair of bytes according to the RFC.
-	 */
-	*ptr++ = (__force u16)cpu_to_le16(ctx->gk5e->signalg);
-	*ptr++ = SEAL_ALG_NONE;
-	*ptr = 0xffff;
-
-	return krb5_hdr;
-}
-
-static void *
 setup_token_v2(struct krb5_ctx *ctx, struct xdr_netobj *token)
 {
 	u16 *ptr;
@@ -124,45 +98,6 @@
 }
 
 static u32
-gss_get_mic_v1(struct krb5_ctx *ctx, struct xdr_buf *text,
-		struct xdr_netobj *token)
-{
-	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
-	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
-					    .data = cksumdata};
-	void			*ptr;
-	s32			now;
-	u32			seq_send;
-	u8			*cksumkey;
-
-	dprintk("RPC:       %s\n", __func__);
-	BUG_ON(ctx == NULL);
-
-	now = get_seconds();
-
-	ptr = setup_token(ctx, token);
-
-	if (ctx->gk5e->keyed_cksum)
-		cksumkey = ctx->cksum;
-	else
-		cksumkey = NULL;
-
-	if (make_checksum(ctx, ptr, 8, text, 0, cksumkey,
-			  KG_USAGE_SIGN, &md5cksum))
-		return GSS_S_FAILURE;
-
-	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
-
-	seq_send = atomic_fetch_inc(&ctx->seq_send);
-
-	if (krb5_make_seq_num(ctx, ctx->seq, ctx->initiate ? 0 : 0xff,
-			      seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8))
-		return GSS_S_FAILURE;
-
-	return (ctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
-}
-
-static u32
 gss_get_mic_v2(struct krb5_ctx *ctx, struct xdr_buf *text,
 		struct xdr_netobj *token)
 {
@@ -210,14 +145,10 @@
 	struct krb5_ctx		*ctx = gss_ctx->internal_ctx_id;
 
 	switch (ctx->enctype) {
-	default:
-		BUG();
-	case ENCTYPE_DES_CBC_RAW:
-	case ENCTYPE_DES3_CBC_RAW:
-	case ENCTYPE_ARCFOUR_HMAC:
-		return gss_get_mic_v1(ctx, text, token);
 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
 		return gss_get_mic_v2(ctx, text, token);
+	default:
+		return GSS_S_FAILURE;
 	}
 }
diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c
deleted file mode 100644
index fb66562..0000000
--- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c
+++ /dev/null
@@ -1,164 +0,0 @@
-/*
- *  linux/net/sunrpc/gss_krb5_seqnum.c
- *
- *  Adapted from MIT Kerberos 5-1.2.1 lib/gssapi/krb5/util_seqnum.c
- *
- *  Copyright (c) 2000 The Regents of the University of Michigan.
- *  All rights reserved.
- *
- *  Andy Adamson   <andros@umich.edu>
- */
-
-/*
- * Copyright 1993 by OpenVision Technologies, Inc.
- *
- * Permission to use, copy, modify, distribute, and sell this software
- * and its documentation for any purpose is hereby granted without fee,
- * provided that the above copyright notice appears in all copies and
- * that both that copyright notice and this permission notice appear in
- * supporting documentation, and that the name of OpenVision not be used
- * in advertising or publicity pertaining to distribution of the software
- * without specific, written prior permission. OpenVision makes no
- * representations about the suitability of this software for any
- * purpose.  It is provided "as is" without express or implied warranty.
- *
- * OPENVISION DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
- * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
- * EVENT SHALL OPENVISION BE LIABLE FOR ANY SPECIAL, INDIRECT OR
- * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF
- * USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
- * OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
- * PERFORMANCE OF THIS SOFTWARE.
- */
-
-#include <crypto/skcipher.h>
-#include <linux/types.h>
-#include <linux/sunrpc/gss_krb5.h>
-
-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
-# define RPCDBG_FACILITY        RPCDBG_AUTH
-#endif
-
-static s32
-krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum,
-		      unsigned char *cksum, unsigned char *buf)
-{
-	struct crypto_sync_skcipher *cipher;
-	unsigned char plain[8];
-	s32 code;
-
-	dprintk("RPC:       %s:\n", __func__);
-	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	plain[0] = (unsigned char) ((seqnum >> 24) & 0xff);
-	plain[1] = (unsigned char) ((seqnum >> 16) & 0xff);
-	plain[2] = (unsigned char) ((seqnum >> 8) & 0xff);
-	plain[3] = (unsigned char) ((seqnum >> 0) & 0xff);
-	plain[4] = direction;
-	plain[5] = direction;
-	plain[6] = direction;
-	plain[7] = direction;
-
-	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
-	if (code)
-		goto out;
-
-	code = krb5_encrypt(cipher, cksum, plain, buf, 8);
-out:
-	crypto_free_sync_skcipher(cipher);
-	return code;
-}
-s32
-krb5_make_seq_num(struct krb5_ctx *kctx,
-		struct crypto_sync_skcipher *key,
-		int direction,
-		u32 seqnum,
-		unsigned char *cksum, unsigned char *buf)
-{
-	unsigned char plain[8];
-
-	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
-		return krb5_make_rc4_seq_num(kctx, direction, seqnum,
-					     cksum, buf);
-
-	plain[0] = (unsigned char) (seqnum & 0xff);
-	plain[1] = (unsigned char) ((seqnum >> 8) & 0xff);
-	plain[2] = (unsigned char) ((seqnum >> 16) & 0xff);
-	plain[3] = (unsigned char) ((seqnum >> 24) & 0xff);
-
-	plain[4] = direction;
-	plain[5] = direction;
-	plain[6] = direction;
-	plain[7] = direction;
-
-	return krb5_encrypt(key, cksum, plain, buf, 8);
-}
-
-static s32
-krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum,
-		     unsigned char *buf, int *direction, s32 *seqnum)
-{
-	struct crypto_sync_skcipher *cipher;
-	unsigned char plain[8];
-	s32 code;
-
-	dprintk("RPC:       %s:\n", __func__);
-	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
-	if (code)
-		goto out;
-
-	code = krb5_decrypt(cipher, cksum, buf, plain, 8);
-	if (code)
-		goto out;
-
-	if ((plain[4] != plain[5]) || (plain[4] != plain[6])
-				   || (plain[4] != plain[7])) {
-		code = (s32)KG_BAD_SEQ;
-		goto out;
-	}
-
-	*direction = plain[4];
-
-	*seqnum = ((plain[0] << 24) | (plain[1] << 16) |
-					(plain[2] << 8) | (plain[3]));
-out:
-	crypto_free_sync_skcipher(cipher);
-	return code;
-}
-
-s32
-krb5_get_seq_num(struct krb5_ctx *kctx,
-	       unsigned char *cksum,
-	       unsigned char *buf,
-	       int *direction, u32 *seqnum)
-{
-	s32 code;
-	unsigned char plain[8];
-	struct crypto_sync_skcipher *key = kctx->seq;
-
-	dprintk("RPC:       krb5_get_seq_num:\n");
-
-	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
-		return krb5_get_rc4_seq_num(kctx, cksum, buf,
-					    direction, seqnum);
-
-	if ((code = krb5_decrypt(key, cksum, buf, plain, 8)))
-		return code;
-
-	if ((plain[4] != plain[5]) || (plain[4] != plain[6]) ||
-	    (plain[4] != plain[7]))
-		return (s32)KG_BAD_SEQ;
-
-	*direction = plain[4];
-
-	*seqnum = ((plain[0]) |
-		   (plain[1] << 8) | (plain[2] << 16) | (plain[3] << 24));
-
-	return 0;
-}
diff --git a/net/sunrpc/auth_gss/gss_krb5_unseal.c b/net/sunrpc/auth_gss/gss_krb5_unseal.c
index ef2b25b..f0f646a 100644
--- a/net/sunrpc/auth_gss/gss_krb5_unseal.c
+++ b/net/sunrpc/auth_gss/gss_krb5_unseal.c
@@ -71,78 +71,6 @@
  * supposedly taken over. */
 
 static u32
-gss_verify_mic_v1(struct krb5_ctx *ctx,
-		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
-{
-	int			signalg;
-	int			sealalg;
-	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
-	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
-					    .data = cksumdata};
-	s32			now;
-	int			direction;
-	u32			seqnum;
-	unsigned char		*ptr = (unsigned char *)read_token->data;
-	int			bodysize;
-	u8			*cksumkey;
-
-	dprintk("RPC:       krb5_read_token\n");
-
-	if (g_verify_token_header(&ctx->mech_used, &bodysize, &ptr,
-					read_token->len))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	if ((ptr[0] != ((KG_TOK_MIC_MSG >> 8) & 0xff)) ||
-	    (ptr[1] !=  (KG_TOK_MIC_MSG & 0xff)))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	/* XXX sanity-check bodysize?? */
-
-	signalg = ptr[2] + (ptr[3] << 8);
-	if (signalg != ctx->gk5e->signalg)
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	sealalg = ptr[4] + (ptr[5] << 8);
-	if (sealalg != SEAL_ALG_NONE)
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	if (ctx->gk5e->keyed_cksum)
-		cksumkey = ctx->cksum;
-	else
-		cksumkey = NULL;
-
-	if (make_checksum(ctx, ptr, 8, message_buffer, 0,
-			  cksumkey, KG_USAGE_SIGN, &md5cksum))
-		return GSS_S_FAILURE;
-
-	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
-					ctx->gk5e->cksumlength))
-		return GSS_S_BAD_SIG;
-
-	/* it got through unscathed.  Make sure the context is unexpired */
-
-	now = get_seconds();
-
-	if (now > ctx->endtime)
-		return GSS_S_CONTEXT_EXPIRED;
-
-	/* do sequencing checks */
-
-	if (krb5_get_seq_num(ctx, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8,
-			     &direction, &seqnum))
-		return GSS_S_FAILURE;
-
-	if ((ctx->initiate && direction != 0xff) ||
-	    (!ctx->initiate && direction != 0))
-		return GSS_S_BAD_SIG;
-
-	return GSS_S_COMPLETE;
-}
-
-static u32
 gss_verify_mic_v2(struct krb5_ctx *ctx,
 		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
 {
@@ -214,14 +142,10 @@
 	struct krb5_ctx *ctx = gss_ctx->internal_ctx_id;
 
 	switch (ctx->enctype) {
-	default:
-		BUG();
-	case ENCTYPE_DES_CBC_RAW:
-	case ENCTYPE_DES3_CBC_RAW:
-	case ENCTYPE_ARCFOUR_HMAC:
-		return gss_verify_mic_v1(ctx, message_buffer, read_token);
 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
 		return gss_verify_mic_v2(ctx, message_buffer, read_token);
+	default:
+		return GSS_S_FAILURE;
 	}
 }
diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 5cdde6c..98c99d3 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -146,244 +146,6 @@
 	}
 }
 
-/* Assumptions: the head and tail of inbuf are ours to play with.
- * The pages, however, may be real pages in the page cache and we replace
- * them with scratch pages from **pages before writing to them. */
-/* XXX: obviously the above should be documentation of wrap interface,
- * and shouldn't be in this kerberos-specific file. */
-
-/* XXX factor out common code with seal/unseal. */
-
-static u32
-gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
-		struct xdr_buf *buf, struct page **pages)
-{
-	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
-	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
-					    .data = cksumdata};
-	int			blocksize = 0, plainlen;
-	unsigned char		*ptr, *msg_start;
-	s32			now;
-	int			headlen;
-	struct page		**tmp_pages;
-	u32			seq_send;
-	u8			*cksumkey;
-	u32			conflen = kctx->gk5e->conflen;
-
-	dprintk("RPC:       %s\n", __func__);
-
-	now = get_seconds();
-
-	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
-	gss_krb5_add_padding(buf, offset, blocksize);
-	BUG_ON((buf->len - offset) % blocksize);
-	plainlen = conflen + buf->len - offset;
-
-	headlen = g_token_size(&kctx->mech_used,
-		GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength + plainlen) -
-		(buf->len - offset);
-
-	ptr = buf->head[0].iov_base + offset;
-	/* shift data to make room for header. */
-	xdr_extend_head(buf, offset, headlen);
-
-	/* XXX Would be cleverer to encrypt while copying. */
-	BUG_ON((buf->len - offset - headlen) % blocksize);
-
-	g_make_token_header(&kctx->mech_used,
-				GSS_KRB5_TOK_HDR_LEN +
-				kctx->gk5e->cksumlength + plainlen, &ptr);
-
-
-	/* ptr now at header described in rfc 1964, section 1.2.1: */
-	ptr[0] = (unsigned char) ((KG_TOK_WRAP_MSG >> 8) & 0xff);
-	ptr[1] = (unsigned char) (KG_TOK_WRAP_MSG & 0xff);
-
-	msg_start = ptr + GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength;
-
-	/*
-	 * signalg and sealalg are stored as if they were converted from LE
-	 * to host endian, even though they're opaque pairs of bytes according
-	 * to the RFC.
-	 */
-	*(__le16 *)(ptr + 2) = cpu_to_le16(kctx->gk5e->signalg);
-	*(__le16 *)(ptr + 4) = cpu_to_le16(kctx->gk5e->sealalg);
-	ptr[6] = 0xff;
-	ptr[7] = 0xff;
-
-	gss_krb5_make_confounder(msg_start, conflen);
-
-	if (kctx->gk5e->keyed_cksum)
-		cksumkey = kctx->cksum;
-	else
-		cksumkey = NULL;
-
-	/* XXXJBF: UGH!: */
-	tmp_pages = buf->pages;
-	buf->pages = pages;
-	if (make_checksum(kctx, ptr, 8, buf, offset + headlen - conflen,
-					cksumkey, KG_USAGE_SEAL, &md5cksum))
-		return GSS_S_FAILURE;
-	buf->pages = tmp_pages;
-
-	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
-
-	seq_send = atomic_fetch_inc(&kctx->seq_send);
-
-	/* XXX would probably be more efficient to compute checksum
-	 * and encrypt at the same time: */
-	if ((krb5_make_seq_num(kctx, kctx->seq, kctx->initiate ? 0 : 0xff,
-			       seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8)))
-		return GSS_S_FAILURE;
-
-	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
-		struct crypto_sync_skcipher *cipher;
-		int err;
-		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
-						    0, 0);
-		if (IS_ERR(cipher))
-			return GSS_S_FAILURE;
-
-		krb5_rc4_setup_enc_key(kctx, cipher, seq_send);
-
-		err = gss_encrypt_xdr_buf(cipher, buf,
-					  offset + headlen - conflen, pages);
-		crypto_free_sync_skcipher(cipher);
-		if (err)
-			return GSS_S_FAILURE;
-	} else {
-		if (gss_encrypt_xdr_buf(kctx->enc, buf,
-					offset + headlen - conflen, pages))
-			return GSS_S_FAILURE;
-	}
-
-	return (kctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
-}
-
-static u32
-gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
-{
-	int			signalg;
-	int			sealalg;
-	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
-	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
-					    .data = cksumdata};
-	s32			now;
-	int			direction;
-	s32			seqnum;
-	unsigned char		*ptr;
-	int			bodysize;
-	void			*data_start, *orig_start;
-	int			data_len;
-	int			blocksize;
-	u32			conflen = kctx->gk5e->conflen;
-	int			crypt_offset;
-	u8			*cksumkey;
-
-	dprintk("RPC:       gss_unwrap_kerberos\n");
-
-	ptr = (u8 *)buf->head[0].iov_base + offset;
-	if (g_verify_token_header(&kctx->mech_used, &bodysize, &ptr,
-					buf->len - offset))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	if ((ptr[0] != ((KG_TOK_WRAP_MSG >> 8) & 0xff)) ||
-	    (ptr[1] !=  (KG_TOK_WRAP_MSG & 0xff)))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	/* XXX sanity-check bodysize?? */
-
-	/* get the sign and seal algorithms */
-
-	signalg = ptr[2] + (ptr[3] << 8);
-	if (signalg != kctx->gk5e->signalg)
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	sealalg = ptr[4] + (ptr[5] << 8);
-	if (sealalg != kctx->gk5e->sealalg)
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	/*
-	 * Data starts after token header and checksum.  ptr points
-	 * to the beginning of the token header
-	 */
-	crypt_offset = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) -
-					(unsigned char *)buf->head[0].iov_base;
-
-	/*
-	 * Need plaintext seqnum to derive encryption key for arcfour-hmac
-	 */
-	if (krb5_get_seq_num(kctx, ptr + GSS_KRB5_TOK_HDR_LEN,
-			     ptr + 8, &direction, &seqnum))
-		return GSS_S_BAD_SIG;
-
-	if ((kctx->initiate && direction != 0xff) ||
-	    (!kctx->initiate && direction != 0))
-		return GSS_S_BAD_SIG;
-
-	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
-		struct crypto_sync_skcipher *cipher;
-		int err;
-
-		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
-						    0, 0);
-		if (IS_ERR(cipher))
-			return GSS_S_FAILURE;
-
-		krb5_rc4_setup_enc_key(kctx, cipher, seqnum);
-
-		err = gss_decrypt_xdr_buf(cipher, buf, crypt_offset);
-		crypto_free_sync_skcipher(cipher);
-		if (err)
-			return GSS_S_DEFECTIVE_TOKEN;
-	} else {
-		if (gss_decrypt_xdr_buf(kctx->enc, buf, crypt_offset))
-			return GSS_S_DEFECTIVE_TOKEN;
-	}
-
-	if (kctx->gk5e->keyed_cksum)
-		cksumkey = kctx->cksum;
-	else
-		cksumkey = NULL;
-
-	if (make_checksum(kctx, ptr, 8, buf, crypt_offset,
-					cksumkey, KG_USAGE_SEAL, &md5cksum))
-		return GSS_S_FAILURE;
-
-	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
-						kctx->gk5e->cksumlength))
-		return GSS_S_BAD_SIG;
-
-	/* it got through unscathed.  Make sure the context is unexpired */
-
-	now = get_seconds();
-
-	if (now > kctx->endtime)
-		return GSS_S_CONTEXT_EXPIRED;
-
-	/* do sequencing checks */
-
-	/* Copy the data back to the right position.  XXX: Would probably be
-	 * better to copy and encrypt at the same time. */
-
-	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
-	data_start = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) +
-					conflen;
-	orig_start = buf->head[0].iov_base + offset;
-	data_len = (buf->head[0].iov_base + buf->head[0].iov_len) - data_start;
-	memmove(orig_start, data_start, data_len);
-	buf->head[0].iov_len -= (data_start - orig_start);
-	buf->len -= (data_start - orig_start);
-
-	if (gss_krb5_remove_padding(buf, blocksize))
-		return GSS_S_DEFECTIVE_TOKEN;
-
-	return GSS_S_COMPLETE;
-}
-
 /*
  * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
  * to do more than that, we shift repeatedly.  Kevin Coffman reports
@@ -588,15 +350,11 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
 
 	switch (kctx->enctype) {
-	default:
-		BUG();
-	case ENCTYPE_DES_CBC_RAW:
-	case ENCTYPE_DES3_CBC_RAW:
-	case ENCTYPE_ARCFOUR_HMAC:
-		return gss_wrap_kerberos_v1(kctx, offset, buf, pages);
 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
 		return gss_wrap_kerberos_v2(kctx, offset, buf, pages);
+	default:
+		return GSS_S_FAILURE;
 	}
 }
 
@@ -606,14 +364,10 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
 
 	switch (kctx->enctype) {
-	default:
-		BUG();
-	case ENCTYPE_DES_CBC_RAW:
-	case ENCTYPE_DES3_CBC_RAW:
-	case ENCTYPE_ARCFOUR_HMAC:
-		return gss_unwrap_kerberos_v1(kctx, offset, buf);
 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
 		return gss_unwrap_kerberos_v2(kctx, offset, buf);
+	default:
+		return GSS_S_FAILURE;
 	}
 }


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 17/24] SUNRPC: Fix some kernel doc complaints
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (15 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1 Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
  2018-12-10 16:30 ` [PATCH v3 18/24] NFS: Fix NFSv4 symbolic trace point output Chuck Lever
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Clean up some warnings observed when building with "make W=1".

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/auth_gss/gss_mech_switch.c |    2 +-
 net/sunrpc/backchannel_rqst.c         |    2 +-
 net/sunrpc/xprtmultipath.c            |    4 ++--
 net/sunrpc/xprtsock.c                 |    2 ++
 4 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/auth_gss/gss_mech_switch.c b/net/sunrpc/auth_gss/gss_mech_switch.c
index 16ac0f4..379318d 100644
--- a/net/sunrpc/auth_gss/gss_mech_switch.c
+++ b/net/sunrpc/auth_gss/gss_mech_switch.c
@@ -244,7 +244,7 @@ struct gss_api_mech *
 
 /**
  * gss_mech_list_pseudoflavors - Discover registered GSS pseudoflavors
- * @array: array to fill in
+ * @array_ptr: array to fill in
  * @size: size of "array"
  *
  * Returns the number of array items filled in, or a negative errno.
diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
index fa5ba6e..ec451b8 100644
--- a/net/sunrpc/backchannel_rqst.c
+++ b/net/sunrpc/backchannel_rqst.c
@@ -197,7 +197,7 @@ int xprt_setup_bc(struct rpc_xprt *xprt, unsigned int min_reqs)
 /**
  * xprt_destroy_backchannel - Destroys the backchannel preallocated structures.
  * @xprt:	the transport holding the preallocated strucures
- * @max_reqs	the maximum number of preallocated structures to destroy
+ * @max_reqs:	the maximum number of preallocated structures to destroy
  *
  * Since these structures may have been allocated by multiple calls
  * to xprt_setup_backchannel, we only destroy up to the maximum number
diff --git a/net/sunrpc/xprtmultipath.c b/net/sunrpc/xprtmultipath.c
index e2d64c7..8394124 100644
--- a/net/sunrpc/xprtmultipath.c
+++ b/net/sunrpc/xprtmultipath.c
@@ -383,7 +383,7 @@ void xprt_iter_init_listall(struct rpc_xprt_iter *xpi,
 /**
  * xprt_iter_xchg_switch - Atomically swap out the rpc_xprt_switch
  * @xpi: pointer to rpc_xprt_iter
- * @xps: pointer to a new rpc_xprt_switch or NULL
+ * @newswitch: pointer to a new rpc_xprt_switch or NULL
  *
  * Swaps out the existing xpi->xpi_xpswitch with a new value.
  */
@@ -401,7 +401,7 @@ struct rpc_xprt_switch *xprt_iter_xchg_switch(struct rpc_xprt_iter *xpi,
 
 /**
  * xprt_iter_destroy - Destroys the xprt iterator
- * @xpi pointer to rpc_xprt_iter
+ * @xpi: pointer to rpc_xprt_iter
  */
 void xprt_iter_destroy(struct rpc_xprt_iter *xpi)
 {
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 8a5e823..8ee9831 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1602,6 +1602,7 @@ static void xs_udp_set_buffer_size(struct rpc_xprt *xprt, size_t sndsize, size_t
 
 /**
  * xs_udp_timer - called when a retransmit timeout occurs on a UDP transport
+ * @xprt: controlling transport
  * @task: task that timed out
  *
  * Adjust the congestion window after a retransmit timeout has occurred.
@@ -2259,6 +2260,7 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 
 /**
  * xs_tcp_setup_socket - create a TCP socket and connect to a remote endpoint
+ * @work: queued work item
  *
  * Invoked by a work queue tasklet.
  */



* [PATCH v3 18/24] NFS: Fix NFSv4 symbolic trace point output
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (16 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 17/24] SUNRPC: Fix some kernel doc complaints Chuck Lever
@ 2018-12-10 16:30 ` Chuck Lever
       [not found]   ` <632f5635-4c37-16ae-cdd0-65679d21c9ec@oracle.com>
  2018-12-10 16:31 ` [PATCH v3 19/24] SUNRPC: Simplify defining common RPC trace events Chuck Lever
                   ` (6 subsequent siblings)
  24 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:30 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

These symbolic values were not being displayed in string form.
TRACE_DEFINE_ENUM was missing in many cases. It also turns out that
__print_symbolic wants an unsigned long in the first field...

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 fs/nfs/nfs4trace.h |  456 ++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 313 insertions(+), 143 deletions(-)

diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
index b1483b3..b4557cf 100644
--- a/fs/nfs/nfs4trace.h
+++ b/fs/nfs/nfs4trace.h
@@ -10,157 +10,302 @@
 
 #include <linux/tracepoint.h>
 
+TRACE_DEFINE_ENUM(EPERM);
+TRACE_DEFINE_ENUM(ENOENT);
+TRACE_DEFINE_ENUM(EIO);
+TRACE_DEFINE_ENUM(ENXIO);
+TRACE_DEFINE_ENUM(EACCES);
+TRACE_DEFINE_ENUM(EEXIST);
+TRACE_DEFINE_ENUM(EXDEV);
+TRACE_DEFINE_ENUM(ENOTDIR);
+TRACE_DEFINE_ENUM(EISDIR);
+TRACE_DEFINE_ENUM(EFBIG);
+TRACE_DEFINE_ENUM(ENOSPC);
+TRACE_DEFINE_ENUM(EROFS);
+TRACE_DEFINE_ENUM(EMLINK);
+TRACE_DEFINE_ENUM(ENAMETOOLONG);
+TRACE_DEFINE_ENUM(ENOTEMPTY);
+TRACE_DEFINE_ENUM(EDQUOT);
+TRACE_DEFINE_ENUM(ESTALE);
+TRACE_DEFINE_ENUM(EBADHANDLE);
+TRACE_DEFINE_ENUM(EBADCOOKIE);
+TRACE_DEFINE_ENUM(ENOTSUPP);
+TRACE_DEFINE_ENUM(ETOOSMALL);
+TRACE_DEFINE_ENUM(EREMOTEIO);
+TRACE_DEFINE_ENUM(EBADTYPE);
+TRACE_DEFINE_ENUM(EAGAIN);
+TRACE_DEFINE_ENUM(ELOOP);
+TRACE_DEFINE_ENUM(EOPNOTSUPP);
+TRACE_DEFINE_ENUM(EDEADLK);
+TRACE_DEFINE_ENUM(ENOMEM);
+TRACE_DEFINE_ENUM(EKEYEXPIRED);
+TRACE_DEFINE_ENUM(ETIMEDOUT);
+TRACE_DEFINE_ENUM(ERESTARTSYS);
+TRACE_DEFINE_ENUM(ECONNREFUSED);
+TRACE_DEFINE_ENUM(ECONNRESET);
+TRACE_DEFINE_ENUM(ENETUNREACH);
+TRACE_DEFINE_ENUM(EHOSTUNREACH);
+TRACE_DEFINE_ENUM(EHOSTDOWN);
+TRACE_DEFINE_ENUM(EPIPE);
+TRACE_DEFINE_ENUM(EPFNOSUPPORT);
+TRACE_DEFINE_ENUM(EPROTONOSUPPORT);
+
+TRACE_DEFINE_ENUM(NFS4_OK);
+TRACE_DEFINE_ENUM(NFS4ERR_ACCESS);
+TRACE_DEFINE_ENUM(NFS4ERR_ATTRNOTSUPP);
+TRACE_DEFINE_ENUM(NFS4ERR_ADMIN_REVOKED);
+TRACE_DEFINE_ENUM(NFS4ERR_BACK_CHAN_BUSY);
+TRACE_DEFINE_ENUM(NFS4ERR_BADCHAR);
+TRACE_DEFINE_ENUM(NFS4ERR_BADHANDLE);
+TRACE_DEFINE_ENUM(NFS4ERR_BADIOMODE);
+TRACE_DEFINE_ENUM(NFS4ERR_BADLAYOUT);
+TRACE_DEFINE_ENUM(NFS4ERR_BADLABEL);
+TRACE_DEFINE_ENUM(NFS4ERR_BADNAME);
+TRACE_DEFINE_ENUM(NFS4ERR_BADOWNER);
+TRACE_DEFINE_ENUM(NFS4ERR_BADSESSION);
+TRACE_DEFINE_ENUM(NFS4ERR_BADSLOT);
+TRACE_DEFINE_ENUM(NFS4ERR_BADTYPE);
+TRACE_DEFINE_ENUM(NFS4ERR_BADXDR);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_COOKIE);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_HIGH_SLOT);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_RANGE);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_SEQID);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_SESSION_DIGEST);
+TRACE_DEFINE_ENUM(NFS4ERR_BAD_STATEID);
+TRACE_DEFINE_ENUM(NFS4ERR_CB_PATH_DOWN);
+TRACE_DEFINE_ENUM(NFS4ERR_CLID_INUSE);
+TRACE_DEFINE_ENUM(NFS4ERR_CLIENTID_BUSY);
+TRACE_DEFINE_ENUM(NFS4ERR_COMPLETE_ALREADY);
+TRACE_DEFINE_ENUM(NFS4ERR_CONN_NOT_BOUND_TO_SESSION);
+TRACE_DEFINE_ENUM(NFS4ERR_DEADLOCK);
+TRACE_DEFINE_ENUM(NFS4ERR_DEADSESSION);
+TRACE_DEFINE_ENUM(NFS4ERR_DELAY);
+TRACE_DEFINE_ENUM(NFS4ERR_DELEG_ALREADY_WANTED);
+TRACE_DEFINE_ENUM(NFS4ERR_DELEG_REVOKED);
+TRACE_DEFINE_ENUM(NFS4ERR_DENIED);
+TRACE_DEFINE_ENUM(NFS4ERR_DIRDELEG_UNAVAIL);
+TRACE_DEFINE_ENUM(NFS4ERR_DQUOT);
+TRACE_DEFINE_ENUM(NFS4ERR_ENCR_ALG_UNSUPP);
+TRACE_DEFINE_ENUM(NFS4ERR_EXIST);
+TRACE_DEFINE_ENUM(NFS4ERR_EXPIRED);
+TRACE_DEFINE_ENUM(NFS4ERR_FBIG);
+TRACE_DEFINE_ENUM(NFS4ERR_FHEXPIRED);
+TRACE_DEFINE_ENUM(NFS4ERR_FILE_OPEN);
+TRACE_DEFINE_ENUM(NFS4ERR_GRACE);
+TRACE_DEFINE_ENUM(NFS4ERR_HASH_ALG_UNSUPP);
+TRACE_DEFINE_ENUM(NFS4ERR_INVAL);
+TRACE_DEFINE_ENUM(NFS4ERR_IO);
+TRACE_DEFINE_ENUM(NFS4ERR_ISDIR);
+TRACE_DEFINE_ENUM(NFS4ERR_LAYOUTTRYLATER);
+TRACE_DEFINE_ENUM(NFS4ERR_LAYOUTUNAVAILABLE);
+TRACE_DEFINE_ENUM(NFS4ERR_LEASE_MOVED);
+TRACE_DEFINE_ENUM(NFS4ERR_LOCKED);
+TRACE_DEFINE_ENUM(NFS4ERR_LOCKS_HELD);
+TRACE_DEFINE_ENUM(NFS4ERR_LOCK_RANGE);
+TRACE_DEFINE_ENUM(NFS4ERR_MINOR_VERS_MISMATCH);
+TRACE_DEFINE_ENUM(NFS4ERR_MLINK);
+TRACE_DEFINE_ENUM(NFS4ERR_MOVED);
+TRACE_DEFINE_ENUM(NFS4ERR_NAMETOOLONG);
+TRACE_DEFINE_ENUM(NFS4ERR_NOENT);
+TRACE_DEFINE_ENUM(NFS4ERR_NOFILEHANDLE);
+TRACE_DEFINE_ENUM(NFS4ERR_NOMATCHING_LAYOUT);
+TRACE_DEFINE_ENUM(NFS4ERR_NOSPC);
+TRACE_DEFINE_ENUM(NFS4ERR_NOTDIR);
+TRACE_DEFINE_ENUM(NFS4ERR_NOTEMPTY);
+TRACE_DEFINE_ENUM(NFS4ERR_NOTSUPP);
+TRACE_DEFINE_ENUM(NFS4ERR_NOT_ONLY_OP);
+TRACE_DEFINE_ENUM(NFS4ERR_NOT_SAME);
+TRACE_DEFINE_ENUM(NFS4ERR_NO_GRACE);
+TRACE_DEFINE_ENUM(NFS4ERR_NXIO);
+TRACE_DEFINE_ENUM(NFS4ERR_OLD_STATEID);
+TRACE_DEFINE_ENUM(NFS4ERR_OPENMODE);
+TRACE_DEFINE_ENUM(NFS4ERR_OP_ILLEGAL);
+TRACE_DEFINE_ENUM(NFS4ERR_OP_NOT_IN_SESSION);
+TRACE_DEFINE_ENUM(NFS4ERR_PERM);
+TRACE_DEFINE_ENUM(NFS4ERR_PNFS_IO_HOLE);
+TRACE_DEFINE_ENUM(NFS4ERR_PNFS_NO_LAYOUT);
+TRACE_DEFINE_ENUM(NFS4ERR_RECALLCONFLICT);
+TRACE_DEFINE_ENUM(NFS4ERR_RECLAIM_BAD);
+TRACE_DEFINE_ENUM(NFS4ERR_RECLAIM_CONFLICT);
+TRACE_DEFINE_ENUM(NFS4ERR_REJECT_DELEG);
+TRACE_DEFINE_ENUM(NFS4ERR_REP_TOO_BIG);
+TRACE_DEFINE_ENUM(NFS4ERR_REP_TOO_BIG_TO_CACHE);
+TRACE_DEFINE_ENUM(NFS4ERR_REQ_TOO_BIG);
+TRACE_DEFINE_ENUM(NFS4ERR_RESOURCE);
+TRACE_DEFINE_ENUM(NFS4ERR_RESTOREFH);
+TRACE_DEFINE_ENUM(NFS4ERR_RETRY_UNCACHED_REP);
+TRACE_DEFINE_ENUM(NFS4ERR_RETURNCONFLICT);
+TRACE_DEFINE_ENUM(NFS4ERR_ROFS);
+TRACE_DEFINE_ENUM(NFS4ERR_SAME);
+TRACE_DEFINE_ENUM(NFS4ERR_SHARE_DENIED);
+TRACE_DEFINE_ENUM(NFS4ERR_SEQUENCE_POS);
+TRACE_DEFINE_ENUM(NFS4ERR_SEQ_FALSE_RETRY);
+TRACE_DEFINE_ENUM(NFS4ERR_SEQ_MISORDERED);
+TRACE_DEFINE_ENUM(NFS4ERR_SERVERFAULT);
+TRACE_DEFINE_ENUM(NFS4ERR_STALE);
+TRACE_DEFINE_ENUM(NFS4ERR_STALE_CLIENTID);
+TRACE_DEFINE_ENUM(NFS4ERR_STALE_STATEID);
+TRACE_DEFINE_ENUM(NFS4ERR_SYMLINK);
+TRACE_DEFINE_ENUM(NFS4ERR_TOOSMALL);
+TRACE_DEFINE_ENUM(NFS4ERR_TOO_MANY_OPS);
+TRACE_DEFINE_ENUM(NFS4ERR_UNKNOWN_LAYOUTTYPE);
+TRACE_DEFINE_ENUM(NFS4ERR_UNSAFE_COMPOUND);
+TRACE_DEFINE_ENUM(NFS4ERR_WRONGSEC);
+TRACE_DEFINE_ENUM(NFS4ERR_WRONG_CRED);
+TRACE_DEFINE_ENUM(NFS4ERR_WRONG_TYPE);
+TRACE_DEFINE_ENUM(NFS4ERR_XDEV);
+
 #define show_nfsv4_errors(error) \
-	__print_symbolic(error, \
+	__print_symbolic(-(error), \
 		{ NFS4_OK, "OK" }, \
 		/* Mapped by nfs4_stat_to_errno() */ \
-		{ -EPERM, "EPERM" }, \
-		{ -ENOENT, "ENOENT" }, \
-		{ -EIO, "EIO" }, \
-		{ -ENXIO, "ENXIO" }, \
-		{ -EACCES, "EACCES" }, \
-		{ -EEXIST, "EEXIST" }, \
-		{ -EXDEV, "EXDEV" }, \
-		{ -ENOTDIR, "ENOTDIR" }, \
-		{ -EISDIR, "EISDIR" }, \
-		{ -EFBIG, "EFBIG" }, \
-		{ -ENOSPC, "ENOSPC" }, \
-		{ -EROFS, "EROFS" }, \
-		{ -EMLINK, "EMLINK" }, \
-		{ -ENAMETOOLONG, "ENAMETOOLONG" }, \
-		{ -ENOTEMPTY, "ENOTEMPTY" }, \
-		{ -EDQUOT, "EDQUOT" }, \
-		{ -ESTALE, "ESTALE" }, \
-		{ -EBADHANDLE, "EBADHANDLE" }, \
-		{ -EBADCOOKIE, "EBADCOOKIE" }, \
-		{ -ENOTSUPP, "ENOTSUPP" }, \
-		{ -ETOOSMALL, "ETOOSMALL" }, \
-		{ -EREMOTEIO, "EREMOTEIO" }, \
-		{ -EBADTYPE, "EBADTYPE" }, \
-		{ -EAGAIN, "EAGAIN" }, \
-		{ -ELOOP, "ELOOP" }, \
-		{ -EOPNOTSUPP, "EOPNOTSUPP" }, \
-		{ -EDEADLK, "EDEADLK" }, \
+		{ EPERM, "EPERM" }, \
+		{ ENOENT, "ENOENT" }, \
+		{ EIO, "EIO" }, \
+		{ ENXIO, "ENXIO" }, \
+		{ EACCES, "EACCES" }, \
+		{ EEXIST, "EEXIST" }, \
+		{ EXDEV, "EXDEV" }, \
+		{ ENOTDIR, "ENOTDIR" }, \
+		{ EISDIR, "EISDIR" }, \
+		{ EFBIG, "EFBIG" }, \
+		{ ENOSPC, "ENOSPC" }, \
+		{ EROFS, "EROFS" }, \
+		{ EMLINK, "EMLINK" }, \
+		{ ENAMETOOLONG, "ENAMETOOLONG" }, \
+		{ ENOTEMPTY, "ENOTEMPTY" }, \
+		{ EDQUOT, "EDQUOT" }, \
+		{ ESTALE, "ESTALE" }, \
+		{ EBADHANDLE, "EBADHANDLE" }, \
+		{ EBADCOOKIE, "EBADCOOKIE" }, \
+		{ ENOTSUPP, "ENOTSUPP" }, \
+		{ ETOOSMALL, "ETOOSMALL" }, \
+		{ EREMOTEIO, "EREMOTEIO" }, \
+		{ EBADTYPE, "EBADTYPE" }, \
+		{ EAGAIN, "EAGAIN" }, \
+		{ ELOOP, "ELOOP" }, \
+		{ EOPNOTSUPP, "EOPNOTSUPP" }, \
+		{ EDEADLK, "EDEADLK" }, \
 		/* RPC errors */ \
-		{ -ENOMEM, "ENOMEM" }, \
-		{ -EKEYEXPIRED, "EKEYEXPIRED" }, \
-		{ -ETIMEDOUT, "ETIMEDOUT" }, \
-		{ -ERESTARTSYS, "ERESTARTSYS" }, \
-		{ -ECONNREFUSED, "ECONNREFUSED" }, \
-		{ -ECONNRESET, "ECONNRESET" }, \
-		{ -ENETUNREACH, "ENETUNREACH" }, \
-		{ -EHOSTUNREACH, "EHOSTUNREACH" }, \
-		{ -EHOSTDOWN, "EHOSTDOWN" }, \
-		{ -EPIPE, "EPIPE" }, \
-		{ -EPFNOSUPPORT, "EPFNOSUPPORT" }, \
-		{ -EPROTONOSUPPORT, "EPROTONOSUPPORT" }, \
+		{ ENOMEM, "ENOMEM" }, \
+		{ EKEYEXPIRED, "EKEYEXPIRED" }, \
+		{ ETIMEDOUT, "ETIMEDOUT" }, \
+		{ ERESTARTSYS, "ERESTARTSYS" }, \
+		{ ECONNREFUSED, "ECONNREFUSED" }, \
+		{ ECONNRESET, "ECONNRESET" }, \
+		{ ENETUNREACH, "ENETUNREACH" }, \
+		{ EHOSTUNREACH, "EHOSTUNREACH" }, \
+		{ EHOSTDOWN, "EHOSTDOWN" }, \
+		{ EPIPE, "EPIPE" }, \
+		{ EPFNOSUPPORT, "EPFNOSUPPORT" }, \
+		{ EPROTONOSUPPORT, "EPROTONOSUPPORT" }, \
 		/* NFSv4 native errors */ \
-		{ -NFS4ERR_ACCESS, "ACCESS" }, \
-		{ -NFS4ERR_ATTRNOTSUPP, "ATTRNOTSUPP" }, \
-		{ -NFS4ERR_ADMIN_REVOKED, "ADMIN_REVOKED" }, \
-		{ -NFS4ERR_BACK_CHAN_BUSY, "BACK_CHAN_BUSY" }, \
-		{ -NFS4ERR_BADCHAR, "BADCHAR" }, \
-		{ -NFS4ERR_BADHANDLE, "BADHANDLE" }, \
-		{ -NFS4ERR_BADIOMODE, "BADIOMODE" }, \
-		{ -NFS4ERR_BADLAYOUT, "BADLAYOUT" }, \
-		{ -NFS4ERR_BADLABEL, "BADLABEL" }, \
-		{ -NFS4ERR_BADNAME, "BADNAME" }, \
-		{ -NFS4ERR_BADOWNER, "BADOWNER" }, \
-		{ -NFS4ERR_BADSESSION, "BADSESSION" }, \
-		{ -NFS4ERR_BADSLOT, "BADSLOT" }, \
-		{ -NFS4ERR_BADTYPE, "BADTYPE" }, \
-		{ -NFS4ERR_BADXDR, "BADXDR" }, \
-		{ -NFS4ERR_BAD_COOKIE, "BAD_COOKIE" }, \
-		{ -NFS4ERR_BAD_HIGH_SLOT, "BAD_HIGH_SLOT" }, \
-		{ -NFS4ERR_BAD_RANGE, "BAD_RANGE" }, \
-		{ -NFS4ERR_BAD_SEQID, "BAD_SEQID" }, \
-		{ -NFS4ERR_BAD_SESSION_DIGEST, "BAD_SESSION_DIGEST" }, \
-		{ -NFS4ERR_BAD_STATEID, "BAD_STATEID" }, \
-		{ -NFS4ERR_CB_PATH_DOWN, "CB_PATH_DOWN" }, \
-		{ -NFS4ERR_CLID_INUSE, "CLID_INUSE" }, \
-		{ -NFS4ERR_CLIENTID_BUSY, "CLIENTID_BUSY" }, \
-		{ -NFS4ERR_COMPLETE_ALREADY, "COMPLETE_ALREADY" }, \
-		{ -NFS4ERR_CONN_NOT_BOUND_TO_SESSION, \
+		{ NFS4ERR_ACCESS, "ACCESS" }, \
+		{ NFS4ERR_ATTRNOTSUPP, "ATTRNOTSUPP" }, \
+		{ NFS4ERR_ADMIN_REVOKED, "ADMIN_REVOKED" }, \
+		{ NFS4ERR_BACK_CHAN_BUSY, "BACK_CHAN_BUSY" }, \
+		{ NFS4ERR_BADCHAR, "BADCHAR" }, \
+		{ NFS4ERR_BADHANDLE, "BADHANDLE" }, \
+		{ NFS4ERR_BADIOMODE, "BADIOMODE" }, \
+		{ NFS4ERR_BADLAYOUT, "BADLAYOUT" }, \
+		{ NFS4ERR_BADLABEL, "BADLABEL" }, \
+		{ NFS4ERR_BADNAME, "BADNAME" }, \
+		{ NFS4ERR_BADOWNER, "BADOWNER" }, \
+		{ NFS4ERR_BADSESSION, "BADSESSION" }, \
+		{ NFS4ERR_BADSLOT, "BADSLOT" }, \
+		{ NFS4ERR_BADTYPE, "BADTYPE" }, \
+		{ NFS4ERR_BADXDR, "BADXDR" }, \
+		{ NFS4ERR_BAD_COOKIE, "BAD_COOKIE" }, \
+		{ NFS4ERR_BAD_HIGH_SLOT, "BAD_HIGH_SLOT" }, \
+		{ NFS4ERR_BAD_RANGE, "BAD_RANGE" }, \
+		{ NFS4ERR_BAD_SEQID, "BAD_SEQID" }, \
+		{ NFS4ERR_BAD_SESSION_DIGEST, "BAD_SESSION_DIGEST" }, \
+		{ NFS4ERR_BAD_STATEID, "BAD_STATEID" }, \
+		{ NFS4ERR_CB_PATH_DOWN, "CB_PATH_DOWN" }, \
+		{ NFS4ERR_CLID_INUSE, "CLID_INUSE" }, \
+		{ NFS4ERR_CLIENTID_BUSY, "CLIENTID_BUSY" }, \
+		{ NFS4ERR_COMPLETE_ALREADY, "COMPLETE_ALREADY" }, \
+		{ NFS4ERR_CONN_NOT_BOUND_TO_SESSION, \
 			"CONN_NOT_BOUND_TO_SESSION" }, \
-		{ -NFS4ERR_DEADLOCK, "DEADLOCK" }, \
-		{ -NFS4ERR_DEADSESSION, "DEAD_SESSION" }, \
-		{ -NFS4ERR_DELAY, "DELAY" }, \
-		{ -NFS4ERR_DELEG_ALREADY_WANTED, \
+		{ NFS4ERR_DEADLOCK, "DEADLOCK" }, \
+		{ NFS4ERR_DEADSESSION, "DEAD_SESSION" }, \
+		{ NFS4ERR_DELAY, "DELAY" }, \
+		{ NFS4ERR_DELEG_ALREADY_WANTED, \
 			"DELEG_ALREADY_WANTED" }, \
-		{ -NFS4ERR_DELEG_REVOKED, "DELEG_REVOKED" }, \
-		{ -NFS4ERR_DENIED, "DENIED" }, \
-		{ -NFS4ERR_DIRDELEG_UNAVAIL, "DIRDELEG_UNAVAIL" }, \
-		{ -NFS4ERR_DQUOT, "DQUOT" }, \
-		{ -NFS4ERR_ENCR_ALG_UNSUPP, "ENCR_ALG_UNSUPP" }, \
-		{ -NFS4ERR_EXIST, "EXIST" }, \
-		{ -NFS4ERR_EXPIRED, "EXPIRED" }, \
-		{ -NFS4ERR_FBIG, "FBIG" }, \
-		{ -NFS4ERR_FHEXPIRED, "FHEXPIRED" }, \
-		{ -NFS4ERR_FILE_OPEN, "FILE_OPEN" }, \
-		{ -NFS4ERR_GRACE, "GRACE" }, \
-		{ -NFS4ERR_HASH_ALG_UNSUPP, "HASH_ALG_UNSUPP" }, \
-		{ -NFS4ERR_INVAL, "INVAL" }, \
-		{ -NFS4ERR_IO, "IO" }, \
-		{ -NFS4ERR_ISDIR, "ISDIR" }, \
-		{ -NFS4ERR_LAYOUTTRYLATER, "LAYOUTTRYLATER" }, \
-		{ -NFS4ERR_LAYOUTUNAVAILABLE, "LAYOUTUNAVAILABLE" }, \
-		{ -NFS4ERR_LEASE_MOVED, "LEASE_MOVED" }, \
-		{ -NFS4ERR_LOCKED, "LOCKED" }, \
-		{ -NFS4ERR_LOCKS_HELD, "LOCKS_HELD" }, \
-		{ -NFS4ERR_LOCK_RANGE, "LOCK_RANGE" }, \
-		{ -NFS4ERR_MINOR_VERS_MISMATCH, "MINOR_VERS_MISMATCH" }, \
-		{ -NFS4ERR_MLINK, "MLINK" }, \
-		{ -NFS4ERR_MOVED, "MOVED" }, \
-		{ -NFS4ERR_NAMETOOLONG, "NAMETOOLONG" }, \
-		{ -NFS4ERR_NOENT, "NOENT" }, \
-		{ -NFS4ERR_NOFILEHANDLE, "NOFILEHANDLE" }, \
-		{ -NFS4ERR_NOMATCHING_LAYOUT, "NOMATCHING_LAYOUT" }, \
-		{ -NFS4ERR_NOSPC, "NOSPC" }, \
-		{ -NFS4ERR_NOTDIR, "NOTDIR" }, \
-		{ -NFS4ERR_NOTEMPTY, "NOTEMPTY" }, \
-		{ -NFS4ERR_NOTSUPP, "NOTSUPP" }, \
-		{ -NFS4ERR_NOT_ONLY_OP, "NOT_ONLY_OP" }, \
-		{ -NFS4ERR_NOT_SAME, "NOT_SAME" }, \
-		{ -NFS4ERR_NO_GRACE, "NO_GRACE" }, \
-		{ -NFS4ERR_NXIO, "NXIO" }, \
-		{ -NFS4ERR_OLD_STATEID, "OLD_STATEID" }, \
-		{ -NFS4ERR_OPENMODE, "OPENMODE" }, \
-		{ -NFS4ERR_OP_ILLEGAL, "OP_ILLEGAL" }, \
-		{ -NFS4ERR_OP_NOT_IN_SESSION, "OP_NOT_IN_SESSION" }, \
-		{ -NFS4ERR_PERM, "PERM" }, \
-		{ -NFS4ERR_PNFS_IO_HOLE, "PNFS_IO_HOLE" }, \
-		{ -NFS4ERR_PNFS_NO_LAYOUT, "PNFS_NO_LAYOUT" }, \
-		{ -NFS4ERR_RECALLCONFLICT, "RECALLCONFLICT" }, \
-		{ -NFS4ERR_RECLAIM_BAD, "RECLAIM_BAD" }, \
-		{ -NFS4ERR_RECLAIM_CONFLICT, "RECLAIM_CONFLICT" }, \
-		{ -NFS4ERR_REJECT_DELEG, "REJECT_DELEG" }, \
-		{ -NFS4ERR_REP_TOO_BIG, "REP_TOO_BIG" }, \
-		{ -NFS4ERR_REP_TOO_BIG_TO_CACHE, \
+		{ NFS4ERR_DELEG_REVOKED, "DELEG_REVOKED" }, \
+		{ NFS4ERR_DENIED, "DENIED" }, \
+		{ NFS4ERR_DIRDELEG_UNAVAIL, "DIRDELEG_UNAVAIL" }, \
+		{ NFS4ERR_DQUOT, "DQUOT" }, \
+		{ NFS4ERR_ENCR_ALG_UNSUPP, "ENCR_ALG_UNSUPP" }, \
+		{ NFS4ERR_EXIST, "EXIST" }, \
+		{ NFS4ERR_EXPIRED, "EXPIRED" }, \
+		{ NFS4ERR_FBIG, "FBIG" }, \
+		{ NFS4ERR_FHEXPIRED, "FHEXPIRED" }, \
+		{ NFS4ERR_FILE_OPEN, "FILE_OPEN" }, \
+		{ NFS4ERR_GRACE, "GRACE" }, \
+		{ NFS4ERR_HASH_ALG_UNSUPP, "HASH_ALG_UNSUPP" }, \
+		{ NFS4ERR_INVAL, "INVAL" }, \
+		{ NFS4ERR_IO, "IO" }, \
+		{ NFS4ERR_ISDIR, "ISDIR" }, \
+		{ NFS4ERR_LAYOUTTRYLATER, "LAYOUTTRYLATER" }, \
+		{ NFS4ERR_LAYOUTUNAVAILABLE, "LAYOUTUNAVAILABLE" }, \
+		{ NFS4ERR_LEASE_MOVED, "LEASE_MOVED" }, \
+		{ NFS4ERR_LOCKED, "LOCKED" }, \
+		{ NFS4ERR_LOCKS_HELD, "LOCKS_HELD" }, \
+		{ NFS4ERR_LOCK_RANGE, "LOCK_RANGE" }, \
+		{ NFS4ERR_MINOR_VERS_MISMATCH, "MINOR_VERS_MISMATCH" }, \
+		{ NFS4ERR_MLINK, "MLINK" }, \
+		{ NFS4ERR_MOVED, "MOVED" }, \
+		{ NFS4ERR_NAMETOOLONG, "NAMETOOLONG" }, \
+		{ NFS4ERR_NOENT, "NOENT" }, \
+		{ NFS4ERR_NOFILEHANDLE, "NOFILEHANDLE" }, \
+		{ NFS4ERR_NOMATCHING_LAYOUT, "NOMATCHING_LAYOUT" }, \
+		{ NFS4ERR_NOSPC, "NOSPC" }, \
+		{ NFS4ERR_NOTDIR, "NOTDIR" }, \
+		{ NFS4ERR_NOTEMPTY, "NOTEMPTY" }, \
+		{ NFS4ERR_NOTSUPP, "NOTSUPP" }, \
+		{ NFS4ERR_NOT_ONLY_OP, "NOT_ONLY_OP" }, \
+		{ NFS4ERR_NOT_SAME, "NOT_SAME" }, \
+		{ NFS4ERR_NO_GRACE, "NO_GRACE" }, \
+		{ NFS4ERR_NXIO, "NXIO" }, \
+		{ NFS4ERR_OLD_STATEID, "OLD_STATEID" }, \
+		{ NFS4ERR_OPENMODE, "OPENMODE" }, \
+		{ NFS4ERR_OP_ILLEGAL, "OP_ILLEGAL" }, \
+		{ NFS4ERR_OP_NOT_IN_SESSION, "OP_NOT_IN_SESSION" }, \
+		{ NFS4ERR_PERM, "PERM" }, \
+		{ NFS4ERR_PNFS_IO_HOLE, "PNFS_IO_HOLE" }, \
+		{ NFS4ERR_PNFS_NO_LAYOUT, "PNFS_NO_LAYOUT" }, \
+		{ NFS4ERR_RECALLCONFLICT, "RECALLCONFLICT" }, \
+		{ NFS4ERR_RECLAIM_BAD, "RECLAIM_BAD" }, \
+		{ NFS4ERR_RECLAIM_CONFLICT, "RECLAIM_CONFLICT" }, \
+		{ NFS4ERR_REJECT_DELEG, "REJECT_DELEG" }, \
+		{ NFS4ERR_REP_TOO_BIG, "REP_TOO_BIG" }, \
+		{ NFS4ERR_REP_TOO_BIG_TO_CACHE, \
 			"REP_TOO_BIG_TO_CACHE" }, \
-		{ -NFS4ERR_REQ_TOO_BIG, "REQ_TOO_BIG" }, \
-		{ -NFS4ERR_RESOURCE, "RESOURCE" }, \
-		{ -NFS4ERR_RESTOREFH, "RESTOREFH" }, \
-		{ -NFS4ERR_RETRY_UNCACHED_REP, "RETRY_UNCACHED_REP" }, \
-		{ -NFS4ERR_RETURNCONFLICT, "RETURNCONFLICT" }, \
-		{ -NFS4ERR_ROFS, "ROFS" }, \
-		{ -NFS4ERR_SAME, "SAME" }, \
-		{ -NFS4ERR_SHARE_DENIED, "SHARE_DENIED" }, \
-		{ -NFS4ERR_SEQUENCE_POS, "SEQUENCE_POS" }, \
-		{ -NFS4ERR_SEQ_FALSE_RETRY, "SEQ_FALSE_RETRY" }, \
-		{ -NFS4ERR_SEQ_MISORDERED, "SEQ_MISORDERED" }, \
-		{ -NFS4ERR_SERVERFAULT, "SERVERFAULT" }, \
-		{ -NFS4ERR_STALE, "STALE" }, \
-		{ -NFS4ERR_STALE_CLIENTID, "STALE_CLIENTID" }, \
-		{ -NFS4ERR_STALE_STATEID, "STALE_STATEID" }, \
-		{ -NFS4ERR_SYMLINK, "SYMLINK" }, \
-		{ -NFS4ERR_TOOSMALL, "TOOSMALL" }, \
-		{ -NFS4ERR_TOO_MANY_OPS, "TOO_MANY_OPS" }, \
-		{ -NFS4ERR_UNKNOWN_LAYOUTTYPE, "UNKNOWN_LAYOUTTYPE" }, \
-		{ -NFS4ERR_UNSAFE_COMPOUND, "UNSAFE_COMPOUND" }, \
-		{ -NFS4ERR_WRONGSEC, "WRONGSEC" }, \
-		{ -NFS4ERR_WRONG_CRED, "WRONG_CRED" }, \
-		{ -NFS4ERR_WRONG_TYPE, "WRONG_TYPE" }, \
-		{ -NFS4ERR_XDEV, "XDEV" })
+		{ NFS4ERR_REQ_TOO_BIG, "REQ_TOO_BIG" }, \
+		{ NFS4ERR_RESOURCE, "RESOURCE" }, \
+		{ NFS4ERR_RESTOREFH, "RESTOREFH" }, \
+		{ NFS4ERR_RETRY_UNCACHED_REP, "RETRY_UNCACHED_REP" }, \
+		{ NFS4ERR_RETURNCONFLICT, "RETURNCONFLICT" }, \
+		{ NFS4ERR_ROFS, "ROFS" }, \
+		{ NFS4ERR_SAME, "SAME" }, \
+		{ NFS4ERR_SHARE_DENIED, "SHARE_DENIED" }, \
+		{ NFS4ERR_SEQUENCE_POS, "SEQUENCE_POS" }, \
+		{ NFS4ERR_SEQ_FALSE_RETRY, "SEQ_FALSE_RETRY" }, \
+		{ NFS4ERR_SEQ_MISORDERED, "SEQ_MISORDERED" }, \
+		{ NFS4ERR_SERVERFAULT, "SERVERFAULT" }, \
+		{ NFS4ERR_STALE, "STALE" }, \
+		{ NFS4ERR_STALE_CLIENTID, "STALE_CLIENTID" }, \
+		{ NFS4ERR_STALE_STATEID, "STALE_STATEID" }, \
+		{ NFS4ERR_SYMLINK, "SYMLINK" }, \
+		{ NFS4ERR_TOOSMALL, "TOOSMALL" }, \
+		{ NFS4ERR_TOO_MANY_OPS, "TOO_MANY_OPS" }, \
+		{ NFS4ERR_UNKNOWN_LAYOUTTYPE, "UNKNOWN_LAYOUTTYPE" }, \
+		{ NFS4ERR_UNSAFE_COMPOUND, "UNSAFE_COMPOUND" }, \
+		{ NFS4ERR_WRONGSEC, "WRONGSEC" }, \
+		{ NFS4ERR_WRONG_CRED, "WRONG_CRED" }, \
+		{ NFS4ERR_WRONG_TYPE, "WRONG_TYPE" }, \
+		{ NFS4ERR_XDEV, "XDEV" })
 
 #define show_open_flags(flags) \
 	__print_flags(flags, "|", \
@@ -558,6 +703,13 @@
 		)
 );
 
+TRACE_DEFINE_ENUM(F_GETLK);
+TRACE_DEFINE_ENUM(F_SETLK);
+TRACE_DEFINE_ENUM(F_SETLKW);
+TRACE_DEFINE_ENUM(F_RDLCK);
+TRACE_DEFINE_ENUM(F_WRLCK);
+TRACE_DEFINE_ENUM(F_UNLCK);
+
 #define show_lock_cmd(type) \
 	__print_symbolic((int)type, \
 		{ F_GETLK, "GETLK" }, \
@@ -1451,6 +1603,10 @@
 #ifdef CONFIG_NFS_V4_1
 DEFINE_NFS4_COMMIT_EVENT(nfs4_pnfs_commit_ds);
 
+TRACE_DEFINE_ENUM(IOMODE_READ);
+TRACE_DEFINE_ENUM(IOMODE_RW);
+TRACE_DEFINE_ENUM(IOMODE_ANY);
+
 #define show_pnfs_iomode(iomode) \
 	__print_symbolic(iomode, \
 		{ IOMODE_READ, "READ" }, \
@@ -1528,6 +1684,20 @@
 DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_layoutreturn);
 DEFINE_NFS4_INODE_EVENT(nfs4_layoutreturn_on_close);
 
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_UNKNOWN);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_NO_PNFS);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_RD_ZEROLEN);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_MDSTHRESH);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_NOMEM);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_BULK_RECALL);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_IO_TEST_FAIL);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_FOUND_CACHED);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_RETURN);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_BLOCKED);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_INVALID_OPEN);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_RETRY);
+TRACE_DEFINE_ENUM(PNFS_UPDATE_LAYOUT_SEND_LAYOUTGET);
+
 #define show_pnfs_update_layout_reason(reason)				\
 	__print_symbolic(reason,					\
 		{ PNFS_UPDATE_LAYOUT_UNKNOWN, "unknown" },		\


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 19/24] SUNRPC: Simplify defining common RPC trace events
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (17 preceding siblings ...)
  2018-12-10 16:30 ` [PATCH v3 18/24] NFS: Fix NFSv4 symbolic trace point output Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 16:31 ` [PATCH v3 20/24] xprtrdma: Trace mapping, alloc, and dereg failures Chuck Lever
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Clean up, no functional change is expected.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/trace/events/sunrpc.h |  172 ++++++++++++++++-------------------------
 1 file changed, 69 insertions(+), 103 deletions(-)

diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 28e3841..88bda93 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -16,40 +16,6 @@
 
 DECLARE_EVENT_CLASS(rpc_task_status,
 
-	TP_PROTO(struct rpc_task *task),
-
-	TP_ARGS(task),
-
-	TP_STRUCT__entry(
-		__field(unsigned int, task_id)
-		__field(unsigned int, client_id)
-		__field(int, status)
-	),
-
-	TP_fast_assign(
-		__entry->task_id = task->tk_pid;
-		__entry->client_id = task->tk_client->cl_clid;
-		__entry->status = task->tk_status;
-	),
-
-	TP_printk("task:%u@%u status=%d",
-		__entry->task_id, __entry->client_id,
-		__entry->status)
-);
-
-DEFINE_EVENT(rpc_task_status, rpc_call_status,
-	TP_PROTO(struct rpc_task *task),
-
-	TP_ARGS(task)
-);
-
-DEFINE_EVENT(rpc_task_status, rpc_bind_status,
-	TP_PROTO(struct rpc_task *task),
-
-	TP_ARGS(task)
-);
-
-TRACE_EVENT(rpc_connect_status,
 	TP_PROTO(const struct rpc_task *task),
 
 	TP_ARGS(task),
@@ -70,6 +36,16 @@
 		__entry->task_id, __entry->client_id,
 		__entry->status)
 );
+#define DEFINE_RPC_STATUS_EVENT(name) \
+	DEFINE_EVENT(rpc_task_status, rpc_##name##_status, \
+			TP_PROTO( \
+				const struct rpc_task *task \
+			), \
+			TP_ARGS(task))
+
+DEFINE_RPC_STATUS_EVENT(call);
+DEFINE_RPC_STATUS_EVENT(bind);
+DEFINE_RPC_STATUS_EVENT(connect);
 
 TRACE_EVENT(rpc_request,
 	TP_PROTO(const struct rpc_task *task),
@@ -134,30 +110,17 @@
 		__entry->action
 		)
 );
+#define DEFINE_RPC_RUNNING_EVENT(name) \
+	DEFINE_EVENT(rpc_task_running, rpc_task_##name, \
+			TP_PROTO( \
+				const struct rpc_task *task, \
+				const void *action \
+			), \
+			TP_ARGS(task, action))
 
-DEFINE_EVENT(rpc_task_running, rpc_task_begin,
-
-	TP_PROTO(const struct rpc_task *task, const void *action),
-
-	TP_ARGS(task, action)
-
-);
-
-DEFINE_EVENT(rpc_task_running, rpc_task_run_action,
-
-	TP_PROTO(const struct rpc_task *task, const void *action),
-
-	TP_ARGS(task, action)
-
-);
-
-DEFINE_EVENT(rpc_task_running, rpc_task_complete,
-
-	TP_PROTO(const struct rpc_task *task, const void *action),
-
-	TP_ARGS(task, action)
-
-);
+DEFINE_RPC_RUNNING_EVENT(begin);
+DEFINE_RPC_RUNNING_EVENT(run_action);
+DEFINE_RPC_RUNNING_EVENT(complete);
 
 DECLARE_EVENT_CLASS(rpc_task_queued,
 
@@ -195,22 +158,16 @@
 		__get_str(q_name)
 		)
 );
+#define DEFINE_RPC_QUEUED_EVENT(name) \
+	DEFINE_EVENT(rpc_task_queued, rpc_task_##name, \
+			TP_PROTO( \
+				const struct rpc_task *task, \
+				const struct rpc_wait_queue *q \
+			), \
+			TP_ARGS(task, q))
 
-DEFINE_EVENT(rpc_task_queued, rpc_task_sleep,
-
-	TP_PROTO(const struct rpc_task *task, const struct rpc_wait_queue *q),
-
-	TP_ARGS(task, q)
-
-);
-
-DEFINE_EVENT(rpc_task_queued, rpc_task_wakeup,
-
-	TP_PROTO(const struct rpc_task *task, const struct rpc_wait_queue *q),
-
-	TP_ARGS(task, q)
-
-);
+DEFINE_RPC_QUEUED_EVENT(sleep);
+DEFINE_RPC_QUEUED_EVENT(wakeup);
 
 TRACE_EVENT(rpc_stats_latency,
 
@@ -410,7 +367,11 @@
 DEFINE_RPC_SOCKET_EVENT(rpc_socket_shutdown);
 
 DECLARE_EVENT_CLASS(rpc_xprt_event,
-	TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status),
+	TP_PROTO(
+		const struct rpc_xprt *xprt,
+		__be32 xid,
+		int status
+	),
 
 	TP_ARGS(xprt, xid, status),
 
@@ -432,22 +393,19 @@
 			__get_str(port), __entry->xid,
 			__entry->status)
 );
+#define DEFINE_RPC_XPRT_EVENT(name) \
+	DEFINE_EVENT(rpc_xprt_event, xprt_##name, \
+			TP_PROTO( \
+				const struct rpc_xprt *xprt, \
+				__be32 xid, \
+				int status \
+			), \
+			TP_ARGS(xprt, xid, status))
 
-DEFINE_EVENT(rpc_xprt_event, xprt_timer,
-	TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status),
-	TP_ARGS(xprt, xid, status));
-
-DEFINE_EVENT(rpc_xprt_event, xprt_lookup_rqst,
-	TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status),
-	TP_ARGS(xprt, xid, status));
-
-DEFINE_EVENT(rpc_xprt_event, xprt_transmit,
-	TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status),
-	TP_ARGS(xprt, xid, status));
-
-DEFINE_EVENT(rpc_xprt_event, xprt_complete_rqst,
-	TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status),
-	TP_ARGS(xprt, xid, status));
+DEFINE_RPC_XPRT_EVENT(timer);
+DEFINE_RPC_XPRT_EVENT(lookup_rqst);
+DEFINE_RPC_XPRT_EVENT(transmit);
+DEFINE_RPC_XPRT_EVENT(complete_rqst);
 
 TRACE_EVENT(xprt_ping,
 	TP_PROTO(const struct rpc_xprt *xprt, int status),
@@ -587,7 +545,9 @@
 
 DECLARE_EVENT_CLASS(svc_rqst_event,
 
-	TP_PROTO(struct svc_rqst *rqst),
+	TP_PROTO(
+		const struct svc_rqst *rqst
+	),
 
 	TP_ARGS(rqst),
 
@@ -607,14 +567,15 @@
 			__get_str(addr), __entry->xid,
 			show_rqstp_flags(__entry->flags))
 );
+#define DEFINE_SVC_RQST_EVENT(name) \
+	DEFINE_EVENT(svc_rqst_event, svc_##name, \
+			TP_PROTO( \
+				const struct svc_rqst *rqst \
+			), \
+			TP_ARGS(rqst))
 
-DEFINE_EVENT(svc_rqst_event, svc_defer,
-	TP_PROTO(struct svc_rqst *rqst),
-	TP_ARGS(rqst));
-
-DEFINE_EVENT(svc_rqst_event, svc_drop,
-	TP_PROTO(struct svc_rqst *rqst),
-	TP_ARGS(rqst));
+DEFINE_SVC_RQST_EVENT(defer);
+DEFINE_SVC_RQST_EVENT(drop);
 
 DECLARE_EVENT_CLASS(svc_rqst_status,
 
@@ -801,7 +762,9 @@
 );
 
 DECLARE_EVENT_CLASS(svc_deferred_event,
-	TP_PROTO(struct svc_deferred_req *dr),
+	TP_PROTO(
+		const struct svc_deferred_req *dr
+	),
 
 	TP_ARGS(dr),
 
@@ -818,13 +781,16 @@
 
 	TP_printk("addr=%s xid=0x%08x", __get_str(addr), __entry->xid)
 );
+#define DEFINE_SVC_DEFERRED_EVENT(name) \
+	DEFINE_EVENT(svc_deferred_event, svc_##name##_deferred, \
+			TP_PROTO( \
+				const struct svc_deferred_req *dr \
+			), \
+			TP_ARGS(dr))
+
+DEFINE_SVC_DEFERRED_EVENT(drop);
+DEFINE_SVC_DEFERRED_EVENT(revisit);
 
-DEFINE_EVENT(svc_deferred_event, svc_drop_deferred,
-	TP_PROTO(struct svc_deferred_req *dr),
-	TP_ARGS(dr));
-DEFINE_EVENT(svc_deferred_event, svc_revisit_deferred,
-	TP_PROTO(struct svc_deferred_req *dr),
-	TP_ARGS(dr));
 #endif /* _TRACE_SUNRPC_H */
 
 #include <trace/define_trace.h>



* [PATCH v3 20/24] xprtrdma: Trace mapping, alloc, and dereg failures
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (18 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 19/24] SUNRPC: Simplify defining common RPC trace events Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 16:31 ` [PATCH v3 21/24] xprtrdma: Update comments in frwr_op_send Chuck Lever
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

These are rare, but can be helpful in tracking down DMAR and other
problems.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/trace/events/rpcrdma.h |  136 ++++++++++++++++++++++++++++++++++++++++
 net/sunrpc/xprtrdma/frwr_ops.c |   12 +---
 net/sunrpc/xprtrdma/rpc_rdma.c |    2 -
 net/sunrpc/xprtrdma/verbs.c    |    4 +
 4 files changed, 144 insertions(+), 10 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index 727786f..f4537a6 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -10,6 +10,7 @@
 #if !defined(_TRACE_RPCRDMA_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _TRACE_RPCRDMA_H
 
+#include <linux/scatterlist.h>
 #include <linux/tracepoint.h>
 #include <trace/events/rdma.h>
 
@@ -663,12 +664,147 @@
 DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_li);
 DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_li_wake);
 
+TRACE_EVENT(xprtrdma_frwr_alloc,
+	TP_PROTO(
+		const struct rpcrdma_mr *mr,
+		int rc
+	),
+
+	TP_ARGS(mr, rc),
+
+	TP_STRUCT__entry(
+		__field(const void *, mr)
+		__field(int, rc)
+	),
+
+	TP_fast_assign(
+		__entry->mr = mr;
+		__entry->rc	= rc;
+	),
+
+	TP_printk("mr=%p: rc=%d",
+		__entry->mr, __entry->rc
+	)
+);
+
+TRACE_EVENT(xprtrdma_frwr_dereg,
+	TP_PROTO(
+		const struct rpcrdma_mr *mr,
+		int rc
+	),
+
+	TP_ARGS(mr, rc),
+
+	TP_STRUCT__entry(
+		__field(const void *, mr)
+		__field(u32, handle)
+		__field(u32, length)
+		__field(u64, offset)
+		__field(u32, dir)
+		__field(int, rc)
+	),
+
+	TP_fast_assign(
+		__entry->mr = mr;
+		__entry->handle = mr->mr_handle;
+		__entry->length = mr->mr_length;
+		__entry->offset = mr->mr_offset;
+		__entry->dir    = mr->mr_dir;
+		__entry->rc	= rc;
+	),
+
+	TP_printk("mr=%p %u@0x%016llx:0x%08x (%s): rc=%d",
+		__entry->mr, __entry->length,
+		(unsigned long long)__entry->offset, __entry->handle,
+		xprtrdma_show_direction(__entry->dir),
+		__entry->rc
+	)
+);
+
+TRACE_EVENT(xprtrdma_frwr_sgerr,
+	TP_PROTO(
+		const struct rpcrdma_mr *mr,
+		int sg_nents
+	),
+
+	TP_ARGS(mr, sg_nents),
+
+	TP_STRUCT__entry(
+		__field(const void *, mr)
+		__field(u64, addr)
+		__field(u32, dir)
+		__field(int, nents)
+	),
+
+	TP_fast_assign(
+		__entry->mr = mr;
+		__entry->addr = mr->mr_sg->dma_address;
+		__entry->dir = mr->mr_dir;
+		__entry->nents = sg_nents;
+	),
+
+	TP_printk("mr=%p addr=%llx (%s) sg_nents=%d",
+		__entry->mr, __entry->addr,
+		xprtrdma_show_direction(__entry->dir),
+		__entry->nents
+	)
+);
+
+TRACE_EVENT(xprtrdma_frwr_maperr,
+	TP_PROTO(
+		const struct rpcrdma_mr *mr,
+		int num_mapped
+	),
+
+	TP_ARGS(mr, num_mapped),
+
+	TP_STRUCT__entry(
+		__field(const void *, mr)
+		__field(u64, addr)
+		__field(u32, dir)
+		__field(int, num_mapped)
+		__field(int, nents)
+	),
+
+	TP_fast_assign(
+		__entry->mr = mr;
+		__entry->addr = mr->mr_sg->dma_address;
+		__entry->dir = mr->mr_dir;
+		__entry->num_mapped = num_mapped;
+		__entry->nents = mr->mr_nents;
+	),
+
+	TP_printk("mr=%p addr=%llx (%s) nents=%d of %d",
+		__entry->mr, __entry->addr,
+		xprtrdma_show_direction(__entry->dir),
+		__entry->num_mapped, __entry->nents
+	)
+);
+
 DEFINE_MR_EVENT(localinv);
 DEFINE_MR_EVENT(map);
 DEFINE_MR_EVENT(unmap);
 DEFINE_MR_EVENT(remoteinv);
 DEFINE_MR_EVENT(recycle);
 
+TRACE_EVENT(xprtrdma_dma_maperr,
+	TP_PROTO(
+		u64 addr
+	),
+
+	TP_ARGS(addr),
+
+	TP_STRUCT__entry(
+		__field(u64, addr)
+	),
+
+	TP_fast_assign(
+		__entry->addr = addr;
+	),
+
+	TP_printk("dma addr=0x%llx\n", __entry->addr)
+);
+
 /**
  ** Reply events
  **/
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 6d6cc80..529865a 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -104,8 +104,7 @@
 
 	rc = ib_dereg_mr(mr->frwr.fr_mr);
 	if (rc)
-		pr_err("rpcrdma: final ib_dereg_mr for %p returned %i\n",
-		       mr, rc);
+		trace_xprtrdma_frwr_dereg(mr, rc);
 	kfree(mr->mr_sg);
 	kfree(mr);
 }
@@ -158,8 +157,7 @@
 
 out_mr_err:
 	rc = PTR_ERR(frwr->fr_mr);
-	dprintk("RPC:       %s: ib_alloc_mr status %i\n",
-		__func__, rc);
+	trace_xprtrdma_frwr_alloc(mr, rc);
 	return rc;
 
 out_list_err:
@@ -421,15 +419,13 @@
 	return seg;
 
 out_dmamap_err:
-	pr_err("rpcrdma: failed to DMA map sg %p sg_nents %d\n",
-	       mr->mr_sg, i);
 	frwr->fr_state = FRWR_IS_INVALID;
+	trace_xprtrdma_frwr_sgerr(mr, i);
 	rpcrdma_mr_put(mr);
 	return ERR_PTR(-EIO);
 
 out_mapmr_err:
-	pr_err("rpcrdma: failed to map mr %p (%d/%d)\n",
-	       frwr->fr_mr, n, mr->mr_nents);
+	trace_xprtrdma_frwr_maperr(mr, n);
 	rpcrdma_mr_recycle(mr);
 	return ERR_PTR(-EIO);
 }
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index b89342d..422d793 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -668,7 +668,7 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
 
 out_mapping_err:
 	rpcrdma_unmap_sendctx(sc);
-	pr_err("rpcrdma: Send mapping error\n");
+	trace_xprtrdma_dma_maperr(sge[sge_no].addr);
 	return false;
 }
 
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 4afed9f..1ee55d1 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1419,8 +1419,10 @@ struct rpcrdma_regbuf *
 					    (void *)rb->rg_base,
 					    rdmab_length(rb),
 					    rb->rg_direction);
-	if (ib_dma_mapping_error(device, rdmab_addr(rb)))
+	if (ib_dma_mapping_error(device, rdmab_addr(rb))) {
+		trace_xprtrdma_dma_maperr(rdmab_addr(rb));
 		return false;
+	}
 
 	rb->rg_device = device;
 	rb->rg_iov.lkey = ia->ri_pd->local_dma_lkey;



* [PATCH v3 21/24] xprtrdma: Update comments in frwr_op_send
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (19 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 20/24] xprtrdma: Trace mapping, alloc, and dereg failures Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 16:31 ` [PATCH v3 22/24] xprtrdma: Replace outdated comment for rpcrdma_ep_post Chuck Lever
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Commit f2877623082b ("xprtrdma: Chain Send to FastReg WRs") was
written before commit ce5b37178283 ("xprtrdma: Replace all usage of
"frmr" with "frwr""), but was merged afterwards. Thus it still
refers to FRMR and MWs.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/frwr_ops.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 529865a..42a18a2 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -432,7 +432,7 @@
 
 /* Post Send WR containing the RPC Call message.
  *
- * For FRMR, chain any FastReg WRs to the Send WR. Only a
+ * For FRWR, chain any FastReg WRs to the Send WR. Only a
  * single ib_post_send call is needed to register memory
  * and then post the Send WR.
  */
@@ -459,7 +459,7 @@
 	}
 
 	/* If ib_post_send fails, the next ->send_request for
-	 * @req will queue these MWs for recovery.
+	 * @req will queue these MRs for recovery.
 	 */
 	return ib_post_send(ia->ri_id->qp, post_wr, NULL);
 }



* [PATCH v3 22/24] xprtrdma: Replace outdated comment for rpcrdma_ep_post
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (20 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 21/24] xprtrdma: Update comments in frwr_op_send Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 16:31 ` [PATCH v3 23/24] xprtrdma: Add documenting comment for rpcrdma_buffer_destroy Chuck Lever
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Since commit 7c8d9e7c8863 ("xprtrdma: Move Receive posting to
Receive handler"), rpcrdma_ep_post is no longer responsible for
posting Receive buffers. Update the documenting comment to reflect
this change.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/verbs.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 1ee55d1..f403afc 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1454,10 +1454,14 @@ struct rpcrdma_regbuf *
 	kfree(rb);
 }
 
-/*
- * Prepost any receive buffer, then post send.
+/**
+ * rpcrdma_ep_post - Post WRs to a transport's Send Queue
+ * @ia: transport's device information
+ * @ep: transport's RDMA endpoint information
+ * @req: rpcrdma_req containing the Send WR to post
  *
- * Receive buffer is donated to hardware, reclaimed upon recv completion.
+ * Returns 0 if the post was successful, otherwise -ENOTCONN
+ * is returned.
  */
 int
 rpcrdma_ep_post(struct rpcrdma_ia *ia,



* [PATCH v3 23/24] xprtrdma: Add documenting comment for rpcrdma_buffer_destroy
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (21 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 22/24] xprtrdma: Replace outdated comment for rpcrdma_ep_post Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 16:31 ` [PATCH v3 24/24] xprtrdma: Clarify comments in rpcrdma_ia_remove Chuck Lever
  2018-12-10 17:55 ` [PATCH v3 00/24] NFS/RDMA client for next Jason Gunthorpe
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Make a note of the function's dependency on an earlier ib_drain_qp.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/verbs.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index f403afc..78a1200 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1212,6 +1212,14 @@ struct rpcrdma_req *
 	dprintk("RPC:       %s: released %u MRs\n", __func__, count);
 }
 
+/**
+ * rpcrdma_buffer_destroy - Release all hw resources
+ * @buf: root control block for resources
+ *
+ * ORDERING: relies on a prior ib_drain_qp:
+ * - No more Send or Receive completions can occur
+ * - All MRs, reps, and reqs are returned to their free lists
+ */
 void
 rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
 {



* [PATCH v3 24/24] xprtrdma: Clarify comments in rpcrdma_ia_remove
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (22 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 23/24] xprtrdma: Add documenting comment for rpcrdma_buffer_destroy Chuck Lever
@ 2018-12-10 16:31 ` Chuck Lever
  2018-12-10 17:55 ` [PATCH v3 00/24] NFS/RDMA client for next Jason Gunthorpe
  24 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-10 16:31 UTC (permalink / raw)
  To: anna.schumaker; +Cc: linux-rdma, linux-nfs

Comments are clarified to note how transport data structures are
protected.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/verbs.c |   14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 78a1200..473de08 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -442,8 +442,7 @@
  * rpcrdma_ia_remove - Handle device driver unload
  * @ia: interface adapter being removed
  *
- * Divest transport H/W resources associated with this adapter,
- * but allow it to be restored later.
+ * Callers must serialize calls to this function.
  */
 void
 rpcrdma_ia_remove(struct rpcrdma_ia *ia)
@@ -474,16 +473,23 @@
 	ib_free_cq(ep->rep_attr.send_cq);
 	ep->rep_attr.send_cq = NULL;
 
-	/* The ULP is responsible for ensuring all DMA
-	 * mappings and MRs are gone.
+	/* The ib_drain_qp above guarantees that all posted
+	 * Receives have flushed, which returns the transport's
+	 * rpcrdma_reps to the rb_recv_bufs list.
 	 */
 	list_for_each_entry(rep, &buf->rb_recv_bufs, rr_list)
 		rpcrdma_dma_unmap_regbuf(rep->rr_rdmabuf);
+
+	/* DMA mapping happens in ->send_request with the
+	 * transport send lock held. Our caller is holding
+	 * the transport send lock.
+	 */
 	list_for_each_entry(req, &buf->rb_allreqs, rl_all) {
 		rpcrdma_dma_unmap_regbuf(req->rl_rdmabuf);
 		rpcrdma_dma_unmap_regbuf(req->rl_sendbuf);
 		rpcrdma_dma_unmap_regbuf(req->rl_recvbuf);
 	}
+
 	rpcrdma_mrs_destroy(buf);
 	ib_dealloc_pd(ia->ri_pd);
 	ia->ri_pd = NULL;



* Re: [PATCH v3 00/24] NFS/RDMA client for next
  2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
                   ` (23 preceding siblings ...)
  2018-12-10 16:31 ` [PATCH v3 24/24] xprtrdma: Clarify comments in rpcrdma_ia_remove Chuck Lever
@ 2018-12-10 17:55 ` Jason Gunthorpe
  24 siblings, 0 replies; 37+ messages in thread
From: Jason Gunthorpe @ 2018-12-10 17:55 UTC (permalink / raw)
  To: Chuck Lever; +Cc: anna.schumaker, linux-rdma, linux-nfs

On Mon, Dec 10, 2018 at 11:29:17AM -0500, Chuck Lever wrote:
> Hi Anna, I'd like to see these patches merged into next.
> 
> There have been several regressions related to the ->send_request
> changes merged into v4.20. As a result, this series contains some
> fixes and clean-ups that resulted from testing and close code
> audit while working on those regressions.
> 
> The soft IRQ warnings and DMAR faults that I observed with krb5
> flavors on NFS/RDMA are resolved by a prototype fix that delays
> the xprt_wake_pending_tasks call at disconnect. This fix is not
> ready yet and thus does not appear in this series.
> 
> However, use of Kerberos seems to trigger a lot of connection loss.
> The dynamic rpcrdma_req allocation patches that were in this series
> last time have been dropped because they made it even worse.
> 
> "xprtrdma: Prevent leak of rpcrdma_rep objects" is included in this
> series for convenience. Please apply that to v4.20-rc. Thanks!
> 
> Changes since v2:
> - Rebased on v4.20-rc6 to pick up recent fixes
> - Patches related to "xprtrdma: Dynamically allocate rpcrdma_reqs"
>   have been dropped
> - A number of revisions of documenting comments have been added
> - Several new trace points are introduced
> 
> 
> Changes since v1:
> - Rebased on v4.20-rc4
> - Series includes the full set, not just the RDMA-related fixes
> - "Plant XID..." has been improved, based on testing with rxe
> - The required rxe driver fix is included for convenience
> - "Fix ri_max_segs..." replaces a bogus one-line fix in v1
> - The patch description for "Remove support for FMR" was updated
> 
> 
> Chuck Lever (24):
>       xprtrdma: Prevent leak of rpcrdma_rep objects
>       IB/rxe: IB_WR_REG_MR does not capture MR's iova field

This last patch is already applied to the RDMA tree, please do not
apply it to another tree.

Thanks,
Jason


* Re: [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field
  2018-12-10 16:29 ` [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field Chuck Lever
@ 2018-12-11 14:00   ` Christoph Hellwig
  2018-12-11 15:26     ` Chuck Lever
  0 siblings, 1 reply; 37+ messages in thread
From: Christoph Hellwig @ 2018-12-11 14:00 UTC (permalink / raw)
  To: Chuck Lever; +Cc: anna.schumaker, linux-rdma, linux-nfs

Shouldn't this go in through the rdma tree?


* Re: [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration
  2018-12-10 16:29 ` [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration Chuck Lever
@ 2018-12-11 14:02   ` Christoph Hellwig
  2018-12-11 15:29     ` Chuck Lever
  0 siblings, 1 reply; 37+ messages in thread
From: Christoph Hellwig @ 2018-12-11 14:02 UTC (permalink / raw)
  To: Chuck Lever; +Cc: anna.schumaker, linux-rdma, linux-nfs

Can we also kill off rpcrdma_memreg_ops now?  Indirect calls are
surprisingly expensive, especially in our post-spectre world, and
xprtrdma always seemed a little odd with this super generic
abstraction.


* Re: [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth
  2018-12-10 16:29 ` [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth Chuck Lever
@ 2018-12-11 14:02   ` Christoph Hellwig
  2018-12-11 15:30     ` Chuck Lever
  0 siblings, 1 reply; 37+ messages in thread
From: Christoph Hellwig @ 2018-12-11 14:02 UTC (permalink / raw)
  To: Chuck Lever; +Cc: anna.schumaker, linux-rdma, linux-nfs

On Mon, Dec 10, 2018 at 11:29:45AM -0500, Chuck Lever wrote:
> Some devices advertise a large max_fast_reg_page_list_len
> capability, but perform optimally when MRs are significantly smaller
> than that depth -- probably when the MR itself is no larger than a
> page.
> 
> By default, the RDMA R/W core API uses max_sge_rd as the maximum
> page depth for MRs. For some devices, the value of max_sge_rd is
> 1, which is also not optimal. Thus, when max_sge_rd is larger than
> 1, use that value. Otherwise use the value of the
> max_fast_reg_page_list_len attribute.
> 
> I've tested this with a couple of devices, and it reproducibly
> improves the throughput of large I/Os by several percent.

Can you list which devices for reference in the changelog?


* Re: [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field
  2018-12-11 14:00   ` Christoph Hellwig
@ 2018-12-11 15:26     ` Chuck Lever
  0 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-11 15:26 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Anna Schumaker, linux-rdma, Linux NFS Mailing List



> On Dec 11, 2018, at 9:00 AM, Christoph Hellwig <hch@infradead.org> wrote:
> 
> Shouldn't this go in through the rdma tree?

In fact, yes. Jason has already accepted it. I included
it again in this series for anyone who tests my patches
with soft RoCE. I can drop this patch for subsequent posts.

--
Chuck Lever





* Re: [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration
  2018-12-11 14:02   ` Christoph Hellwig
@ 2018-12-11 15:29     ` Chuck Lever
  2018-12-12  7:18       ` Christoph Hellwig
  0 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-11 15:29 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Anna Schumaker, linux-rdma, Linux NFS Mailing List



> On Dec 11, 2018, at 9:02 AM, Christoph Hellwig <hch@infradead.org> wrote:
> 
> Can we also kill off rpcrdma_memreg_ops now?  Indirect calls are
> surprisingly expensive, especially in our post-Spectre world, and
> xprtrdma always seemed a little odd with this super generic
> abstraction.

Yep, I'm considering replacing those indirect calls.

However, note that the RPC layer is full of them too, as are the
DMA operations used by xprtrdma. I'm not sure it will make
much of a difference.
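For readers following along, here is a minimal userspace sketch of the kind of conversion being discussed. Everything here is hypothetical and illustrative (the enum, `memreg_map`, and `frwr_map` are not actual xprtrdma symbols): with FMR gone, FRWR is the only registration backend left, so a call through an ops vtable can collapse into a direct call behind a trivial switch.

```c
#include <assert.h>

/*
 * Illustrative sketch only: once a single backend remains, an ops
 * vtable (indirect calls, costly under retpolines) can become direct
 * calls. Names here are hypothetical, not real xprtrdma symbols.
 */
enum memreg_mode { MEMREG_FRWR };

/* direct-call target: pretend to register nsegs segments */
static int frwr_map(int nsegs)
{
	return nsegs;
}

/*
 * Before: an indirect call through an ops pointer.
 * After: a switch the compiler lowers to direct, predictable calls.
 */
static int memreg_map(enum memreg_mode mode, int nsegs)
{
	switch (mode) {
	case MEMREG_FRWR:
		return frwr_map(nsegs);
	}
	return -1;
}
```

With only one case, the switch costs nothing; the win is that the CPU no longer has to speculate through a retpoline on every mapping operation.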


--
Chuck Lever





* Re: [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth
  2018-12-11 14:02   ` Christoph Hellwig
@ 2018-12-11 15:30     ` Chuck Lever
  2018-12-12  7:18       ` Christoph Hellwig
  0 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-11 15:30 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Anna Schumaker, linux-rdma, Linux NFS Mailing List



> On Dec 11, 2018, at 9:02 AM, Christoph Hellwig <hch@infradead.org> wrote:
> 
> On Mon, Dec 10, 2018 at 11:29:45AM -0500, Chuck Lever wrote:
>> Some devices advertise a large max_fast_reg_page_list_len
>> capability, but perform optimally when MRs are significantly smaller
>> than that depth -- probably when the MR itself is no larger than a
>> page.
>> 
>> By default, the RDMA R/W core API uses max_sge_rd as the maximum
>> page depth for MRs. For some devices, the value of max_sge_rd is
>> 1, which is also not optimal. Thus, when max_sge_rd is larger than
>> 1, use that value. Otherwise use the value of the
>> max_fast_reg_page_list_len attribute.
>> 
>> I've tested this with a couple of devices, and it reproducibly
>> improves the throughput of large I/Os by several percent.
> 
> Can you list which devices for reference in the changelog?

I have only three devices here. I can't make an exhaustive list.

Besides, this is exactly how rdma_rw works. I thought this was
common knowledge.

--
Chuck Lever





* Re: [PATCH v3 18/24] NFS: Fix NFSv4 symbolic trace point output
       [not found]   ` <632f5635-4c37-16ae-cdd0-65679d21c9ec@oracle.com>
@ 2018-12-11 19:19     ` Calum Mackay
  0 siblings, 0 replies; 37+ messages in thread
From: Calum Mackay @ 2018-12-11 19:19 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Anna Schumaker, linux-rdma, Linux NFS Mailing List

hi Chuck,

On 10/12/2018 4:30 pm, Chuck Lever wrote:
> These symbolic values were not being displayed in string form.
> TRACE_ENUM_DEFINE was missing in many cases.

-> TRACE_DEFINE_ENUM

cheers,
calum.


* Re: [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration
  2018-12-11 15:29     ` Chuck Lever
@ 2018-12-12  7:18       ` Christoph Hellwig
  0 siblings, 0 replies; 37+ messages in thread
From: Christoph Hellwig @ 2018-12-12  7:18 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Christoph Hellwig, Anna Schumaker, linux-rdma, Linux NFS Mailing List

On Tue, Dec 11, 2018 at 10:29:10AM -0500, Chuck Lever wrote:
> 
> 
> > On Dec 11, 2018, at 9:02 AM, Christoph Hellwig <hch@infradead.org> wrote:
> > 
> > Can we also kill off rpcrdma_memreg_ops now?  Indirect calls are
> > surprisingly expensive, especially in our post-Spectre world, and
> > xprtrdma always seemed a little odd with this super generic
> > abstraction.
> 
> Yep, I'm considering replacing those indirect calls.
> 
> However, note that the RPC layer is full of them too, as are the
> DMA operations used by xprtrdma. I'm not sure it will make
> much of a difference.

I've posted a series to remove them for the DMA fast path, and every
indirect call adds additional cost.


* Re: [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth
  2018-12-11 15:30     ` Chuck Lever
@ 2018-12-12  7:18       ` Christoph Hellwig
  0 siblings, 0 replies; 37+ messages in thread
From: Christoph Hellwig @ 2018-12-12  7:18 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Christoph Hellwig, Anna Schumaker, linux-rdma, Linux NFS Mailing List

On Tue, Dec 11, 2018 at 10:30:24AM -0500, Chuck Lever wrote:
> 
> 
> > On Dec 11, 2018, at 9:02 AM, Christoph Hellwig <hch@infradead.org> wrote:
> > 
> > On Mon, Dec 10, 2018 at 11:29:45AM -0500, Chuck Lever wrote:
> >> Some devices advertise a large max_fast_reg_page_list_len
> >> capability, but perform optimally when MRs are significantly smaller
> >> than that depth -- probably when the MR itself is no larger than a
> >> page.
> >> 
> >> By default, the RDMA R/W core API uses max_sge_rd as the maximum
> >> page depth for MRs. For some devices, the value of max_sge_rd is
> >> 1, which is also not optimal. Thus, when max_sge_rd is larger than
> >> 1, use that value. Otherwise use the value of the
> >> max_fast_reg_page_list_len attribute.
> >> 
> >> I've tested this with a couple of devices, and it reproducibly
> >> improves the throughput of large I/Os by several percent.
> > 
> > Can you list which devices for reference in the changelog?
> 
> I have only three devices here. I can't make an exhaustive list.

Just list the ones you've tested.


* Re: [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1
  2018-12-10 16:30 ` [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1 Chuck Lever
@ 2018-12-12 21:20   ` Chuck Lever
  2018-12-14 21:16     ` Chuck Lever
  0 siblings, 1 reply; 37+ messages in thread
From: Chuck Lever @ 2018-12-12 21:20 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: linux-rdma, Anna Schumaker, Linux NFS Mailing List

Hi Trond-

> On Dec 10, 2018, at 11:30 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
> 
> Kerberos v1 allows the selection of encryption types that are known
> to be insecure and are no longer widely deployed. Also there is no
> convenient facility for testing v1 or these enctypes, so essentially
> this code has been untested for some time.
> 
> Note that RFC 6649 deprecates DES and Arcfour_56 in Kerberos, and
> RFC 8429 (October 2018) deprecates DES3 and Arcfour.
> 
> Support for DES_CBC_RAW, DES_CBC_CRC, DES_CBC_MD4, DES_CBC_MD5,
> DES3_CBC_RAW, and ARCFOUR_HMAC encryption in the Linux kernel
> RPCSEC_GSS implementation is removed by this patch.

Wondering what kind of impact this will have on folks who have
the deprecated encryption types in their krb5.keytab, or with a
KDC that might use DES3 for user principals.

Anna suggested putting this change behind a Kconfig option.
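For context, the numeric list in KRB5_SUPPORTED_ENCTYPES below uses the RFC 3961/4120 encryption type registry numbers. A small sketch of the mapping (my own annotation to aid review, not code from the patch):

```c
#include <assert.h>
#include <string.h>

/*
 * RFC 3961/4120 encryption type numbers as they appear in
 * KRB5_SUPPORTED_ENCTYPES ("18,17,16,23,3,1,2" before this patch,
 * "18,17" after). Annotation only; not code from the patch.
 */
static const char *enctype_name(int etype)
{
	switch (etype) {
	case 1:  return "des-cbc-crc";
	case 2:  return "des-cbc-md4";
	case 3:  return "des-cbc-md5";
	case 16: return "des3-cbc-sha1";
	case 17: return "aes128-cts-hmac-sha1-96";
	case 18: return "aes256-cts-hmac-sha1-96";
	case 23: return "rc4-hmac";
	default: return "unknown";
	}
}

/* Only the AES enctypes survive the removal */
static int enctype_still_supported(int etype)
{
	return etype == 17 || etype == 18;
}
```

So a keytab whose keys are all DES, DES3, or RC4-HMAC would stop working against this client entirely, which is exactly the deployment concern raised above.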


> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> include/linux/sunrpc/gss_krb5.h          |   39 ---
> include/linux/sunrpc/gss_krb5_enctypes.h |    2 
> net/sunrpc/Kconfig                       |    3 
> net/sunrpc/auth_gss/Makefile             |    2 
> net/sunrpc/auth_gss/gss_krb5_crypto.c    |  423 ------------------------------
> net/sunrpc/auth_gss/gss_krb5_keys.c      |   53 ----
> net/sunrpc/auth_gss/gss_krb5_mech.c      |  278 --------------------
> net/sunrpc/auth_gss/gss_krb5_seal.c      |   73 -----
> net/sunrpc/auth_gss/gss_krb5_seqnum.c    |  164 ------------
> net/sunrpc/auth_gss/gss_krb5_unseal.c    |   80 ------
> net/sunrpc/auth_gss/gss_krb5_wrap.c      |  254 ------------------
> 11 files changed, 12 insertions(+), 1359 deletions(-)
> delete mode 100644 net/sunrpc/auth_gss/gss_krb5_seqnum.c
> 
> diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h
> index 02c0412..57f4a49 100644
> --- a/include/linux/sunrpc/gss_krb5.h
> +++ b/include/linux/sunrpc/gss_krb5.h
> @@ -105,7 +105,6 @@ struct krb5_ctx {
> 	struct crypto_sync_skcipher *acceptor_enc_aux;
> 	struct crypto_sync_skcipher *initiator_enc_aux;
> 	u8			Ksess[GSS_KRB5_MAX_KEYLEN]; /* session key */
> -	u8			cksum[GSS_KRB5_MAX_KEYLEN];
> 	s32			endtime;
> 	atomic_t		seq_send;
> 	atomic64_t		seq_send64;
> @@ -235,11 +234,6 @@ enum seal_alg {
> 	+ GSS_KRB5_MAX_CKSUM_LEN)
> 
> u32
> -make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
> -		struct xdr_buf *body, int body_offset, u8 *cksumkey,
> -		unsigned int usage, struct xdr_netobj *cksumout);
> -
> -u32
> make_checksum_v2(struct krb5_ctx *, char *header, int hdrlen,
> 		 struct xdr_buf *body, int body_offset, u8 *key,
> 		 unsigned int usage, struct xdr_netobj *cksum);
> @@ -268,25 +262,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
> 	     void *iv, void *in, void *out, int length); 
> 
> int
> -gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *outbuf,
> -		    int offset, struct page **pages);
> -
> -int
> -gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *inbuf,
> -		    int offset);
> -
> -s32
> -krb5_make_seq_num(struct krb5_ctx *kctx,
> -		struct crypto_sync_skcipher *key,
> -		int direction,
> -		u32 seqnum, unsigned char *cksum, unsigned char *buf);
> -
> -s32
> -krb5_get_seq_num(struct krb5_ctx *kctx,
> -	       unsigned char *cksum,
> -	       unsigned char *buf, int *direction, u32 *seqnum);
> -
> -int
> xdr_extend_head(struct xdr_buf *buf, unsigned int base, unsigned int shiftlen);
> 
> u32
> @@ -297,11 +272,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
> 		gfp_t gfp_mask);
> 
> u32
> -gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
> -		       struct xdr_netobj *randombits,
> -		       struct xdr_netobj *key);
> -
> -u32
> gss_krb5_aes_make_key(const struct gss_krb5_enctype *gk5e,
> 		      struct xdr_netobj *randombits,
> 		      struct xdr_netobj *key);
> @@ -316,14 +286,5 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
> 		     struct xdr_buf *buf, u32 *plainoffset,
> 		     u32 *plainlen);
> 
> -int
> -krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
> -		       struct crypto_sync_skcipher *cipher,
> -		       unsigned char *cksum);
> -
> -int
> -krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
> -		       struct crypto_sync_skcipher *cipher,
> -		       s32 seqnum);
> void
> gss_krb5_make_confounder(char *p, u32 conflen);
> diff --git a/include/linux/sunrpc/gss_krb5_enctypes.h b/include/linux/sunrpc/gss_krb5_enctypes.h
> index ec6234e..7a8abcf 100644
> --- a/include/linux/sunrpc/gss_krb5_enctypes.h
> +++ b/include/linux/sunrpc/gss_krb5_enctypes.h
> @@ -1,4 +1,4 @@
> /*
>  * Dumb way to share this static piece of information with nfsd
>  */
> -#define KRB5_SUPPORTED_ENCTYPES "18,17,16,23,3,1,2"
> +#define KRB5_SUPPORTED_ENCTYPES "18,17"
> diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
> index ac09ca8..80c8efc 100644
> --- a/net/sunrpc/Kconfig
> +++ b/net/sunrpc/Kconfig
> @@ -18,9 +18,8 @@ config SUNRPC_SWAP
> config RPCSEC_GSS_KRB5
> 	tristate "Secure RPC: Kerberos V mechanism"
> 	depends on SUNRPC && CRYPTO
> -	depends on CRYPTO_MD5 && CRYPTO_DES && CRYPTO_CBC && CRYPTO_CTS
> +	depends on CRYPTO_MD5 && CRYPTO_CTS
> 	depends on CRYPTO_ECB && CRYPTO_HMAC && CRYPTO_SHA1 && CRYPTO_AES
> -	depends on CRYPTO_ARC4
> 	default y
> 	select SUNRPC_GSS
> 	help
> diff --git a/net/sunrpc/auth_gss/Makefile b/net/sunrpc/auth_gss/Makefile
> index c374268..b5a65a0 100644
> --- a/net/sunrpc/auth_gss/Makefile
> +++ b/net/sunrpc/auth_gss/Makefile
> @@ -12,4 +12,4 @@ auth_rpcgss-y := auth_gss.o gss_generic_token.o \
> obj-$(CONFIG_RPCSEC_GSS_KRB5) += rpcsec_gss_krb5.o
> 
> rpcsec_gss_krb5-y := gss_krb5_mech.o gss_krb5_seal.o gss_krb5_unseal.o \
> -	gss_krb5_seqnum.o gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
> +	gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
> diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
> index 4f43383..896dd87 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
> @@ -138,230 +138,6 @@
> 	return crypto_ahash_update(req);
> }
> 
> -static int
> -arcfour_hmac_md5_usage_to_salt(unsigned int usage, u8 salt[4])
> -{
> -	unsigned int ms_usage;
> -
> -	switch (usage) {
> -	case KG_USAGE_SIGN:
> -		ms_usage = 15;
> -		break;
> -	case KG_USAGE_SEAL:
> -		ms_usage = 13;
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -	salt[0] = (ms_usage >> 0) & 0xff;
> -	salt[1] = (ms_usage >> 8) & 0xff;
> -	salt[2] = (ms_usage >> 16) & 0xff;
> -	salt[3] = (ms_usage >> 24) & 0xff;
> -
> -	return 0;
> -}
> -
> -static u32
> -make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
> -		       struct xdr_buf *body, int body_offset, u8 *cksumkey,
> -		       unsigned int usage, struct xdr_netobj *cksumout)
> -{
> -	struct scatterlist              sg[1];
> -	int err = -1;
> -	u8 *checksumdata;
> -	u8 *rc4salt;
> -	struct crypto_ahash *md5;
> -	struct crypto_ahash *hmac_md5;
> -	struct ahash_request *req;
> -
> -	if (cksumkey == NULL)
> -		return GSS_S_FAILURE;
> -
> -	if (cksumout->len < kctx->gk5e->cksumlength) {
> -		dprintk("%s: checksum buffer length, %u, too small for %s\n",
> -			__func__, cksumout->len, kctx->gk5e->name);
> -		return GSS_S_FAILURE;
> -	}
> -
> -	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
> -	if (!rc4salt)
> -		return GSS_S_FAILURE;
> -
> -	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
> -		dprintk("%s: invalid usage value %u\n", __func__, usage);
> -		goto out_free_rc4salt;
> -	}
> -
> -	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
> -	if (!checksumdata)
> -		goto out_free_rc4salt;
> -
> -	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
> -	if (IS_ERR(md5))
> -		goto out_free_cksum;
> -
> -	hmac_md5 = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0,
> -				      CRYPTO_ALG_ASYNC);
> -	if (IS_ERR(hmac_md5))
> -		goto out_free_md5;
> -
> -	req = ahash_request_alloc(md5, GFP_NOFS);
> -	if (!req)
> -		goto out_free_hmac_md5;
> -
> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
> -
> -	err = crypto_ahash_init(req);
> -	if (err)
> -		goto out;
> -	sg_init_one(sg, rc4salt, 4);
> -	ahash_request_set_crypt(req, sg, NULL, 4);
> -	err = crypto_ahash_update(req);
> -	if (err)
> -		goto out;
> -
> -	sg_init_one(sg, header, hdrlen);
> -	ahash_request_set_crypt(req, sg, NULL, hdrlen);
> -	err = crypto_ahash_update(req);
> -	if (err)
> -		goto out;
> -	err = xdr_process_buf(body, body_offset, body->len - body_offset,
> -			      checksummer, req);
> -	if (err)
> -		goto out;
> -	ahash_request_set_crypt(req, NULL, checksumdata, 0);
> -	err = crypto_ahash_final(req);
> -	if (err)
> -		goto out;
> -
> -	ahash_request_free(req);
> -	req = ahash_request_alloc(hmac_md5, GFP_NOFS);
> -	if (!req)
> -		goto out_free_hmac_md5;
> -
> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
> -
> -	err = crypto_ahash_setkey(hmac_md5, cksumkey, kctx->gk5e->keylength);
> -	if (err)
> -		goto out;
> -
> -	sg_init_one(sg, checksumdata, crypto_ahash_digestsize(md5));
> -	ahash_request_set_crypt(req, sg, checksumdata,
> -				crypto_ahash_digestsize(md5));
> -	err = crypto_ahash_digest(req);
> -	if (err)
> -		goto out;
> -
> -	memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
> -	cksumout->len = kctx->gk5e->cksumlength;
> -out:
> -	ahash_request_free(req);
> -out_free_hmac_md5:
> -	crypto_free_ahash(hmac_md5);
> -out_free_md5:
> -	crypto_free_ahash(md5);
> -out_free_cksum:
> -	kfree(checksumdata);
> -out_free_rc4salt:
> -	kfree(rc4salt);
> -	return err ? GSS_S_FAILURE : 0;
> -}
> -
> -/*
> - * checksum the plaintext data and hdrlen bytes of the token header
> - * The checksum is performed over the first 8 bytes of the
> - * gss token header and then over the data body
> - */
> -u32
> -make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
> -	      struct xdr_buf *body, int body_offset, u8 *cksumkey,
> -	      unsigned int usage, struct xdr_netobj *cksumout)
> -{
> -	struct crypto_ahash *tfm;
> -	struct ahash_request *req;
> -	struct scatterlist              sg[1];
> -	int err = -1;
> -	u8 *checksumdata;
> -	unsigned int checksumlen;
> -
> -	if (kctx->gk5e->ctype == CKSUMTYPE_HMAC_MD5_ARCFOUR)
> -		return make_checksum_hmac_md5(kctx, header, hdrlen,
> -					      body, body_offset,
> -					      cksumkey, usage, cksumout);
> -
> -	if (cksumout->len < kctx->gk5e->cksumlength) {
> -		dprintk("%s: checksum buffer length, %u, too small for %s\n",
> -			__func__, cksumout->len, kctx->gk5e->name);
> -		return GSS_S_FAILURE;
> -	}
> -
> -	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
> -	if (checksumdata == NULL)
> -		return GSS_S_FAILURE;
> -
> -	tfm = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0, CRYPTO_ALG_ASYNC);
> -	if (IS_ERR(tfm))
> -		goto out_free_cksum;
> -
> -	req = ahash_request_alloc(tfm, GFP_NOFS);
> -	if (!req)
> -		goto out_free_ahash;
> -
> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
> -
> -	checksumlen = crypto_ahash_digestsize(tfm);
> -
> -	if (cksumkey != NULL) {
> -		err = crypto_ahash_setkey(tfm, cksumkey,
> -					  kctx->gk5e->keylength);
> -		if (err)
> -			goto out;
> -	}
> -
> -	err = crypto_ahash_init(req);
> -	if (err)
> -		goto out;
> -	sg_init_one(sg, header, hdrlen);
> -	ahash_request_set_crypt(req, sg, NULL, hdrlen);
> -	err = crypto_ahash_update(req);
> -	if (err)
> -		goto out;
> -	err = xdr_process_buf(body, body_offset, body->len - body_offset,
> -			      checksummer, req);
> -	if (err)
> -		goto out;
> -	ahash_request_set_crypt(req, NULL, checksumdata, 0);
> -	err = crypto_ahash_final(req);
> -	if (err)
> -		goto out;
> -
> -	switch (kctx->gk5e->ctype) {
> -	case CKSUMTYPE_RSA_MD5:
> -		err = kctx->gk5e->encrypt(kctx->seq, NULL, checksumdata,
> -					  checksumdata, checksumlen);
> -		if (err)
> -			goto out;
> -		memcpy(cksumout->data,
> -		       checksumdata + checksumlen - kctx->gk5e->cksumlength,
> -		       kctx->gk5e->cksumlength);
> -		break;
> -	case CKSUMTYPE_HMAC_SHA1_DES3:
> -		memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
> -		break;
> -	default:
> -		BUG();
> -		break;
> -	}
> -	cksumout->len = kctx->gk5e->cksumlength;
> -out:
> -	ahash_request_free(req);
> -out_free_ahash:
> -	crypto_free_ahash(tfm);
> -out_free_cksum:
> -	kfree(checksumdata);
> -	return err ? GSS_S_FAILURE : 0;
> -}
> -
> /*
>  * checksum the plaintext data and hdrlen bytes of the token header
>  * Per rfc4121, sec. 4.2.4, the checksum is performed over the data
> @@ -526,35 +302,6 @@ struct encryptor_desc {
> 	return 0;
> }
> 
> -int
> -gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
> -		    int offset, struct page **pages)
> -{
> -	int ret;
> -	struct encryptor_desc desc;
> -	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
> -
> -	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
> -
> -	skcipher_request_set_sync_tfm(req, tfm);
> -	skcipher_request_set_callback(req, 0, NULL, NULL);
> -
> -	memset(desc.iv, 0, sizeof(desc.iv));
> -	desc.req = req;
> -	desc.pos = offset;
> -	desc.outbuf = buf;
> -	desc.pages = pages;
> -	desc.fragno = 0;
> -	desc.fraglen = 0;
> -
> -	sg_init_table(desc.infrags, 4);
> -	sg_init_table(desc.outfrags, 4);
> -
> -	ret = xdr_process_buf(buf, offset, buf->len - offset, encryptor, &desc);
> -	skcipher_request_zero(req);
> -	return ret;
> -}
> -
> struct decryptor_desc {
> 	u8 iv[GSS_KRB5_MAX_BLOCKSIZE];
> 	struct skcipher_request *req;
> @@ -609,32 +356,6 @@ struct decryptor_desc {
> 	return 0;
> }
> 
> -int
> -gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
> -		    int offset)
> -{
> -	int ret;
> -	struct decryptor_desc desc;
> -	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
> -
> -	/* XXXJBF: */
> -	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
> -
> -	skcipher_request_set_sync_tfm(req, tfm);
> -	skcipher_request_set_callback(req, 0, NULL, NULL);
> -
> -	memset(desc.iv, 0, sizeof(desc.iv));
> -	desc.req = req;
> -	desc.fragno = 0;
> -	desc.fraglen = 0;
> -
> -	sg_init_table(desc.frags, 4);
> -
> -	ret = xdr_process_buf(buf, offset, buf->len - offset, decryptor, &desc);
> -	skcipher_request_zero(req);
> -	return ret;
> -}
> -
> /*
>  * This function makes the assumption that it was ultimately called
>  * from gss_wrap().
> @@ -942,147 +663,3 @@ struct decryptor_desc {
> 		ret = GSS_S_FAILURE;
> 	return ret;
> }
> -
> -/*
> - * Compute Kseq given the initial session key and the checksum.
> - * Set the key of the given cipher.
> - */
> -int
> -krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
> -		       struct crypto_sync_skcipher *cipher,
> -		       unsigned char *cksum)
> -{
> -	struct crypto_shash *hmac;
> -	struct shash_desc *desc;
> -	u8 Kseq[GSS_KRB5_MAX_KEYLEN];
> -	u32 zeroconstant = 0;
> -	int err;
> -
> -	dprintk("%s: entered\n", __func__);
> -
> -	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
> -	if (IS_ERR(hmac)) {
> -		dprintk("%s: error %ld, allocating hash '%s'\n",
> -			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
> -		return PTR_ERR(hmac);
> -	}
> -
> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
> -		       GFP_NOFS);
> -	if (!desc) {
> -		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
> -			__func__, kctx->gk5e->cksum_name);
> -		crypto_free_shash(hmac);
> -		return -ENOMEM;
> -	}
> -
> -	desc->tfm = hmac;
> -	desc->flags = 0;
> -
> -	/* Compute intermediate Kseq from session key */
> -	err = crypto_shash_setkey(hmac, kctx->Ksess, kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	err = crypto_shash_digest(desc, (u8 *)&zeroconstant, 4, Kseq);
> -	if (err)
> -		goto out_err;
> -
> -	/* Compute final Kseq from the checksum and intermediate Kseq */
> -	err = crypto_shash_setkey(hmac, Kseq, kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	err = crypto_shash_digest(desc, cksum, 8, Kseq);
> -	if (err)
> -		goto out_err;
> -
> -	err = crypto_sync_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	err = 0;
> -
> -out_err:
> -	kzfree(desc);
> -	crypto_free_shash(hmac);
> -	dprintk("%s: returning %d\n", __func__, err);
> -	return err;
> -}
> -
> -/*
> - * Compute Kcrypt given the initial session key and the plaintext seqnum.
> - * Set the key of cipher kctx->enc.
> - */
> -int
> -krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
> -		       struct crypto_sync_skcipher *cipher,
> -		       s32 seqnum)
> -{
> -	struct crypto_shash *hmac;
> -	struct shash_desc *desc;
> -	u8 Kcrypt[GSS_KRB5_MAX_KEYLEN];
> -	u8 zeroconstant[4] = {0};
> -	u8 seqnumarray[4];
> -	int err, i;
> -
> -	dprintk("%s: entered, seqnum %u\n", __func__, seqnum);
> -
> -	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
> -	if (IS_ERR(hmac)) {
> -		dprintk("%s: error %ld, allocating hash '%s'\n",
> -			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
> -		return PTR_ERR(hmac);
> -	}
> -
> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
> -		       GFP_NOFS);
> -	if (!desc) {
> -		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
> -			__func__, kctx->gk5e->cksum_name);
> -		crypto_free_shash(hmac);
> -		return -ENOMEM;
> -	}
> -
> -	desc->tfm = hmac;
> -	desc->flags = 0;
> -
> -	/* Compute intermediate Kcrypt from session key */
> -	for (i = 0; i < kctx->gk5e->keylength; i++)
> -		Kcrypt[i] = kctx->Ksess[i] ^ 0xf0;
> -
> -	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	err = crypto_shash_digest(desc, zeroconstant, 4, Kcrypt);
> -	if (err)
> -		goto out_err;
> -
> -	/* Compute final Kcrypt from the seqnum and intermediate Kcrypt */
> -	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	seqnumarray[0] = (unsigned char) ((seqnum >> 24) & 0xff);
> -	seqnumarray[1] = (unsigned char) ((seqnum >> 16) & 0xff);
> -	seqnumarray[2] = (unsigned char) ((seqnum >> 8) & 0xff);
> -	seqnumarray[3] = (unsigned char) ((seqnum >> 0) & 0xff);
> -
> -	err = crypto_shash_digest(desc, seqnumarray, 4, Kcrypt);
> -	if (err)
> -		goto out_err;
> -
> -	err = crypto_sync_skcipher_setkey(cipher, Kcrypt,
> -					  kctx->gk5e->keylength);
> -	if (err)
> -		goto out_err;
> -
> -	err = 0;
> -
> -out_err:
> -	kzfree(desc);
> -	crypto_free_shash(hmac);
> -	dprintk("%s: returning %d\n", __func__, err);
> -	return err;
> -}
> diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
> index 550fdf1..de327ae 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_keys.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
> @@ -242,59 +242,6 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
> 	return ret;
> }
> 
> -#define smask(step) ((1<<step)-1)
> -#define pstep(x, step) (((x)&smask(step))^(((x)>>step)&smask(step)))
> -#define parity_char(x) pstep(pstep(pstep((x), 4), 2), 1)
> -
> -static void mit_des_fixup_key_parity(u8 key[8])
> -{
> -	int i;
> -	for (i = 0; i < 8; i++) {
> -		key[i] &= 0xfe;
> -		key[i] |= 1^parity_char(key[i]);
> -	}
> -}
> -
> -/*
> - * This is the des3 key derivation postprocess function
> - */
> -u32 gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
> -			   struct xdr_netobj *randombits,
> -			   struct xdr_netobj *key)
> -{
> -	int i;
> -	u32 ret = EINVAL;
> -
> -	if (key->len != 24) {
> -		dprintk("%s: key->len is %d\n", __func__, key->len);
> -		goto err_out;
> -	}
> -	if (randombits->len != 21) {
> -		dprintk("%s: randombits->len is %d\n",
> -			__func__, randombits->len);
> -		goto err_out;
> -	}
> -
> -	/* take the seven bytes, move them around into the top 7 bits of the
> -	   8 key bytes, then compute the parity bits.  Do this three times. */
> -
> -	for (i = 0; i < 3; i++) {
> -		memcpy(key->data + i*8, randombits->data + i*7, 7);
> -		key->data[i*8+7] = (((key->data[i*8]&1)<<1) |
> -				    ((key->data[i*8+1]&1)<<2) |
> -				    ((key->data[i*8+2]&1)<<3) |
> -				    ((key->data[i*8+3]&1)<<4) |
> -				    ((key->data[i*8+4]&1)<<5) |
> -				    ((key->data[i*8+5]&1)<<6) |
> -				    ((key->data[i*8+6]&1)<<7));
> -
> -		mit_des_fixup_key_parity(key->data + i*8);
> -	}
> -	ret = 0;
> -err_out:
> -	return ret;
> -}
> -
> /*
>  * This is the aes key derivation postprocess function
>  */
> diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
> index eab71fc..0837543 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_mech.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
> @@ -54,69 +54,6 @@
> 
> static const struct gss_krb5_enctype supported_gss_krb5_enctypes[] = {
> 	/*
> -	 * DES (All DES enctypes are mapped to the same gss functionality)
> -	 */
> -	{
> -	  .etype = ENCTYPE_DES_CBC_RAW,
> -	  .ctype = CKSUMTYPE_RSA_MD5,
> -	  .name = "des-cbc-crc",
> -	  .encrypt_name = "cbc(des)",
> -	  .cksum_name = "md5",
> -	  .encrypt = krb5_encrypt,
> -	  .decrypt = krb5_decrypt,
> -	  .mk_key = NULL,
> -	  .signalg = SGN_ALG_DES_MAC_MD5,
> -	  .sealalg = SEAL_ALG_DES,
> -	  .keybytes = 7,
> -	  .keylength = 8,
> -	  .blocksize = 8,
> -	  .conflen = 8,
> -	  .cksumlength = 8,
> -	  .keyed_cksum = 0,
> -	},
> -	/*
> -	 * RC4-HMAC
> -	 */
> -	{
> -	  .etype = ENCTYPE_ARCFOUR_HMAC,
> -	  .ctype = CKSUMTYPE_HMAC_MD5_ARCFOUR,
> -	  .name = "rc4-hmac",
> -	  .encrypt_name = "ecb(arc4)",
> -	  .cksum_name = "hmac(md5)",
> -	  .encrypt = krb5_encrypt,
> -	  .decrypt = krb5_decrypt,
> -	  .mk_key = NULL,
> -	  .signalg = SGN_ALG_HMAC_MD5,
> -	  .sealalg = SEAL_ALG_MICROSOFT_RC4,
> -	  .keybytes = 16,
> -	  .keylength = 16,
> -	  .blocksize = 1,
> -	  .conflen = 8,
> -	  .cksumlength = 8,
> -	  .keyed_cksum = 1,
> -	},
> -	/*
> -	 * 3DES
> -	 */
> -	{
> -	  .etype = ENCTYPE_DES3_CBC_RAW,
> -	  .ctype = CKSUMTYPE_HMAC_SHA1_DES3,
> -	  .name = "des3-hmac-sha1",
> -	  .encrypt_name = "cbc(des3_ede)",
> -	  .cksum_name = "hmac(sha1)",
> -	  .encrypt = krb5_encrypt,
> -	  .decrypt = krb5_decrypt,
> -	  .mk_key = gss_krb5_des3_make_key,
> -	  .signalg = SGN_ALG_HMAC_SHA1_DES3_KD,
> -	  .sealalg = SEAL_ALG_DES3KD,
> -	  .keybytes = 21,
> -	  .keylength = 24,
> -	  .blocksize = 8,
> -	  .conflen = 8,
> -	  .cksumlength = 20,
> -	  .keyed_cksum = 1,
> -	},
> -	/*
> 	 * AES128
> 	 */
> 	{
> @@ -227,15 +164,6 @@
> 	if (IS_ERR(p))
> 		goto out_err;
> 
> -	switch (alg) {
> -	case ENCTYPE_DES_CBC_CRC:
> -	case ENCTYPE_DES_CBC_MD4:
> -	case ENCTYPE_DES_CBC_MD5:
> -		/* Map all these key types to ENCTYPE_DES_CBC_RAW */
> -		alg = ENCTYPE_DES_CBC_RAW;
> -		break;
> -	}
> -
> 	if (!supported_gss_krb5_enctype(alg)) {
> 		printk(KERN_WARNING "gss_kerberos_mech: unsupported "
> 			"encryption key algorithm %d\n", alg);
> @@ -271,81 +199,6 @@
> 	return p;
> }
> 
> -static int
> -gss_import_v1_context(const void *p, const void *end, struct krb5_ctx *ctx)
> -{
> -	u32 seq_send;
> -	int tmp;
> -
> -	p = simple_get_bytes(p, end, &ctx->initiate, sizeof(ctx->initiate));
> -	if (IS_ERR(p))
> -		goto out_err;
> -
> -	/* Old format supports only DES!  Any other enctype uses new format */
> -	ctx->enctype = ENCTYPE_DES_CBC_RAW;
> -
> -	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
> -	if (ctx->gk5e == NULL) {
> -		p = ERR_PTR(-EINVAL);
> -		goto out_err;
> -	}
> -
> -	/* The downcall format was designed before we completely understood
> -	 * the uses of the context fields; so it includes some stuff we
> -	 * just give some minimal sanity-checking, and some we ignore
> -	 * completely (like the next twenty bytes): */
> -	if (unlikely(p + 20 > end || p + 20 < p)) {
> -		p = ERR_PTR(-EFAULT);
> -		goto out_err;
> -	}
> -	p += 20;
> -	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
> -	if (IS_ERR(p))
> -		goto out_err;
> -	if (tmp != SGN_ALG_DES_MAC_MD5) {
> -		p = ERR_PTR(-ENOSYS);
> -		goto out_err;
> -	}
> -	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
> -	if (IS_ERR(p))
> -		goto out_err;
> -	if (tmp != SEAL_ALG_DES) {
> -		p = ERR_PTR(-ENOSYS);
> -		goto out_err;
> -	}
> -	p = simple_get_bytes(p, end, &ctx->endtime, sizeof(ctx->endtime));
> -	if (IS_ERR(p))
> -		goto out_err;
> -	p = simple_get_bytes(p, end, &seq_send, sizeof(seq_send));
> -	if (IS_ERR(p))
> -		goto out_err;
> -	atomic_set(&ctx->seq_send, seq_send);
> -	p = simple_get_netobj(p, end, &ctx->mech_used);
> -	if (IS_ERR(p))
> -		goto out_err;
> -	p = get_key(p, end, ctx, &ctx->enc);
> -	if (IS_ERR(p))
> -		goto out_err_free_mech;
> -	p = get_key(p, end, ctx, &ctx->seq);
> -	if (IS_ERR(p))
> -		goto out_err_free_key1;
> -	if (p != end) {
> -		p = ERR_PTR(-EFAULT);
> -		goto out_err_free_key2;
> -	}
> -
> -	return 0;
> -
> -out_err_free_key2:
> -	crypto_free_sync_skcipher(ctx->seq);
> -out_err_free_key1:
> -	crypto_free_sync_skcipher(ctx->enc);
> -out_err_free_mech:
> -	kfree(ctx->mech_used.data);
> -out_err:
> -	return PTR_ERR(p);
> -}
> -
> static struct crypto_sync_skcipher *
> context_v2_alloc_cipher(struct krb5_ctx *ctx, const char *cname, u8 *key)
> {
> @@ -377,124 +230,6 @@
> }
> 
> static int
> -context_derive_keys_des3(struct krb5_ctx *ctx, gfp_t gfp_mask)
> -{
> -	struct xdr_netobj c, keyin, keyout;
> -	u8 cdata[GSS_KRB5_K5CLENGTH];
> -	u32 err;
> -
> -	c.len = GSS_KRB5_K5CLENGTH;
> -	c.data = cdata;
> -
> -	keyin.data = ctx->Ksess;
> -	keyin.len = ctx->gk5e->keylength;
> -	keyout.len = ctx->gk5e->keylength;
> -
> -	/* seq uses the raw key */
> -	ctx->seq = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
> -					   ctx->Ksess);
> -	if (ctx->seq == NULL)
> -		goto out_err;
> -
> -	ctx->enc = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
> -					   ctx->Ksess);
> -	if (ctx->enc == NULL)
> -		goto out_free_seq;
> -
> -	/* derive cksum */
> -	set_cdata(cdata, KG_USAGE_SIGN, KEY_USAGE_SEED_CHECKSUM);
> -	keyout.data = ctx->cksum;
> -	err = krb5_derive_key(ctx->gk5e, &keyin, &keyout, &c, gfp_mask);
> -	if (err) {
> -		dprintk("%s: Error %d deriving cksum key\n",
> -			__func__, err);
> -		goto out_free_enc;
> -	}
> -
> -	return 0;
> -
> -out_free_enc:
> -	crypto_free_sync_skcipher(ctx->enc);
> -out_free_seq:
> -	crypto_free_sync_skcipher(ctx->seq);
> -out_err:
> -	return -EINVAL;
> -}
> -
> -/*
> - * Note that RC4 depends on deriving keys using the sequence
> - * number or the checksum of a token.  Therefore, the final keys
> - * cannot be calculated until the token is being constructed!
> - */
> -static int
> -context_derive_keys_rc4(struct krb5_ctx *ctx)
> -{
> -	struct crypto_shash *hmac;
> -	char sigkeyconstant[] = "signaturekey";
> -	int slen = strlen(sigkeyconstant) + 1;	/* include null terminator */
> -	struct shash_desc *desc;
> -	int err;
> -
> -	dprintk("RPC:       %s: entered\n", __func__);
> -	/*
> -	 * derive cksum (aka Ksign) key
> -	 */
> -	hmac = crypto_alloc_shash(ctx->gk5e->cksum_name, 0, 0);
> -	if (IS_ERR(hmac)) {
> -		dprintk("%s: error %ld allocating hash '%s'\n",
> -			__func__, PTR_ERR(hmac), ctx->gk5e->cksum_name);
> -		err = PTR_ERR(hmac);
> -		goto out_err;
> -	}
> -
> -	err = crypto_shash_setkey(hmac, ctx->Ksess, ctx->gk5e->keylength);
> -	if (err)
> -		goto out_err_free_hmac;
> -
> -
> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac), GFP_NOFS);
> -	if (!desc) {
> -		dprintk("%s: failed to allocate hash descriptor for '%s'\n",
> -			__func__, ctx->gk5e->cksum_name);
> -		err = -ENOMEM;
> -		goto out_err_free_hmac;
> -	}
> -
> -	desc->tfm = hmac;
> -	desc->flags = 0;
> -
> -	err = crypto_shash_digest(desc, sigkeyconstant, slen, ctx->cksum);
> -	kzfree(desc);
> -	if (err)
> -		goto out_err_free_hmac;
> -	/*
> -	 * allocate hash, and skciphers for data and seqnum encryption
> -	 */
> -	ctx->enc = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
> -	if (IS_ERR(ctx->enc)) {
> -		err = PTR_ERR(ctx->enc);
> -		goto out_err_free_hmac;
> -	}
> -
> -	ctx->seq = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
> -	if (IS_ERR(ctx->seq)) {
> -		crypto_free_sync_skcipher(ctx->enc);
> -		err = PTR_ERR(ctx->seq);
> -		goto out_err_free_hmac;
> -	}
> -
> -	dprintk("RPC:       %s: returning success\n", __func__);
> -
> -	err = 0;
> -
> -out_err_free_hmac:
> -	crypto_free_shash(hmac);
> -out_err:
> -	dprintk("RPC:       %s: returning %d\n", __func__, err);
> -	return err;
> -}
> -
> -static int
> context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask)
> {
> 	struct xdr_netobj c, keyin, keyout;
> @@ -635,9 +370,6 @@
> 	p = simple_get_bytes(p, end, &ctx->enctype, sizeof(ctx->enctype));
> 	if (IS_ERR(p))
> 		goto out_err;
> -	/* Map ENCTYPE_DES3_CBC_SHA1 to ENCTYPE_DES3_CBC_RAW */
> -	if (ctx->enctype == ENCTYPE_DES3_CBC_SHA1)
> -		ctx->enctype = ENCTYPE_DES3_CBC_RAW;
> 	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
> 	if (ctx->gk5e == NULL) {
> 		dprintk("gss_kerberos_mech: unsupported krb5 enctype %u\n",
> @@ -665,10 +397,6 @@
> 	ctx->mech_used.len = gss_kerberos_mech.gm_oid.len;
> 
> 	switch (ctx->enctype) {
> -	case ENCTYPE_DES3_CBC_RAW:
> -		return context_derive_keys_des3(ctx, gfp_mask);
> -	case ENCTYPE_ARCFOUR_HMAC:
> -		return context_derive_keys_rc4(ctx);
> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
> 		return context_derive_keys_new(ctx, gfp_mask);
> @@ -694,11 +422,7 @@
> 	if (ctx == NULL)
> 		return -ENOMEM;
> 
> -	if (len == 85)
> -		ret = gss_import_v1_context(p, end, ctx);
> -	else
> -		ret = gss_import_v2_context(p, end, ctx, gfp_mask);
> -
> +	ret = gss_import_v2_context(p, end, ctx, gfp_mask);
> 	if (ret == 0) {
> 		ctx_id->internal_ctx_id = ctx;
> 		if (endtime)
> diff --git a/net/sunrpc/auth_gss/gss_krb5_seal.c b/net/sunrpc/auth_gss/gss_krb5_seal.c
> index 48fe4a5..feb0f2a 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_seal.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_seal.c
> @@ -70,32 +70,6 @@
> #endif
> 
> static void *
> -setup_token(struct krb5_ctx *ctx, struct xdr_netobj *token)
> -{
> -	u16 *ptr;
> -	void *krb5_hdr;
> -	int body_size = GSS_KRB5_TOK_HDR_LEN + ctx->gk5e->cksumlength;
> -
> -	token->len = g_token_size(&ctx->mech_used, body_size);
> -
> -	ptr = (u16 *)token->data;
> -	g_make_token_header(&ctx->mech_used, body_size, (unsigned char **)&ptr);
> -
> -	/* ptr now at start of header described in rfc 1964, section 1.2.1: */
> -	krb5_hdr = ptr;
> -	*ptr++ = KG_TOK_MIC_MSG;
> -	/*
> -	 * signalg is stored as if it were converted from LE to host endian, even
> -	 * though it's an opaque pair of bytes according to the RFC.
> -	 */
> -	*ptr++ = (__force u16)cpu_to_le16(ctx->gk5e->signalg);
> -	*ptr++ = SEAL_ALG_NONE;
> -	*ptr = 0xffff;
> -
> -	return krb5_hdr;
> -}
> -
> -static void *
> setup_token_v2(struct krb5_ctx *ctx, struct xdr_netobj *token)
> {
> 	u16 *ptr;
> @@ -124,45 +98,6 @@
> }
> 
> static u32
> -gss_get_mic_v1(struct krb5_ctx *ctx, struct xdr_buf *text,
> -		struct xdr_netobj *token)
> -{
> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
> -					    .data = cksumdata};
> -	void			*ptr;
> -	s32			now;
> -	u32			seq_send;
> -	u8			*cksumkey;
> -
> -	dprintk("RPC:       %s\n", __func__);
> -	BUG_ON(ctx == NULL);
> -
> -	now = get_seconds();
> -
> -	ptr = setup_token(ctx, token);
> -
> -	if (ctx->gk5e->keyed_cksum)
> -		cksumkey = ctx->cksum;
> -	else
> -		cksumkey = NULL;
> -
> -	if (make_checksum(ctx, ptr, 8, text, 0, cksumkey,
> -			  KG_USAGE_SIGN, &md5cksum))
> -		return GSS_S_FAILURE;
> -
> -	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
> -
> -	seq_send = atomic_fetch_inc(&ctx->seq_send);
> -
> -	if (krb5_make_seq_num(ctx, ctx->seq, ctx->initiate ? 0 : 0xff,
> -			      seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8))
> -		return GSS_S_FAILURE;
> -
> -	return (ctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
> -}
> -
> -static u32
> gss_get_mic_v2(struct krb5_ctx *ctx, struct xdr_buf *text,
> 		struct xdr_netobj *token)
> {
> @@ -210,14 +145,10 @@
> 	struct krb5_ctx		*ctx = gss_ctx->internal_ctx_id;
> 
> 	switch (ctx->enctype) {
> -	default:
> -		BUG();
> -	case ENCTYPE_DES_CBC_RAW:
> -	case ENCTYPE_DES3_CBC_RAW:
> -	case ENCTYPE_ARCFOUR_HMAC:
> -		return gss_get_mic_v1(ctx, text, token);
> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
> 		return gss_get_mic_v2(ctx, text, token);
> +	default:
> +		return GSS_S_FAILURE;
> 	}
> }
> diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c
> deleted file mode 100644
> index fb66562..0000000
> --- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c
> +++ /dev/null
> @@ -1,164 +0,0 @@
> -/*
> - *  linux/net/sunrpc/gss_krb5_seqnum.c
> - *
> - *  Adapted from MIT Kerberos 5-1.2.1 lib/gssapi/krb5/util_seqnum.c
> - *
> - *  Copyright (c) 2000 The Regents of the University of Michigan.
> - *  All rights reserved.
> - *
> - *  Andy Adamson   <andros@umich.edu>
> - */
> -
> -/*
> - * Copyright 1993 by OpenVision Technologies, Inc.
> - *
> - * Permission to use, copy, modify, distribute, and sell this software
> - * and its documentation for any purpose is hereby granted without fee,
> - * provided that the above copyright notice appears in all copies and
> - * that both that copyright notice and this permission notice appear in
> - * supporting documentation, and that the name of OpenVision not be used
> - * in advertising or publicity pertaining to distribution of the software
> - * without specific, written prior permission. OpenVision makes no
> - * representations about the suitability of this software for any
> - * purpose.  It is provided "as is" without express or implied warranty.
> - *
> - * OPENVISION DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
> - * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
> - * EVENT SHALL OPENVISION BE LIABLE FOR ANY SPECIAL, INDIRECT OR
> - * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF
> - * USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
> - * OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
> - * PERFORMANCE OF THIS SOFTWARE.
> - */
> -
> -#include <crypto/skcipher.h>
> -#include <linux/types.h>
> -#include <linux/sunrpc/gss_krb5.h>
> -
> -#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
> -# define RPCDBG_FACILITY        RPCDBG_AUTH
> -#endif
> -
> -static s32
> -krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum,
> -		      unsigned char *cksum, unsigned char *buf)
> -{
> -	struct crypto_sync_skcipher *cipher;
> -	unsigned char plain[8];
> -	s32 code;
> -
> -	dprintk("RPC:       %s:\n", __func__);
> -	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
> -	if (IS_ERR(cipher))
> -		return PTR_ERR(cipher);
> -
> -	plain[0] = (unsigned char) ((seqnum >> 24) & 0xff);
> -	plain[1] = (unsigned char) ((seqnum >> 16) & 0xff);
> -	plain[2] = (unsigned char) ((seqnum >> 8) & 0xff);
> -	plain[3] = (unsigned char) ((seqnum >> 0) & 0xff);
> -	plain[4] = direction;
> -	plain[5] = direction;
> -	plain[6] = direction;
> -	plain[7] = direction;
> -
> -	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
> -	if (code)
> -		goto out;
> -
> -	code = krb5_encrypt(cipher, cksum, plain, buf, 8);
> -out:
> -	crypto_free_sync_skcipher(cipher);
> -	return code;
> -}
> -s32
> -krb5_make_seq_num(struct krb5_ctx *kctx,
> -		struct crypto_sync_skcipher *key,
> -		int direction,
> -		u32 seqnum,
> -		unsigned char *cksum, unsigned char *buf)
> -{
> -	unsigned char plain[8];
> -
> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
> -		return krb5_make_rc4_seq_num(kctx, direction, seqnum,
> -					     cksum, buf);
> -
> -	plain[0] = (unsigned char) (seqnum & 0xff);
> -	plain[1] = (unsigned char) ((seqnum >> 8) & 0xff);
> -	plain[2] = (unsigned char) ((seqnum >> 16) & 0xff);
> -	plain[3] = (unsigned char) ((seqnum >> 24) & 0xff);
> -
> -	plain[4] = direction;
> -	plain[5] = direction;
> -	plain[6] = direction;
> -	plain[7] = direction;
> -
> -	return krb5_encrypt(key, cksum, plain, buf, 8);
> -}
> -
> -static s32
> -krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum,
> -		     unsigned char *buf, int *direction, s32 *seqnum)
> -{
> -	struct crypto_sync_skcipher *cipher;
> -	unsigned char plain[8];
> -	s32 code;
> -
> -	dprintk("RPC:       %s:\n", __func__);
> -	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
> -	if (IS_ERR(cipher))
> -		return PTR_ERR(cipher);
> -
> -	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
> -	if (code)
> -		goto out;
> -
> -	code = krb5_decrypt(cipher, cksum, buf, plain, 8);
> -	if (code)
> -		goto out;
> -
> -	if ((plain[4] != plain[5]) || (plain[4] != plain[6])
> -				   || (plain[4] != plain[7])) {
> -		code = (s32)KG_BAD_SEQ;
> -		goto out;
> -	}
> -
> -	*direction = plain[4];
> -
> -	*seqnum = ((plain[0] << 24) | (plain[1] << 16) |
> -					(plain[2] << 8) | (plain[3]));
> -out:
> -	crypto_free_sync_skcipher(cipher);
> -	return code;
> -}
> -
> -s32
> -krb5_get_seq_num(struct krb5_ctx *kctx,
> -	       unsigned char *cksum,
> -	       unsigned char *buf,
> -	       int *direction, u32 *seqnum)
> -{
> -	s32 code;
> -	unsigned char plain[8];
> -	struct crypto_sync_skcipher *key = kctx->seq;
> -
> -	dprintk("RPC:       krb5_get_seq_num:\n");
> -
> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
> -		return krb5_get_rc4_seq_num(kctx, cksum, buf,
> -					    direction, seqnum);
> -
> -	if ((code = krb5_decrypt(key, cksum, buf, plain, 8)))
> -		return code;
> -
> -	if ((plain[4] != plain[5]) || (plain[4] != plain[6]) ||
> -	    (plain[4] != plain[7]))
> -		return (s32)KG_BAD_SEQ;
> -
> -	*direction = plain[4];
> -
> -	*seqnum = ((plain[0]) |
> -		   (plain[1] << 8) | (plain[2] << 16) | (plain[3] << 24));
> -
> -	return 0;
> -}
> diff --git a/net/sunrpc/auth_gss/gss_krb5_unseal.c b/net/sunrpc/auth_gss/gss_krb5_unseal.c
> index ef2b25b..f0f646a 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_unseal.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_unseal.c
> @@ -71,78 +71,6 @@
>  * supposedly taken over. */
> 
> static u32
> -gss_verify_mic_v1(struct krb5_ctx *ctx,
> -		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
> -{
> -	int			signalg;
> -	int			sealalg;
> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
> -					    .data = cksumdata};
> -	s32			now;
> -	int			direction;
> -	u32			seqnum;
> -	unsigned char		*ptr = (unsigned char *)read_token->data;
> -	int			bodysize;
> -	u8			*cksumkey;
> -
> -	dprintk("RPC:       krb5_read_token\n");
> -
> -	if (g_verify_token_header(&ctx->mech_used, &bodysize, &ptr,
> -					read_token->len))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	if ((ptr[0] != ((KG_TOK_MIC_MSG >> 8) & 0xff)) ||
> -	    (ptr[1] !=  (KG_TOK_MIC_MSG & 0xff)))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	/* XXX sanity-check bodysize?? */
> -
> -	signalg = ptr[2] + (ptr[3] << 8);
> -	if (signalg != ctx->gk5e->signalg)
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	sealalg = ptr[4] + (ptr[5] << 8);
> -	if (sealalg != SEAL_ALG_NONE)
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	if (ctx->gk5e->keyed_cksum)
> -		cksumkey = ctx->cksum;
> -	else
> -		cksumkey = NULL;
> -
> -	if (make_checksum(ctx, ptr, 8, message_buffer, 0,
> -			  cksumkey, KG_USAGE_SIGN, &md5cksum))
> -		return GSS_S_FAILURE;
> -
> -	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
> -					ctx->gk5e->cksumlength))
> -		return GSS_S_BAD_SIG;
> -
> -	/* it got through unscathed.  Make sure the context is unexpired */
> -
> -	now = get_seconds();
> -
> -	if (now > ctx->endtime)
> -		return GSS_S_CONTEXT_EXPIRED;
> -
> -	/* do sequencing checks */
> -
> -	if (krb5_get_seq_num(ctx, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8,
> -			     &direction, &seqnum))
> -		return GSS_S_FAILURE;
> -
> -	if ((ctx->initiate && direction != 0xff) ||
> -	    (!ctx->initiate && direction != 0))
> -		return GSS_S_BAD_SIG;
> -
> -	return GSS_S_COMPLETE;
> -}
> -
> -static u32
> gss_verify_mic_v2(struct krb5_ctx *ctx,
> 		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
> {
> @@ -214,14 +142,10 @@
> 	struct krb5_ctx *ctx = gss_ctx->internal_ctx_id;
> 
> 	switch (ctx->enctype) {
> -	default:
> -		BUG();
> -	case ENCTYPE_DES_CBC_RAW:
> -	case ENCTYPE_DES3_CBC_RAW:
> -	case ENCTYPE_ARCFOUR_HMAC:
> -		return gss_verify_mic_v1(ctx, message_buffer, read_token);
> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
> 		return gss_verify_mic_v2(ctx, message_buffer, read_token);
> +	default:
> +		return GSS_S_FAILURE;
> 	}
> }
> diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> index 5cdde6c..98c99d3 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> @@ -146,244 +146,6 @@
> 	}
> }
> 
> -/* Assumptions: the head and tail of inbuf are ours to play with.
> - * The pages, however, may be real pages in the page cache and we replace
> - * them with scratch pages from **pages before writing to them. */
> -/* XXX: obviously the above should be documentation of wrap interface,
> - * and shouldn't be in this kerberos-specific file. */
> -
> -/* XXX factor out common code with seal/unseal. */
> -
> -static u32
> -gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
> -		struct xdr_buf *buf, struct page **pages)
> -{
> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
> -					    .data = cksumdata};
> -	int			blocksize = 0, plainlen;
> -	unsigned char		*ptr, *msg_start;
> -	s32			now;
> -	int			headlen;
> -	struct page		**tmp_pages;
> -	u32			seq_send;
> -	u8			*cksumkey;
> -	u32			conflen = kctx->gk5e->conflen;
> -
> -	dprintk("RPC:       %s\n", __func__);
> -
> -	now = get_seconds();
> -
> -	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
> -	gss_krb5_add_padding(buf, offset, blocksize);
> -	BUG_ON((buf->len - offset) % blocksize);
> -	plainlen = conflen + buf->len - offset;
> -
> -	headlen = g_token_size(&kctx->mech_used,
> -		GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength + plainlen) -
> -		(buf->len - offset);
> -
> -	ptr = buf->head[0].iov_base + offset;
> -	/* shift data to make room for header. */
> -	xdr_extend_head(buf, offset, headlen);
> -
> -	/* XXX Would be cleverer to encrypt while copying. */
> -	BUG_ON((buf->len - offset - headlen) % blocksize);
> -
> -	g_make_token_header(&kctx->mech_used,
> -				GSS_KRB5_TOK_HDR_LEN +
> -				kctx->gk5e->cksumlength + plainlen, &ptr);
> -
> -
> -	/* ptr now at header described in rfc 1964, section 1.2.1: */
> -	ptr[0] = (unsigned char) ((KG_TOK_WRAP_MSG >> 8) & 0xff);
> -	ptr[1] = (unsigned char) (KG_TOK_WRAP_MSG & 0xff);
> -
> -	msg_start = ptr + GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength;
> -
> -	/*
> -	 * signalg and sealalg are stored as if they were converted from LE
> -	 * to host endian, even though they're opaque pairs of bytes according
> -	 * to the RFC.
> -	 */
> -	*(__le16 *)(ptr + 2) = cpu_to_le16(kctx->gk5e->signalg);
> -	*(__le16 *)(ptr + 4) = cpu_to_le16(kctx->gk5e->sealalg);
> -	ptr[6] = 0xff;
> -	ptr[7] = 0xff;
> -
> -	gss_krb5_make_confounder(msg_start, conflen);
> -
> -	if (kctx->gk5e->keyed_cksum)
> -		cksumkey = kctx->cksum;
> -	else
> -		cksumkey = NULL;
> -
> -	/* XXXJBF: UGH!: */
> -	tmp_pages = buf->pages;
> -	buf->pages = pages;
> -	if (make_checksum(kctx, ptr, 8, buf, offset + headlen - conflen,
> -					cksumkey, KG_USAGE_SEAL, &md5cksum))
> -		return GSS_S_FAILURE;
> -	buf->pages = tmp_pages;
> -
> -	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
> -
> -	seq_send = atomic_fetch_inc(&kctx->seq_send);
> -
> -	/* XXX would probably be more efficient to compute checksum
> -	 * and encrypt at the same time: */
> -	if ((krb5_make_seq_num(kctx, kctx->seq, kctx->initiate ? 0 : 0xff,
> -			       seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8)))
> -		return GSS_S_FAILURE;
> -
> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
> -		struct crypto_sync_skcipher *cipher;
> -		int err;
> -		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
> -						    0, 0);
> -		if (IS_ERR(cipher))
> -			return GSS_S_FAILURE;
> -
> -		krb5_rc4_setup_enc_key(kctx, cipher, seq_send);
> -
> -		err = gss_encrypt_xdr_buf(cipher, buf,
> -					  offset + headlen - conflen, pages);
> -		crypto_free_sync_skcipher(cipher);
> -		if (err)
> -			return GSS_S_FAILURE;
> -	} else {
> -		if (gss_encrypt_xdr_buf(kctx->enc, buf,
> -					offset + headlen - conflen, pages))
> -			return GSS_S_FAILURE;
> -	}
> -
> -	return (kctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
> -}
> -
> -static u32
> -gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
> -{
> -	int			signalg;
> -	int			sealalg;
> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
> -					    .data = cksumdata};
> -	s32			now;
> -	int			direction;
> -	s32			seqnum;
> -	unsigned char		*ptr;
> -	int			bodysize;
> -	void			*data_start, *orig_start;
> -	int			data_len;
> -	int			blocksize;
> -	u32			conflen = kctx->gk5e->conflen;
> -	int			crypt_offset;
> -	u8			*cksumkey;
> -
> -	dprintk("RPC:       gss_unwrap_kerberos\n");
> -
> -	ptr = (u8 *)buf->head[0].iov_base + offset;
> -	if (g_verify_token_header(&kctx->mech_used, &bodysize, &ptr,
> -					buf->len - offset))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	if ((ptr[0] != ((KG_TOK_WRAP_MSG >> 8) & 0xff)) ||
> -	    (ptr[1] !=  (KG_TOK_WRAP_MSG & 0xff)))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	/* XXX sanity-check bodysize?? */
> -
> -	/* get the sign and seal algorithms */
> -
> -	signalg = ptr[2] + (ptr[3] << 8);
> -	if (signalg != kctx->gk5e->signalg)
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	sealalg = ptr[4] + (ptr[5] << 8);
> -	if (sealalg != kctx->gk5e->sealalg)
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	/*
> -	 * Data starts after token header and checksum.  ptr points
> -	 * to the beginning of the token header
> -	 */
> -	crypt_offset = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) -
> -					(unsigned char *)buf->head[0].iov_base;
> -
> -	/*
> -	 * Need plaintext seqnum to derive encryption key for arcfour-hmac
> -	 */
> -	if (krb5_get_seq_num(kctx, ptr + GSS_KRB5_TOK_HDR_LEN,
> -			     ptr + 8, &direction, &seqnum))
> -		return GSS_S_BAD_SIG;
> -
> -	if ((kctx->initiate && direction != 0xff) ||
> -	    (!kctx->initiate && direction != 0))
> -		return GSS_S_BAD_SIG;
> -
> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
> -		struct crypto_sync_skcipher *cipher;
> -		int err;
> -
> -		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
> -						    0, 0);
> -		if (IS_ERR(cipher))
> -			return GSS_S_FAILURE;
> -
> -		krb5_rc4_setup_enc_key(kctx, cipher, seqnum);
> -
> -		err = gss_decrypt_xdr_buf(cipher, buf, crypt_offset);
> -		crypto_free_sync_skcipher(cipher);
> -		if (err)
> -			return GSS_S_DEFECTIVE_TOKEN;
> -	} else {
> -		if (gss_decrypt_xdr_buf(kctx->enc, buf, crypt_offset))
> -			return GSS_S_DEFECTIVE_TOKEN;
> -	}
> -
> -	if (kctx->gk5e->keyed_cksum)
> -		cksumkey = kctx->cksum;
> -	else
> -		cksumkey = NULL;
> -
> -	if (make_checksum(kctx, ptr, 8, buf, crypt_offset,
> -					cksumkey, KG_USAGE_SEAL, &md5cksum))
> -		return GSS_S_FAILURE;
> -
> -	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
> -						kctx->gk5e->cksumlength))
> -		return GSS_S_BAD_SIG;
> -
> -	/* it got through unscathed.  Make sure the context is unexpired */
> -
> -	now = get_seconds();
> -
> -	if (now > kctx->endtime)
> -		return GSS_S_CONTEXT_EXPIRED;
> -
> -	/* do sequencing checks */
> -
> -	/* Copy the data back to the right position.  XXX: Would probably be
> -	 * better to copy and encrypt at the same time. */
> -
> -	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
> -	data_start = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) +
> -					conflen;
> -	orig_start = buf->head[0].iov_base + offset;
> -	data_len = (buf->head[0].iov_base + buf->head[0].iov_len) - data_start;
> -	memmove(orig_start, data_start, data_len);
> -	buf->head[0].iov_len -= (data_start - orig_start);
> -	buf->len -= (data_start - orig_start);
> -
> -	if (gss_krb5_remove_padding(buf, blocksize))
> -		return GSS_S_DEFECTIVE_TOKEN;
> -
> -	return GSS_S_COMPLETE;
> -}
> -
> /*
>  * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
>  * to do more than that, we shift repeatedly.  Kevin Coffman reports
> @@ -588,15 +350,11 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
> 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
> 
> 	switch (kctx->enctype) {
> -	default:
> -		BUG();
> -	case ENCTYPE_DES_CBC_RAW:
> -	case ENCTYPE_DES3_CBC_RAW:
> -	case ENCTYPE_ARCFOUR_HMAC:
> -		return gss_wrap_kerberos_v1(kctx, offset, buf, pages);
> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
> 		return gss_wrap_kerberos_v2(kctx, offset, buf, pages);
> +	default:
> +		return GSS_S_FAILURE;
> 	}
> }
> 
> @@ -606,14 +364,10 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
> 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
> 
> 	switch (kctx->enctype) {
> -	default:
> -		BUG();
> -	case ENCTYPE_DES_CBC_RAW:
> -	case ENCTYPE_DES3_CBC_RAW:
> -	case ENCTYPE_ARCFOUR_HMAC:
> -		return gss_unwrap_kerberos_v1(kctx, offset, buf);
> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
> 		return gss_unwrap_kerberos_v2(kctx, offset, buf);
> +	default:
> +		return GSS_S_FAILURE;
> 	}
> }
> 

--
Chuck Lever


* Re: [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1
  2018-12-12 21:20   ` Chuck Lever
@ 2018-12-14 21:16     ` Chuck Lever
  0 siblings, 0 replies; 37+ messages in thread
From: Chuck Lever @ 2018-12-14 21:16 UTC (permalink / raw)
  To: Trond Myklebust, Anna Schumaker
  Cc: linux-rdma, Linux NFS Mailing List, Simo Sorce

Following up...

> On Dec 12, 2018, at 4:20 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> 
> Hi Trond-
> 
>> On Dec 10, 2018, at 11:30 AM, Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>> Kerberos v1 allows the selection of encryption types that are known
>> to be insecure and are no longer widely deployed. Also there is no
>> convenient facility for testing v1 or these enctypes, so essentially
>> this code has been untested for some time.
>> 
>> Note that RFC 6649 deprecates DES and Arcfour_56 in Kerberos, and
>> RFC 8429 (October 2018) deprecates DES3 and Arcfour.
>> 
>> Support for DES_CBC_RAW, DES_CBC_CRC, DES_CBC_MD4, DES_CBC_MD5,
>> DES3_CBC_RAW, and ARCFOUR_HMAC encryption in the Linux kernel
>> RPCSEC_GSS implementation is removed by this patch.
> 
> Wondering what kind of impact this will have on folks who have
> the deprecated encryption types in their krb5.keytab or with a
>> KDC that might use DES3 for user principals.
> 
> Anna suggested putting this change behind a KCONFIG option.

I'm told there are indeed convenient ways to test these old
enctypes using current KDCs, and further that some customers
still insist on using them.

A better approach would be to put these enctypes under test, to
ensure they remain in working order both today and after the
significant changes coming in this area; and to introduce a
CONFIG option for disabling them.

I will drop this patch from my for-4.21. For v4.22, I will try
to revisit via a narrower change that preserves support.

I plan to post a fresh revision of my for-4.21 next week,
rebased on v4.20-rc7, with fixes for the soft IRQ / DMAR
issues.


>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> include/linux/sunrpc/gss_krb5.h          |   39 ---
>> include/linux/sunrpc/gss_krb5_enctypes.h |    2 
>> net/sunrpc/Kconfig                       |    3 
>> net/sunrpc/auth_gss/Makefile             |    2 
>> net/sunrpc/auth_gss/gss_krb5_crypto.c    |  423 ------------------------------
>> net/sunrpc/auth_gss/gss_krb5_keys.c      |   53 ----
>> net/sunrpc/auth_gss/gss_krb5_mech.c      |  278 --------------------
>> net/sunrpc/auth_gss/gss_krb5_seal.c      |   73 -----
>> net/sunrpc/auth_gss/gss_krb5_seqnum.c    |  164 ------------
>> net/sunrpc/auth_gss/gss_krb5_unseal.c    |   80 ------
>> net/sunrpc/auth_gss/gss_krb5_wrap.c      |  254 ------------------
>> 11 files changed, 12 insertions(+), 1359 deletions(-)
>> delete mode 100644 net/sunrpc/auth_gss/gss_krb5_seqnum.c
>> 
>> diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h
>> index 02c0412..57f4a49 100644
>> --- a/include/linux/sunrpc/gss_krb5.h
>> +++ b/include/linux/sunrpc/gss_krb5.h
>> @@ -105,7 +105,6 @@ struct krb5_ctx {
>> 	struct crypto_sync_skcipher *acceptor_enc_aux;
>> 	struct crypto_sync_skcipher *initiator_enc_aux;
>> 	u8			Ksess[GSS_KRB5_MAX_KEYLEN]; /* session key */
>> -	u8			cksum[GSS_KRB5_MAX_KEYLEN];
>> 	s32			endtime;
>> 	atomic_t		seq_send;
>> 	atomic64_t		seq_send64;
>> @@ -235,11 +234,6 @@ enum seal_alg {
>> 	+ GSS_KRB5_MAX_CKSUM_LEN)
>> 
>> u32
>> -make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
>> -		struct xdr_buf *body, int body_offset, u8 *cksumkey,
>> -		unsigned int usage, struct xdr_netobj *cksumout);
>> -
>> -u32
>> make_checksum_v2(struct krb5_ctx *, char *header, int hdrlen,
>> 		 struct xdr_buf *body, int body_offset, u8 *key,
>> 		 unsigned int usage, struct xdr_netobj *cksum);
>> @@ -268,25 +262,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
>> 	     void *iv, void *in, void *out, int length); 
>> 
>> int
>> -gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *outbuf,
>> -		    int offset, struct page **pages);
>> -
>> -int
>> -gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *inbuf,
>> -		    int offset);
>> -
>> -s32
>> -krb5_make_seq_num(struct krb5_ctx *kctx,
>> -		struct crypto_sync_skcipher *key,
>> -		int direction,
>> -		u32 seqnum, unsigned char *cksum, unsigned char *buf);
>> -
>> -s32
>> -krb5_get_seq_num(struct krb5_ctx *kctx,
>> -	       unsigned char *cksum,
>> -	       unsigned char *buf, int *direction, u32 *seqnum);
>> -
>> -int
>> xdr_extend_head(struct xdr_buf *buf, unsigned int base, unsigned int shiftlen);
>> 
>> u32
>> @@ -297,11 +272,6 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
>> 		gfp_t gfp_mask);
>> 
>> u32
>> -gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
>> -		       struct xdr_netobj *randombits,
>> -		       struct xdr_netobj *key);
>> -
>> -u32
>> gss_krb5_aes_make_key(const struct gss_krb5_enctype *gk5e,
>> 		      struct xdr_netobj *randombits,
>> 		      struct xdr_netobj *key);
>> @@ -316,14 +286,5 @@ u32 gss_verify_mic_kerberos(struct gss_ctx *, struct xdr_buf *,
>> 		     struct xdr_buf *buf, u32 *plainoffset,
>> 		     u32 *plainlen);
>> 
>> -int
>> -krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
>> -		       struct crypto_sync_skcipher *cipher,
>> -		       unsigned char *cksum);
>> -
>> -int
>> -krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
>> -		       struct crypto_sync_skcipher *cipher,
>> -		       s32 seqnum);
>> void
>> gss_krb5_make_confounder(char *p, u32 conflen);
>> diff --git a/include/linux/sunrpc/gss_krb5_enctypes.h b/include/linux/sunrpc/gss_krb5_enctypes.h
>> index ec6234e..7a8abcf 100644
>> --- a/include/linux/sunrpc/gss_krb5_enctypes.h
>> +++ b/include/linux/sunrpc/gss_krb5_enctypes.h
>> @@ -1,4 +1,4 @@
>> /*
>> * Dumb way to share this static piece of information with nfsd
>> */
>> -#define KRB5_SUPPORTED_ENCTYPES "18,17,16,23,3,1,2"
>> +#define KRB5_SUPPORTED_ENCTYPES "18,17"
>> diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
>> index ac09ca8..80c8efc 100644
>> --- a/net/sunrpc/Kconfig
>> +++ b/net/sunrpc/Kconfig
>> @@ -18,9 +18,8 @@ config SUNRPC_SWAP
>> config RPCSEC_GSS_KRB5
>> 	tristate "Secure RPC: Kerberos V mechanism"
>> 	depends on SUNRPC && CRYPTO
>> -	depends on CRYPTO_MD5 && CRYPTO_DES && CRYPTO_CBC && CRYPTO_CTS
>> +	depends on CRYPTO_MD5 && CRYPTO_CTS
>> 	depends on CRYPTO_ECB && CRYPTO_HMAC && CRYPTO_SHA1 && CRYPTO_AES
>> -	depends on CRYPTO_ARC4
>> 	default y
>> 	select SUNRPC_GSS
>> 	help
>> diff --git a/net/sunrpc/auth_gss/Makefile b/net/sunrpc/auth_gss/Makefile
>> index c374268..b5a65a0 100644
>> --- a/net/sunrpc/auth_gss/Makefile
>> +++ b/net/sunrpc/auth_gss/Makefile
>> @@ -12,4 +12,4 @@ auth_rpcgss-y := auth_gss.o gss_generic_token.o \
>> obj-$(CONFIG_RPCSEC_GSS_KRB5) += rpcsec_gss_krb5.o
>> 
>> rpcsec_gss_krb5-y := gss_krb5_mech.o gss_krb5_seal.o gss_krb5_unseal.o \
>> -	gss_krb5_seqnum.o gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
>> +	gss_krb5_wrap.o gss_krb5_crypto.o gss_krb5_keys.o
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
>> index 4f43383..896dd87 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
>> @@ -138,230 +138,6 @@
>> 	return crypto_ahash_update(req);
>> }
>> 
>> -static int
>> -arcfour_hmac_md5_usage_to_salt(unsigned int usage, u8 salt[4])
>> -{
>> -	unsigned int ms_usage;
>> -
>> -	switch (usage) {
>> -	case KG_USAGE_SIGN:
>> -		ms_usage = 15;
>> -		break;
>> -	case KG_USAGE_SEAL:
>> -		ms_usage = 13;
>> -		break;
>> -	default:
>> -		return -EINVAL;
>> -	}
>> -	salt[0] = (ms_usage >> 0) & 0xff;
>> -	salt[1] = (ms_usage >> 8) & 0xff;
>> -	salt[2] = (ms_usage >> 16) & 0xff;
>> -	salt[3] = (ms_usage >> 24) & 0xff;
>> -
>> -	return 0;
>> -}
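(Side note for the archive: the salt removed above is nothing cryptographic in itself — it is just the MS-defined 32-bit usage number serialized least-significant byte first. A minimal standalone sketch of that mapping; the `USAGE_SIGN`/`USAGE_SEAL` values here are stand-ins for the kernel's `KG_USAGE_*` constants, not copied from the tree:)

```c
#include <stdint.h>
#include <assert.h>

/* Stand-ins for the kernel's KG_USAGE_* values (assumed, not copied). */
enum { USAGE_SIGN = 23, USAGE_SEAL = 22 };

/*
 * Map a GSS key usage to the RC4-HMAC salt: the 32-bit MS usage
 * number, least-significant byte first.  Returns -1 for usages the
 * removed code rejected with -EINVAL.
 */
static int rc4_usage_to_salt(unsigned int usage, uint8_t salt[4])
{
	uint32_t ms_usage;

	switch (usage) {
	case USAGE_SIGN:
		ms_usage = 15;
		break;
	case USAGE_SEAL:
		ms_usage = 13;
		break;
	default:
		return -1;
	}
	salt[0] = ms_usage & 0xff;
	salt[1] = (ms_usage >> 8) & 0xff;
	salt[2] = (ms_usage >> 16) & 0xff;
	salt[3] = (ms_usage >> 24) & 0xff;
	return 0;
}
```
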
>> -
>> -static u32
>> -make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
>> -		       struct xdr_buf *body, int body_offset, u8 *cksumkey,
>> -		       unsigned int usage, struct xdr_netobj *cksumout)
>> -{
>> -	struct scatterlist              sg[1];
>> -	int err = -1;
>> -	u8 *checksumdata;
>> -	u8 *rc4salt;
>> -	struct crypto_ahash *md5;
>> -	struct crypto_ahash *hmac_md5;
>> -	struct ahash_request *req;
>> -
>> -	if (cksumkey == NULL)
>> -		return GSS_S_FAILURE;
>> -
>> -	if (cksumout->len < kctx->gk5e->cksumlength) {
>> -		dprintk("%s: checksum buffer length, %u, too small for %s\n",
>> -			__func__, cksumout->len, kctx->gk5e->name);
>> -		return GSS_S_FAILURE;
>> -	}
>> -
>> -	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
>> -	if (!rc4salt)
>> -		return GSS_S_FAILURE;
>> -
>> -	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
>> -		dprintk("%s: invalid usage value %u\n", __func__, usage);
>> -		goto out_free_rc4salt;
>> -	}
>> -
>> -	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
>> -	if (!checksumdata)
>> -		goto out_free_rc4salt;
>> -
>> -	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
>> -	if (IS_ERR(md5))
>> -		goto out_free_cksum;
>> -
>> -	hmac_md5 = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0,
>> -				      CRYPTO_ALG_ASYNC);
>> -	if (IS_ERR(hmac_md5))
>> -		goto out_free_md5;
>> -
>> -	req = ahash_request_alloc(md5, GFP_NOFS);
>> -	if (!req)
>> -		goto out_free_hmac_md5;
>> -
>> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
>> -
>> -	err = crypto_ahash_init(req);
>> -	if (err)
>> -		goto out;
>> -	sg_init_one(sg, rc4salt, 4);
>> -	ahash_request_set_crypt(req, sg, NULL, 4);
>> -	err = crypto_ahash_update(req);
>> -	if (err)
>> -		goto out;
>> -
>> -	sg_init_one(sg, header, hdrlen);
>> -	ahash_request_set_crypt(req, sg, NULL, hdrlen);
>> -	err = crypto_ahash_update(req);
>> -	if (err)
>> -		goto out;
>> -	err = xdr_process_buf(body, body_offset, body->len - body_offset,
>> -			      checksummer, req);
>> -	if (err)
>> -		goto out;
>> -	ahash_request_set_crypt(req, NULL, checksumdata, 0);
>> -	err = crypto_ahash_final(req);
>> -	if (err)
>> -		goto out;
>> -
>> -	ahash_request_free(req);
>> -	req = ahash_request_alloc(hmac_md5, GFP_NOFS);
>> -	if (!req)
>> -		goto out_free_hmac_md5;
>> -
>> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
>> -
>> -	err = crypto_ahash_setkey(hmac_md5, cksumkey, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out;
>> -
>> -	sg_init_one(sg, checksumdata, crypto_ahash_digestsize(md5));
>> -	ahash_request_set_crypt(req, sg, checksumdata,
>> -				crypto_ahash_digestsize(md5));
>> -	err = crypto_ahash_digest(req);
>> -	if (err)
>> -		goto out;
>> -
>> -	memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
>> -	cksumout->len = kctx->gk5e->cksumlength;
>> -out:
>> -	ahash_request_free(req);
>> -out_free_hmac_md5:
>> -	crypto_free_ahash(hmac_md5);
>> -out_free_md5:
>> -	crypto_free_ahash(md5);
>> -out_free_cksum:
>> -	kfree(checksumdata);
>> -out_free_rc4salt:
>> -	kfree(rc4salt);
>> -	return err ? GSS_S_FAILURE : 0;
>> -}
>> -
>> -/*
>> - * checksum the plaintext data and hdrlen bytes of the token header
>> - * The checksum is performed over the first 8 bytes of the
>> - * gss token header and then over the data body
>> - */
>> -u32
>> -make_checksum(struct krb5_ctx *kctx, char *header, int hdrlen,
>> -	      struct xdr_buf *body, int body_offset, u8 *cksumkey,
>> -	      unsigned int usage, struct xdr_netobj *cksumout)
>> -{
>> -	struct crypto_ahash *tfm;
>> -	struct ahash_request *req;
>> -	struct scatterlist              sg[1];
>> -	int err = -1;
>> -	u8 *checksumdata;
>> -	unsigned int checksumlen;
>> -
>> -	if (kctx->gk5e->ctype == CKSUMTYPE_HMAC_MD5_ARCFOUR)
>> -		return make_checksum_hmac_md5(kctx, header, hdrlen,
>> -					      body, body_offset,
>> -					      cksumkey, usage, cksumout);
>> -
>> -	if (cksumout->len < kctx->gk5e->cksumlength) {
>> -		dprintk("%s: checksum buffer length, %u, too small for %s\n",
>> -			__func__, cksumout->len, kctx->gk5e->name);
>> -		return GSS_S_FAILURE;
>> -	}
>> -
>> -	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
>> -	if (checksumdata == NULL)
>> -		return GSS_S_FAILURE;
>> -
>> -	tfm = crypto_alloc_ahash(kctx->gk5e->cksum_name, 0, CRYPTO_ALG_ASYNC);
>> -	if (IS_ERR(tfm))
>> -		goto out_free_cksum;
>> -
>> -	req = ahash_request_alloc(tfm, GFP_NOFS);
>> -	if (!req)
>> -		goto out_free_ahash;
>> -
>> -	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
>> -
>> -	checksumlen = crypto_ahash_digestsize(tfm);
>> -
>> -	if (cksumkey != NULL) {
>> -		err = crypto_ahash_setkey(tfm, cksumkey,
>> -					  kctx->gk5e->keylength);
>> -		if (err)
>> -			goto out;
>> -	}
>> -
>> -	err = crypto_ahash_init(req);
>> -	if (err)
>> -		goto out;
>> -	sg_init_one(sg, header, hdrlen);
>> -	ahash_request_set_crypt(req, sg, NULL, hdrlen);
>> -	err = crypto_ahash_update(req);
>> -	if (err)
>> -		goto out;
>> -	err = xdr_process_buf(body, body_offset, body->len - body_offset,
>> -			      checksummer, req);
>> -	if (err)
>> -		goto out;
>> -	ahash_request_set_crypt(req, NULL, checksumdata, 0);
>> -	err = crypto_ahash_final(req);
>> -	if (err)
>> -		goto out;
>> -
>> -	switch (kctx->gk5e->ctype) {
>> -	case CKSUMTYPE_RSA_MD5:
>> -		err = kctx->gk5e->encrypt(kctx->seq, NULL, checksumdata,
>> -					  checksumdata, checksumlen);
>> -		if (err)
>> -			goto out;
>> -		memcpy(cksumout->data,
>> -		       checksumdata + checksumlen - kctx->gk5e->cksumlength,
>> -		       kctx->gk5e->cksumlength);
>> -		break;
>> -	case CKSUMTYPE_HMAC_SHA1_DES3:
>> -		memcpy(cksumout->data, checksumdata, kctx->gk5e->cksumlength);
>> -		break;
>> -	default:
>> -		BUG();
>> -		break;
>> -	}
>> -	cksumout->len = kctx->gk5e->cksumlength;
>> -out:
>> -	ahash_request_free(req);
>> -out_free_ahash:
>> -	crypto_free_ahash(tfm);
>> -out_free_cksum:
>> -	kfree(checksumdata);
>> -	return err ? GSS_S_FAILURE : 0;
>> -}
>> -
>> /*
>> * checksum the plaintext data and hdrlen bytes of the token header
>> * Per rfc4121, sec. 4.2.4, the checksum is performed over the data
>> @@ -526,35 +302,6 @@ struct encryptor_desc {
>> 	return 0;
>> }
>> 
>> -int
>> -gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
>> -		    int offset, struct page **pages)
>> -{
>> -	int ret;
>> -	struct encryptor_desc desc;
>> -	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
>> -
>> -	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
>> -
>> -	skcipher_request_set_sync_tfm(req, tfm);
>> -	skcipher_request_set_callback(req, 0, NULL, NULL);
>> -
>> -	memset(desc.iv, 0, sizeof(desc.iv));
>> -	desc.req = req;
>> -	desc.pos = offset;
>> -	desc.outbuf = buf;
>> -	desc.pages = pages;
>> -	desc.fragno = 0;
>> -	desc.fraglen = 0;
>> -
>> -	sg_init_table(desc.infrags, 4);
>> -	sg_init_table(desc.outfrags, 4);
>> -
>> -	ret = xdr_process_buf(buf, offset, buf->len - offset, encryptor, &desc);
>> -	skcipher_request_zero(req);
>> -	return ret;
>> -}
>> -
>> struct decryptor_desc {
>> 	u8 iv[GSS_KRB5_MAX_BLOCKSIZE];
>> 	struct skcipher_request *req;
>> @@ -609,32 +356,6 @@ struct decryptor_desc {
>> 	return 0;
>> }
>> 
>> -int
>> -gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
>> -		    int offset)
>> -{
>> -	int ret;
>> -	struct decryptor_desc desc;
>> -	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
>> -
>> -	/* XXXJBF: */
>> -	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
>> -
>> -	skcipher_request_set_sync_tfm(req, tfm);
>> -	skcipher_request_set_callback(req, 0, NULL, NULL);
>> -
>> -	memset(desc.iv, 0, sizeof(desc.iv));
>> -	desc.req = req;
>> -	desc.fragno = 0;
>> -	desc.fraglen = 0;
>> -
>> -	sg_init_table(desc.frags, 4);
>> -
>> -	ret = xdr_process_buf(buf, offset, buf->len - offset, decryptor, &desc);
>> -	skcipher_request_zero(req);
>> -	return ret;
>> -}
>> -
>> /*
>> * This function makes the assumption that it was ultimately called
>> * from gss_wrap().
>> @@ -942,147 +663,3 @@ struct decryptor_desc {
>> 		ret = GSS_S_FAILURE;
>> 	return ret;
>> }
>> -
>> -/*
>> - * Compute Kseq given the initial session key and the checksum.
>> - * Set the key of the given cipher.
>> - */
>> -int
>> -krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
>> -		       struct crypto_sync_skcipher *cipher,
>> -		       unsigned char *cksum)
>> -{
>> -	struct crypto_shash *hmac;
>> -	struct shash_desc *desc;
>> -	u8 Kseq[GSS_KRB5_MAX_KEYLEN];
>> -	u32 zeroconstant = 0;
>> -	int err;
>> -
>> -	dprintk("%s: entered\n", __func__);
>> -
>> -	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
>> -	if (IS_ERR(hmac)) {
>> -		dprintk("%s: error %ld, allocating hash '%s'\n",
>> -			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
>> -		return PTR_ERR(hmac);
>> -	}
>> -
>> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
>> -		       GFP_NOFS);
>> -	if (!desc) {
>> -		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
>> -			__func__, kctx->gk5e->cksum_name);
>> -		crypto_free_shash(hmac);
>> -		return -ENOMEM;
>> -	}
>> -
>> -	desc->tfm = hmac;
>> -	desc->flags = 0;
>> -
>> -	/* Compute intermediate Kseq from session key */
>> -	err = crypto_shash_setkey(hmac, kctx->Ksess, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = crypto_shash_digest(desc, (u8 *)&zeroconstant, 4, Kseq);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	/* Compute final Kseq from the checksum and intermediate Kseq */
>> -	err = crypto_shash_setkey(hmac, Kseq, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = crypto_shash_digest(desc, cksum, 8, Kseq);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = crypto_sync_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = 0;
>> -
>> -out_err:
>> -	kzfree(desc);
>> -	crypto_free_shash(hmac);
>> -	dprintk("%s: returning %d\n", __func__, err);
>> -	return err;
>> -}
>> -
>> -/*
>> - * Compute Kcrypt given the initial session key and the plaintext seqnum.
>> - * Set the key of cipher kctx->enc.
>> - */
>> -int
>> -krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
>> -		       struct crypto_sync_skcipher *cipher,
>> -		       s32 seqnum)
>> -{
>> -	struct crypto_shash *hmac;
>> -	struct shash_desc *desc;
>> -	u8 Kcrypt[GSS_KRB5_MAX_KEYLEN];
>> -	u8 zeroconstant[4] = {0};
>> -	u8 seqnumarray[4];
>> -	int err, i;
>> -
>> -	dprintk("%s: entered, seqnum %u\n", __func__, seqnum);
>> -
>> -	hmac = crypto_alloc_shash(kctx->gk5e->cksum_name, 0, 0);
>> -	if (IS_ERR(hmac)) {
>> -		dprintk("%s: error %ld, allocating hash '%s'\n",
>> -			__func__, PTR_ERR(hmac), kctx->gk5e->cksum_name);
>> -		return PTR_ERR(hmac);
>> -	}
>> -
>> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac),
>> -		       GFP_NOFS);
>> -	if (!desc) {
>> -		dprintk("%s: failed to allocate shash descriptor for '%s'\n",
>> -			__func__, kctx->gk5e->cksum_name);
>> -		crypto_free_shash(hmac);
>> -		return -ENOMEM;
>> -	}
>> -
>> -	desc->tfm = hmac;
>> -	desc->flags = 0;
>> -
>> -	/* Compute intermediate Kcrypt from session key */
>> -	for (i = 0; i < kctx->gk5e->keylength; i++)
>> -		Kcrypt[i] = kctx->Ksess[i] ^ 0xf0;
>> -
>> -	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = crypto_shash_digest(desc, zeroconstant, 4, Kcrypt);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	/* Compute final Kcrypt from the seqnum and intermediate Kcrypt */
>> -	err = crypto_shash_setkey(hmac, Kcrypt, kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	seqnumarray[0] = (unsigned char) ((seqnum >> 24) & 0xff);
>> -	seqnumarray[1] = (unsigned char) ((seqnum >> 16) & 0xff);
>> -	seqnumarray[2] = (unsigned char) ((seqnum >> 8) & 0xff);
>> -	seqnumarray[3] = (unsigned char) ((seqnum >> 0) & 0xff);
>> -
>> -	err = crypto_shash_digest(desc, seqnumarray, 4, Kcrypt);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = crypto_sync_skcipher_setkey(cipher, Kcrypt,
>> -					  kctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err;
>> -
>> -	err = 0;
>> -
>> -out_err:
>> -	kzfree(desc);
>> -	crypto_free_shash(hmac);
>> -	dprintk("%s: returning %d\n", __func__, err);
>> -	return err;
>> -}
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
>> index 550fdf1..de327ae 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_keys.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
>> @@ -242,59 +242,6 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
>> 	return ret;
>> }
>> 
>> -#define smask(step) ((1<<step)-1)
>> -#define pstep(x, step) (((x)&smask(step))^(((x)>>step)&smask(step)))
>> -#define parity_char(x) pstep(pstep(pstep((x), 4), 2), 1)
>> -
>> -static void mit_des_fixup_key_parity(u8 key[8])
>> -{
>> -	int i;
>> -	for (i = 0; i < 8; i++) {
>> -		key[i] &= 0xfe;
>> -		key[i] |= 1^parity_char(key[i]);
>> -	}
>> -}
>> -
>> -/*
>> - * This is the des3 key derivation postprocess function
>> - */
>> -u32 gss_krb5_des3_make_key(const struct gss_krb5_enctype *gk5e,
>> -			   struct xdr_netobj *randombits,
>> -			   struct xdr_netobj *key)
>> -{
>> -	int i;
>> -	u32 ret = EINVAL;
>> -
>> -	if (key->len != 24) {
>> -		dprintk("%s: key->len is %d\n", __func__, key->len);
>> -		goto err_out;
>> -	}
>> -	if (randombits->len != 21) {
>> -		dprintk("%s: randombits->len is %d\n",
>> -			__func__, randombits->len);
>> -		goto err_out;
>> -	}
>> -
>> -	/* take the seven bytes, move them around into the top 7 bits of the
>> -	   8 key bytes, then compute the parity bits.  Do this three times. */
>> -
>> -	for (i = 0; i < 3; i++) {
>> -		memcpy(key->data + i*8, randombits->data + i*7, 7);
>> -		key->data[i*8+7] = (((key->data[i*8]&1)<<1) |
>> -				    ((key->data[i*8+1]&1)<<2) |
>> -				    ((key->data[i*8+2]&1)<<3) |
>> -				    ((key->data[i*8+3]&1)<<4) |
>> -				    ((key->data[i*8+4]&1)<<5) |
>> -				    ((key->data[i*8+5]&1)<<6) |
>> -				    ((key->data[i*8+6]&1)<<7));
>> -
>> -		mit_des_fixup_key_parity(key->data + i*8);
>> -	}
>> -	ret = 0;
>> -err_out:
>> -	return ret;
>> -}
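(For anyone reviewing the removal above: the 21-to-24 byte DES3 expansion spreads each 7-byte group of random bits across the top seven bits of 8 key bytes, then forces odd parity into the low bit of every byte. A standalone sketch of the per-byte parity fixup — the same XOR-folding logic as the removed `pstep`/`parity_char` macros and `mit_des_fixup_key_parity()`, rewritten without the macros:)

```c
#include <stdint.h>
#include <assert.h>

/* XOR-fold a byte down to its bit parity: 1 if an odd number of bits set. */
static uint8_t bit_parity(uint8_t x)
{
	x ^= x >> 4;
	x ^= x >> 2;
	x ^= x >> 1;
	return x & 1;
}

/*
 * Force odd parity in the least-significant bit of each DES key byte:
 * clear the parity bit, then set it only if the remaining seven bits
 * hold an even number of ones.
 */
static void des_fixup_parity(uint8_t key[8])
{
	int i;

	for (i = 0; i < 8; i++) {
		key[i] &= 0xfe;
		key[i] |= 1 ^ bit_parity(key[i]);
	}
}
```
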
>> -
>> /*
>> * This is the aes key derivation postprocess function
>> */
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
>> index eab71fc..0837543 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_mech.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
>> @@ -54,69 +54,6 @@
>> 
>> static const struct gss_krb5_enctype supported_gss_krb5_enctypes[] = {
>> 	/*
>> -	 * DES (All DES enctypes are mapped to the same gss functionality)
>> -	 */
>> -	{
>> -	  .etype = ENCTYPE_DES_CBC_RAW,
>> -	  .ctype = CKSUMTYPE_RSA_MD5,
>> -	  .name = "des-cbc-crc",
>> -	  .encrypt_name = "cbc(des)",
>> -	  .cksum_name = "md5",
>> -	  .encrypt = krb5_encrypt,
>> -	  .decrypt = krb5_decrypt,
>> -	  .mk_key = NULL,
>> -	  .signalg = SGN_ALG_DES_MAC_MD5,
>> -	  .sealalg = SEAL_ALG_DES,
>> -	  .keybytes = 7,
>> -	  .keylength = 8,
>> -	  .blocksize = 8,
>> -	  .conflen = 8,
>> -	  .cksumlength = 8,
>> -	  .keyed_cksum = 0,
>> -	},
>> -	/*
>> -	 * RC4-HMAC
>> -	 */
>> -	{
>> -	  .etype = ENCTYPE_ARCFOUR_HMAC,
>> -	  .ctype = CKSUMTYPE_HMAC_MD5_ARCFOUR,
>> -	  .name = "rc4-hmac",
>> -	  .encrypt_name = "ecb(arc4)",
>> -	  .cksum_name = "hmac(md5)",
>> -	  .encrypt = krb5_encrypt,
>> -	  .decrypt = krb5_decrypt,
>> -	  .mk_key = NULL,
>> -	  .signalg = SGN_ALG_HMAC_MD5,
>> -	  .sealalg = SEAL_ALG_MICROSOFT_RC4,
>> -	  .keybytes = 16,
>> -	  .keylength = 16,
>> -	  .blocksize = 1,
>> -	  .conflen = 8,
>> -	  .cksumlength = 8,
>> -	  .keyed_cksum = 1,
>> -	},
>> -	/*
>> -	 * 3DES
>> -	 */
>> -	{
>> -	  .etype = ENCTYPE_DES3_CBC_RAW,
>> -	  .ctype = CKSUMTYPE_HMAC_SHA1_DES3,
>> -	  .name = "des3-hmac-sha1",
>> -	  .encrypt_name = "cbc(des3_ede)",
>> -	  .cksum_name = "hmac(sha1)",
>> -	  .encrypt = krb5_encrypt,
>> -	  .decrypt = krb5_decrypt,
>> -	  .mk_key = gss_krb5_des3_make_key,
>> -	  .signalg = SGN_ALG_HMAC_SHA1_DES3_KD,
>> -	  .sealalg = SEAL_ALG_DES3KD,
>> -	  .keybytes = 21,
>> -	  .keylength = 24,
>> -	  .blocksize = 8,
>> -	  .conflen = 8,
>> -	  .cksumlength = 20,
>> -	  .keyed_cksum = 1,
>> -	},
>> -	/*
>> 	 * AES128
>> 	 */
>> 	{
>> @@ -227,15 +164,6 @@
>> 	if (IS_ERR(p))
>> 		goto out_err;
>> 
>> -	switch (alg) {
>> -	case ENCTYPE_DES_CBC_CRC:
>> -	case ENCTYPE_DES_CBC_MD4:
>> -	case ENCTYPE_DES_CBC_MD5:
>> -		/* Map all these key types to ENCTYPE_DES_CBC_RAW */
>> -		alg = ENCTYPE_DES_CBC_RAW;
>> -		break;
>> -	}
>> -
>> 	if (!supported_gss_krb5_enctype(alg)) {
>> 		printk(KERN_WARNING "gss_kerberos_mech: unsupported "
>> 			"encryption key algorithm %d\n", alg);
>> @@ -271,81 +199,6 @@
>> 	return p;
>> }
>> 
>> -static int
>> -gss_import_v1_context(const void *p, const void *end, struct krb5_ctx *ctx)
>> -{
>> -	u32 seq_send;
>> -	int tmp;
>> -
>> -	p = simple_get_bytes(p, end, &ctx->initiate, sizeof(ctx->initiate));
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -
>> -	/* Old format supports only DES!  Any other enctype uses new format */
>> -	ctx->enctype = ENCTYPE_DES_CBC_RAW;
>> -
>> -	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
>> -	if (ctx->gk5e == NULL) {
>> -		p = ERR_PTR(-EINVAL);
>> -		goto out_err;
>> -	}
>> -
>> -	/* The downcall format was designed before we completely understood
>> -	 * the uses of the context fields; so it includes some stuff we
>> -	 * just give some minimal sanity-checking, and some we ignore
>> -	 * completely (like the next twenty bytes): */
>> -	if (unlikely(p + 20 > end || p + 20 < p)) {
>> -		p = ERR_PTR(-EFAULT);
>> -		goto out_err;
>> -	}
>> -	p += 20;
>> -	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -	if (tmp != SGN_ALG_DES_MAC_MD5) {
>> -		p = ERR_PTR(-ENOSYS);
>> -		goto out_err;
>> -	}
>> -	p = simple_get_bytes(p, end, &tmp, sizeof(tmp));
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -	if (tmp != SEAL_ALG_DES) {
>> -		p = ERR_PTR(-ENOSYS);
>> -		goto out_err;
>> -	}
>> -	p = simple_get_bytes(p, end, &ctx->endtime, sizeof(ctx->endtime));
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -	p = simple_get_bytes(p, end, &seq_send, sizeof(seq_send));
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -	atomic_set(&ctx->seq_send, seq_send);
>> -	p = simple_get_netobj(p, end, &ctx->mech_used);
>> -	if (IS_ERR(p))
>> -		goto out_err;
>> -	p = get_key(p, end, ctx, &ctx->enc);
>> -	if (IS_ERR(p))
>> -		goto out_err_free_mech;
>> -	p = get_key(p, end, ctx, &ctx->seq);
>> -	if (IS_ERR(p))
>> -		goto out_err_free_key1;
>> -	if (p != end) {
>> -		p = ERR_PTR(-EFAULT);
>> -		goto out_err_free_key2;
>> -	}
>> -
>> -	return 0;
>> -
>> -out_err_free_key2:
>> -	crypto_free_sync_skcipher(ctx->seq);
>> -out_err_free_key1:
>> -	crypto_free_sync_skcipher(ctx->enc);
>> -out_err_free_mech:
>> -	kfree(ctx->mech_used.data);
>> -out_err:
>> -	return PTR_ERR(p);
>> -}
>> -
>> static struct crypto_sync_skcipher *
>> context_v2_alloc_cipher(struct krb5_ctx *ctx, const char *cname, u8 *key)
>> {
>> @@ -377,124 +230,6 @@
>> }
>> 
>> static int
>> -context_derive_keys_des3(struct krb5_ctx *ctx, gfp_t gfp_mask)
>> -{
>> -	struct xdr_netobj c, keyin, keyout;
>> -	u8 cdata[GSS_KRB5_K5CLENGTH];
>> -	u32 err;
>> -
>> -	c.len = GSS_KRB5_K5CLENGTH;
>> -	c.data = cdata;
>> -
>> -	keyin.data = ctx->Ksess;
>> -	keyin.len = ctx->gk5e->keylength;
>> -	keyout.len = ctx->gk5e->keylength;
>> -
>> -	/* seq uses the raw key */
>> -	ctx->seq = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
>> -					   ctx->Ksess);
>> -	if (ctx->seq == NULL)
>> -		goto out_err;
>> -
>> -	ctx->enc = context_v2_alloc_cipher(ctx, ctx->gk5e->encrypt_name,
>> -					   ctx->Ksess);
>> -	if (ctx->enc == NULL)
>> -		goto out_free_seq;
>> -
>> -	/* derive cksum */
>> -	set_cdata(cdata, KG_USAGE_SIGN, KEY_USAGE_SEED_CHECKSUM);
>> -	keyout.data = ctx->cksum;
>> -	err = krb5_derive_key(ctx->gk5e, &keyin, &keyout, &c, gfp_mask);
>> -	if (err) {
>> -		dprintk("%s: Error %d deriving cksum key\n",
>> -			__func__, err);
>> -		goto out_free_enc;
>> -	}
>> -
>> -	return 0;
>> -
>> -out_free_enc:
>> -	crypto_free_sync_skcipher(ctx->enc);
>> -out_free_seq:
>> -	crypto_free_sync_skcipher(ctx->seq);
>> -out_err:
>> -	return -EINVAL;
>> -}
>> -
>> -/*
>> - * Note that RC4 depends on deriving keys using the sequence
>> - * number or the checksum of a token.  Therefore, the final keys
>> - * cannot be calculated until the token is being constructed!
>> - */
>> -static int
>> -context_derive_keys_rc4(struct krb5_ctx *ctx)
>> -{
>> -	struct crypto_shash *hmac;
>> -	char sigkeyconstant[] = "signaturekey";
>> -	int slen = strlen(sigkeyconstant) + 1;	/* include null terminator */
>> -	struct shash_desc *desc;
>> -	int err;
>> -
>> -	dprintk("RPC:       %s: entered\n", __func__);
>> -	/*
>> -	 * derive cksum (aka Ksign) key
>> -	 */
>> -	hmac = crypto_alloc_shash(ctx->gk5e->cksum_name, 0, 0);
>> -	if (IS_ERR(hmac)) {
>> -		dprintk("%s: error %ld allocating hash '%s'\n",
>> -			__func__, PTR_ERR(hmac), ctx->gk5e->cksum_name);
>> -		err = PTR_ERR(hmac);
>> -		goto out_err;
>> -	}
>> -
>> -	err = crypto_shash_setkey(hmac, ctx->Ksess, ctx->gk5e->keylength);
>> -	if (err)
>> -		goto out_err_free_hmac;
>> -
>> -
>> -	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(hmac), GFP_NOFS);
>> -	if (!desc) {
>> -		dprintk("%s: failed to allocate hash descriptor for '%s'\n",
>> -			__func__, ctx->gk5e->cksum_name);
>> -		err = -ENOMEM;
>> -		goto out_err_free_hmac;
>> -	}
>> -
>> -	desc->tfm = hmac;
>> -	desc->flags = 0;
>> -
>> -	err = crypto_shash_digest(desc, sigkeyconstant, slen, ctx->cksum);
>> -	kzfree(desc);
>> -	if (err)
>> -		goto out_err_free_hmac;
>> -	/*
>> -	 * allocate hash, and skciphers for data and seqnum encryption
>> -	 */
>> -	ctx->enc = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
>> -	if (IS_ERR(ctx->enc)) {
>> -		err = PTR_ERR(ctx->enc);
>> -		goto out_err_free_hmac;
>> -	}
>> -
>> -	ctx->seq = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
>> -	if (IS_ERR(ctx->seq)) {
>> -		crypto_free_sync_skcipher(ctx->enc);
>> -		err = PTR_ERR(ctx->seq);
>> -		goto out_err_free_hmac;
>> -	}
>> -
>> -	dprintk("RPC:       %s: returning success\n", __func__);
>> -
>> -	err = 0;
>> -
>> -out_err_free_hmac:
>> -	crypto_free_shash(hmac);
>> -out_err:
>> -	dprintk("RPC:       %s: returning %d\n", __func__, err);
>> -	return err;
>> -}
>> -
>> -static int
>> context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask)
>> {
>> 	struct xdr_netobj c, keyin, keyout;
>> @@ -635,9 +370,6 @@
>> 	p = simple_get_bytes(p, end, &ctx->enctype, sizeof(ctx->enctype));
>> 	if (IS_ERR(p))
>> 		goto out_err;
>> -	/* Map ENCTYPE_DES3_CBC_SHA1 to ENCTYPE_DES3_CBC_RAW */
>> -	if (ctx->enctype == ENCTYPE_DES3_CBC_SHA1)
>> -		ctx->enctype = ENCTYPE_DES3_CBC_RAW;
>> 	ctx->gk5e = get_gss_krb5_enctype(ctx->enctype);
>> 	if (ctx->gk5e == NULL) {
>> 		dprintk("gss_kerberos_mech: unsupported krb5 enctype %u\n",
>> @@ -665,10 +397,6 @@
>> 	ctx->mech_used.len = gss_kerberos_mech.gm_oid.len;
>> 
>> 	switch (ctx->enctype) {
>> -	case ENCTYPE_DES3_CBC_RAW:
>> -		return context_derive_keys_des3(ctx, gfp_mask);
>> -	case ENCTYPE_ARCFOUR_HMAC:
>> -		return context_derive_keys_rc4(ctx);
>> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
>> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
>> 		return context_derive_keys_new(ctx, gfp_mask);
>> @@ -694,11 +422,7 @@
>> 	if (ctx == NULL)
>> 		return -ENOMEM;
>> 
>> -	if (len == 85)
>> -		ret = gss_import_v1_context(p, end, ctx);
>> -	else
>> -		ret = gss_import_v2_context(p, end, ctx, gfp_mask);
>> -
>> +	ret = gss_import_v2_context(p, end, ctx, gfp_mask);
>> 	if (ret == 0) {
>> 		ctx_id->internal_ctx_id = ctx;
>> 		if (endtime)
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_seal.c b/net/sunrpc/auth_gss/gss_krb5_seal.c
>> index 48fe4a5..feb0f2a 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_seal.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_seal.c
>> @@ -70,32 +70,6 @@
>> #endif
>> 
>> static void *
>> -setup_token(struct krb5_ctx *ctx, struct xdr_netobj *token)
>> -{
>> -	u16 *ptr;
>> -	void *krb5_hdr;
>> -	int body_size = GSS_KRB5_TOK_HDR_LEN + ctx->gk5e->cksumlength;
>> -
>> -	token->len = g_token_size(&ctx->mech_used, body_size);
>> -
>> -	ptr = (u16 *)token->data;
>> -	g_make_token_header(&ctx->mech_used, body_size, (unsigned char **)&ptr);
>> -
>> -	/* ptr now at start of header described in rfc 1964, section 1.2.1: */
>> -	krb5_hdr = ptr;
>> -	*ptr++ = KG_TOK_MIC_MSG;
>> -	/*
>> -	 * signalg is stored as if it were converted from LE to host endian, even
>> -	 * though it's an opaque pair of bytes according to the RFC.
>> -	 */
>> -	*ptr++ = (__force u16)cpu_to_le16(ctx->gk5e->signalg);
>> -	*ptr++ = SEAL_ALG_NONE;
>> -	*ptr = 0xffff;
>> -
>> -	return krb5_hdr;
>> -}
>> -
>> -static void *
>> setup_token_v2(struct krb5_ctx *ctx, struct xdr_netobj *token)
>> {
>> 	u16 *ptr;
>> @@ -124,45 +98,6 @@
>> }
>> 
>> static u32
>> -gss_get_mic_v1(struct krb5_ctx *ctx, struct xdr_buf *text,
>> -		struct xdr_netobj *token)
>> -{
>> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
>> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
>> -					    .data = cksumdata};
>> -	void			*ptr;
>> -	s32			now;
>> -	u32			seq_send;
>> -	u8			*cksumkey;
>> -
>> -	dprintk("RPC:       %s\n", __func__);
>> -	BUG_ON(ctx == NULL);
>> -
>> -	now = get_seconds();
>> -
>> -	ptr = setup_token(ctx, token);
>> -
>> -	if (ctx->gk5e->keyed_cksum)
>> -		cksumkey = ctx->cksum;
>> -	else
>> -		cksumkey = NULL;
>> -
>> -	if (make_checksum(ctx, ptr, 8, text, 0, cksumkey,
>> -			  KG_USAGE_SIGN, &md5cksum))
>> -		return GSS_S_FAILURE;
>> -
>> -	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
>> -
>> -	seq_send = atomic_fetch_inc(&ctx->seq_send);
>> -
>> -	if (krb5_make_seq_num(ctx, ctx->seq, ctx->initiate ? 0 : 0xff,
>> -			      seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8))
>> -		return GSS_S_FAILURE;
>> -
>> -	return (ctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
>> -}
>> -
>> -static u32
>> gss_get_mic_v2(struct krb5_ctx *ctx, struct xdr_buf *text,
>> 		struct xdr_netobj *token)
>> {
>> @@ -210,14 +145,10 @@
>> 	struct krb5_ctx		*ctx = gss_ctx->internal_ctx_id;
>> 
>> 	switch (ctx->enctype) {
>> -	default:
>> -		BUG();
>> -	case ENCTYPE_DES_CBC_RAW:
>> -	case ENCTYPE_DES3_CBC_RAW:
>> -	case ENCTYPE_ARCFOUR_HMAC:
>> -		return gss_get_mic_v1(ctx, text, token);
>> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
>> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
>> 		return gss_get_mic_v2(ctx, text, token);
>> +	default:
>> +		return GSS_S_FAILURE;
>> 	}
>> }
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c
>> deleted file mode 100644
>> index fb66562..0000000
>> --- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c
>> +++ /dev/null
>> @@ -1,164 +0,0 @@
>> -/*
>> - *  linux/net/sunrpc/gss_krb5_seqnum.c
>> - *
>> - *  Adapted from MIT Kerberos 5-1.2.1 lib/gssapi/krb5/util_seqnum.c
>> - *
>> - *  Copyright (c) 2000 The Regents of the University of Michigan.
>> - *  All rights reserved.
>> - *
>> - *  Andy Adamson   <andros@umich.edu>
>> - */
>> -
>> -/*
>> - * Copyright 1993 by OpenVision Technologies, Inc.
>> - *
>> - * Permission to use, copy, modify, distribute, and sell this software
>> - * and its documentation for any purpose is hereby granted without fee,
>> - * provided that the above copyright notice appears in all copies and
>> - * that both that copyright notice and this permission notice appear in
>> - * supporting documentation, and that the name of OpenVision not be used
>> - * in advertising or publicity pertaining to distribution of the software
>> - * without specific, written prior permission. OpenVision makes no
>> - * representations about the suitability of this software for any
>> - * purpose.  It is provided "as is" without express or implied warranty.
>> - *
>> - * OPENVISION DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
>> - * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
>> - * EVENT SHALL OPENVISION BE LIABLE FOR ANY SPECIAL, INDIRECT OR
>> - * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF
>> - * USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
>> - * OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
>> - * PERFORMANCE OF THIS SOFTWARE.
>> - */
>> -
>> -#include <crypto/skcipher.h>
>> -#include <linux/types.h>
>> -#include <linux/sunrpc/gss_krb5.h>
>> -
>> -#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
>> -# define RPCDBG_FACILITY        RPCDBG_AUTH
>> -#endif
>> -
>> -static s32
>> -krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum,
>> -		      unsigned char *cksum, unsigned char *buf)
>> -{
>> -	struct crypto_sync_skcipher *cipher;
>> -	unsigned char plain[8];
>> -	s32 code;
>> -
>> -	dprintk("RPC:       %s:\n", __func__);
>> -	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
>> -	if (IS_ERR(cipher))
>> -		return PTR_ERR(cipher);
>> -
>> -	plain[0] = (unsigned char) ((seqnum >> 24) & 0xff);
>> -	plain[1] = (unsigned char) ((seqnum >> 16) & 0xff);
>> -	plain[2] = (unsigned char) ((seqnum >> 8) & 0xff);
>> -	plain[3] = (unsigned char) ((seqnum >> 0) & 0xff);
>> -	plain[4] = direction;
>> -	plain[5] = direction;
>> -	plain[6] = direction;
>> -	plain[7] = direction;
>> -
>> -	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
>> -	if (code)
>> -		goto out;
>> -
>> -	code = krb5_encrypt(cipher, cksum, plain, buf, 8);
>> -out:
>> -	crypto_free_sync_skcipher(cipher);
>> -	return code;
>> -}
>> -s32
>> -krb5_make_seq_num(struct krb5_ctx *kctx,
>> -		struct crypto_sync_skcipher *key,
>> -		int direction,
>> -		u32 seqnum,
>> -		unsigned char *cksum, unsigned char *buf)
>> -{
>> -	unsigned char plain[8];
>> -
>> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
>> -		return krb5_make_rc4_seq_num(kctx, direction, seqnum,
>> -					     cksum, buf);
>> -
>> -	plain[0] = (unsigned char) (seqnum & 0xff);
>> -	plain[1] = (unsigned char) ((seqnum >> 8) & 0xff);
>> -	plain[2] = (unsigned char) ((seqnum >> 16) & 0xff);
>> -	plain[3] = (unsigned char) ((seqnum >> 24) & 0xff);
>> -
>> -	plain[4] = direction;
>> -	plain[5] = direction;
>> -	plain[6] = direction;
>> -	plain[7] = direction;
>> -
>> -	return krb5_encrypt(key, cksum, plain, buf, 8);
>> -}
>> -
>> -static s32
>> -krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum,
>> -		     unsigned char *buf, int *direction, s32 *seqnum)
>> -{
>> -	struct crypto_sync_skcipher *cipher;
>> -	unsigned char plain[8];
>> -	s32 code;
>> -
>> -	dprintk("RPC:       %s:\n", __func__);
>> -	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
>> -	if (IS_ERR(cipher))
>> -		return PTR_ERR(cipher);
>> -
>> -	code = krb5_rc4_setup_seq_key(kctx, cipher, cksum);
>> -	if (code)
>> -		goto out;
>> -
>> -	code = krb5_decrypt(cipher, cksum, buf, plain, 8);
>> -	if (code)
>> -		goto out;
>> -
>> -	if ((plain[4] != plain[5]) || (plain[4] != plain[6])
>> -				   || (plain[4] != plain[7])) {
>> -		code = (s32)KG_BAD_SEQ;
>> -		goto out;
>> -	}
>> -
>> -	*direction = plain[4];
>> -
>> -	*seqnum = ((plain[0] << 24) | (plain[1] << 16) |
>> -					(plain[2] << 8) | (plain[3]));
>> -out:
>> -	crypto_free_sync_skcipher(cipher);
>> -	return code;
>> -}
>> -
>> -s32
>> -krb5_get_seq_num(struct krb5_ctx *kctx,
>> -	       unsigned char *cksum,
>> -	       unsigned char *buf,
>> -	       int *direction, u32 *seqnum)
>> -{
>> -	s32 code;
>> -	unsigned char plain[8];
>> -	struct crypto_sync_skcipher *key = kctx->seq;
>> -
>> -	dprintk("RPC:       krb5_get_seq_num:\n");
>> -
>> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC)
>> -		return krb5_get_rc4_seq_num(kctx, cksum, buf,
>> -					    direction, seqnum);
>> -
>> -	if ((code = krb5_decrypt(key, cksum, buf, plain, 8)))
>> -		return code;
>> -
>> -	if ((plain[4] != plain[5]) || (plain[4] != plain[6]) ||
>> -	    (plain[4] != plain[7]))
>> -		return (s32)KG_BAD_SEQ;
>> -
>> -	*direction = plain[4];
>> -
>> -	*seqnum = ((plain[0]) |
>> -		   (plain[1] << 8) | (plain[2] << 16) | (plain[3] << 24));
>> -
>> -	return 0;
>> -}
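For reference, the two deleted decoders interpret the same 8-byte plaintext differently: the RC4 path read the sequence number big-endian, while the DES path read it little-endian; both require the four direction bytes to agree. A standalone sketch of the common checks (hypothetical helper, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the checks in the removed
 * krb5_get_seq_num(): bytes 4-7 must all carry the same direction
 * flag (0x00 or 0xff), and bytes 0-3 hold the sequence number.
 * The DES path read it little-endian, as done here; the RC4 path
 * read the same bytes big-endian.  Returns 0 on success, -1 on a
 * bad token (KG_BAD_SEQ in the original). */
static int decode_v1_seq_plain(const unsigned char plain[8],
			       int *direction, uint32_t *seqnum)
{
	if (plain[4] != plain[5] || plain[4] != plain[6] ||
	    plain[4] != plain[7])
		return -1;

	*direction = plain[4];
	*seqnum = (uint32_t)plain[0] |
		  ((uint32_t)plain[1] << 8) |
		  ((uint32_t)plain[2] << 16) |
		  ((uint32_t)plain[3] << 24);
	return 0;
}
```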
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_unseal.c b/net/sunrpc/auth_gss/gss_krb5_unseal.c
>> index ef2b25b..f0f646a 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_unseal.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_unseal.c
>> @@ -71,78 +71,6 @@
>> * supposedly taken over. */
>> 
>> static u32
>> -gss_verify_mic_v1(struct krb5_ctx *ctx,
>> -		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
>> -{
>> -	int			signalg;
>> -	int			sealalg;
>> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
>> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
>> -					    .data = cksumdata};
>> -	s32			now;
>> -	int			direction;
>> -	u32			seqnum;
>> -	unsigned char		*ptr = (unsigned char *)read_token->data;
>> -	int			bodysize;
>> -	u8			*cksumkey;
>> -
>> -	dprintk("RPC:       krb5_read_token\n");
>> -
>> -	if (g_verify_token_header(&ctx->mech_used, &bodysize, &ptr,
>> -					read_token->len))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	if ((ptr[0] != ((KG_TOK_MIC_MSG >> 8) & 0xff)) ||
>> -	    (ptr[1] !=  (KG_TOK_MIC_MSG & 0xff)))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	/* XXX sanity-check bodysize?? */
>> -
>> -	signalg = ptr[2] + (ptr[3] << 8);
>> -	if (signalg != ctx->gk5e->signalg)
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	sealalg = ptr[4] + (ptr[5] << 8);
>> -	if (sealalg != SEAL_ALG_NONE)
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	if (ctx->gk5e->keyed_cksum)
>> -		cksumkey = ctx->cksum;
>> -	else
>> -		cksumkey = NULL;
>> -
>> -	if (make_checksum(ctx, ptr, 8, message_buffer, 0,
>> -			  cksumkey, KG_USAGE_SIGN, &md5cksum))
>> -		return GSS_S_FAILURE;
>> -
>> -	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
>> -					ctx->gk5e->cksumlength))
>> -		return GSS_S_BAD_SIG;
>> -
>> -	/* it got through unscathed.  Make sure the context is unexpired */
>> -
>> -	now = get_seconds();
>> -
>> -	if (now > ctx->endtime)
>> -		return GSS_S_CONTEXT_EXPIRED;
>> -
>> -	/* do sequencing checks */
>> -
>> -	if (krb5_get_seq_num(ctx, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8,
>> -			     &direction, &seqnum))
>> -		return GSS_S_FAILURE;
>> -
>> -	if ((ctx->initiate && direction != 0xff) ||
>> -	    (!ctx->initiate && direction != 0))
>> -		return GSS_S_BAD_SIG;
>> -
>> -	return GSS_S_COMPLETE;
>> -}
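The header validation going away here follows RFC 1964 section 1.2.1: a 2-byte TOK_ID, little-endian SGN_ALG and SEAL_ALG fields, and two 0xff filler bytes. A standalone sketch of those checks (constants taken from RFC 1964; not kernel code):

```c
#include <assert.h>

#define KG_TOK_MIC_MSG	0x0101	/* TOK_ID for a MIC token, per RFC 1964 */
#define SEAL_ALG_NONE	0xffff	/* MIC tokens carry no sealing algorithm */

/* Hypothetical re-statement of the header checks in the removed
 * gss_verify_mic_v1().  Returns 0 if the 8-byte header is well
 * formed, -1 otherwise (GSS_S_DEFECTIVE_TOKEN in the original). */
static int check_v1_mic_header(const unsigned char *ptr, int expected_signalg)
{
	if (ptr[0] != ((KG_TOK_MIC_MSG >> 8) & 0xff) ||
	    ptr[1] != (KG_TOK_MIC_MSG & 0xff))
		return -1;		/* not a MIC token */
	if (ptr[2] + (ptr[3] << 8) != expected_signalg)
		return -1;		/* wrong checksum algorithm */
	if (ptr[4] + (ptr[5] << 8) != SEAL_ALG_NONE)
		return -1;		/* sealalg must be "none" */
	if (ptr[6] != 0xff || ptr[7] != 0xff)
		return -1;		/* filler must be 0xff 0xff */
	return 0;
}
```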
>> -
>> -static u32
>> gss_verify_mic_v2(struct krb5_ctx *ctx,
>> 		struct xdr_buf *message_buffer, struct xdr_netobj *read_token)
>> {
>> @@ -214,14 +142,10 @@
>> 	struct krb5_ctx *ctx = gss_ctx->internal_ctx_id;
>> 
>> 	switch (ctx->enctype) {
>> -	default:
>> -		BUG();
>> -	case ENCTYPE_DES_CBC_RAW:
>> -	case ENCTYPE_DES3_CBC_RAW:
>> -	case ENCTYPE_ARCFOUR_HMAC:
>> -		return gss_verify_mic_v1(ctx, message_buffer, read_token);
>> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
>> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
>> 		return gss_verify_mic_v2(ctx, message_buffer, read_token);
>> +	default:
>> +		return GSS_S_FAILURE;
>> 	}
>> }
>> diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
>> index 5cdde6c..98c99d3 100644
>> --- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
>> +++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
>> @@ -146,244 +146,6 @@
>> 	}
>> }
>> 
>> -/* Assumptions: the head and tail of inbuf are ours to play with.
>> - * The pages, however, may be real pages in the page cache and we replace
>> - * them with scratch pages from **pages before writing to them. */
>> -/* XXX: obviously the above should be documentation of wrap interface,
>> - * and shouldn't be in this kerberos-specific file. */
>> -
>> -/* XXX factor out common code with seal/unseal. */
>> -
>> -static u32
>> -gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
>> -		struct xdr_buf *buf, struct page **pages)
>> -{
>> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
>> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
>> -					    .data = cksumdata};
>> -	int			blocksize = 0, plainlen;
>> -	unsigned char		*ptr, *msg_start;
>> -	s32			now;
>> -	int			headlen;
>> -	struct page		**tmp_pages;
>> -	u32			seq_send;
>> -	u8			*cksumkey;
>> -	u32			conflen = kctx->gk5e->conflen;
>> -
>> -	dprintk("RPC:       %s\n", __func__);
>> -
>> -	now = get_seconds();
>> -
>> -	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
>> -	gss_krb5_add_padding(buf, offset, blocksize);
>> -	BUG_ON((buf->len - offset) % blocksize);
>> -	plainlen = conflen + buf->len - offset;
>> -
>> -	headlen = g_token_size(&kctx->mech_used,
>> -		GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength + plainlen) -
>> -		(buf->len - offset);
>> -
>> -	ptr = buf->head[0].iov_base + offset;
>> -	/* shift data to make room for header. */
>> -	xdr_extend_head(buf, offset, headlen);
>> -
>> -	/* XXX Would be cleverer to encrypt while copying. */
>> -	BUG_ON((buf->len - offset - headlen) % blocksize);
>> -
>> -	g_make_token_header(&kctx->mech_used,
>> -				GSS_KRB5_TOK_HDR_LEN +
>> -				kctx->gk5e->cksumlength + plainlen, &ptr);
>> -
>> -
>> -	/* ptr now at header described in rfc 1964, section 1.2.1: */
>> -	ptr[0] = (unsigned char) ((KG_TOK_WRAP_MSG >> 8) & 0xff);
>> -	ptr[1] = (unsigned char) (KG_TOK_WRAP_MSG & 0xff);
>> -
>> -	msg_start = ptr + GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength;
>> -
>> -	/*
>> -	 * signalg and sealalg are stored as if they were converted from LE
>> -	 * to host endian, even though they're opaque pairs of bytes according
>> -	 * to the RFC.
>> -	 */
>> -	*(__le16 *)(ptr + 2) = cpu_to_le16(kctx->gk5e->signalg);
>> -	*(__le16 *)(ptr + 4) = cpu_to_le16(kctx->gk5e->sealalg);
>> -	ptr[6] = 0xff;
>> -	ptr[7] = 0xff;
>> -
>> -	gss_krb5_make_confounder(msg_start, conflen);
>> -
>> -	if (kctx->gk5e->keyed_cksum)
>> -		cksumkey = kctx->cksum;
>> -	else
>> -		cksumkey = NULL;
>> -
>> -	/* XXXJBF: UGH!: */
>> -	tmp_pages = buf->pages;
>> -	buf->pages = pages;
>> -	if (make_checksum(kctx, ptr, 8, buf, offset + headlen - conflen,
>> -					cksumkey, KG_USAGE_SEAL, &md5cksum))
>> -		return GSS_S_FAILURE;
>> -	buf->pages = tmp_pages;
>> -
>> -	memcpy(ptr + GSS_KRB5_TOK_HDR_LEN, md5cksum.data, md5cksum.len);
>> -
>> -	seq_send = atomic_fetch_inc(&kctx->seq_send);
>> -
>> -	/* XXX would probably be more efficient to compute checksum
>> -	 * and encrypt at the same time: */
>> -	if ((krb5_make_seq_num(kctx, kctx->seq, kctx->initiate ? 0 : 0xff,
>> -			       seq_send, ptr + GSS_KRB5_TOK_HDR_LEN, ptr + 8)))
>> -		return GSS_S_FAILURE;
>> -
>> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
>> -		struct crypto_sync_skcipher *cipher;
>> -		int err;
>> -		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
>> -						    0, 0);
>> -		if (IS_ERR(cipher))
>> -			return GSS_S_FAILURE;
>> -
>> -		krb5_rc4_setup_enc_key(kctx, cipher, seq_send);
>> -
>> -		err = gss_encrypt_xdr_buf(cipher, buf,
>> -					  offset + headlen - conflen, pages);
>> -		crypto_free_sync_skcipher(cipher);
>> -		if (err)
>> -			return GSS_S_FAILURE;
>> -	} else {
>> -		if (gss_encrypt_xdr_buf(kctx->enc, buf,
>> -					offset + headlen - conflen, pages))
>> -			return GSS_S_FAILURE;
>> -	}
>> -
>> -	return (kctx->endtime < now) ? GSS_S_CONTEXT_EXPIRED : GSS_S_COMPLETE;
>> -}
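For anyone auditing this removal, the v1 WRAP token layout the function built was: 16-byte header (TOK_ID, SGN_ALG, SEAL_ALG, filler, then the 8-byte encrypted SND_SEQ at offset 8), the checksum, the confounder, then the padded payload. A toy model of those offsets (hypothetical helpers; not kernel code):

```c
#include <assert.h>

#define GSS_KRB5_TOK_HDR_LEN 16	/* TOK_ID(2)+SGN_ALG(2)+SEAL_ALG(2)+
				 * filler(2)+SND_SEQ(8) */

/* Where the removed gss_wrap_kerberos_v1() placed each field,
 * relative to the start of the token header (ptr). */
static int v1_wrap_seq_offset(void)
{
	return 8;			/* encrypted SND_SEQ */
}

static int v1_wrap_cksum_offset(void)
{
	return GSS_KRB5_TOK_HDR_LEN;	/* checksum follows the header */
}

static int v1_wrap_payload_offset(int cksumlength, int conflen)
{
	/* confounder, then plaintext, follow the checksum (msg_start) */
	return GSS_KRB5_TOK_HDR_LEN + cksumlength + conflen;
}
```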
>> -
>> -static u32
>> -gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
>> -{
>> -	int			signalg;
>> -	int			sealalg;
>> -	char			cksumdata[GSS_KRB5_MAX_CKSUM_LEN];
>> -	struct xdr_netobj	md5cksum = {.len = sizeof(cksumdata),
>> -					    .data = cksumdata};
>> -	s32			now;
>> -	int			direction;
>> -	s32			seqnum;
>> -	unsigned char		*ptr;
>> -	int			bodysize;
>> -	void			*data_start, *orig_start;
>> -	int			data_len;
>> -	int			blocksize;
>> -	u32			conflen = kctx->gk5e->conflen;
>> -	int			crypt_offset;
>> -	u8			*cksumkey;
>> -
>> -	dprintk("RPC:       gss_unwrap_kerberos\n");
>> -
>> -	ptr = (u8 *)buf->head[0].iov_base + offset;
>> -	if (g_verify_token_header(&kctx->mech_used, &bodysize, &ptr,
>> -					buf->len - offset))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	if ((ptr[0] != ((KG_TOK_WRAP_MSG >> 8) & 0xff)) ||
>> -	    (ptr[1] !=  (KG_TOK_WRAP_MSG & 0xff)))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	/* XXX sanity-check bodysize?? */
>> -
>> -	/* get the sign and seal algorithms */
>> -
>> -	signalg = ptr[2] + (ptr[3] << 8);
>> -	if (signalg != kctx->gk5e->signalg)
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	sealalg = ptr[4] + (ptr[5] << 8);
>> -	if (sealalg != kctx->gk5e->sealalg)
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	if ((ptr[6] != 0xff) || (ptr[7] != 0xff))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	/*
>> -	 * Data starts after token header and checksum.  ptr points
>> -	 * to the beginning of the token header
>> -	 */
>> -	crypt_offset = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) -
>> -					(unsigned char *)buf->head[0].iov_base;
>> -
>> -	/*
>> -	 * Need plaintext seqnum to derive encryption key for arcfour-hmac
>> -	 */
>> -	if (krb5_get_seq_num(kctx, ptr + GSS_KRB5_TOK_HDR_LEN,
>> -			     ptr + 8, &direction, &seqnum))
>> -		return GSS_S_BAD_SIG;
>> -
>> -	if ((kctx->initiate && direction != 0xff) ||
>> -	    (!kctx->initiate && direction != 0))
>> -		return GSS_S_BAD_SIG;
>> -
>> -	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
>> -		struct crypto_sync_skcipher *cipher;
>> -		int err;
>> -
>> -		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
>> -						    0, 0);
>> -		if (IS_ERR(cipher))
>> -			return GSS_S_FAILURE;
>> -
>> -		krb5_rc4_setup_enc_key(kctx, cipher, seqnum);
>> -
>> -		err = gss_decrypt_xdr_buf(cipher, buf, crypt_offset);
>> -		crypto_free_sync_skcipher(cipher);
>> -		if (err)
>> -			return GSS_S_DEFECTIVE_TOKEN;
>> -	} else {
>> -		if (gss_decrypt_xdr_buf(kctx->enc, buf, crypt_offset))
>> -			return GSS_S_DEFECTIVE_TOKEN;
>> -	}
>> -
>> -	if (kctx->gk5e->keyed_cksum)
>> -		cksumkey = kctx->cksum;
>> -	else
>> -		cksumkey = NULL;
>> -
>> -	if (make_checksum(kctx, ptr, 8, buf, crypt_offset,
>> -					cksumkey, KG_USAGE_SEAL, &md5cksum))
>> -		return GSS_S_FAILURE;
>> -
>> -	if (memcmp(md5cksum.data, ptr + GSS_KRB5_TOK_HDR_LEN,
>> -						kctx->gk5e->cksumlength))
>> -		return GSS_S_BAD_SIG;
>> -
>> -	/* it got through unscathed.  Make sure the context is unexpired */
>> -
>> -	now = get_seconds();
>> -
>> -	if (now > kctx->endtime)
>> -		return GSS_S_CONTEXT_EXPIRED;
>> -
>> -	/* do sequencing checks */
>> -
>> -	/* Copy the data back to the right position.  XXX: Would probably be
>> -	 * better to copy and encrypt at the same time. */
>> -
>> -	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
>> -	data_start = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) +
>> -					conflen;
>> -	orig_start = buf->head[0].iov_base + offset;
>> -	data_len = (buf->head[0].iov_base + buf->head[0].iov_len) - data_start;
>> -	memmove(orig_start, data_start, data_len);
>> -	buf->head[0].iov_len -= (data_start - orig_start);
>> -	buf->len -= (data_start - orig_start);
>> -
>> -	if (gss_krb5_remove_padding(buf, blocksize))
>> -		return GSS_S_DEFECTIVE_TOKEN;
>> -
>> -	return GSS_S_COMPLETE;
>> -}
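The memmove at the tail of this function is the subtle part of v1 unwrap: after decryption the plaintext sits past the token header, checksum, and confounder, and gets slid back so the caller sees it at the original offset. A minimal sketch of that shift (hypothetical helper, not kernel code):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the data relocation in the removed
 * gss_unwrap_kerberos_v1(): 'skip' covers everything in front of the
 * plaintext (token framing + header + checksum + confounder).
 * Returns the number of bytes the buffer shrinks by. */
static int v1_unwrap_shift(unsigned char *head, size_t head_len,
			   size_t offset, size_t skip)
{
	unsigned char *data_start = head + offset + skip;
	size_t data_len = head_len - (offset + skip);

	/* slide the plaintext left over the consumed token bytes */
	memmove(head + offset, data_start, data_len);
	return (int)skip;
}
```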
>> -
>> /*
>> * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
>> * to do more than that, we shift repeatedly.  Kevin Coffman reports
>> @@ -588,15 +350,11 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
>> 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
>> 
>> 	switch (kctx->enctype) {
>> -	default:
>> -		BUG();
>> -	case ENCTYPE_DES_CBC_RAW:
>> -	case ENCTYPE_DES3_CBC_RAW:
>> -	case ENCTYPE_ARCFOUR_HMAC:
>> -		return gss_wrap_kerberos_v1(kctx, offset, buf, pages);
>> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
>> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
>> 		return gss_wrap_kerberos_v2(kctx, offset, buf, pages);
>> +	default:
>> +		return GSS_S_FAILURE;
>> 	}
>> }
>> 
>> @@ -606,14 +364,10 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
>> 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
>> 
>> 	switch (kctx->enctype) {
>> -	default:
>> -		BUG();
>> -	case ENCTYPE_DES_CBC_RAW:
>> -	case ENCTYPE_DES3_CBC_RAW:
>> -	case ENCTYPE_ARCFOUR_HMAC:
>> -		return gss_unwrap_kerberos_v1(kctx, offset, buf);
>> 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
>> 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
>> 		return gss_unwrap_kerberos_v2(kctx, offset, buf);
>> +	default:
>> +		return GSS_S_FAILURE;
>> 	}
>> }
>> 
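With v1 gone, every dispatch switch in the mechanism now has the same shape: the two AES enctypes go to the v2 path, and anything else fails cleanly instead of hitting BUG(). A toy model of the new behavior (enctype values per RFC 3961/3962, and GSS_S_FAILURE per the GSS-API major-status routine-error encoding; not kernel code):

```c
#include <assert.h>

/* IANA Kerberos enctype assignments (RFC 3961/3962) */
#define ENCTYPE_DES_CBC_RAW		0x0004
#define ENCTYPE_ARCFOUR_HMAC		0x0017
#define ENCTYPE_AES128_CTS_HMAC_SHA1_96	0x0011
#define ENCTYPE_AES256_CTS_HMAC_SHA1_96	0x0012

#define GSS_S_COMPLETE	0u
#define GSS_S_FAILURE	(13u << 16)	/* routine error, per GSS-API */

/* Mirrors the post-patch switch: AES goes to the v2 path (modeled
 * here as success), everything else is refused rather than BUG(). */
static unsigned int dispatch_unwrap(int enctype)
{
	switch (enctype) {
	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
		return GSS_S_COMPLETE;	/* would call gss_unwrap_kerberos_v2() */
	default:
		return GSS_S_FAILURE;	/* legacy enctypes no longer supported */
	}
}
```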
> 
> --
> Chuck Lever

--
Chuck Lever
Thread overview: 37+ messages
2018-12-10 16:29 [PATCH v3 00/24] NFS/RDMA client for next Chuck Lever
2018-12-10 16:29 ` [PATCH v3 01/24] xprtrdma: Prevent leak of rpcrdma_rep objects Chuck Lever
2018-12-10 16:29 ` [PATCH v3 02/24] IB/rxe: IB_WR_REG_MR does not capture MR's iova field Chuck Lever
2018-12-11 14:00   ` Christoph Hellwig
2018-12-11 15:26     ` Chuck Lever
2018-12-10 16:29 ` [PATCH v3 03/24] xprtrdma: Remove support for FMR memory registration Chuck Lever
2018-12-11 14:02   ` Christoph Hellwig
2018-12-11 15:29     ` Chuck Lever
2018-12-12  7:18       ` Christoph Hellwig
2018-12-10 16:29 ` [PATCH v3 04/24] xprtrdma: Fix ri_max_segs and the result of ro_maxpages Chuck Lever
2018-12-10 16:29 ` [PATCH v3 05/24] xprtrdma: Reduce max_frwr_depth Chuck Lever
2018-12-11 14:02   ` Christoph Hellwig
2018-12-11 15:30     ` Chuck Lever
2018-12-12  7:18       ` Christoph Hellwig
2018-12-10 16:29 ` [PATCH v3 06/24] xprtrdma: Plant XID in on-the-wire RDMA offset (FRWR) Chuck Lever
2018-12-10 16:29 ` [PATCH v3 07/24] xprtrdma: Recognize XDRBUF_SPARSE_PAGES Chuck Lever
2018-12-10 16:30 ` [PATCH v3 08/24] xprtrdma: Remove request_module from backchannel Chuck Lever
2018-12-10 16:30 ` [PATCH v3 09/24] xprtrdma: Expose transport header errors Chuck Lever
2018-12-10 16:30 ` [PATCH v3 10/24] xprtrdma: Simplify locking that protects the rl_allreqs list Chuck Lever
2018-12-10 16:30 ` [PATCH v3 11/24] xprtrdma: Cull dprintk() call sites Chuck Lever
2018-12-10 16:30 ` [PATCH v3 12/24] xprtrdma: Clean up of xprtrdma chunk trace points Chuck Lever
2018-12-10 16:30 ` [PATCH v3 13/24] xprtrdma: Relocate the xprtrdma_mr_map " Chuck Lever
2018-12-10 16:30 ` [PATCH v3 14/24] xprtrdma: Add trace points for calls to transport switch methods Chuck Lever
2018-12-10 16:30 ` [PATCH v3 15/24] NFS: Make "port=" mount option optional for RDMA mounts Chuck Lever
2018-12-10 16:30 ` [PATCH v3 16/24] SUNRPC: Remove support for kerberos_v1 Chuck Lever
2018-12-12 21:20   ` Chuck Lever
2018-12-14 21:16     ` Chuck Lever
2018-12-10 16:30 ` [PATCH v3 17/24] SUNRPC: Fix some kernel doc complaints Chuck Lever
2018-12-10 16:30 ` [PATCH v3 18/24] NFS: Fix NFSv4 symbolic trace point output Chuck Lever
     [not found]   ` <632f5635-4c37-16ae-cdd0-65679d21c9ec@oracle.com>
2018-12-11 19:19     ` Calum Mackay
2018-12-10 16:31 ` [PATCH v3 19/24] SUNRPC: Simplify defining common RPC trace events Chuck Lever
2018-12-10 16:31 ` [PATCH v3 20/24] xprtrdma: Trace mapping, alloc, and dereg failures Chuck Lever
2018-12-10 16:31 ` [PATCH v3 21/24] xprtrdma: Update comments in frwr_op_send Chuck Lever
2018-12-10 16:31 ` [PATCH v3 22/24] xprtrdma: Replace outdated comment for rpcrdma_ep_post Chuck Lever
2018-12-10 16:31 ` [PATCH v3 23/24] xprtrdma: Add documenting comment for rpcrdma_buffer_destroy Chuck Lever
2018-12-10 16:31 ` [PATCH v3 24/24] xprtrdma: Clarify comments in rpcrdma_ia_remove Chuck Lever
2018-12-10 17:55 ` [PATCH v3 00/24] NFS/RDMA client for next Jason Gunthorpe
