* [af-packet 0/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
@ 2011-06-08  3:13 Chetan Loke
  2011-06-08  3:13 ` [af-packet 1/2] " Chetan Loke
  2011-06-08  3:13 ` [af-packet 2/2] " Chetan Loke
  0 siblings, 2 replies; 9+ messages in thread
From: Chetan Loke @ 2011-06-08  3:13 UTC (permalink / raw)
  To: netdev; +Cc: davem, eric.dumazet, kaber, johann.baudy, Chetan Loke

Hello,

Please review the patchset. Any feedback is appreciated.

The patch set is intended to:
a) demonstrate the improvements, and
b) gather suggestions.

This patch set attempts to:
1) Improve network-capture visibility by increasing packet density.
2) Assist in analyzing multiple (aggregated) capture ports.

Benefits:
  B1) ~15-20% reduction in CPU usage.
  B2) ~20% increase in packet-capture rate.
  B3) ~2x  increase in packet density.
  B4) Port-aggregation analysis.
  B5) Non-static frame size, to capture the entire packet payload.


With the current af_packet rx/mmap-based approach, the element size in
the block must be configured statically. There is nothing wrong with
that implementation, but the traffic profile cannot be known in advance,
so it would be nice if the configuration weren't static. Normally one
configures the element size as 2048 bytes so that at least an entire
MTU-sized frame can be captured. But if the traffic profile varies, we
end up either i) wasting memory or ii) getting sliced frames. In the
first case, the packet density is much lower than it could be.
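
As an illustrative calculation (hypothetical numbers, not taken from
the benchmarks below): with a static 2048-byte frame, a 1MB block
always holds 1MB/2048 = 512 packets, regardless of the size on the
wire. With V3's back-to-back packing, a 64-byte packet consumes only
its aligned header plus payload (a few hundred bytes at most), so the
same block can hold several times as many small packets; the ~2x
figure in B3 reflects a mixed traffic profile rather than this best
case.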

--------------------
Performance results:
--------------------

Tpacket config (same on the physical and virtual setups):
64 blocks (1MB block size)

**************
Physical setup
**************

pktgen: 64-byte traffic.

1G Intel
driver: igb
version: 2.1.0-k2
firmware-version: 3.19-0


Tpacket        V1           V3
capture-rate   600K pps     720K pps
CPU usage      70%          53%
drop-rate      7-10%        ~1%

**********************
Virtual Machine setup:
**********************

pktgen: 64-byte traffic, 40M packets (clone_skb <40000000>)

Worker VMs (FC12):
3 VMs, VM0..VM2, each sending 40M packets.

Probe VM (FC15): 1 vCPU / 512MB memory,
running the patched kernel.


Tpacket        V1             V3
capture-rate   700-800K pps   1M pps
CPU usage      50%            ~30%
drop-rate      9-10%          <1%


Plus, in the VM setup, V3 sees/captures around 5-10% more traffic than V1/V2.

------------
Enhancement:
------------
E1) Enhanced tpacket_rcv() so that it can dump/copy packets one after
    another into a block.
E2) Also implemented a basic timeout mechanism to close the current block.
    That way, user-space won't be blocked forever on an idle link.
    This is a much-needed feature when monitoring multiple ports;
    see 3) below.

-------------------------------
Why is such enhancement needed?
-------------------------------
1) Well, spin-waiting/polling on a per-packet basis to see whether a
   packet is ready to be consumed does not scale when monitoring
   multiple ports. poll() is not performance-friendly either.
2) Also, a user-space packet-capture interface typically hands off
   multiple packets to another user-space protocol decoder.

   ----------------
   protocol-decoder
          T2
   ----------------
    =============
    ship pkts
    =============
           ^
           |
           v
   -----------------
   pkt-capture logic
           T1
   -----------------
   ================
     nic/sock IF
   ================
           ^
           |
           V

T1 and T2 are user-space threads. If the hand-off between T1 and T2
happens on a per-packet basis, then the solution does NOT scale.

One can argue that T1 could coalesce packets and then pass off a
single chunk to T2. But T1's packet-consumption granularity is still
at the individual-packet level, and that is what needs to be
addressed to avoid excessive polling.


3) Port-aggregation analysis:
   Multiple ports are viewed/analyzed as one logical pipe.
   Example:
   3.1) The up-stream   path can be tapped on eth1.
   3.2) The down-stream path can be tapped on eth2.
   3.3) A network TAP splits the Rx/Tx paths and then feeds them to
        eth1 and eth2.

   If both eth1 and eth2 need to be viewed as one logical channel,
   then we need to time-sort the packets as they come across eth1
   and eth2.

   3.4) But the following issues further complicate the problem:
        3.4.1) What if one stream is bursty while the other is flowing
               at line rate?
        3.4.2) How long do we wait before we can actually make a
               decision in app-space and bail out of the spin-wait?

   Solution:
   3.5) Once we receive a block from each port, we can compare the
        timestamps in the block descriptors, easily time-sort the
        packets, and feed them to the decoders (see the sketch below).
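
A rough user-space sketch of the block-level consumer loop this
enables (illustrative only, not part of the patch): it assumes the
structures from patch 1/2 (struct tpacket_req3, struct block_desc,
struct tpacket3_hdr from the patched linux/if_packet.h) and a ring
mapped as one contiguous area of tp_block_nr blocks; the socket/bind/
mmap boilerplate and all error handling are omitted.

#include <poll.h>
#include <linux/types.h>
#include <linux/if_packet.h>

/* Setup (not shown): setsockopt(PACKET_VERSION, TPACKET_V3), then
 * PACKET_RX_RING with a struct tpacket_req3 (tp_retire_blk_tov sets
 * the block-retire timeout), then mmap() the ring. */

/* Per-packet work, e.g. time-sorting across ports using
 * tp_sec/tp_nsec (or the per-block ts_first_pkt/ts_last_pkt). */
static void handle_pkt(struct tpacket3_hdr *ppd)
{
	/* decode/forward the packet here */
}

static void walk_block(struct block_desc *pbd)
{
	struct tpacket3_hdr *ppd = (struct tpacket3_hdr *)
		((char *)pbd + pbd->bd1.offset_to_first_pkt);
	__u32 i;

	for (i = 0; i < pbd->bd1.num_pkts; i++) {
		handle_pkt(ppd);
		ppd = (struct tpacket3_hdr *)
			((char *)ppd + ppd->tp_next_offset);
	}
}

/* ring = mmap'd rx-ring; block i starts at ring + i * block_sz */
static void consume(int fd, char *ring, unsigned int block_nr,
		    unsigned int block_sz)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLERR };
	unsigned int blk = 0;

	for (;;) {
		struct block_desc *pbd =
			(struct block_desc *)(ring + blk * block_sz);

		/* Sleep until the kernel retires this block. Thanks
		 * to the retire-blk timeout (E2), the block is closed
		 * even if the link goes idle, so a bursty eth1/eth2
		 * pair cannot stall the time-sort forever. */
		while (!(pbd->bd1.block_status & TP_STATUS_USER))
			poll(&pfd, 1, -1);

		walk_block(pbd);

		/* Hand the whole block back to the kernel. */
		pbd->bd1.block_status = TP_STATUS_KERNEL;
		blk = (blk + 1) % block_nr;
	}
}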


Please don't shoot the patchset because of its size :).



Chetan Loke (2):

 include/linux/if_packet.h |  127 +++++++-
 net/packet/af_packet.c    |  878 ++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 957 insertions(+), 48 deletions(-)

-- 
1.7.5.2



* [af-packet 1/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
  2011-06-08  3:13 [af-packet 0/2] Enhance af-packet to provide (near zero)lossless packet capture functionality Chetan Loke
@ 2011-06-08  3:13 ` Chetan Loke
  2011-06-08  4:35   ` Eric Dumazet
  2011-06-08 16:03   ` Stephen Hemminger
  2011-06-08  3:13 ` [af-packet 2/2] " Chetan Loke
  1 sibling, 2 replies; 9+ messages in thread
From: Chetan Loke @ 2011-06-08  3:13 UTC (permalink / raw)
  To: netdev; +Cc: davem, eric.dumazet, kaber, johann.baudy, Chetan Loke, Chetan Loke

Added TPACKET_V3 definitions

Signed-off-by: Chetan Loke <lokec@ccs.neu.edu>
---
 include/linux/if_packet.h |  127 ++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 126 insertions(+), 1 deletions(-)

diff --git a/include/linux/if_packet.h b/include/linux/if_packet.h
index 72bfa5a..9e4eea1 100644
--- a/include/linux/if_packet.h
+++ b/include/linux/if_packet.h
@@ -24,7 +24,7 @@ struct sockaddr_ll {
 #define PACKET_HOST		0		/* To us		*/
 #define PACKET_BROADCAST	1		/* To all		*/
 #define PACKET_MULTICAST	2		/* To group		*/
-#define PACKET_OTHERHOST	3		/* To someone else 	*/
+#define PACKET_OTHERHOST	3		/* To someone else	*/
 #define PACKET_OUTGOING		4		/* Outgoing of any type */
 /* These ones are invisible by user level */
 #define PACKET_LOOPBACK		5		/* MC/BRD frame looped back */
@@ -55,6 +55,17 @@ struct tpacket_stats {
 	unsigned int	tp_drops;
 };
 
+struct tpacket_stats_v3 {
+	unsigned int	tp_packets;
+	unsigned int	tp_drops;
+	unsigned int	tp_freeze_q_cnt;
+};
+
+union tpacket_stats_u {
+	struct tpacket_stats stats1;
+	struct tpacket_stats_v3 stats3;
+};
+
 struct tpacket_auxdata {
 	__u32		tp_status;
 	__u32		tp_len;
@@ -70,6 +81,7 @@ struct tpacket_auxdata {
 #define TP_STATUS_COPY		0x2
 #define TP_STATUS_LOSING	0x4
 #define TP_STATUS_CSUMNOTREADY	0x8
+#define TP_STATUS_BLK_TMO	0x10
 
 /* Tx ring - header status */
 #define TP_STATUS_AVAILABLE	0x0
@@ -102,11 +114,111 @@ struct tpacket2_hdr {
 	__u16		tp_vlan_tci;
 };
 
+struct tpacket3_hdr {
+	__u32		tp_status;
+	__u32		tp_len;
+	__u32		tp_snaplen;
+	__u16		tp_mac;
+	__u16		tp_net;
+	__u32		tp_sec;
+	__u32		tp_nsec;
+	__u16		tp_vlan_tci;
+	__u32		tp_next_offset;
+};
+
+struct bd_ts {
+	unsigned int ts_sec;
+	union {
+		struct {
+			unsigned int ts_usec;
+		};
+		struct {
+			unsigned int ts_nsec;
+		};
+	};
+} __attribute__ ((__packed__));
+
+struct bd_v1 {
+	/*
+	 * If you re-order the first 5 fields then
+	 * the BLOCK_XXX macros will NOT work.
+	 */
+	__u32	block_status;
+	__u32	num_pkts;
+	__u32	offset_to_first_pkt;
+
+	/* Number of valid bytes (including padding)
+	 * blk_len <= tp_block_size
+	 */
+	__u32	blk_len;
+
+	/*
+	 * Quite a few uses of sequence number:
+	 * 1. Make sure cache flush etc worked.
+	 *    Well, one can argue - why not use the increasing ts below?
+	 *    But look at 2. below first.
+	 * 2. When you pass around blocks to other user space decoders,
+	 *    you can see which blk[s] is[are] outstanding etc.
+	 * 3. Validate kernel code.
+	 */
+	__u64	seq_num;
+
+	/*
+	 * ts_last_pkt:
+	 *
+	 * Case 1.	Block has 'N'(N >=1) packets and TMO'd(timed out)
+	 *		ts_last_pkt == 'time-stamp of last packet' and NOT the
+	 *		time when the timer fired and the block was closed.
+	 *		By providing the ts of the last packet we can absolutely
+	 *		guarantee that time-stamp wise, the first packet in the next
+	 *		block will never precede the last packet of the previous
+	 *		block.
+	 * Case 2.	Block has zero packets and TMO'd
+	 *		ts_last_pkt = time when the timer fired and the block
+	 *		was closed.
+	 * Case 3.	Block has 'N' packets and NO TMO.
+	 *		ts_last_pkt = time-stamp of the last pkt in the block.
+	 *
+	 * ts_first_pkt:
+	 *		Is always the time-stamp when the block was opened.
+	 *		Case a)	ZERO packets
+	 *			No packets to deal with but atleast you know the
+	 *			time-interval of this block.
+	 *		Case b) Non-zero packets
+	 *			Use the ts of the first packet in the block.
+	 *
+	 */
+	struct bd_ts	ts_first_pkt;
+	struct bd_ts	ts_last_pkt;
+} __attribute__ ((__packed__));
+
+struct block_desc {
+	__u16 version;
+	union {
+		struct {
+			__u32	words[4];
+			__u64	dword;
+		} __attribute__ ((__packed__));
+		struct bd_v1 bd1;
+	};
+} __attribute__ ((__packed__));
+
+
+
 #define TPACKET2_HDRLEN		(TPACKET_ALIGN(sizeof(struct tpacket2_hdr)) + sizeof(struct sockaddr_ll))
+#define TPACKET3_HDRLEN		(TPACKET_ALIGN(sizeof(struct tpacket3_hdr)) + sizeof(struct sockaddr_ll))
+
+#define BLOCK_STATUS(x)	((x)->words[0])
+#define BLOCK_NUM_PKTS(x)	((x)->words[1])
+#define BLOCK_O2FP(x)		((x)->words[2])
+#define BLOCK_LEN(x)		((x)->words[3])
+#define BLOCK_SNUM(x)		((x)->dword)
+
 
 enum tpacket_versions {
 	TPACKET_V1,
 	TPACKET_V2,
+	TPACKET_V3,
 };
 
 /*
@@ -129,6 +241,19 @@ struct tpacket_req {
 	unsigned int	tp_frame_nr;	/* Total number of frames */
 };
 
+struct tpacket_req3 {
+	unsigned int	tp_block_size;	/* Minimal size of contiguous block */
+	unsigned int	tp_block_nr;	/* Number of blocks */
+	unsigned int	tp_frame_size;	/* Size of frame */
+	unsigned int	tp_frame_nr;	/* Total number of frames */
+	unsigned int	tp_retire_blk_tov; /* timeout in msecs */
+};
+
+union tpacket_req_u {
+	struct tpacket_req	req;
+	struct tpacket_req3	req3;
+};
+
 struct packet_mreq {
 	int		mr_ifindex;
 	unsigned short	mr_type;
-- 
1.7.5.2



* [af-packet 2/2] Enhance af-packet to provide (near zero)lossless packet capture functionality
  2011-06-08  3:13 [af-packet 0/2] Enhance af-packet to provide (near zero)lossless packet capture functionality Chetan Loke
  2011-06-08  3:13 ` [af-packet 1/2] " Chetan Loke
@ 2011-06-08  3:13 ` Chetan Loke
  2011-06-08  4:18   ` Joe Perches
  1 sibling, 1 reply; 9+ messages in thread
From: Chetan Loke @ 2011-06-08  3:13 UTC (permalink / raw)
  To: netdev; +Cc: davem, eric.dumazet, kaber, johann.baudy, Chetan Loke, Chetan Loke

1) Blocks can now be configured with a non-static frame format.
   A non-static frame format provides the following benefits:
   1.1) Increases packet density by a factor of ~2x.
   1.2) Ability to capture the entire packet.
   1.3) Captures 99% of 64-byte traffic as seen by the kernel.
2) Read/poll is now at the block level rather than at the packet level.
3) Added a user-configurable timeout knob for timing out blocks on
   slow/bursty links.
4) Block-level processing now allows monitoring multiple links as a
   single logical pipe.

Changes:
C1) tpacket_rcv()
    C1.1) packet_current_frame() is replaced by packet_current_rx_frame().
          The bulk of the processing is then moved into the following chain:
          packet_current_rx_frame()
            __packet_lookup_frame_in_block()
              fill_curr_block()
              or
                retire_current_block()
                dispatch_next_block()
              or
              return NULL (queue is plugged/paused)
Signed-off-by: Chetan Loke <lokec@ccs.neu.edu>
---
 net/packet/af_packet.c |  878 +++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 831 insertions(+), 47 deletions(-)

diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 91cb1d7..e0a1314 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -40,6 +40,9 @@
  *					byte arrays at the end of sockaddr_ll
  *					and packet_mreq.
  *		Johann Baudy	:	Added TX RING.
+ *		Chetan Loke	:	Implemented TPACKET_V3 block abstraction
+ *					layer. Copyright (C) 2011, <lokec@ccs.neu.edu>
+ *
  *
  *		This program is free software; you can redistribute it and/or
  *		modify it under the terms of the GNU General Public License
@@ -161,9 +164,49 @@ struct packet_mreq_max {
 	unsigned char	mr_address[MAX_ADDR_LEN];
 };
 
-static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
+static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
 		int closing, int tx_ring);
 
+
+#define V3_ALIGNMENT	(4)
+#define ALIGN_4(x)	(((x)+V3_ALIGNMENT-1)&~(V3_ALIGNMENT-1))
+
+#define BLK_HDR_LEN	(sizeof(struct block_desc))
+
+/* kbdq - kernel block descriptor queue */
+struct kbdq_core {
+	struct pgv	*pkbdq;
+	unsigned int	hdrlen;
+	unsigned char	reset_pending_on_curr_blk;
+	unsigned char   delete_blk_timer;
+	unsigned short	kactive_blk_num;
+	unsigned short	hole_bytes_size;
+	char		*pkblk_start;
+	char		*pkblk_end;
+	int		kblk_size;
+	unsigned int	knum_blocks;
+	uint64_t	knxt_seq_num;
+	char		*prev;
+	char		*nxt_offset;
+
+	/* last_kactive_blk_num:
+	 * trick to see if user-space has caught up
+	 * in order to avoid refreshing timer when every single pkt arrives.
+	 */
+	unsigned short	last_kactive_blk_num;
+
+	atomic_t	blk_fill_in_prog;
+
+	/* Default is set to 8ms */
+#define DEFAULT_PRB_RETIRE_TOV	(8)
+
+	unsigned short  retire_blk_tov;
+	unsigned long	tov_in_jiffies;
+
+	/* timer to retire an outstanding block */
+	struct timer_list retire_blk_timer;
+};
+
 #define PGV_FROM_VMALLOC 1
 struct pgv {
 	char *buffer;
@@ -180,18 +223,36 @@ struct packet_ring_buffer {
 	unsigned int		pg_vec_pages;
 	unsigned int		pg_vec_len;
 
+	struct kbdq_core	prb_bdqc;
 	atomic_t		pending;
 };
 
 struct packet_sock;
 static int tpacket_snd(struct packet_sock *po, struct msghdr *msg);
 
+static void *packet_previous_frame(struct packet_sock *po,
+		struct packet_ring_buffer *rb,
+		int status);
+static void packet_increment_head(struct packet_ring_buffer *buff);
+static int prb_curr_blk_in_use(struct kbdq_core *,
+			struct block_desc *);
+static void *prb_dispatch_next_block(struct kbdq_core *,
+			struct packet_sock *);
+static void prb_retire_current_block(struct kbdq_core *,
+		struct packet_sock *, unsigned int status);
+static int prb_queue_frozen(struct kbdq_core *);
+static void prb_open_block(struct kbdq_core *, struct block_desc *);
+static void prb_retire_rx_blk_timer_expired(unsigned long);
+static void _prb_refresh_rx_retire_blk_timer(struct kbdq_core *);
+static void prb_init_blk_timer(struct packet_sock *, struct kbdq_core *,
+				void (*func) (unsigned long));
 static void packet_flush_mclist(struct sock *sk);
 
 struct packet_sock {
 	/* struct sock has to be the first member of packet_sock */
 	struct sock		sk;
 	struct tpacket_stats	stats;
+	union  tpacket_stats_u	stats_u;
 	struct packet_ring_buffer	rx_ring;
 	struct packet_ring_buffer	tx_ring;
 	int			copy_thresh;
@@ -223,6 +284,19 @@ struct packet_skb_cb {
 
 #define PACKET_SKB_CB(__skb)	((struct packet_skb_cb *)((__skb)->cb))
 
+#define GET_PBDQC_FROM_RB(x)	((struct kbdq_core *)(&(x)->prb_bdqc))
+
+#define GET_PBLOCK_DESC(x, bid)	((struct block_desc *)((x)->pkbdq[(bid)].buffer))
+
+#define GET_CURR_PBLOCK_DESC_FROM_CORE(x)	\
+	((struct block_desc *)((x)->pkbdq[(x)->kactive_blk_num].buffer))
+
+
+#define GET_NEXT_PRB_BLK_NUM(x) \
+	(((x)->kactive_blk_num < ((x)->knum_blocks-1)) ? \
+	((x)->kactive_blk_num+1) : 0)
+
+
 static inline __pure struct page *pgv_to_page(void *addr)
 {
 	if (is_vmalloc_addr(addr))
@@ -248,8 +322,11 @@ static void __packet_set_status(struct packet_sock *po, void *frame, int status)
 		h.h2->tp_status = status;
 		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
 		break;
+	case TPACKET_V3:
 	default:
-		pr_err("TPACKET version not supported\n");
+		pr_err("<%s> TPACKET version not supported.Who is calling?\
+			Dumping stack.\n", __func__);
+		dump_stack();
 		BUG();
 	}
 
@@ -274,8 +351,11 @@ static int __packet_get_status(struct packet_sock *po, void *frame)
 	case TPACKET_V2:
 		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
 		return h.h2->tp_status;
+	case TPACKET_V3:
 	default:
-		pr_err("TPACKET version not supported\n");
+		pr_err("<%s> TPACKET version:%d not supported.\
+			Dumping stack.\n", __func__, po->tp_version);
+		dump_stack();
 		BUG();
 		return 0;
 	}
@@ -312,6 +392,617 @@ static inline void *packet_current_frame(struct packet_sock *po,
 	return packet_lookup_frame(po, rb, rb->head, status);
 }
 
+static void prb_del_retire_blk_timer(struct kbdq_core *pkc)
+{
+	del_timer_sync(&pkc->retire_blk_timer);
+}
+
+static void prb_shutdown_retire_blk_timer(struct packet_sock *po,
+		int tx_ring,
+		struct sk_buff_head *rb_queue)
+{
+	struct kbdq_core *pkc;
+
+	pkc = tx_ring ? &po->tx_ring.prb_bdqc : &po->rx_ring.prb_bdqc;
+
+	spin_lock(&rb_queue->lock);
+	pkc->delete_blk_timer = 1;
+	spin_unlock(&rb_queue->lock);
+
+	prb_del_retire_blk_timer(pkc);
+}
+
+static void prb_init_blk_timer(struct packet_sock *po,
+		struct kbdq_core *pkc,
+		void (*func) (unsigned long))
+{
+	init_timer(&pkc->retire_blk_timer);
+	pkc->retire_blk_timer.data = (long)po;
+	pkc->retire_blk_timer.function = func;
+	pkc->retire_blk_timer.expires = jiffies;
+}
+
+static void prb_setup_retire_blk_timer(struct packet_sock *po, int tx_ring)
+{
+	struct kbdq_core *pkc;
+
+	if (tx_ring)
+		BUG();
+
+	pkc = tx_ring ? &po->tx_ring.prb_bdqc : &po->rx_ring.prb_bdqc;
+	prb_init_blk_timer(po, pkc, prb_retire_rx_blk_timer_expired);
+}
+
+static int prb_calc_retire_blk_tmo(struct packet_sock *po,
+				int blk_size_in_bytes)
+{
+	struct net_device *dev;
+	unsigned int mbits = 0, msec = 0, div = 0, tmo = 0;
+
+	dev = dev_get_by_index(sock_net(&po->sk), po->ifindex);
+	if (unlikely(dev == NULL))
+		return DEFAULT_PRB_RETIRE_TOV;
+
+	if (dev->ethtool_ops && dev->ethtool_ops->get_settings) {
+		struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET, };
+
+		if (!dev->ethtool_ops->get_settings(dev, &ecmd)) {
+			switch (ecmd.speed) {
+			case SPEED_10000:
+				msec = 1;
+				div = 10000/1000;
+				break;
+			case SPEED_1000:
+				msec = 1;
+				div = 1000/1000;
+				break;
+			/*
+			 * If the link speed is so slow you don't really
+			 * need to worry about perf anyways
+			 */
+			case SPEED_100:
+			case SPEED_10:
+			default:
+				return DEFAULT_PRB_RETIRE_TOV;
+			}
+		}
+	}
+
+	mbits = (blk_size_in_bytes * 8) / (1024 * 1024);
+
+	if (div)
+		mbits /= div;
+
+	tmo = mbits * msec;
+
+	if (div)
+		return tmo+1;
+	return tmo;
+}
+
+static void init_prb_bdqc(struct packet_sock *po,
+			struct packet_ring_buffer *rb,
+			struct pgv *pg_vec,
+			union tpacket_req_u *req_u, int tx_ring)
+{
+	struct kbdq_core *p1 = &rb->prb_bdqc;
+	struct block_desc *pbd;
+
+	memset(p1, 0x0, sizeof(*p1));
+	p1->knxt_seq_num = 1;
+	p1->pkbdq = pg_vec;
+	pbd = (struct block_desc *)pg_vec[0].buffer;
+	p1->pkblk_start	= (char *)pg_vec[0].buffer;
+	p1->kblk_size = req_u->req3.tp_block_size;
+	p1->knum_blocks	= req_u->req3.tp_block_nr;
+	p1->hdrlen = po->tp_hdrlen;
+	pbd->version = po->tp_version;
+	p1->last_kactive_blk_num = 0;
+	po->stats_u.stats3.tp_freeze_q_cnt = 0;
+	if (req_u->req3.tp_retire_blk_tov)
+		p1->retire_blk_tov = req_u->req3.tp_retire_blk_tov;
+	else
+		p1->retire_blk_tov = prb_calc_retire_blk_tmo(po,
+						req_u->req3.tp_block_size);
+	p1->tov_in_jiffies = msecs_to_jiffies(p1->retire_blk_tov);
+	prb_setup_retire_blk_timer(po, tx_ring);
+	prb_open_block(p1, pbd);
+}
+
+/*  Do NOT update the last_blk_num first.
+ *  Assumes sk_buff_head lock is held.
+ */
+static void _prb_refresh_rx_retire_blk_timer(struct kbdq_core *pkc)
+{
+	mod_timer(&pkc->retire_blk_timer,
+			jiffies + pkc->tov_in_jiffies);
+	pkc->last_kactive_blk_num = pkc->kactive_blk_num;
+}
+
+/*
+ * Timer logic:
+ * 1) We refresh the timer only when we open a block.
+ *    By doing this we don't waste cycles refreshing the timer
+ *	  on packet-by-packet basis.
+ *
+ * With a 1MB block-size, on a 1Gbps line, it will take
+ * i) ~8 ms to fill a block + ii) memcpy etc.
+ * In this cut we are not accounting for the memcpy time.
+ *
+ * So, if the user sets the 'tmo' to 10ms then the timer
+ * will never fire while the block is still getting filled
+ * (which is what we want). However, the user could choose
+ * to close a block early and that's fine.
+ *
+ * But when the timer does fire, we check whether or not to refresh it.
+ * Since the tmo granularity is in msecs, it is not too expensive
+ * to refresh the timer, lets say every '8' msecs.
+ * Either the user can set the 'tmo' or we can derive it based on
+ * a) line-speed and b) block-size.
+ * prb_calc_retire_blk_tmo() calculates the tmo.
+ *
+ */
+static void prb_retire_rx_blk_timer_expired(unsigned long data)
+{
+	struct packet_sock *po = (struct packet_sock *)data;
+	struct kbdq_core *pkc = &po->rx_ring.prb_bdqc;
+	unsigned int frozen;
+	struct block_desc *pbd;
+
+	spin_lock(&po->sk.sk_receive_queue.lock);
+
+	frozen = prb_queue_frozen(pkc);
+	pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
+
+	if (unlikely(pkc->delete_blk_timer))
+		goto out;
+
+	/* We only need to plug the race when the block is partially filled.
+	 * tpacket_rcv:
+	 *		lock(); increment BLOCK_NUM_PKTS; unlock()
+	 *		copy_bits() is in progress ...
+	 * timer fires on other cpu:
+	 *		we can't retire the current block because copy_bits
+	 *		is in progress.
+	 *
+	 */
+	if (BLOCK_NUM_PKTS(pbd)) {
+		while (atomic_read(&pkc->blk_fill_in_prog)) {
+			/* Waiting for skb_copy_bits to finish... */
+			cpu_relax();
+		}
+	}
+
+	if (pkc->last_kactive_blk_num == pkc->kactive_blk_num) {
+		if (!frozen) {
+			prb_retire_current_block(pkc, po, TP_STATUS_BLK_TMO);
+			if (!prb_dispatch_next_block(pkc, po))
+				goto refresh_timer;
+			else
+				goto out;
+		} else {
+			/* Case 1. Queue was frozen because user-space was
+			 *	   lagging behind.
+			 */
+			if (prb_curr_blk_in_use(pkc, pbd)) {
+			       /*
+				* Ok, user-space is still behind.
+				* So just refresh the timer.
+				*/
+				goto refresh_timer;
+			} else {
+			       /* Case 2. queue was frozen, user-space caught up,
+				* now the link went idle && the timer fired.
+				* We don't have a block to close. So we open this
+				* block and restart the timer.
+				* opening a block thaws the queue, restarts timer.
+				* Thawing/timer-refresh is a side effect.
+				*/
+				prb_open_block(pkc, pbd);
+				goto out;
+			}
+		}
+	}
+
+refresh_timer:
+	_prb_refresh_rx_retire_blk_timer(pkc);
+
+out:
+	spin_unlock(&po->sk.sk_receive_queue.lock);
+}
+
+static inline void prb_flush_block(struct kbdq_core *pkc1, struct block_desc *pbd1,
+			__u32 status)
+{
+	/* Flush everything minus the block header */
+
+#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE == 1
+	u8 *start, *end;
+
+	start = (u8 *)pbd1;
+
+	/* Skip the block header(we know header WILL fit in 4K) */
+	start += PAGE_SIZE;
+
+	end = (u8 *)PAGE_ALIGN((unsigned long)pkc1->pkblk_end);
+	for (; start < end; start += PAGE_SIZE)
+		flush_dcache_page(pgv_to_page(start));
+
+	smp_wmb();
+#endif
+
+	/* Now update the block status. */
+
+	BLOCK_STATUS(pbd1) = status;
+
+	/* Flush the block header */
+
+#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE == 1
+	start = (u8 *)pbd1;
+	flush_dcache_page(pgv_to_page(start));
+
+	smp_wmb();
+#endif
+}
+
+/*
+ * Side effect:
+ *
+ * 1) flush the block
+ * 2) Increment active_blk_num
+ *
+ * Note:We DONT refresh the timer on purpose.
+ *	Because almost always the next block will be opened.
+ */
+static void prb_close_block(struct kbdq_core *pkc1, struct block_desc *pbd1,
+		struct packet_sock *po, unsigned int stat)
+{
+	__u32 status = TP_STATUS_USER | stat;
+
+	struct tpacket3_hdr *last_pkt;
+	struct bd_v1 *b1 = &pbd1->bd1;
+
+	if (po->stats.tp_drops)
+		status |= TP_STATUS_LOSING;
+
+	last_pkt = (struct tpacket3_hdr *)pkc1->prev;
+	last_pkt->tp_next_offset = 0;
+
+	/* Get the ts of the last pkt */
+	if (BLOCK_NUM_PKTS(pbd1)) {
+		b1->ts_last_pkt.ts_sec = last_pkt->tp_sec;
+		b1->ts_last_pkt.ts_nsec	= last_pkt->tp_nsec;
+	} else {
+		/* Ok, we tmo'd - so get the current time */
+		struct timespec ts;
+		getnstimeofday(&ts);
+		b1->ts_last_pkt.ts_sec = ts.tv_sec;
+		b1->ts_last_pkt.ts_nsec	= ts.tv_nsec;
+	}
+
+	smp_wmb();
+
+	/* Flush the block */
+	prb_flush_block(pkc1, pbd1, status);
+
+	pkc1->kactive_blk_num = GET_NEXT_PRB_BLK_NUM(pkc1);
+}
+
+static inline void prb_thaw_queue(struct kbdq_core *pkc)
+{
+	pkc->reset_pending_on_curr_blk = 0;
+}
+
+/*
+ * Side effect of opening a block:
+ *
+ * 1) prb_queue is thawed.
+ * 2) retire_blk_timer is refreshed.
+ *
+ */
+static void prb_open_block(struct kbdq_core *pkc1, struct block_desc *pbd1)
+{
+	struct timespec ts;
+	struct bd_v1 *b1 = &pbd1->bd1;
+
+	smp_rmb();
+
+	if (likely(TP_STATUS_KERNEL == BLOCK_STATUS(pbd1))) {
+
+		BLOCK_SNUM(pbd1) = pkc1->knxt_seq_num++;
+		BLOCK_NUM_PKTS(pbd1) = 0;
+		BLOCK_LEN(pbd1) = BLK_HDR_LEN;
+		getnstimeofday(&ts);
+		b1->ts_first_pkt.ts_sec = ts.tv_sec;
+		b1->ts_first_pkt.ts_nsec = ts.tv_nsec;
+		pkc1->pkblk_start = (char *)pbd1;
+		pkc1->nxt_offset = (char *)(pkc1->pkblk_start+BLK_HDR_LEN);
+		BLOCK_O2FP(pbd1) = (__u32)BLK_HDR_LEN;
+		pkc1->prev = pkc1->nxt_offset;
+		pkc1->pkblk_end = pkc1->pkblk_start + pkc1->kblk_size;
+		prb_thaw_queue(pkc1);
+		_prb_refresh_rx_retire_blk_timer(pkc1);
+
+		smp_wmb();
+
+		return;
+	}
+
+	pr_err("<%s> ERROR block:%p is NOT FREE status:%d\
+			kactive_blk_num:%d\n",
+			__func__, pbd1, BLOCK_STATUS(pbd1), pkc1->kactive_blk_num);
+	dump_stack();
+	BUG();
+}
+
+/*
+ * Queue freeze logic:
+ * 1) Assume tp_block_nr = 8 blocks.
+ * 2) At time 't0', user opens Rx ring.
+ * 3) Some time past 't0', kernel starts filling blocks starting from 0 .. 7
+ * 4) user-space is either sleeping or processing block '0'.
+ * 5) tpacket_rcv is currently filling block '7', since there is no space left,
+ *    it will close block-7,loop around and try to fill block '0'.
+ *    call-flow:
+ *    __packet_lookup_frame_in_block
+ *      prb_retire_current_block()
+ *      prb_dispatch_next_block()
+ *        |->(BLOCK_STATUS == USER) evaluates to true
+ *    5.1) Since block-0 is currently in-use, we just freeze the queue.
+ * 6) Now there are two cases:
+ *    6.1) Link goes idle right after the queue is frozen.
+ *         But remember, the last open_block() refreshed the timer.
+ *         When this timer expires,it will refresh itself so that we can
+ *         re-open block-0 in near future.
+ *    6.2) Link is busy and keeps on receiving packets. This is a simple
+ *         case and __packet_lookup_frame_in_block will check if block-0
+ *         is free and can now be re-used.
+ */
+static inline void prb_freeze_queue(struct kbdq_core *pkc,
+				  struct packet_sock *po)
+{
+	pkc->reset_pending_on_curr_blk = 1;
+	po->stats_u.stats3.tp_freeze_q_cnt++;
+}
+
+#define TOTAL_PKT_LEN_INCL_ALIGN(length) (ALIGN_4((length)))
+
+/*
+ * If the next block is free then we will dispatch it
+ * and return a good offset.
+ * Else, we will freeze the queue.
+ * So, caller must check the return value.
+ */
+static void *prb_dispatch_next_block(struct kbdq_core *pkc,
+		struct packet_sock *po)
+{
+	struct block_desc *pbd;
+
+	smp_rmb();
+
+	/* 1. Get current block num */
+	pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
+
+	/* 2. If this block is currently in_use then freeze the queue */
+	if (TP_STATUS_USER & BLOCK_STATUS(pbd)) {
+		prb_freeze_queue(pkc, po);
+		return NULL;
+	}
+
+	/*
+	 * 3.
+	 * open this block and return the offset where the first packet
+	 * needs to get stored.
+	 */
+	prb_open_block(pkc, pbd);
+	return (void *)pkc->nxt_offset;
+}
+
+static void prb_retire_current_block(struct kbdq_core *pkc,
+		struct packet_sock *po, unsigned int status)
+{
+	struct block_desc *pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
+
+	/* retire/close the current block */
+	if (likely(TP_STATUS_KERNEL == BLOCK_STATUS(pbd))) {
+		/*
+		 * Plug the case where copy_bits() is in progress on
+		 * cpu-0 and tpacket_rcv() got invoked on cpu-1, didn't
+		 * have space to copy the pkt in the current block and
+		 * called prb_retire_current_block()
+		 *
+		 * TODO:DURING REVIEW ASK IF THIS IS A VALID RACE.
+		 *	MAIN CONCERN IS ABOUT r[f/p]s THREADS(?) EXECUTING
+		 *	IN PARALLEL.
+		 *
+		 * We don't need to worry about the TMO case because
+		 * the timer-handler already handled this case.
+		 */
+		if (!(status & TP_STATUS_BLK_TMO)) {
+			while (atomic_read(&pkc->blk_fill_in_prog)) {
+				/* Waiting for skb_copy_bits to finish... */
+				cpu_relax();
+			}
+		}
+		prb_close_block(pkc, pbd, po, status);
+		return;
+	}
+
+	pr_err("<%s> ERROR-pbd[%d]:%p.Dumping stack\n",
+			__func__, pkc->kactive_blk_num, pbd);
+	dump_stack();
+	BUG();
+}
+
+static inline int prb_curr_blk_in_use(struct kbdq_core *pkc,
+				      struct block_desc *pbd)
+{
+	return (TP_STATUS_USER & BLOCK_STATUS(pbd));
+}
+
+static inline int prb_queue_frozen(struct kbdq_core *pkc)
+{
+	return pkc->reset_pending_on_curr_blk;
+}
+
+static inline void prb_clear_blk_fill_status(struct packet_ring_buffer *rb)
+{
+	struct kbdq_core *pkc  = GET_PBDQC_FROM_RB(rb);
+	atomic_dec(&pkc->blk_fill_in_prog);
+}
+
+static inline void prb_fill_curr_block(char *curr, struct kbdq_core *pkc,
+				struct block_desc *pbd,
+				unsigned int len)
+{
+	struct tpacket3_hdr *ppd;
+
+	ppd  = (struct tpacket3_hdr *)curr;
+	ppd->tp_next_offset = TOTAL_PKT_LEN_INCL_ALIGN(len);
+	pkc->prev = curr;
+	pkc->nxt_offset += TOTAL_PKT_LEN_INCL_ALIGN(len);
+	BLOCK_LEN(pbd) += TOTAL_PKT_LEN_INCL_ALIGN(len);
+	BLOCK_NUM_PKTS(pbd) += 1;
+	atomic_inc(&pkc->blk_fill_in_prog);
+}
+
+/* Assumes caller has the sk->rx_queue.lock */
+static void *__packet_lookup_frame_in_block(struct packet_ring_buffer *rb,
+					    int status,
+					    unsigned int len,
+					    struct packet_sock *po)
+{
+	struct kbdq_core *pkc  = GET_PBDQC_FROM_RB(rb);
+	struct block_desc *pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
+	char *curr, *end;
+
+	/* Queue is frozen when user space is lagging behind */
+	if (prb_queue_frozen(pkc)) {
+		/*
+		 * Check if that last block which caused the queue to freeze,
+		 * is still in_use by user-space.
+		 */
+		if (prb_curr_blk_in_use(pkc, pbd)) {
+			/* Can't record this packet */
+			return NULL;
+		} else {
+			/*
+			 * Ok, the block was released by user-space.
+			 * Now let's open that block.
+			 * opening a block also thaws the queue.
+			 * Thawing is a side effect.
+			 */
+			prb_open_block(pkc, pbd);
+		}
+	}
+
+	smp_mb();
+	curr = pkc->nxt_offset;
+	end = (char *) ((char *)pbd + pkc->kblk_size);
+
+	/* first try the current block */
+	if (curr+TOTAL_PKT_LEN_INCL_ALIGN(len) < end) {
+		prb_fill_curr_block(curr, pkc, pbd, len);
+		return (void *)curr;
+	}
+
+	/* Ok, close the current block */
+	prb_retire_current_block(pkc, po, 0);
+
+	/* Now, try to dispatch the next block */
+	curr = (char *)prb_dispatch_next_block(pkc, po);
+	if (curr) {
+		pbd = GET_CURR_PBLOCK_DESC_FROM_CORE(pkc);
+		prb_fill_curr_block(curr, pkc, pbd, len);
+		return (void *)curr;
+	}
+
+	/*
+	 * No free blocks are available.user_space hasn't caught up yet.
+	 * Queue was just frozen and now this packet will get dropped.
+	 */
+	return NULL;
+}
+
+static inline void *packet_current_rx_frame(struct packet_sock *po,
+					    struct packet_ring_buffer *rb,
+					    int status, unsigned int len)
+{
+	char *curr = NULL;
+	switch (po->tp_version) {
+	case TPACKET_V1:
+	case TPACKET_V2:
+		curr = packet_lookup_frame(po, rb, rb->head, status);
+		return curr;
+	case TPACKET_V3:
+		return __packet_lookup_frame_in_block(rb, status, len, po);
+	default:
+		pr_err("<%s> TPACKET version:%d not supported\n",
+			__func__, po->tp_version);
+		BUG();
+		return 0;
+	}
+}
+
+static inline void *prb_lookup_block(struct packet_sock *po,
+				     struct packet_ring_buffer *rb,
+				     unsigned int previous,
+				     int status)
+{
+	struct kbdq_core *pkc  = GET_PBDQC_FROM_RB(rb);
+	struct block_desc *pbd = GET_PBLOCK_DESC(pkc, previous);
+
+	if (status != BLOCK_STATUS(pbd))
+		return NULL;
+	return pbd;
+}
+
+static inline int prb_previous_blk_num(struct packet_ring_buffer *rb)
+{
+	unsigned int prev;
+	if (rb->prb_bdqc.kactive_blk_num)
+		prev = rb->prb_bdqc.kactive_blk_num-1;
+	else
+		prev = rb->prb_bdqc.knum_blocks-1;
+	return prev;
+}
+
+/* Assumes caller has held the rx_queue.lock */
+static inline void *__prb_previous_block(struct packet_sock *po,
+					 struct packet_ring_buffer *rb,
+					 int status)
+{
+	unsigned int previous = prb_previous_blk_num(rb);
+	return prb_lookup_block(po, rb, previous, status);
+}
+
+static inline void *packet_previous_rx_frame(struct packet_sock *po,
+					     struct packet_ring_buffer *rb,
+					     int status)
+{
+	if (po->tp_version <= TPACKET_V2)
+		return packet_previous_frame(po, rb, status);
+
+	return __prb_previous_block(po, rb, status);
+}
+
+static inline void packet_increment_rx_head(struct packet_sock *po,
+					    struct packet_ring_buffer *rb)
+{
+	switch (po->tp_version) {
+	case TPACKET_V1:
+	case TPACKET_V2:
+		return packet_increment_head(rb);
+	case TPACKET_V3:
+	default:
+		pr_err("<%s> TPACKET version:%d not supported.\
+			Dumping stack.\n", __func__, po->tp_version);
+		dump_stack();
+		BUG();
+		return;
+	}
+}
+
 static inline void *packet_previous_frame(struct packet_sock *po,
 		struct packet_ring_buffer *rb,
 		int status)
@@ -663,12 +1354,13 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 	union {
 		struct tpacket_hdr *h1;
 		struct tpacket2_hdr *h2;
+		struct tpacket3_hdr *h3;
 		void *raw;
 	} h;
 	u8 *skb_head = skb->data;
 	int skb_len = skb->len;
 	unsigned int snaplen, res;
-	unsigned long status = TP_STATUS_LOSING|TP_STATUS_USER;
+	unsigned long status = TP_STATUS_USER;
 	unsigned short macoff, netoff, hdrlen;
 	struct sk_buff *copy_skb = NULL;
 	struct timeval tv;
@@ -714,37 +1406,46 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 			po->tp_reserve;
 		macoff = netoff - maclen;
 	}
-
-	if (macoff + snaplen > po->rx_ring.frame_size) {
-		if (po->copy_thresh &&
-		    atomic_read(&sk->sk_rmem_alloc) + skb->truesize <
-		    (unsigned)sk->sk_rcvbuf) {
-			if (skb_shared(skb)) {
-				copy_skb = skb_clone(skb, GFP_ATOMIC);
-			} else {
-				copy_skb = skb_get(skb);
-				skb_head = skb->data;
+	if (po->tp_version <= TPACKET_V2) {
+		if (macoff + snaplen > po->rx_ring.frame_size) {
+			if (po->copy_thresh &&
+				atomic_read(&sk->sk_rmem_alloc) + skb->truesize <
+				(unsigned)sk->sk_rcvbuf) {
+				if (skb_shared(skb)) {
+					copy_skb = skb_clone(skb, GFP_ATOMIC);
+				} else {
+					copy_skb = skb_get(skb);
+					skb_head = skb->data;
+				}
+				if (copy_skb)
+					skb_set_owner_r(copy_skb, sk);
 			}
-			if (copy_skb)
-				skb_set_owner_r(copy_skb, sk);
+			snaplen = po->rx_ring.frame_size - macoff;
+			if ((int)snaplen < 0)
+				snaplen = 0;
 		}
-		snaplen = po->rx_ring.frame_size - macoff;
-		if ((int)snaplen < 0)
-			snaplen = 0;
 	}
-
 	spin_lock(&sk->sk_receive_queue.lock);
-	h.raw = packet_current_frame(po, &po->rx_ring, TP_STATUS_KERNEL);
+	h.raw = packet_current_rx_frame(po, &po->rx_ring,
+					TP_STATUS_KERNEL, (macoff+snaplen));
 	if (!h.raw)
 		goto ring_is_full;
-	packet_increment_head(&po->rx_ring);
+	if (po->tp_version <= TPACKET_V2) {
+		packet_increment_rx_head(po, &po->rx_ring);
+	/*
+	 * LOSING will be reported till you read the stats,
+	 * because it's COR - Clear On Read.
+	 * Anyways, moving it for V1/V2 only as V3 doesn't need this
+	 * at packet level.
+	 */
+		if (po->stats.tp_drops)
+			status |= TP_STATUS_LOSING;
+	}
 	po->stats.tp_packets++;
 	if (copy_skb) {
 		status |= TP_STATUS_COPY;
 		__skb_queue_tail(&sk->sk_receive_queue, copy_skb);
 	}
-	if (!po->stats.tp_drops)
-		status &= ~TP_STATUS_LOSING;
 	spin_unlock(&sk->sk_receive_queue.lock);
 
 	skb_copy_bits(skb, 0, h.raw + macoff, snaplen);
@@ -789,6 +1490,30 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 		h.h2->tp_vlan_tci = vlan_tx_tag_get(skb);
 		hdrlen = sizeof(*h.h2);
 		break;
+	case TPACKET_V3:
+		/* tp_nxt_offset is already populated above.
+		 * So DONT clear those fields here
+		 */
+		h.h3->tp_status = status;
+		h.h3->tp_len = skb->len;
+		h.h3->tp_snaplen = snaplen;
+		h.h3->tp_mac = macoff;
+		h.h3->tp_net = netoff;
+		if ((po->tp_tstamp & SOF_TIMESTAMPING_SYS_HARDWARE)
+				&& shhwtstamps->syststamp.tv64)
+			ts = ktime_to_timespec(shhwtstamps->syststamp);
+		else if ((po->tp_tstamp & SOF_TIMESTAMPING_RAW_HARDWARE)
+				&& shhwtstamps->hwtstamp.tv64)
+			ts = ktime_to_timespec(shhwtstamps->hwtstamp);
+		else if (skb->tstamp.tv64)
+			ts = ktime_to_timespec(skb->tstamp);
+		else
+			getnstimeofday(&ts);
+		h.h3->tp_sec  = ts.tv_sec;
+		h.h3->tp_nsec = ts.tv_nsec;
+		h.h3->tp_vlan_tci = vlan_tx_tag_get(skb);
+		hdrlen = sizeof(*h.h3);
+		break;
 	default:
 		BUG();
 	}
@@ -803,18 +1528,22 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 		sll->sll_ifindex = orig_dev->ifindex;
 	else
 		sll->sll_ifindex = dev->ifindex;
-
-	__packet_set_status(po, h.raw, status);
+	if (po->tp_version <= TPACKET_V2)
+		__packet_set_status(po, h.raw, status);
 	smp_mb();
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE == 1
 	{
 		u8 *start, *end;
 
-		end = (u8 *)PAGE_ALIGN((unsigned long)h.raw + macoff + snaplen);
-		for (start = h.raw; start < end; start += PAGE_SIZE)
-			flush_dcache_page(pgv_to_page(start));
+		if (po->tp_version <= TPACKET_V2) {
+			end = (u8 *)PAGE_ALIGN((unsigned long)h.raw + macoff + snaplen);
+			for (start = h.raw; start < end; start += PAGE_SIZE)
+				flush_dcache_page(pgv_to_page(start));
+		}
 	}
 #endif
+	if (po->tp_version > TPACKET_V2)
+		prb_clear_blk_fill_status(&po->rx_ring);
 
 	sk->sk_data_ready(sk, 0);
 
@@ -1291,7 +2020,7 @@ static int packet_release(struct socket *sock)
 	struct sock *sk = sock->sk;
 	struct packet_sock *po;
 	struct net *net;
-	struct tpacket_req req;
+	union tpacket_req_u req_u;
 
 	if (!sk)
 		return 0;
@@ -1318,13 +2047,13 @@ static int packet_release(struct socket *sock)
 
 	packet_flush_mclist(sk);
 
-	memset(&req, 0, sizeof(req));
+	memset(&req_u, 0, sizeof(req_u));
 
 	if (po->rx_ring.pg_vec)
-		packet_set_ring(sk, &req, 1, 0);
+		packet_set_ring(sk, &req_u, 1, 0);
 
 	if (po->tx_ring.pg_vec)
-		packet_set_ring(sk, &req, 1, 1);
+		packet_set_ring(sk, &req_u, 1, 1);
 
 	synchronize_net();
 	/*
@@ -1949,15 +2678,26 @@ packet_setsockopt(struct socket *sock, int level, int optname, char __user *optv
 	case PACKET_RX_RING:
 	case PACKET_TX_RING:
 	{
-		struct tpacket_req req;
+		union tpacket_req_u req_u;
+		int len;
 
-		if (optlen < sizeof(req))
+		switch (po->tp_version) {
+		case TPACKET_V1:
+		case TPACKET_V2:
+			len = sizeof(req_u.req);
+			break;
+		case TPACKET_V3:
+		default:
+			len = sizeof(req_u.req3);
+			break;
+		}
+		if (optlen < len)
 			return -EINVAL;
 		if (pkt_sk(sk)->has_vnet_hdr)
 			return -EINVAL;
-		if (copy_from_user(&req, optval, sizeof(req)))
+		if (copy_from_user(&req_u.req, optval, len))
 			return -EFAULT;
-		return packet_set_ring(sk, &req, 0, optname == PACKET_TX_RING);
+		return packet_set_ring(sk, &req_u, 0, optname == PACKET_TX_RING);
 	}
 	case PACKET_COPY_THRESH:
 	{
@@ -1984,6 +2724,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, char __user *optv
 		switch (val) {
 		case TPACKET_V1:
 		case TPACKET_V2:
+		case TPACKET_V3:
 			po->tp_version = val;
 			return 0;
 		default:
@@ -2082,6 +2823,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
 	struct packet_sock *po = pkt_sk(sk);
 	void *data;
 	struct tpacket_stats st;
+	union tpacket_stats_u st_u;
 
 	if (level != SOL_PACKET)
 		return -ENOPROTOOPT;
@@ -2094,15 +2836,26 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
 
 	switch (optname) {
 	case PACKET_STATISTICS:
-		if (len > sizeof(struct tpacket_stats))
-			len = sizeof(struct tpacket_stats);
+		if (po->tp_version == TPACKET_V3) {
+			len = sizeof(struct tpacket_stats_v3);
+		} else {
+			if (len > sizeof(struct tpacket_stats))
+				len = sizeof(struct tpacket_stats);
+		}
 		spin_lock_bh(&sk->sk_receive_queue.lock);
-		st = po->stats;
+		if (po->tp_version == TPACKET_V3) {
+			memcpy(&st_u.stats3, &po->stats,
+			sizeof(struct tpacket_stats));
+			st_u.stats3.tp_freeze_q_cnt = po->stats_u.stats3.tp_freeze_q_cnt;
+			st_u.stats3.tp_packets += po->stats.tp_drops;
+			data = &st_u.stats3;
+		} else {
+			st = po->stats;
+			st.tp_packets += st.tp_drops;
+			data = &st;
+		}
 		memset(&po->stats, 0, sizeof(st));
 		spin_unlock_bh(&sk->sk_receive_queue.lock);
-		st.tp_packets += st.tp_drops;
-
-		data = &st;
 		break;
 	case PACKET_AUXDATA:
 		if (len > sizeof(int))
@@ -2143,6 +2896,9 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
 		case TPACKET_V2:
 			val = sizeof(struct tpacket2_hdr);
 			break;
+		case TPACKET_V3:
+			val = sizeof(struct tpacket3_hdr);
+			break;
 		default:
 			return -EINVAL;
 		}
@@ -2293,7 +3049,7 @@ static unsigned int packet_poll(struct file *file, struct socket *sock,
 
 	spin_lock_bh(&sk->sk_receive_queue.lock);
 	if (po->rx_ring.pg_vec) {
-		if (!packet_previous_frame(po, &po->rx_ring, TP_STATUS_KERNEL))
+		if (!packet_previous_rx_frame(po, &po->rx_ring, TP_STATUS_KERNEL))
 			mask |= POLLIN | POLLRDNORM;
 	}
 	spin_unlock_bh(&sk->sk_receive_queue.lock);
@@ -2412,7 +3168,7 @@ out_free_pgvec:
 	goto out;
 }
 
-static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
+static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
 		int closing, int tx_ring)
 {
 	struct pgv *pg_vec = NULL;
@@ -2421,7 +3177,17 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
 	struct packet_ring_buffer *rb;
 	struct sk_buff_head *rb_queue;
 	__be16 num;
-	int err;
+	int err = -EINVAL;
+	/* Added to avoid minimal code churn */
+	struct tpacket_req *req = &req_u->req;
+
+	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
+	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
+		pr_err("<%s> Tx-ring is not supported on version:%d.\
+			   Dumping stack.\n", __func__, po->tp_version);
+		dump_stack();
+		goto out;
+	}
 
 	rb = tx_ring ? &po->tx_ring : &po->rx_ring;
 	rb_queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue;
@@ -2447,6 +3213,9 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
 		case TPACKET_V2:
 			po->tp_hdrlen = TPACKET2_HDRLEN;
 			break;
+		case TPACKET_V3:
+			po->tp_hdrlen = TPACKET3_HDRLEN;
+			break;
 		}
 
 		err = -EINVAL;
@@ -2472,6 +3241,17 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
 		pg_vec = alloc_pg_vec(req, order);
 		if (unlikely(!pg_vec))
 			goto out;
+		switch (po->tp_version) {
+		case TPACKET_V3:
+		/* Transmit path is not supported. We checked
+		 * it above but just being paranoid
+		 */
+			if (!tx_ring)
+				init_prb_bdqc(po, rb, pg_vec, req_u, tx_ring);
+				break;
+		default:
+			break;
+		}
 	}
 	/* Done */
 	else {
@@ -2528,7 +3308,11 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
 		dev_add_pack(&po->prot_hook);
 	}
 	spin_unlock(&po->bind_lock);
-
+	if (closing && (po->tp_version > TPACKET_V2)) {
+		/* Because we don't support block-based V3 on tx-ring */
+		if (!tx_ring)
+			prb_shutdown_retire_blk_timer(po, tx_ring, rb_queue);
+	}
 	release_sock(sk);
 
 	if (pg_vec)
-- 
1.7.5.2



* Re: [af-packet 2/2] Enhance af-packet to provide (near zero)lossless packet capture functionality
  2011-06-08  3:13 ` [af-packet 2/2] " Chetan Loke
@ 2011-06-08  4:18   ` Joe Perches
  2011-06-08 22:01     ` chetan loke
  0 siblings, 1 reply; 9+ messages in thread
From: Joe Perches @ 2011-06-08  4:18 UTC (permalink / raw)
  To: Chetan Loke; +Cc: netdev, davem, eric.dumazet, kaber, johann.baudy, Chetan Loke

On Tue, 2011-06-07 at 23:13 -0400, Chetan Loke wrote:
> Signed-off-by: Chetan Loke <lokec@ccs.neu.edu>

just trivia:

> ---
>  net/packet/af_packet.c |  878 +++++++++++++++++++++++++++++++++++++++++++++---
[]
> +/* kbdq - kernel block descriptor queue */
> +struct kbdq_core {
> +	struct pgv	*pkbdq;
> +	unsigned int	hdrlen;
> +	unsigned char	reset_pending_on_curr_blk;
> +	unsigned char   delete_blk_timer;
> +	unsigned short	kactive_blk_num;
> +	unsigned short	hole_bytes_size;
> +	char		*pkblk_start;
> +	char		*pkblk_end;
> +	int		kblk_size;
> +	unsigned int	knum_blocks;
> +	uint64_t	knxt_seq_num;
> +	char		*prev;
> +	char		*nxt_offset;
> +
> +	/* last_kactive_blk_num:
> +	 * trick to see if user-space has caught up
> +	 * in order to avoid refreshing timer when every single pkt arrives.
> +	 */
> +	unsigned short	last_kactive_blk_num;
> +
> +	atomic_t	blk_fill_in_prog;
> +
> +	/* Default is set to 8ms */
> +#define DEFAULT_PRB_RETIRE_TOV	(8)
> +
> +	unsigned short  retire_blk_tov;
> +	unsigned long	tov_in_jiffies;
> +
> +	/* timer to retire an outstanding block */
> +	struct timer_list retire_blk_timer;
> +};

You could align the member entries a bit more,
maybe move last_kactive_blk_num after retire_blk_tov

[]

> @@ -248,8 +322,11 @@ static void __packet_set_status(struct packet_sock *po, void *frame, int status)
>  		h.h2->tp_status = status;
>  		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
>  		break;
> +	case TPACKET_V3:
>  	default:
> -		pr_err("TPACKET version not supported\n");
> +		pr_err("<%s> TPACKET version not supported.Who is calling?\
> +			Dumping stack.\n", __func__);

whitespace defect because of line continuation.  Maybe just:
		WARN(1, "TPACKET version not supported\n");

>  		BUG();
>  	}
>  
> @@ -274,8 +351,11 @@ static int __packet_get_status(struct packet_sock *po, void *frame)
>  	case TPACKET_V2:
>  		flush_dcache_page(pgv_to_page(&h.h2->tp_status));
>  		return h.h2->tp_status;
> +	case TPACKET_V3:
>  	default:
> -		pr_err("TPACKET version not supported\n");
> +		pr_err("<%s> TPACKET version:%d not supported.\
> +			Dumping stack.\n", __func__, po->tp_version);
> +		dump_stack();

here too.

		WARN(1, "TPACKET version %d not supported\n", po->tp_version);

>  		BUG();
>  		return 0;
>  	}
[]
> +static void prb_open_block(struct kbdq_core *pkc1, struct block_desc *pbd1)
> +{
> +	pr_err("<%s> ERROR block:%p is NOT FREE status:%d\
> +			kactive_blk_num:%d\n",
> +			__func__, pbd1, BLOCK_STATUS(pbd1), pkc1->kactive_blk_num);
> +	dump_stack();
> +	BUG();

here too.  maybe just:
	WARN(1, "%s: ERROR block:%p is not free. status:%d kactive_blk_num:%d\n",
	     __func__, pbd1, BLOCK_STATUS(pbd1), pkc1->kactive_blk_num);

> +static inline void packet_increment_rx_head(struct packet_sock *po,
> +					    struct packet_ring_buffer *rb)
> +{
> +	switch (po->tp_version) {
> +	case TPACKET_V1:
> +	case TPACKET_V2:
> +		return packet_increment_head(rb);
> +	case TPACKET_V3:
> +	default:
> +		pr_err("<%s> TPACKET version:%d not supported.\
> +			Dumping stack.\n", __func__, po->tp_version);

whitespace, WARN(1, etc...


> @@ -2412,7 +3168,7 @@ out_free_pgvec:
>  	goto out;
>  }
>  
> -static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
> +static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
>  		int closing, int tx_ring)
>  {
>  	struct pgv *pg_vec = NULL;
> @@ -2421,7 +3177,17 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
>  	struct packet_ring_buffer *rb;
>  	struct sk_buff_head *rb_queue;
>  	__be16 num;
> -	int err;
> +	int err = -EINVAL;
> +	/* Added to avoid minimal code churn */
> +	struct tpacket_req *req = &req_u->req;
> +
> +	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
> +	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
> +		pr_err("<%s> Tx-ring is not supported on version:%d.\
> +			   Dumping stack.\n", __func__, po->tp_version);

whitespace, WARN(1, etc...




* Re: [af-packet 1/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
  2011-06-08  3:13 ` [af-packet 1/2] " Chetan Loke
@ 2011-06-08  4:35   ` Eric Dumazet
  2011-06-08 22:10     ` chetan loke
  2011-06-08 16:03   ` Stephen Hemminger
  1 sibling, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2011-06-08  4:35 UTC (permalink / raw)
  To: Chetan Loke; +Cc: netdev, davem, kaber, johann.baudy, Chetan Loke

On Tue, 2011-06-07 at 23:13 -0400, Chetan Loke wrote:
>  
> +struct tpacket3_hdr {
> +	__u32		tp_status;
> +	__u32		tp_len;
> +	__u32		tp_snaplen;
> +	__u16		tp_mac;
> +	__u16		tp_net;

> +	__u32		tp_sec;
> +	__u32		tp_nsec;
> +	__u16		tp_vlan_tci;

missing "__u16 tp_padding;" here

check:

http://git2.kernel.org/?p=linux/kernel/git/davem/net-2.6.git;a=commit;h=13fcb7bd322164c67926ffe272846d4860196dc6


> +	__u32		tp_next_offset;
> +};





* Re: [af-packet 1/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
  2011-06-08  3:13 ` [af-packet 1/2] " Chetan Loke
  2011-06-08  4:35   ` Eric Dumazet
@ 2011-06-08 16:03   ` Stephen Hemminger
  1 sibling, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2011-06-08 16:03 UTC (permalink / raw)
  To: Chetan Loke; +Cc: netdev, davem, eric.dumazet, kaber, johann.baudy, Chetan Loke

On Tue,  7 Jun 2011 23:13:05 -0400
Chetan Loke <loke.chetan@gmail.com> wrote:

> --- a/include/linux/if_packet.h
> +++ b/include/linux/if_packet.h
> @@ -24,7 +24,7 @@ struct sockaddr_ll {
>  #define PACKET_HOST		0		/* To us		*/
>  #define PACKET_BROADCAST	1		/* To all		*/
>  #define PACKET_MULTICAST	2		/* To group		*/
> -#define PACKET_OTHERHOST	3		/* To someone else 	*/
> +#define PACKET_OTHERHOST	3		/* To someone else	*/

Useless whitespace change in patch. It makes sense to review the resulting
diff and avoid this kind of stuff creeping in.


* Re: [af-packet 2/2] Enhance af-packet to provide (near zero)lossless packet capture functionality
  2011-06-08  4:18   ` Joe Perches
@ 2011-06-08 22:01     ` chetan loke
  0 siblings, 0 replies; 9+ messages in thread
From: chetan loke @ 2011-06-08 22:01 UTC (permalink / raw)
  To: Joe Perches; +Cc: netdev, davem, eric.dumazet, kaber, johann.baudy, Chetan Loke

On Wed, Jun 8, 2011 at 12:18 AM, Joe Perches <joe@perches.com> wrote:
> On Tue, 2011-06-07 at 23:13 -0400, Chetan Loke wrote:
>> Signed-off-by: Chetan Loke <lokec@ccs.neu.edu>
>
> just trivia:
>
>> ---
>>  net/packet/af_packet.c |  878 +++++++++++++++++++++++++++++++++++++++++++++---
> []
>> +/* kbdq - kernel block descriptor queue */
>> +struct kbdq_core {
>> +     struct pgv      *pkbdq;
>> +     unsigned int    hdrlen;
>> +     unsigned char   reset_pending_on_curr_blk;
>> +     unsigned char   delete_blk_timer;
>> +     unsigned short  kactive_blk_num;
>> +     unsigned short  hole_bytes_size;
>> +     char            *pkblk_start;
>> +     char            *pkblk_end;
>> +     int             kblk_size;
>> +     unsigned int    knum_blocks;
>> +     uint64_t        knxt_seq_num;
>> +     char            *prev;
>> +     char            *nxt_offset;
>> +
>> +     /* last_kactive_blk_num:
>> +      * trick to see if user-space has caught up
>> +      * in order to avoid refreshing timer when every single pkt arrives.
>> +      */
>> +     unsigned short  last_kactive_blk_num;
>> +
>> +     atomic_t        blk_fill_in_prog;
>> +
>> +     /* Default is set to 8ms */
>> +#define DEFAULT_PRB_RETIRE_TOV       (8)
>> +
>> +     unsigned short  retire_blk_tov;
>> +     unsigned long   tov_in_jiffies;
>> +
>> +     /* timer to retire an outstanding block */
>> +     struct timer_list retire_blk_timer;
>> +};
>
> You could align the member entries a bit more,
> maybe move last_kactive_blk_num after retire_blk_tov
>
> []
>
>> @@ -248,8 +322,11 @@ static void __packet_set_status(struct packet_sock *po, void *frame, int status)
>>               h.h2->tp_status = status;
>>               flush_dcache_page(pgv_to_page(&h.h2->tp_status));
>>               break;
>> +     case TPACKET_V3:
>>       default:
>> -             pr_err("TPACKET version not supported\n");
>> +             pr_err("<%s> TPACKET version not supported.Who is calling?\
>> +                     Dumping stack.\n", __func__);
>
> whitespace defect because of line continuation.  Maybe just:
>                WARN(1, "TPACKET version not supported\n");
>
>>               BUG();
>>       }
>>
>> @@ -274,8 +351,11 @@ static int __packet_get_status(struct packet_sock *po, void *frame)
>>       case TPACKET_V2:
>>               flush_dcache_page(pgv_to_page(&h.h2->tp_status));
>>               return h.h2->tp_status;
>> +     case TPACKET_V3:
>>       default:
>> -             pr_err("TPACKET version not supported\n");
>> +             pr_err("<%s> TPACKET version:%d not supported.\
>> +                     Dumping stack.\n", __func__, po->tp_version);
>> +             dump_stack();
>
> here too.
>
>                WARN(1, "TPACKET version %d not supported\n", po->tp_version);
>
>>               BUG();
>>               return 0;
>>       }
> []
>> +static void prb_open_block(struct kbdq_core *pkc1, struct block_desc *pbd1)
>> +{
>> +     pr_err("<%s> ERROR block:%p is NOT FREE status:%d\
>> +                     kactive_blk_num:%d\n",
>> +                     __func__, pbd1, BLOCK_STATUS(pbd1), pkc1->kactive_blk_num);
>> +     dump_stack();
>> +     BUG();
>
> here too.  maybe just:
>        WARN(1, "%s: ERROR block:%p is not free.  status: %s kactive_blk_num:%d\n"
>             __func__, pbd1, BLOCK_STATUS(pbd1), pkc1->kactive_blk_num);
>
>> +static inline void packet_increment_rx_head(struct packet_sock *po,
>> +                                         struct packet_ring_buffer *rb)
>> +{
>> +     switch (po->tp_version) {
>> +     case TPACKET_V1:
>> +     case TPACKET_V2:
>> +             return packet_increment_head(rb);
>> +     case TPACKET_V3:
>> +     default:
>> +             pr_err("<%s> TPACKET version:%d not supported.\
>> +                     Dumping stack.\n", __func__, po->tp_version);
>
> whitespace, WARN(1, etc...
>
>
>> @@ -2412,7 +3168,7 @@ out_free_pgvec:
>>       goto out;
>>  }
>>
>> -static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
>> +static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
>>               int closing, int tx_ring)
>>  {
>>       struct pgv *pg_vec = NULL;
>> @@ -2421,7 +3177,17 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
>>       struct packet_ring_buffer *rb;
>>       struct sk_buff_head *rb_queue;
>>       __be16 num;
>> -     int err;
>> +     int err = -EINVAL;
>> +     /* Added to avoid minimal code churn */
>> +     struct tpacket_req *req = &req_u->req;
>> +
>> +     /* Opening a Tx-ring is NOT supported in TPACKET_V3 */
>> +     if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
>> +             pr_err("<%s> Tx-ring is not supported on version:%d.\
>> +                        Dumping stack.\n", __func__, po->tp_version);
>
> whitespace, WARN(1, etc...
>
>
>

Sure, will address all the comments (SOB, alignment, pr_err, etc.) in
the next iteration of the patch.

thanks
Chetan


* Re: [af-packet 1/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
  2011-06-08  4:35   ` Eric Dumazet
@ 2011-06-08 22:10     ` chetan loke
  2011-06-08 22:22       ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: chetan loke @ 2011-06-08 22:10 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev, davem, kaber, johann.baudy, Chetan Loke

On Wed, Jun 8, 2011 at 12:35 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Tue, 2011-06-07 at 23:13 -0400, Chetan Loke wrote:
>>
>> +struct tpacket3_hdr {
>> +     __u32           tp_status;
>> +     __u32           tp_len;
>> +     __u32           tp_snaplen;
>> +     __u16           tp_mac;
>> +     __u16           tp_net;
>
>> +     __u32           tp_sec;
>> +     __u32           tp_nsec;
>> +     __u16           tp_vlan_tci;
>
> missing "__u16 tp_padding;" here
>
> check :
>
> http://git2.kernel.org/?p=linux/kernel/git/davem/net-2.6.git;a=commit;h=13fcb7bd322164c67926ffe272846d4860196dc6


Eric, thanks for pointing that out. I will add the padding. But just
out of curiosity, how is the information being leaked in tpacket_rcv()?

If someone is capturing packets, then they have access to all the data
anyway. Also, tpacket_rcv() doesn't memset the frame element to zero
before calling skb_copy_bits(). And we would never want to memset
anyway.

thnx
Chetan Loke


* Re: [af-packet 1/2] Enhance af-packet to provide (near zero)lossless packet capture functionality.
  2011-06-08 22:10     ` chetan loke
@ 2011-06-08 22:22       ` Eric Dumazet
  0 siblings, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2011-06-08 22:22 UTC (permalink / raw)
  To: chetan loke; +Cc: netdev, davem, kaber, johann.baudy, Chetan Loke

On Wed, 2011-06-08 at 18:10 -0400, chetan loke wrote:

> Eric, thanks for pointing that. I will add the padding. But just out
> of curiosity, how is the information being leaked in tpacket_rcv()?
> 
> If someone is capturing packets then they have access to all the data
> anyways. Also, tpacket_rcv doesn't memset the frame-element to 'zero'
> before calling
> skb_copy_bits(). And we would never want to memset anyways.
> 

It's a security risk, leaking the contents of the kernel stack or
kernel memory.

The capability to capture packets does not mean "access to all of
kernel memory".

Clever attackers can exploit these kinds of leaks.

Better to make sure we don't have holes in structures copied to user
space (or mapped, in this case, but you never know ;)).
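
A minimal illustration of the hole in question (a hypothetical sketch
mirroring the tpacket3_hdr layout from patch 1/2; the struct name here
is made up to keep it self-contained):

#include <stdio.h>
#include <stddef.h>
#include <linux/types.h>

/* Mirrors the proposed struct tpacket3_hdr, without packing. */
struct tpacket3_hdr_sketch {
	__u32	tp_status;
	__u32	tp_len;
	__u32	tp_snaplen;
	__u16	tp_mac;
	__u16	tp_net;
	__u32	tp_sec;
	__u32	tp_nsec;
	__u16	tp_vlan_tci;
	/* The compiler inserts a 2-byte hole here so that the next
	 * __u32 is 4-byte aligned; tpacket_rcv() never writes those
	 * bytes, so stale data in the mapped ring can become
	 * user-visible. An explicit "__u16 tp_padding;" makes the
	 * hole explicit and clearable. */
	__u32	tp_next_offset;
};

int main(void)
{
	size_t end_vlan = offsetof(struct tpacket3_hdr_sketch, tp_vlan_tci)
			  + sizeof(__u16);
	size_t next_off = offsetof(struct tpacket3_hdr_sketch,
				   tp_next_offset);

	/* On common ABIs this prints 26 and 28: a 2-byte hole. */
	printf("tp_vlan_tci ends at %zu, tp_next_offset starts at %zu\n",
	       end_vlan, next_off);
	return 0;
}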




