* AVC Seqlock vs. RCU experiment results
@ 2004-10-11 15:16 Kylene J Hall
0 siblings, 0 replies; 2+ messages in thread
From: Kylene J Hall @ 2004-10-11 15:16 UTC (permalink / raw)
To: Linux Kernel ML(Eng)
[-- Attachment #1: Type: text/plain, Size: 5286 bytes --]
The following are the results from our experiment. Since, after many
iterations based on comments, the patch is still not showing significant
performance improvements, we will turn our efforts toward the RCU patch.
In the results section:
wselinux corresponds to booting with the parameter selinux=1
woselinux corresponds to booting with the parameter selinux=0
1way Results
Chatroom results (msg/sec) 4 iterations each default configuration
myseq+wselinux rcu+wselinux Vanilla+wselinux
491400 429184 194931
300075 449943 428724
466200 433839 324675
440528 448933 430107
================================================================
Unixbench results (index values)
                                        myseq+wselinux  rcu+wselinux  Vanilla+wselinux
Dhrystone 2 using register variables             307.2         309.5             307.2
Double-Precision Whetstone                       145.6         146.3             146.3
Execl Throughput                                 621.4         651.4             643.5
File Copy 1024 bufsize 2000 maxblocks            640.6         533.9             650.4
File Copy 256 bufsize 500 maxblocks              614.3         431.2             669.0
File Copy 4096 bufsize 8000 maxblocks            528.2         498.5             513.6
Pipe Throughput                                  524.1         410.5             568.7
Process Creation                                 991.1         956.5             962.3
Shell Scripts (8 concurrent)                     485.5         501.7             495.0
System Call Overhead                             713.9         725.9             721.8
================================================================
Overall index                                    503.2         466.1             513.1
4way Results
Chatroom results (msg/sec)
myseq+woselinux  myseq+wselinux  rcu+woselinux  rcu+wselinux  Vanilla+woselinux  Vanilla+wselinux
466744 275103 378787 310318 455062 229885
455062 318471 454029 389863 423280 209205
437636 314712 431965 326264 372786 201005
431499 374181 418410 309837 377714 191204
========================================================================
Unixbench results (index values)
                                        myseq+     myseq+     rcu+       rcu+      Vanilla+   Vanilla+
                                        woselinux  wselinux   woselinux  wselinux  woselinux  wselinux
Dhrystone 2 using register variables        131.5      131.5      131.4     131.5      131.1     131.3
Double-Precision Whetstone                   82.7       82.7       83.1      83.0       82.7      83.1
Execl Throughput                            311.8      285.8      309.0     297.3      309.5     296.0
File Copy 1024 bufsize 2000 maxblocks       172.7      154.3      174.1     159.1      173.0     155.8
File Copy 256 bufsize 500 maxblocks         159.2      135.2      159.8     141.7      161.0     138.0
File Copy 4096 bufsize 8000 maxblocks       209.4      195.5      209.4     201.5      209.4     200.5
Pipe Throughput                             240.0      162.8      238.0     195.5      240.3     187.3
Process Creation                            343.4      332.4      339.8     340.8      342.4     337.5
Shell Scripts (8 concurrent)                791.7      767.2      793.3     769.5      792.2     765.0
System Call Overhead                        375.9      377.6      374.9     373.4      375.7     377.3
================================================================
Overall index                               233.4      213.7      233.1     221.4      233.4     219.0
>> 1) the number of AVC entries is 3 times as big, increasing
>> the chance of L1/L2 cache misses, but decreasing the chance
>> of an AVC miss - dunno if this is good or bad
>We increased this number to attempt to do just that: produce fewer AVC
>misses. We saw about a 1-point improvement on the Unixbench benchmark.
>> 2) avc_claim_node doesn't check any more whether the avc
>> entry got recently used - dunno if this is good or bad
>My initial attempts at keeping this functionality seemed racy, but I
>think I could try:
> int claim_this_node = -1;
> take read_lock
> while ( walking list ) {
>     if ( current_node->used == UNUSED ) {
>         claim_this_node = current_node;
>         break;
>     }
>     current_node->used = UNUSED;
> }
> if ( claim_this_node == -1 )
>     claim_this_node = round robin approach
>
> . . . continue with current claim-node work
>
>
I have now tried this; the results are included above, with no real
noticeable improvement.
>> 3) the av_decision isn't part of the avc_entry any more
>> (again, dunno if this is good or bad)
>Here was our logic; correct me if I'm wrong. We assumed the cache line
>size to be 128 bytes. Since we wanted each lock to protect a cache line,
>or a couple of cache lines, forming a group of entries with the same hash
>value, we needed to decrease the size of the entries in order to fit more
>into this region. Thus we used a pointer to the av_decision structure,
>which accounts for most of the data stored per entry. This allowed us to
>pack the lookup data, which is accessed most frequently, into the
>adjacent cache lines. Make sense? Other suggestions?
>>cheers,
>>Rik
Kylene J. Hall
IBM Linux Technology Center
[-- Attachment #2: seq.patch --]
[-- Type: application/octet-stream, Size: 26805 bytes --]
--- linux-2.6.8.1/security/selinux/avc.c 2004-08-14 05:55:48.000000000 -0500
+++ linux-2.6.8.1-myseq/security/selinux/avc.c 2004-10-05 16:21:47.000000000 -0500
@@ -28,80 +28,85 @@
#include "avc.h"
#include "avc_ss.h"
#ifdef CONFIG_AUDIT
#include "class_to_string.h"
#endif
#include "common_perm_to_string.h"
#include "av_inherit.h"
#include "av_perm_to_string.h"
#include "objsec.h"
-#define AVC_CACHE_SLOTS 512
+#define AVC_CACHE_SLOTS (3*512)
#define AVC_CACHE_MAXNODES 410
+#define AVC_CACHE_SLOTS_PER_LOCK 6
+#define AVC_CACHE_BLOCKS (AVC_CACHE_SLOTS/AVC_CACHE_SLOTS_PER_LOCK)
+
+#define SLOT_INVALID 0
+#define SLOT_NOT_RECENTLY_USED 1
+#define SLOT_RECENTLY_USED 2
struct avc_entry {
u32 ssid;
u32 tsid;
u16 tclass;
- struct av_decision avd;
- int used; /* used recently */
+ struct av_decision *avd;
+ char used; /* active node */
};
struct avc_node {
- struct avc_entry ae;
- struct avc_node *next;
+ seqlock_t lock;
+ short next_evicted;
+ struct avc_entry ae[AVC_CACHE_SLOTS_PER_LOCK];
};
struct avc_cache {
- struct avc_node *slots[AVC_CACHE_SLOTS];
- u32 lru_hint; /* LRU hint for reclaim scan */
- u32 active_nodes;
- u32 latest_notif; /* latest revocation notification */
+ struct avc_node blocks[AVC_CACHE_BLOCKS];
+ atomic_t active_nodes; //???
+ atomic_t latest_notif; /* latest revocation notification */
};
struct avc_callback_node {
int (*callback) (u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms,
u32 *out_retained);
u32 events;
u32 ssid;
u32 tsid;
u16 tclass;
u32 perms;
struct avc_callback_node *next;
};
-static spinlock_t avc_lock = SPIN_LOCK_UNLOCKED;
-static struct avc_node *avc_node_freelist;
+//static spinlock_t avc_lock = SPIN_LOCK_UNLOCKED;
+//static struct avc_node *avc_node_freelist;
static struct avc_cache avc_cache;
static unsigned avc_cache_stats[AVC_NSTATS];
static struct avc_callback_node *avc_callbacks;
static inline int avc_hash(u32 ssid, u32 tsid, u16 tclass)
{
- return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_SLOTS - 1);
+ return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_BLOCKS - 1);
}
#ifdef AVC_CACHE_STATS
static inline void avc_cache_stats_incr(int type)
{
avc_cache_stats[type]++;
}
static inline void avc_cache_stats_add(int type, unsigned val)
{
avc_cache_stats[type] += val;
}
#else
static inline void avc_cache_stats_incr(int type)
{ }
-
static inline void avc_cache_stats_add(int type, unsigned val)
{ }
#endif
/**
* avc_dump_av - Display an access vector in human-readable form.
* @tclass: target security class
* @av: access vector
*/
void avc_dump_av(struct audit_buffer *ab, u16 tclass, u32 av)
@@ -181,207 +186,262 @@
audit_log_format(ab, " tclass=%s", class_to_string[tclass]);
}
/**
* avc_init - Initialize the AVC.
*
* Initialize the access vector cache.
*/
void __init avc_init(void)
{
- struct avc_node *new;
- int i;
- for (i = 0; i < AVC_CACHE_MAXNODES; i++) {
- new = kmalloc(sizeof(*new), GFP_ATOMIC);
- if (!new) {
- printk(KERN_WARNING "avc: only able to allocate "
- "%d entries\n", i);
+ int i, j;
+ struct av_decision* avd;
+
+ for ( i = 0; i < AVC_CACHE_BLOCKS; i++ ) {
+ seqlock_init(&avc_cache.blocks[i].lock);
+ for ( j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ avd = kmalloc(sizeof(*avd), GFP_ATOMIC );
+ if ( !avd ) {
+ printk(KERN_WARNING
+ "avc: only able to allocate some entries\n");
break;
}
- memset(new, 0, sizeof(*new));
- new->next = avc_node_freelist;
- avc_node_freelist = new;
+ memset(avd, 0, sizeof(*avd));
+ avc_cache.blocks[i].ae[j].avd = avd;
+ }
}
audit_log(current->audit_context, "AVC INITIALIZED\n");
}
#if 0
static void avc_hash_eval(char *tag)
{
int i, chain_len, max_chain_len, slots_used;
struct avc_node *node;
unsigned long flags;
+
spin_lock_irqsave(&avc_lock,flags);
slots_used = 0;
max_chain_len = 0;
for (i = 0; i < AVC_CACHE_SLOTS; i++) {
node = avc_cache.slots[i];
if (node) {
slots_used++;
chain_len = 0;
while (node) {
chain_len++;
node = node->next;
}
if (chain_len > max_chain_len)
max_chain_len = chain_len;
}
}
spin_unlock_irqrestore(&avc_lock,flags);
- printk(KERN_INFO "\n");
- printk(KERN_INFO "%s avc: %d entries and %d/%d buckets used, longest "
- "chain length %d\n", tag, avc_cache.active_nodes, slots_used,
- AVC_CACHE_SLOTS, max_chain_len);
}
#else
static inline void avc_hash_eval(char *tag)
{ }
#endif
-static inline struct avc_node *avc_reclaim_node(void)
-{
- struct avc_node *prev, *cur;
- int hvalue, try;
+static inline struct avc_entry *avc_claim_node(u32 ssid,
+ u32 tsid, u16 tclass,
+ struct av_decision *avd )
+{
+ struct avc_entry* new = NULL;
+ struct avc_node* block;
+ int hvalue, i, seq, claim_this_node, mark_unused=0;
+ unsigned long flags;
- hvalue = avc_cache.lru_hint;
- for (try = 0; try < 2; try++) {
- do {
- prev = NULL;
- cur = avc_cache.slots[hvalue];
- while (cur) {
- if (!cur->ae.used)
- goto found;
+ hvalue = avc_hash(ssid, tsid, tclass );
+ block = &avc_cache.blocks[ hvalue ];
- cur->ae.used = 0;
+ do {
+ //reset
+ claim_this_node = -1;
+ mark_unused = 0;
+
+ seq = read_seqbegin_irqsave( &(block->lock), flags );
+
+ if ( block->ae[block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK].used
+ != SLOT_RECENTLY_USED ) {
+ claim_this_node = block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK;
+ } else {
+ for ( i = 0; i < AVC_CACHE_SLOTS_PER_LOCK; i++ ) {
+ if( block->ae[i].used != SLOT_RECENTLY_USED ) {
+ claim_this_node = i;
+ break;
+ } else {
+ /* Should mark entry NOT_RECENTLY_USED
+ will do below while holding write lock
+ because of the following flag:
+ */
+ mark_unused=1;
+ }
+ }
- prev = cur;
- cur = cur->next;
}
- hvalue = (hvalue + 1) & (AVC_CACHE_SLOTS - 1);
- } while (hvalue != avc_cache.lru_hint);
+ } while ( read_seqretry_irqrestore( &(block->lock), seq, flags ) );
+
+ if ( claim_this_node < 0 ) {
+ claim_this_node = block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK;
}
- panic("avc_reclaim_node");
+ write_seqlock_irqsave( &(block->lock), flags );
-found:
- avc_cache.lru_hint = hvalue;
+ new = &( block->ae[claim_this_node]);
- if (prev == NULL)
- avc_cache.slots[hvalue] = cur->next;
- else
- prev->next = cur->next;
+ //Updates that really belong above but placed here for locking efficiency
+ if ( block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK == claim_this_node )
+ block->next_evicted++;
- return cur;
-}
+ if ( mark_unused )
+ for ( i = 0; i < claim_this_node; i++ )
+ block->ae[i].used = SLOT_NOT_RECENTLY_USED;
-static inline struct avc_node *avc_claim_node(u32 ssid,
- u32 tsid, u16 tclass)
-{
- struct avc_node *new;
- int hvalue;
+ new->ssid = ssid;
+ new->tsid = tsid;
+ new->tclass = tclass;
+ new->used = SLOT_RECENTLY_USED;
- hvalue = avc_hash(ssid, tsid, tclass);
- if (avc_node_freelist) {
- new = avc_node_freelist;
- avc_node_freelist = avc_node_freelist->next;
- avc_cache.active_nodes++;
- } else {
- new = avc_reclaim_node();
- if (!new)
- goto out;
- }
+ new->avd->allowed = avd->allowed;
+ new->avd->decided = avd->decided;
+ new->avd->auditallow = avd->auditallow;
+ new->avd->auditdeny = avd->auditdeny;
+ new->avd->seqno = avd->seqno;
- new->ae.used = 1;
- new->ae.ssid = ssid;
- new->ae.tsid = tsid;
- new->ae.tclass = tclass;
- new->next = avc_cache.slots[hvalue];
- avc_cache.slots[hvalue] = new;
+ write_sequnlock_irqrestore( &(block->lock), flags );
-out:
return new;
}
-static inline struct avc_node *avc_search_node(u32 ssid, u32 tsid,
+static inline struct avc_entry *avc_search_node(u32 ssid, u32 tsid,
u16 tclass, int *probes)
{
- struct avc_node *cur;
- int hvalue;
+ struct avc_node *block;
+ struct avc_entry *ae, *ret=NULL;
+ int hvalue, seq;
int tprobes = 1;
+ int i;
+ unsigned long flags;
+ char use = 0;
hvalue = avc_hash(ssid, tsid, tclass);
- cur = avc_cache.slots[hvalue];
- while (cur != NULL &&
- (ssid != cur->ae.ssid ||
- tclass != cur->ae.tclass ||
- tsid != cur->ae.tsid)) {
+ block = &avc_cache.blocks[hvalue];
+ do {
+ seq = read_seqbegin_irqsave( &(block->lock), flags );
+ for ( i=0; i < AVC_CACHE_SLOTS_PER_LOCK; i++ ) {
+ ae = &(block->ae[i]);
+ if ( ssid != ae->ssid ||
+ tclass != ae->tclass ||
+ tsid != ae->tsid ||
+ ae->used == SLOT_INVALID )
tprobes++;
- cur = cur->next;
+ else {
+ ret = ae;
+ use = ret->used;
+ break;
+ }
}
+ } while ( read_seqretry_irqrestore( &(block->lock), seq, flags ) );
- if (cur == NULL) {
+ if (ret == NULL) {
/* cache miss */
goto out;
}
+ if ( use != SLOT_RECENTLY_USED ) {
+ write_seqlock_irqsave( &(block->lock), flags );
+ if ( ret->ssid == ssid &&
+ ret->tsid == tsid &&
+ ret->tclass == tclass )
+ ret->used = SLOT_RECENTLY_USED;
+ else {
+ ret=NULL;
+ }
+ write_sequnlock_irqrestore( &(block->lock), flags );
+ }
+
/* cache hit */
- if (probes)
+ if (ret && probes)
*probes = tprobes;
-
- cur->ae.used = 1;
-
-out:
- return cur;
+ out:
+ return ret;
}
/**
* avc_lookup - Look up an AVC entry.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @requested: requested permissions, interpreted based on @tclass
* @aeref: AVC entry reference
*
* Look up an AVC entry that is valid for the
* @requested permissions between the SID pair
* (@ssid, @tsid), interpreting the permissions
* based on @tclass. If a valid AVC entry exists,
* then this function updates @aeref to refer to the
* entry and returns %0. Otherwise, this function
* returns -%ENOENT.
*/
-int avc_lookup(u32 ssid, u32 tsid, u16 tclass,
- u32 requested, struct avc_entry_ref *aeref)
+struct avc_entry* avc_lookup( u32 ssid, u32 tsid, u16 tclass, u32 requested, struct av_decision* avd )
+
{
- struct avc_node *node;
- int probes, rc = 0;
+ struct avc_entry *node = NULL;
+ int seq, probes, hvalue;
+ unsigned long flags;
+ u32 denied;
+ int i = 0;
avc_cache_stats_incr(AVC_CAV_LOOKUPS);
- node = avc_search_node(ssid, tsid, tclass,&probes);
+ hvalue = avc_hash( ssid, tsid, tclass );
+ node = avc_search_node(ssid, tsid, tclass, &probes);
+ if ( node ) {
+ do {
+ seq = read_seqbegin_irqsave( &( avc_cache.blocks[hvalue].lock ), flags );
+ i++;
+ if ( node->ssid == ssid && node->tsid == tsid && node->tclass == tclass ) {
+ memcpy( avd, node->avd, sizeof(*avd) );
+ denied = requested & ~(avd->allowed);
+ } else { /* node was removed from the cache */
+ node=NULL;
+ break;
+ }
+ } while( read_seqretry_irqrestore( &( avc_cache.blocks[hvalue].lock ), seq, flags ) );
+ //printk( KERN_DEBUG "To copy avd, looped %d times.\n", i );
+ }
- if (node && ((node->ae.avd.decided & requested) == requested)) {
+ if ( node && ( !requested || denied ) && !selinux_enforcing && avd->allowed != (avd->allowed|requested) ) {
+ write_seqlock_irqsave( &(avc_cache.blocks[hvalue].lock), flags );
+ if ( node->ssid == ssid && node->tsid == tsid && node->tclass == tclass )
+ node->avd->allowed |= requested;
+ else /* node was removed from the cache */
+ node=NULL;
+ write_sequnlock_irqrestore( &(avc_cache.blocks[hvalue].lock), flags );
+ }
+
+ if (node && (( avd->decided & requested ) == requested)) {
avc_cache_stats_incr(AVC_CAV_HITS);
avc_cache_stats_add(AVC_CAV_PROBES,probes);
- aeref->ae = &node->ae;
goto out;
}
avc_cache_stats_incr(AVC_CAV_MISSES);
- rc = -ENOENT;
-out:
- return rc;
+
+ out:
+ return node;
}
/**
* avc_insert - Insert an AVC entry.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @ae: AVC entry
* @aeref: AVC entry reference
*
@@ -389,58 +449,52 @@
* (@ssid, @tsid) and class @tclass.
* The access vectors and the sequence number are
* normally provided by the security server in
* response to a security_compute_av() call. If the
* sequence number @ae->avd.seqno is not less than the latest
* revocation notification, then the function copies
* the access vectors into a cache entry, updates
* @aeref to refer to the entry, and returns %0.
* Otherwise, this function returns -%EAGAIN.
*/
-int avc_insert(u32 ssid, u32 tsid, u16 tclass,
- struct avc_entry *ae, struct avc_entry_ref *aeref)
+struct avc_entry* avc_insert(u32 ssid, u32 tsid, u16 tclass, u32 requested,
+ struct av_decision *avd )
{
- struct avc_node *node;
- int rc = 0;
+ struct avc_entry *entry = NULL;
+ int denied;
- if (ae->avd.seqno < avc_cache.latest_notif) {
+ if (avd->seqno < atomic_read(&avc_cache.latest_notif)) {
printk(KERN_WARNING "avc: seqno %d < latest_notif %d\n",
- ae->avd.seqno, avc_cache.latest_notif);
- rc = -EAGAIN;
+ avd->seqno, atomic_read(&avc_cache.latest_notif));
goto out;
}
- node = avc_claim_node(ssid, tsid, tclass);
- if (!node) {
- rc = -ENOMEM;
- goto out;
- }
+ denied = requested & ~( avd->allowed );
+ if ( denied && !selinux_enforcing )
+ avd->allowed |= requested;
- node->ae.avd.allowed = ae->avd.allowed;
- node->ae.avd.decided = ae->avd.decided;
- node->ae.avd.auditallow = ae->avd.auditallow;
- node->ae.avd.auditdeny = ae->avd.auditdeny;
- node->ae.avd.seqno = ae->avd.seqno;
- aeref->ae = &node->ae;
-out:
- return rc;
+ entry = avc_claim_node(ssid, tsid, tclass, avd);
+
+ out:
+ return entry;
}
static inline void avc_print_ipv6_addr(struct audit_buffer *ab,
struct in6_addr *addr, u16 port,
char *name1, char *name2)
{
if (!ipv6_addr_any(addr))
audit_log_format(ab, " %s=%04x:%04x:%04x:%04x:%04x:"
"%04x:%04x:%04x", name1, NIP6(*addr));
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
+
}
static inline void avc_print_ipv4_addr(struct audit_buffer *ab, u32 addr,
u16 port, char *name1, char *name2)
{
if (addr)
audit_log_format(ab, " %s=%d.%d.%d.%d", name1, NIPQUAD(addr));
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
}
@@ -670,96 +724,103 @@
goto out;
}
c->callback = callback;
c->events = events;
c->ssid = ssid;
c->tsid = tsid;
c->perms = perms;
c->next = avc_callbacks;
avc_callbacks = c;
-out:
+ out:
return rc;
}
static inline int avc_sidcmp(u32 x, u32 y)
{
return (x == y || x == SECSID_WILD || y == SECSID_WILD);
}
-static inline void avc_update_node(u32 event, struct avc_node *node, u32 perms)
+/* hold write lock when calling */
+static inline void avc_update_node(u32 event, struct avc_entry * node, u32 perms)
{
switch (event) {
case AVC_CALLBACK_GRANT:
- node->ae.avd.allowed |= perms;
+ node->avd->allowed |= perms;
break;
case AVC_CALLBACK_TRY_REVOKE:
case AVC_CALLBACK_REVOKE:
- node->ae.avd.allowed &= ~perms;
+ node->avd->allowed &= ~perms;
break;
case AVC_CALLBACK_AUDITALLOW_ENABLE:
- node->ae.avd.auditallow |= perms;
+ node->avd->auditallow |= perms;
break;
case AVC_CALLBACK_AUDITALLOW_DISABLE:
- node->ae.avd.auditallow &= ~perms;
+ node->avd->auditallow &= ~perms;
break;
case AVC_CALLBACK_AUDITDENY_ENABLE:
- node->ae.avd.auditdeny |= perms;
+ node->avd->auditdeny |= perms;
break;
case AVC_CALLBACK_AUDITDENY_DISABLE:
- node->ae.avd.auditdeny &= ~perms;
+ node->avd->auditdeny &= ~perms;
break;
}
}
static int avc_update_cache(u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms)
{
struct avc_node *node;
- int i;
+ struct avc_entry *entry;
+ int i, j;
unsigned long flags;
- spin_lock_irqsave(&avc_lock,flags);
-
if (ssid == SECSID_WILD || tsid == SECSID_WILD) {
/* apply to all matching nodes */
- for (i = 0; i < AVC_CACHE_SLOTS; i++) {
- for (node = avc_cache.slots[i]; node;
- node = node->next) {
- if (avc_sidcmp(ssid, node->ae.ssid) &&
- avc_sidcmp(tsid, node->ae.tsid) &&
- tclass == node->ae.tclass) {
- avc_update_node(event,node,perms);
+ for (i = 0; i < AVC_CACHE_BLOCKS; i++) {
+ node = &(avc_cache.blocks[i]);
+ for (j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ entry = &(node->ae[j]);
+ if (avc_sidcmp(ssid, entry->ssid) &&
+ avc_sidcmp(tsid, entry->tsid) &&
+ tclass == entry->tclass) {
+ write_seqlock_irqsave( &(node->lock), flags);
+ if (avc_sidcmp(ssid, entry->ssid) &&
+ avc_sidcmp(tsid, entry->tsid) &&
+ tclass == entry->tclass) {
+ avc_update_node(event,entry,perms);
+ }
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
}
}
} else {
/* apply to one node */
- node = avc_search_node(ssid, tsid, tclass, NULL);
- if (node) {
- avc_update_node(event,node,perms);
+ entry = avc_search_node(ssid, tsid, tclass, NULL);
+ if (entry) {
+ node = &(avc_cache.blocks[ avc_hash( ssid,tsid,tclass) ] );
+ write_seqlock_irqsave( &(node->lock), flags);
+ if ( entry->ssid == ssid && entry->tsid == tsid && entry->tclass == tclass )
+ avc_update_node(event,entry,perms);
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
}
-
- spin_unlock_irqrestore(&avc_lock,flags);
-
return 0;
}
static int avc_control(u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms,
u32 seqno, u32 *out_retained)
{
struct avc_callback_node *c;
u32 tretained = 0, cretained = 0;
int rc = 0;
- unsigned long flags;
/*
* try_revoke only removes permissions from the cache
* state if they are not retained by the object manager.
* Hence, try_revoke must wait until after the callbacks have
* been invoked to update the cache state.
*/
if (event != AVC_CALLBACK_TRY_REVOKE)
avc_update_cache(event,ssid,tsid,tclass,perms);
@@ -780,26 +841,24 @@
}
}
if (event == AVC_CALLBACK_TRY_REVOKE) {
/* revoke any unretained permissions */
perms &= ~tretained;
avc_update_cache(event,ssid,tsid,tclass,perms);
*out_retained = tretained;
}
- spin_lock_irqsave(&avc_lock,flags);
- if (seqno > avc_cache.latest_notif)
- avc_cache.latest_notif = seqno;
- spin_unlock_irqrestore(&avc_lock,flags);
+ if (seqno > atomic_read(&avc_cache.latest_notif))
+ atomic_set(&avc_cache.latest_notif, seqno);
-out:
+ out:
return rc;
}
/**
* avc_ss_grant - Grant previously denied permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
@@ -820,94 +879,86 @@
* @seqno: policy sequence number
* @out_retained: subset of @perms that are retained
*
* Try to revoke previously granted permissions, but
* only if they are not retained as migrated permissions.
* Return the subset of permissions that are retained via @out_retained.
*/
int avc_ss_try_revoke(u32 ssid, u32 tsid, u16 tclass,
u32 perms, u32 seqno, u32 *out_retained)
{
+
return avc_control(AVC_CALLBACK_TRY_REVOKE,
ssid, tsid, tclass, perms, seqno, out_retained);
}
/**
* avc_ss_revoke - Revoke previously granted permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
*
* Revoke previously granted permissions, even if
* they are retained as migrated permissions.
*/
int avc_ss_revoke(u32 ssid, u32 tsid, u16 tclass,
u32 perms, u32 seqno)
{
+
return avc_control(AVC_CALLBACK_REVOKE,
ssid, tsid, tclass, perms, seqno, NULL);
}
/**
* avc_ss_reset - Flush the cache and revalidate migrated permissions.
* @seqno: policy sequence number
*/
int avc_ss_reset(u32 seqno)
{
struct avc_callback_node *c;
- int i, rc = 0;
- struct avc_node *node, *tmp;
+ int i, j, rc = 0;
+ struct avc_node *node;
+ struct avc_entry *entry;
unsigned long flags;
- avc_hash_eval("reset");
- spin_lock_irqsave(&avc_lock,flags);
+ avc_hash_eval("reset");
- for (i = 0; i < AVC_CACHE_SLOTS; i++) {
- node = avc_cache.slots[i];
- while (node) {
- tmp = node;
- node = node->next;
- tmp->ae.ssid = tmp->ae.tsid = SECSID_NULL;
- tmp->ae.tclass = SECCLASS_NULL;
- tmp->ae.avd.allowed = tmp->ae.avd.decided = 0;
- tmp->ae.avd.auditallow = tmp->ae.avd.auditdeny = 0;
- tmp->ae.used = 0;
- tmp->next = avc_node_freelist;
- avc_node_freelist = tmp;
- avc_cache.active_nodes--;
+ for (i = 0; i < AVC_CACHE_BLOCKS; i++) {
+ node = &(avc_cache.blocks[i]);
+ write_seqlock_irqsave( &(node->lock), flags);
+ for ( j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ entry = &(node->ae[j]);
+ entry->used = SLOT_INVALID;
+ entry->ssid = entry->tsid = SECSID_NULL;
+ entry->tclass = SECCLASS_NULL;
+ entry->avd->allowed = entry->avd->decided = 0;
+ entry->avd->auditallow = entry->avd->auditdeny = 0;
}
- avc_cache.slots[i] = NULL;
+ node->next_evicted = 0;
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
- avc_cache.lru_hint = 0;
-
- spin_unlock_irqrestore(&avc_lock,flags);
-
for (i = 0; i < AVC_NSTATS; i++)
avc_cache_stats[i] = 0;
-
for (c = avc_callbacks; c; c = c->next) {
if (c->events & AVC_CALLBACK_RESET) {
rc = c->callback(AVC_CALLBACK_RESET,
0, 0, 0, 0, NULL);
if (rc)
goto out;
}
}
-
- spin_lock_irqsave(&avc_lock,flags);
- if (seqno > avc_cache.latest_notif)
- avc_cache.latest_notif = seqno;
- spin_unlock_irqrestore(&avc_lock,flags);
-out:
+ if (seqno > atomic_read(&avc_cache.latest_notif))
+ atomic_set(&avc_cache.latest_notif, seqno);
+ out:
return rc;
}
/**
* avc_ss_set_auditallow - Enable or disable auditing of granted permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
@@ -962,85 +1013,44 @@
* -%EACCES if any permissions are denied, or another -errno upon
* other errors. This function is typically called by avc_has_perm(),
* but may also be called directly to separate permission checking from
* auditing, e.g. in cases where a lock must be held for the check but
* should be released for the auditing.
*/
int avc_has_perm_noaudit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct avc_entry_ref *aeref, struct av_decision *avd)
{
- struct avc_entry *ae;
+ struct av_decision dec, *tdec;
int rc = 0;
- unsigned long flags;
- struct avc_entry entry;
- u32 denied;
- struct avc_entry_ref ref;
+ //unsigned long flags;
+ struct avc_entry* entry;
+ //u32 denied;
+ //int seq, hvalue;
- if (!aeref) {
- avc_entry_ref_init(&ref);
- aeref = &ref;
- }
+ if (avd)
+ tdec=avd;
+ else
+ tdec=&dec;
- spin_lock_irqsave(&avc_lock, flags);
avc_cache_stats_incr(AVC_ENTRY_LOOKUPS);
- ae = aeref->ae;
- if (ae) {
- if (ae->ssid == ssid &&
- ae->tsid == tsid &&
- ae->tclass == tclass &&
- ((ae->avd.decided & requested) == requested)) {
- avc_cache_stats_incr(AVC_ENTRY_HITS);
- ae->used = 1;
- } else {
- avc_cache_stats_incr(AVC_ENTRY_DISCARDS);
- ae = NULL;
- }
- }
-
- if (!ae) {
- avc_cache_stats_incr(AVC_ENTRY_MISSES);
- rc = avc_lookup(ssid, tsid, tclass, requested, aeref);
- if (rc) {
- spin_unlock_irqrestore(&avc_lock,flags);
- rc = security_compute_av(ssid,tsid,tclass,requested,&entry.avd);
+ entry = avc_lookup(ssid, tsid, tclass, requested, tdec);
+ if ( !entry ) {
+ rc = security_compute_av(ssid,tsid,tclass,requested,tdec);
if (rc)
goto out;
- spin_lock_irqsave(&avc_lock, flags);
- rc = avc_insert(ssid,tsid,tclass,&entry,aeref);
- if (rc) {
- spin_unlock_irqrestore(&avc_lock,flags);
- goto out;
- }
- }
- ae = aeref->ae;
- }
-
- if (avd)
- memcpy(avd, &ae->avd, sizeof(*avd));
-
- denied = requested & ~(ae->avd.allowed);
-
- if (!requested || denied) {
- if (selinux_enforcing) {
- spin_unlock_irqrestore(&avc_lock,flags);
- rc = -EACCES;
- goto out;
- } else {
- ae->avd.allowed |= requested;
- spin_unlock_irqrestore(&avc_lock,flags);
+ entry = avc_insert(ssid,tsid,tclass,requested,tdec);
+ if (!entry)
goto out;
}
- }
- spin_unlock_irqrestore(&avc_lock,flags);
-out:
+ out:
return rc;
}
/**
* avc_has_perm - Check permissions and perform any appropriate auditing.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @requested: requested permissions, interpreted based on @tclass
* @aeref: AVC entry reference
@@ -1057,12 +1067,13 @@
*/
int avc_has_perm(u32 ssid, u32 tsid, u16 tclass,
u32 requested, struct avc_entry_ref *aeref,
struct avc_audit_data *auditdata)
{
struct av_decision avd;
int rc;
rc = avc_has_perm_noaudit(ssid, tsid, tclass, requested, aeref, &avd);
avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata);
+
return rc;
}
--- linux-2.6.8.1/security/selinux/include/avc.h 2004-08-14 05:54:51.000000000 -0500
+++ linux-2.6.8.1-myseq/security/selinux/include/avc.h 2004-09-10 16:27:56.000000000 -0500
@@ -112,25 +112,26 @@
void avc_dump_av(struct audit_buffer *ab, u16 tclass, u32 av);
void avc_dump_query(struct audit_buffer *ab, u32 ssid, u32 tsid, u16 tclass);
void avc_dump_cache(struct audit_buffer *ab, char *tag);
/*
* AVC operations
*/
void __init avc_init(void);
-int avc_lookup(u32 ssid, u32 tsid, u16 tclass,
- u32 requested, struct avc_entry_ref *aeref);
+struct avc_entry* avc_lookup(u32 ssid, u32 tsid, u16 tclass,
+ u32 requested, struct av_decision* avd );
+
+struct avc_entry* avc_insert(u32 ssid, u32 tsid, u16 tclass, u32 requested,
+ struct av_decision* avd );
-int avc_insert(u32 ssid, u32 tsid, u16 tclass,
- struct avc_entry *ae, struct avc_entry_ref *out_aeref);
void avc_audit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct av_decision *avd, int result, struct avc_audit_data *auditdata);
int avc_has_perm_noaudit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct avc_entry_ref *aeref, struct av_decision *avd);
int avc_has_perm(u32 ssid, u32 tsid,
* AVC Seqlock vs. RCU experiment results
@ 2004-10-07 18:11 Kylene J Hall
0 siblings, 0 replies; 2+ messages in thread
From: Kylene J Hall @ 2004-10-07 18:11 UTC (permalink / raw)
To: Emily Ratliff, George Wilson, Serge E Hallyn, Niki Rahimi,
Trent Jaeger, Janak Desai, Doc Shankar, Kylene J Hall,
Jerone B Young, Paul E McKenney [IMAP],
Gerrit _ Huizenga, Stephen Smalley, Rik van Riel, James Morris,
SELinux-ML(Eng), Linux Kernel ML(Eng)
[-- Attachment #1.1: Type: text/plain, Size: 5756 bytes --]
The following is our the results from our experiment. Since after many
iterations based on comments the patch is still not showing significant
performance improvements we will turn our efforts towards looking at the
rcu patch.
In the results section
wselinux corresponds to booting with the parameter selinux=1
woselinux corresponds to booting with the parameter selinux=0
1way Results
Chatroom results (msg/sec) 4 iterations each default configuration
myseq+wselinux rcu+wselinux Vanilla+wselinux
491400 429184 194931
300075 449943 428724
466200 433839 324675
440528 448933 430107
================================================================
Unixbench results (index values)
myseq+wselinux rcu+wselinux
Vanilla+wselinux
Dhrystone 2 using register variables 307.2 309.5
307.2
Double-Precision Whetstone 145.6 146.3
146.3
Execl Throughput 621.4 651.4 643.5
File Copy 1024 bufsize 2000 maxblocks 640.6 533.9
650.4
File Copy 256 bufsize 500 maxblocks 614.3 431.2
669.0
File Copy 4096 bufsize 8000 maxblocks 528.2 498.5
513.6
Pipe Throughput 524.1 410.5 568.7
Process Creation 991.1 956.5 962.3
Shell Scripts (8 concurrent) 485.5 501.7
495.0
System Call Overhead 713.9 725.9
721.8
================================================================
503.2
466.1 513.1
4way Results
Chatroom results (msg/sec)
myseq+woselinux myseq+wselinux rcu+woselinux rcu+wselinux Vanilla+woselinux
Vanilla+wselinux
466744 275103 378787 310318 455062 229885
455062 318471 454029 389863 423280 209205
437636 314712 431965 326264 372786
201005
431499 374181 418410 309837 377714 191204
========================================================================
Unixbench results (index values)
                                        myseq+woselinux  myseq+wselinux  rcu+woselinux  rcu+wselinux  Vanilla+woselinux  Vanilla+wselinux
Dhrystone 2 using register variables    131.5            131.5           131.4          131.5         131.1              131.3
Double-Precision Whetstone              82.7             82.7            83.1           83.0          82.7               83.1
Execl Throughput                        311.8            285.8           309.0          297.3         309.5              296.0
File Copy 1024 bufsize 2000 maxblocks   172.7            154.3           174.1          159.1         173.0              155.8
File Copy 256 bufsize 500 maxblocks     159.2            135.2           159.8          141.7         161.0              138.0
File Copy 4096 bufsize 8000 maxblocks   209.4            195.5           209.4          201.5         209.4              200.5
Pipe Throughput                         240.0            162.8           238.0          195.5         240.3              187.3
Process Creation                        343.4            332.4           339.8          340.8         342.4              337.5
Shell Scripts (8 concurrent)            791.7            767.2           793.3          769.5         792.2              765.0
System Call Overhead                    375.9            377.6           374.9          373.4         375.7              377.3
================================================================
Final score                             233.4            213.7           233.1          221.4         233.4              219.0
>> 1) the number of AVC entries is 3 times as big, increasing
>> the chance of L1/L2 cache misses, but decreasing the chance
>> of an AVC miss - dunno if this is good or bad
>We increased this number in an attempt to do just that: produce fewer AVC
misses. We saw about a 1-point improvement on the Unixbench benchmark.
>> 2) avc_claim_node doesn't check any more whether the avc
>> entry got recently used - dunno if this is good or bad
>My initial attempts at keeping this functionality seemed racy, but
I think I could try:
> int claim_this_node = -1;
> take read_lock
> while ( walking list ) {
>     if ( current_node->used == UNUSED ) {
>         claim_this_node = current_node;
>         break;
>     }
>     current_node->used = UNUSED;
> }
> if ( claim_this_node == -1 )
>     claim_this_node = round-robin approach;
>
> . . . continue with current claim-node work
We have now tried this; the results are included above, with no real noticeable improvement.
>> 3) the av_decision isn't part of the avc_entry any more
>> (again, dunno if this is good or bad)
>Here was our logic; correct me if I'm wrong. We assumed the cacheline size
>to be 128 bytes. Since we wanted each lock to protect a cacheline, or a
>couple of cachelines holding a group of entries with the same hash value,
>we needed to decrease the size of the entries in order to fit more into
>this region. Thus we used a pointer to the av_decision structure, which
>accounts for most of the data stored per entry. This allowed us to pack
>the lookup data, which is accessed most frequently, into the adjacent
>cachelines. Make sense? Other suggestions?
>>cheers,
>>Rik
(See attached file: seq.patch)
Kylene J. Hall
IBM Linux Technology Center
[-- Attachment #1.2: Type: text/html, Size: 6521 bytes --]
[-- Attachment #2: seq.patch --]
[-- Type: application/octet-stream, Size: 26805 bytes --]
--- linux-2.6.8.1/security/selinux/avc.c 2004-08-14 05:55:48.000000000 -0500
+++ linux-2.6.8.1-myseq/security/selinux/avc.c 2004-10-05 16:21:47.000000000 -0500
@@ -28,80 +28,85 @@
#include "avc.h"
#include "avc_ss.h"
#ifdef CONFIG_AUDIT
#include "class_to_string.h"
#endif
#include "common_perm_to_string.h"
#include "av_inherit.h"
#include "av_perm_to_string.h"
#include "objsec.h"
-#define AVC_CACHE_SLOTS 512
+#define AVC_CACHE_SLOTS (3*512)
#define AVC_CACHE_MAXNODES 410
+#define AVC_CACHE_SLOTS_PER_LOCK 6
+#define AVC_CACHE_BLOCKS (AVC_CACHE_SLOTS/AVC_CACHE_SLOTS_PER_LOCK)
+
+#define SLOT_INVALID 0
+#define SLOT_NOT_RECENTLY_USED 1
+#define SLOT_RECENTLY_USED 2
struct avc_entry {
u32 ssid;
u32 tsid;
u16 tclass;
- struct av_decision avd;
- int used; /* used recently */
+ struct av_decision *avd;
+ char used; /* active node */
};
struct avc_node {
- struct avc_entry ae;
- struct avc_node *next;
+ seqlock_t lock;
+ short next_evicted;
+ struct avc_entry ae[AVC_CACHE_SLOTS_PER_LOCK];
};
struct avc_cache {
- struct avc_node *slots[AVC_CACHE_SLOTS];
- u32 lru_hint; /* LRU hint for reclaim scan */
- u32 active_nodes;
- u32 latest_notif; /* latest revocation notification */
+ struct avc_node blocks[AVC_CACHE_BLOCKS];
+ atomic_t active_nodes; //???
+ atomic_t latest_notif; /* latest revocation notification */
};
struct avc_callback_node {
int (*callback) (u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms,
u32 *out_retained);
u32 events;
u32 ssid;
u32 tsid;
u16 tclass;
u32 perms;
struct avc_callback_node *next;
};
-static spinlock_t avc_lock = SPIN_LOCK_UNLOCKED;
-static struct avc_node *avc_node_freelist;
+//static spinlock_t avc_lock = SPIN_LOCK_UNLOCKED;
+//static struct avc_node *avc_node_freelist;
static struct avc_cache avc_cache;
static unsigned avc_cache_stats[AVC_NSTATS];
static struct avc_callback_node *avc_callbacks;
static inline int avc_hash(u32 ssid, u32 tsid, u16 tclass)
{
- return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_SLOTS - 1);
+ return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_BLOCKS - 1);
}
#ifdef AVC_CACHE_STATS
static inline void avc_cache_stats_incr(int type)
{
avc_cache_stats[type]++;
}
static inline void avc_cache_stats_add(int type, unsigned val)
{
avc_cache_stats[type] += val;
}
#else
static inline void avc_cache_stats_incr(int type)
{ }
-
static inline void avc_cache_stats_add(int type, unsigned val)
{ }
#endif
/**
* avc_dump_av - Display an access vector in human-readable form.
* @tclass: target security class
* @av: access vector
*/
void avc_dump_av(struct audit_buffer *ab, u16 tclass, u32 av)
@@ -181,207 +186,262 @@
audit_log_format(ab, " tclass=%s", class_to_string[tclass]);
}
/**
* avc_init - Initialize the AVC.
*
* Initialize the access vector cache.
*/
void __init avc_init(void)
{
- struct avc_node *new;
- int i;
- for (i = 0; i < AVC_CACHE_MAXNODES; i++) {
- new = kmalloc(sizeof(*new), GFP_ATOMIC);
- if (!new) {
- printk(KERN_WARNING "avc: only able to allocate "
- "%d entries\n", i);
+ int i, j;
+ struct av_decision* avd;
+
+ for ( i = 0; i < AVC_CACHE_BLOCKS; i++ ) {
+ seqlock_init(&avc_cache.blocks[i].lock);
+ for ( j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ avd = kmalloc(sizeof(*avd), GFP_ATOMIC );
+ if ( !avd ) {
+ printk(KERN_WARNING
+ "avc: only able to allocate some entries\n");
break;
}
- memset(new, 0, sizeof(*new));
- new->next = avc_node_freelist;
- avc_node_freelist = new;
+ memset(avd, 0, sizeof(*avd));
+ avc_cache.blocks[i].ae[j].avd = avd;
+ }
}
audit_log(current->audit_context, "AVC INITIALIZED\n");
}
#if 0
static void avc_hash_eval(char *tag)
{
int i, chain_len, max_chain_len, slots_used;
struct avc_node *node;
unsigned long flags;
+
spin_lock_irqsave(&avc_lock,flags);
slots_used = 0;
max_chain_len = 0;
for (i = 0; i < AVC_CACHE_SLOTS; i++) {
node = avc_cache.slots[i];
if (node) {
slots_used++;
chain_len = 0;
while (node) {
chain_len++;
node = node->next;
}
if (chain_len > max_chain_len)
max_chain_len = chain_len;
}
}
spin_unlock_irqrestore(&avc_lock,flags);
- printk(KERN_INFO "\n");
- printk(KERN_INFO "%s avc: %d entries and %d/%d buckets used, longest "
- "chain length %d\n", tag, avc_cache.active_nodes, slots_used,
- AVC_CACHE_SLOTS, max_chain_len);
}
#else
static inline void avc_hash_eval(char *tag)
{ }
#endif
-static inline struct avc_node *avc_reclaim_node(void)
-{
- struct avc_node *prev, *cur;
- int hvalue, try;
+static inline struct avc_entry *avc_claim_node(u32 ssid,
+ u32 tsid, u16 tclass,
+ struct av_decision *avd )
+{
+ struct avc_entry* new = NULL;
+ struct avc_node* block;
+ int hvalue, i, seq, claim_this_node, mark_unused=0;
+ unsigned long flags;
- hvalue = avc_cache.lru_hint;
- for (try = 0; try < 2; try++) {
- do {
- prev = NULL;
- cur = avc_cache.slots[hvalue];
- while (cur) {
- if (!cur->ae.used)
- goto found;
+ hvalue = avc_hash(ssid, tsid, tclass );
+ block = &avc_cache.blocks[ hvalue ];
- cur->ae.used = 0;
+ do {
+ //reset
+ claim_this_node = -1;
+ mark_unused = 0;
+
+ seq = read_seqbegin_irqsave( &(block->lock), flags );
+
+ if ( block->ae[block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK].used
+ != SLOT_RECENTLY_USED ) {
+ claim_this_node = block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK;
+ } else {
+ for ( i = 0; i < AVC_CACHE_SLOTS_PER_LOCK; i++ ) {
+ if( block->ae[i].used != SLOT_RECENTLY_USED ) {
+ claim_this_node = i;
+ break;
+ } else {
+ /* Should mark entry NOT_RECENTLY_USED;
+ done below while holding the write
+ lock, as signalled by this flag:
+ */
+ mark_unused=1;
+ }
+ }
- prev = cur;
- cur = cur->next;
}
- hvalue = (hvalue + 1) & (AVC_CACHE_SLOTS - 1);
- } while (hvalue != avc_cache.lru_hint);
+ } while ( read_seqretry_irqrestore( &(block->lock), seq, flags ) );
+
+ if ( claim_this_node < 0 ) {
+ claim_this_node = block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK;
}
- panic("avc_reclaim_node");
+ write_seqlock_irqsave( &(block->lock), flags );
-found:
- avc_cache.lru_hint = hvalue;
+ new = &( block->ae[claim_this_node]);
- if (prev == NULL)
- avc_cache.slots[hvalue] = cur->next;
- else
- prev->next = cur->next;
+ //Updates that really belong above but placed here for locking efficiency
+ if ( block->next_evicted%AVC_CACHE_SLOTS_PER_LOCK == claim_this_node )
+ block->next_evicted++;
- return cur;
-}
+ if ( mark_unused )
+ for ( i = 0; i < claim_this_node; i++ )
+ block->ae[i].used = SLOT_NOT_RECENTLY_USED;
-static inline struct avc_node *avc_claim_node(u32 ssid,
- u32 tsid, u16 tclass)
-{
- struct avc_node *new;
- int hvalue;
+ new->ssid = ssid;
+ new->tsid = tsid;
+ new->tclass = tclass;
+ new->used = SLOT_RECENTLY_USED;
- hvalue = avc_hash(ssid, tsid, tclass);
- if (avc_node_freelist) {
- new = avc_node_freelist;
- avc_node_freelist = avc_node_freelist->next;
- avc_cache.active_nodes++;
- } else {
- new = avc_reclaim_node();
- if (!new)
- goto out;
- }
+ new->avd->allowed = avd->allowed;
+ new->avd->decided = avd->decided;
+ new->avd->auditallow = avd->auditallow;
+ new->avd->auditdeny = avd->auditdeny;
+ new->avd->seqno = avd->seqno;
- new->ae.used = 1;
- new->ae.ssid = ssid;
- new->ae.tsid = tsid;
- new->ae.tclass = tclass;
- new->next = avc_cache.slots[hvalue];
- avc_cache.slots[hvalue] = new;
+ write_sequnlock_irqrestore( &(block->lock), flags );
-out:
return new;
}
-static inline struct avc_node *avc_search_node(u32 ssid, u32 tsid,
+static inline struct avc_entry *avc_search_node(u32 ssid, u32 tsid,
u16 tclass, int *probes)
{
- struct avc_node *cur;
- int hvalue;
+ struct avc_node *block;
+ struct avc_entry *ae, *ret=NULL;
+ int hvalue, seq;
int tprobes = 1;
+ int i;
+ unsigned long flags;
+ char use = 0;
hvalue = avc_hash(ssid, tsid, tclass);
- cur = avc_cache.slots[hvalue];
- while (cur != NULL &&
- (ssid != cur->ae.ssid ||
- tclass != cur->ae.tclass ||
- tsid != cur->ae.tsid)) {
+ block = &avc_cache.blocks[hvalue];
+ do {
+ seq = read_seqbegin_irqsave( &(block->lock), flags );
+ for ( i=0; i < AVC_CACHE_SLOTS_PER_LOCK; i++ ) {
+ ae = &(block->ae[i]);
+ if ( ssid != ae->ssid ||
+ tclass != ae->tclass ||
+ tsid != ae->tsid ||
+ ae->used == SLOT_INVALID )
tprobes++;
- cur = cur->next;
+ else {
+ ret = ae;
+ use = ret->used;
+ break;
+ }
}
+ } while ( read_seqretry_irqrestore( &(block->lock), seq, flags ) );
- if (cur == NULL) {
+ if (ret == NULL) {
/* cache miss */
goto out;
}
+ if ( use != SLOT_RECENTLY_USED ) {
+ write_seqlock_irqsave( &(block->lock), flags );
+ if ( ret->ssid == ssid &&
+ ret->tsid == tsid &&
+ ret->tclass == tclass )
+ ret->used = SLOT_RECENTLY_USED;
+ else {
+ ret=NULL;
+ }
+ write_sequnlock_irqrestore( &(block->lock), flags );
+ }
+
/* cache hit */
- if (probes)
+ if (ret && probes)
*probes = tprobes;
-
- cur->ae.used = 1;
-
-out:
- return cur;
+ out:
+ return ret;
}
/**
* avc_lookup - Look up an AVC entry.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @requested: requested permissions, interpreted based on @tclass
* @aeref: AVC entry reference
*
* Look up an AVC entry that is valid for the
* @requested permissions between the SID pair
* (@ssid, @tsid), interpreting the permissions
* based on @tclass. If a valid AVC entry exists,
* then this function updates @aeref to refer to the
* entry and returns %0. Otherwise, this function
* returns -%ENOENT.
*/
-int avc_lookup(u32 ssid, u32 tsid, u16 tclass,
- u32 requested, struct avc_entry_ref *aeref)
+struct avc_entry* avc_lookup( u32 ssid, u32 tsid, u16 tclass, u32 requested, struct av_decision* avd )
+
{
- struct avc_node *node;
- int probes, rc = 0;
+ struct avc_entry *node = NULL;
+ int seq, probes, hvalue;
+ unsigned long flags;
+ u32 denied;
+ int i = 0;
avc_cache_stats_incr(AVC_CAV_LOOKUPS);
- node = avc_search_node(ssid, tsid, tclass,&probes);
+ hvalue = avc_hash( ssid, tsid, tclass );
+ node = avc_search_node(ssid, tsid, tclass, &probes);
+ if ( node ) {
+ do {
+ seq = read_seqbegin_irqsave( &( avc_cache.blocks[hvalue].lock ), flags );
+ i++;
+ if ( node->ssid == ssid && node->tsid == tsid && node->tclass == tclass ) {
+ memcpy( avd, node->avd, sizeof(*avd) );
+ denied = requested & ~(avd->allowed);
+ } else { /* node was removed from the cache */
+ node=NULL;
+ break;
+ }
+ } while( read_seqretry_irqrestore( &( avc_cache.blocks[hvalue].lock ), seq, flags ) );
+ //printk( KERN_DEBUG "To copy avd, looped %d times.\n", i );
+ }
- if (node && ((node->ae.avd.decided & requested) == requested)) {
+ if ( node && ( !requested || denied ) && !selinux_enforcing && avd->allowed != (avd->allowed|requested) ) {
+ write_seqlock_irqsave( &(avc_cache.blocks[hvalue].lock), flags );
+ if ( node->ssid == ssid && node->tsid == tsid && node->tclass == tclass )
+ node->avd->allowed |= requested;
+ else /* node was removed from the cache */
+ node=NULL;
+ write_sequnlock_irqrestore( &(avc_cache.blocks[hvalue].lock), flags );
+ }
+
+ if (node && (( avd->decided & requested ) == requested)) {
avc_cache_stats_incr(AVC_CAV_HITS);
avc_cache_stats_add(AVC_CAV_PROBES,probes);
- aeref->ae = &node->ae;
goto out;
}
avc_cache_stats_incr(AVC_CAV_MISSES);
- rc = -ENOENT;
-out:
- return rc;
+
+ out:
+ return node;
}
/**
* avc_insert - Insert an AVC entry.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @ae: AVC entry
* @aeref: AVC entry reference
*
@@ -389,58 +449,52 @@
* (@ssid, @tsid) and class @tclass.
* The access vectors and the sequence number are
* normally provided by the security server in
* response to a security_compute_av() call. If the
* sequence number @ae->avd.seqno is not less than the latest
* revocation notification, then the function copies
* the access vectors into a cache entry, updates
* @aeref to refer to the entry, and returns %0.
* Otherwise, this function returns -%EAGAIN.
*/
-int avc_insert(u32 ssid, u32 tsid, u16 tclass,
- struct avc_entry *ae, struct avc_entry_ref *aeref)
+struct avc_entry* avc_insert(u32 ssid, u32 tsid, u16 tclass, u32 requested,
+ struct av_decision *avd )
{
- struct avc_node *node;
- int rc = 0;
+ struct avc_entry *entry = NULL;
+ int denied;
- if (ae->avd.seqno < avc_cache.latest_notif) {
+ if (avd->seqno < atomic_read(&avc_cache.latest_notif)) {
printk(KERN_WARNING "avc: seqno %d < latest_notif %d\n",
- ae->avd.seqno, avc_cache.latest_notif);
- rc = -EAGAIN;
+ avd->seqno, atomic_read(&avc_cache.latest_notif));
goto out;
}
- node = avc_claim_node(ssid, tsid, tclass);
- if (!node) {
- rc = -ENOMEM;
- goto out;
- }
+ denied = requested & ~( avd->allowed );
+ if ( denied && !selinux_enforcing )
+ avd->allowed |= requested;
- node->ae.avd.allowed = ae->avd.allowed;
- node->ae.avd.decided = ae->avd.decided;
- node->ae.avd.auditallow = ae->avd.auditallow;
- node->ae.avd.auditdeny = ae->avd.auditdeny;
- node->ae.avd.seqno = ae->avd.seqno;
- aeref->ae = &node->ae;
-out:
- return rc;
+ entry = avc_claim_node(ssid, tsid, tclass, avd);
+
+ out:
+ return entry;
}
static inline void avc_print_ipv6_addr(struct audit_buffer *ab,
struct in6_addr *addr, u16 port,
char *name1, char *name2)
{
if (!ipv6_addr_any(addr))
audit_log_format(ab, " %s=%04x:%04x:%04x:%04x:%04x:"
"%04x:%04x:%04x", name1, NIP6(*addr));
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
+
}
static inline void avc_print_ipv4_addr(struct audit_buffer *ab, u32 addr,
u16 port, char *name1, char *name2)
{
if (addr)
audit_log_format(ab, " %s=%d.%d.%d.%d", name1, NIPQUAD(addr));
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
}
@@ -670,96 +724,103 @@
goto out;
}
c->callback = callback;
c->events = events;
c->ssid = ssid;
c->tsid = tsid;
c->perms = perms;
c->next = avc_callbacks;
avc_callbacks = c;
-out:
+ out:
return rc;
}
static inline int avc_sidcmp(u32 x, u32 y)
{
return (x == y || x == SECSID_WILD || y == SECSID_WILD);
}
-static inline void avc_update_node(u32 event, struct avc_node *node, u32 perms)
+/* hold write lock when calling */
+static inline void avc_update_node(u32 event, struct avc_entry * node, u32 perms)
{
switch (event) {
case AVC_CALLBACK_GRANT:
- node->ae.avd.allowed |= perms;
+ node->avd->allowed |= perms;
break;
case AVC_CALLBACK_TRY_REVOKE:
case AVC_CALLBACK_REVOKE:
- node->ae.avd.allowed &= ~perms;
+ node->avd->allowed &= ~perms;
break;
case AVC_CALLBACK_AUDITALLOW_ENABLE:
- node->ae.avd.auditallow |= perms;
+ node->avd->auditallow |= perms;
break;
case AVC_CALLBACK_AUDITALLOW_DISABLE:
- node->ae.avd.auditallow &= ~perms;
+ node->avd->auditallow &= ~perms;
break;
case AVC_CALLBACK_AUDITDENY_ENABLE:
- node->ae.avd.auditdeny |= perms;
+ node->avd->auditdeny |= perms;
break;
case AVC_CALLBACK_AUDITDENY_DISABLE:
- node->ae.avd.auditdeny &= ~perms;
+ node->avd->auditdeny &= ~perms;
break;
}
}
static int avc_update_cache(u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms)
{
struct avc_node *node;
- int i;
+ struct avc_entry *entry;
+ int i, j;
unsigned long flags;
- spin_lock_irqsave(&avc_lock,flags);
-
if (ssid == SECSID_WILD || tsid == SECSID_WILD) {
/* apply to all matching nodes */
- for (i = 0; i < AVC_CACHE_SLOTS; i++) {
- for (node = avc_cache.slots[i]; node;
- node = node->next) {
- if (avc_sidcmp(ssid, node->ae.ssid) &&
- avc_sidcmp(tsid, node->ae.tsid) &&
- tclass == node->ae.tclass) {
- avc_update_node(event,node,perms);
+ for (i = 0; i < AVC_CACHE_BLOCKS; i++) {
+ node = &(avc_cache.blocks[i]);
+ for (j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ entry = &(node->ae[j]);
+ if (avc_sidcmp(ssid, entry->ssid) &&
+ avc_sidcmp(tsid, entry->tsid) &&
+ tclass == entry->tclass) {
+ write_seqlock_irqsave( &(node->lock), flags);
+ if (avc_sidcmp(ssid, entry->ssid) &&
+ avc_sidcmp(tsid, entry->tsid) &&
+ tclass == entry->tclass) {
+ avc_update_node(event,entry,perms);
+ }
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
}
}
} else {
/* apply to one node */
- node = avc_search_node(ssid, tsid, tclass, NULL);
- if (node) {
- avc_update_node(event,node,perms);
+ entry = avc_search_node(ssid, tsid, tclass, NULL);
+ if (entry) {
+ node = &(avc_cache.blocks[ avc_hash( ssid,tsid,tclass) ] );
+ write_seqlock_irqsave( &(node->lock), flags);
+ if ( entry->ssid == ssid && entry->tsid == tsid && entry->tclass == tclass )
+ avc_update_node(event,entry,perms);
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
}
-
- spin_unlock_irqrestore(&avc_lock,flags);
-
return 0;
}
static int avc_control(u32 event, u32 ssid, u32 tsid,
u16 tclass, u32 perms,
u32 seqno, u32 *out_retained)
{
struct avc_callback_node *c;
u32 tretained = 0, cretained = 0;
int rc = 0;
- unsigned long flags;
/*
* try_revoke only removes permissions from the cache
* state if they are not retained by the object manager.
* Hence, try_revoke must wait until after the callbacks have
* been invoked to update the cache state.
*/
if (event != AVC_CALLBACK_TRY_REVOKE)
avc_update_cache(event,ssid,tsid,tclass,perms);
@@ -780,26 +841,24 @@
}
}
if (event == AVC_CALLBACK_TRY_REVOKE) {
/* revoke any unretained permissions */
perms &= ~tretained;
avc_update_cache(event,ssid,tsid,tclass,perms);
*out_retained = tretained;
}
- spin_lock_irqsave(&avc_lock,flags);
- if (seqno > avc_cache.latest_notif)
- avc_cache.latest_notif = seqno;
- spin_unlock_irqrestore(&avc_lock,flags);
+ if (seqno > atomic_read(&avc_cache.latest_notif))
+ atomic_set(&avc_cache.latest_notif, seqno);
-out:
+ out:
return rc;
}
/**
* avc_ss_grant - Grant previously denied permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
@@ -820,94 +879,86 @@
* @seqno: policy sequence number
* @out_retained: subset of @perms that are retained
*
* Try to revoke previously granted permissions, but
* only if they are not retained as migrated permissions.
* Return the subset of permissions that are retained via @out_retained.
*/
int avc_ss_try_revoke(u32 ssid, u32 tsid, u16 tclass,
u32 perms, u32 seqno, u32 *out_retained)
{
+
return avc_control(AVC_CALLBACK_TRY_REVOKE,
ssid, tsid, tclass, perms, seqno, out_retained);
}
/**
* avc_ss_revoke - Revoke previously granted permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
*
* Revoke previously granted permissions, even if
* they are retained as migrated permissions.
*/
int avc_ss_revoke(u32 ssid, u32 tsid, u16 tclass,
u32 perms, u32 seqno)
{
+
return avc_control(AVC_CALLBACK_REVOKE,
ssid, tsid, tclass, perms, seqno, NULL);
}
/**
* avc_ss_reset - Flush the cache and revalidate migrated permissions.
* @seqno: policy sequence number
*/
int avc_ss_reset(u32 seqno)
{
struct avc_callback_node *c;
- int i, rc = 0;
- struct avc_node *node, *tmp;
+ int i, j, rc = 0;
+ struct avc_node *node;
+ struct avc_entry *entry;
unsigned long flags;
- avc_hash_eval("reset");
- spin_lock_irqsave(&avc_lock,flags);
+ avc_hash_eval("reset");
- for (i = 0; i < AVC_CACHE_SLOTS; i++) {
- node = avc_cache.slots[i];
- while (node) {
- tmp = node;
- node = node->next;
- tmp->ae.ssid = tmp->ae.tsid = SECSID_NULL;
- tmp->ae.tclass = SECCLASS_NULL;
- tmp->ae.avd.allowed = tmp->ae.avd.decided = 0;
- tmp->ae.avd.auditallow = tmp->ae.avd.auditdeny = 0;
- tmp->ae.used = 0;
- tmp->next = avc_node_freelist;
- avc_node_freelist = tmp;
- avc_cache.active_nodes--;
+ for (i = 0; i < AVC_CACHE_BLOCKS; i++) {
+ node = &(avc_cache.blocks[i]);
+ write_seqlock_irqsave( &(node->lock), flags);
+ for ( j = 0; j < AVC_CACHE_SLOTS_PER_LOCK; j++ ) {
+ entry = &(node->ae[j]);
+ entry->used = SLOT_INVALID;
+ entry->ssid = entry->tsid = SECSID_NULL;
+ entry->tclass = SECCLASS_NULL;
+ entry->avd->allowed = entry->avd->decided = 0;
+ entry->avd->auditallow = entry->avd->auditdeny = 0;
}
- avc_cache.slots[i] = NULL;
+ node->next_evicted = 0;
+ write_sequnlock_irqrestore( &(node->lock), flags);
}
- avc_cache.lru_hint = 0;
-
- spin_unlock_irqrestore(&avc_lock,flags);
-
for (i = 0; i < AVC_NSTATS; i++)
avc_cache_stats[i] = 0;
-
for (c = avc_callbacks; c; c = c->next) {
if (c->events & AVC_CALLBACK_RESET) {
rc = c->callback(AVC_CALLBACK_RESET,
0, 0, 0, 0, NULL);
if (rc)
goto out;
}
}
-
- spin_lock_irqsave(&avc_lock,flags);
- if (seqno > avc_cache.latest_notif)
- avc_cache.latest_notif = seqno;
- spin_unlock_irqrestore(&avc_lock,flags);
-out:
+ if (seqno > atomic_read(&avc_cache.latest_notif))
+ atomic_set(&avc_cache.latest_notif, seqno);
+ out:
return rc;
}
/**
* avc_ss_set_auditallow - Enable or disable auditing of granted permissions.
* @ssid: source security identifier or %SECSID_WILD
* @tsid: target security identifier or %SECSID_WILD
* @tclass: target security class
* @perms: permissions to grant
* @seqno: policy sequence number
@@ -962,85 +1013,44 @@
* -%EACCES if any permissions are denied, or another -errno upon
* other errors. This function is typically called by avc_has_perm(),
* but may also be called directly to separate permission checking from
* auditing, e.g. in cases where a lock must be held for the check but
* should be released for the auditing.
*/
int avc_has_perm_noaudit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct avc_entry_ref *aeref, struct av_decision *avd)
{
- struct avc_entry *ae;
+ struct av_decision dec, *tdec;
int rc = 0;
- unsigned long flags;
- struct avc_entry entry;
- u32 denied;
- struct avc_entry_ref ref;
+ //unsigned long flags;
+ struct avc_entry* entry;
+ //u32 denied;
+ //int seq, hvalue;
- if (!aeref) {
- avc_entry_ref_init(&ref);
- aeref = &ref;
- }
+ if (avd)
+ tdec=avd;
+ else
+ tdec=&dec;
- spin_lock_irqsave(&avc_lock, flags);
avc_cache_stats_incr(AVC_ENTRY_LOOKUPS);
- ae = aeref->ae;
- if (ae) {
- if (ae->ssid == ssid &&
- ae->tsid == tsid &&
- ae->tclass == tclass &&
- ((ae->avd.decided & requested) == requested)) {
- avc_cache_stats_incr(AVC_ENTRY_HITS);
- ae->used = 1;
- } else {
- avc_cache_stats_incr(AVC_ENTRY_DISCARDS);
- ae = NULL;
- }
- }
-
- if (!ae) {
- avc_cache_stats_incr(AVC_ENTRY_MISSES);
- rc = avc_lookup(ssid, tsid, tclass, requested, aeref);
- if (rc) {
- spin_unlock_irqrestore(&avc_lock,flags);
- rc = security_compute_av(ssid,tsid,tclass,requested,&entry.avd);
+ entry = avc_lookup(ssid, tsid, tclass, requested, tdec);
+ if ( !entry ) {
+ rc = security_compute_av(ssid,tsid,tclass,requested,tdec);
if (rc)
goto out;
- spin_lock_irqsave(&avc_lock, flags);
- rc = avc_insert(ssid,tsid,tclass,&entry,aeref);
- if (rc) {
- spin_unlock_irqrestore(&avc_lock,flags);
- goto out;
- }
- }
- ae = aeref->ae;
- }
-
- if (avd)
- memcpy(avd, &ae->avd, sizeof(*avd));
-
- denied = requested & ~(ae->avd.allowed);
-
- if (!requested || denied) {
- if (selinux_enforcing) {
- spin_unlock_irqrestore(&avc_lock,flags);
- rc = -EACCES;
- goto out;
- } else {
- ae->avd.allowed |= requested;
- spin_unlock_irqrestore(&avc_lock,flags);
+ entry = avc_insert(ssid,tsid,tclass,requested,tdec);
+ if (!entry)
goto out;
}
- }
- spin_unlock_irqrestore(&avc_lock,flags);
-out:
+ out:
return rc;
}
/**
* avc_has_perm - Check permissions and perform any appropriate auditing.
* @ssid: source security identifier
* @tsid: target security identifier
* @tclass: target security class
* @requested: requested permissions, interpreted based on @tclass
* @aeref: AVC entry reference
@@ -1057,12 +1067,13 @@
*/
int avc_has_perm(u32 ssid, u32 tsid, u16 tclass,
u32 requested, struct avc_entry_ref *aeref,
struct avc_audit_data *auditdata)
{
struct av_decision avd;
int rc;
rc = avc_has_perm_noaudit(ssid, tsid, tclass, requested, aeref, &avd);
avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata);
+
return rc;
}
--- linux-2.6.8.1/security/selinux/include/avc.h 2004-08-14 05:54:51.000000000 -0500
+++ linux-2.6.8.1-myseq/security/selinux/include/avc.h 2004-09-10 16:27:56.000000000 -0500
@@ -112,25 +112,26 @@
void avc_dump_av(struct audit_buffer *ab, u16 tclass, u32 av);
void avc_dump_query(struct audit_buffer *ab, u32 ssid, u32 tsid, u16 tclass);
void avc_dump_cache(struct audit_buffer *ab, char *tag);
/*
* AVC operations
*/
void __init avc_init(void);
-int avc_lookup(u32 ssid, u32 tsid, u16 tclass,
- u32 requested, struct avc_entry_ref *aeref);
+struct avc_entry* avc_lookup(u32 ssid, u32 tsid, u16 tclass,
+ u32 requested, struct av_decision* avd );
+
+struct avc_entry* avc_insert(u32 ssid, u32 tsid, u16 tclass, u32 requested,
+ struct av_decision* avd );
-int avc_insert(u32 ssid, u32 tsid, u16 tclass,
- struct avc_entry *ae, struct avc_entry_ref *out_aeref);
void avc_audit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct av_decision *avd, int result, struct avc_audit_data *auditdata);
int avc_has_perm_noaudit(u32 ssid, u32 tsid,
u16 tclass, u32 requested,
struct avc_entry_ref *aeref, struct av_decision *avd);
int avc_has_perm(u32 ssid, u32 tsid,