* [v3 PATCH bpf-next 1/6] bpf: add percpu stats for bpf_map elements insertions/deletions
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 2/6] bpf: add a new kfunc to return current bpf_map elements count Anton Protopopov
` (4 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Add generic percpu stats for bpf_map element insertions/deletions in order
to keep track of both the current (approximate) number of elements in a map
and per-cpu statistics on update/delete operations.
To expose these stats a particular map implementation should initialize the
counter and adjust it as needed using the 'bpf_map_*_elem_count' helpers
provided by this commit.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
include/linux/bpf.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f58895830ada..360433f14496 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -275,6 +275,7 @@ struct bpf_map {
} owner;
bool bypass_spec_v1;
bool frozen; /* write-once; write-protected by freeze_mutex */
+ s64 __percpu *elem_count;
};
static inline const char *btf_field_type_name(enum btf_field_type type)
@@ -2040,6 +2041,35 @@ bpf_map_alloc_percpu(const struct bpf_map *map, size_t size, size_t align,
}
#endif
+static inline int
+bpf_map_init_elem_count(struct bpf_map *map)
+{
+ size_t size = sizeof(*map->elem_count), align = size;
+ gfp_t flags = GFP_USER | __GFP_NOWARN;
+
+ map->elem_count = bpf_map_alloc_percpu(map, size, align, flags);
+ if (!map->elem_count)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static inline void
+bpf_map_free_elem_count(struct bpf_map *map)
+{
+ free_percpu(map->elem_count);
+}
+
+static inline void bpf_map_inc_elem_count(struct bpf_map *map)
+{
+ this_cpu_inc(*map->elem_count);
+}
+
+static inline void bpf_map_dec_elem_count(struct bpf_map *map)
+{
+ this_cpu_dec(*map->elem_count);
+}
+
extern int sysctl_unprivileged_bpf_disabled;
static inline bool bpf_allow_ptr_leaks(void)
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [v3 PATCH bpf-next 2/6] bpf: add a new kfunc to return current bpf_map elements count
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 1/6] bpf: add percpu stats for bpf_map elements insertions/deletions Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps Anton Protopopov
` (3 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Add a bpf_map_sum_elem_count kfunc to simplify getting the sum of a map's
per-cpu element counters. If a map doesn't implement the counter, the
function always returns 0.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
kernel/bpf/map_iter.c | 39 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
index b0fa190b0979..d06d3b7150e5 100644
--- a/kernel/bpf/map_iter.c
+++ b/kernel/bpf/map_iter.c
@@ -93,7 +93,7 @@ static struct bpf_iter_reg bpf_map_reg_info = {
.ctx_arg_info_size = 1,
.ctx_arg_info = {
{ offsetof(struct bpf_iter__bpf_map, map),
- PTR_TO_BTF_ID_OR_NULL },
+ PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
},
.seq_info = &bpf_map_seq_info,
};
@@ -193,3 +193,40 @@ static int __init bpf_map_iter_init(void)
}
late_initcall(bpf_map_iter_init);
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+
+__bpf_kfunc s64 bpf_map_sum_elem_count(struct bpf_map *map)
+{
+ s64 *pcount;
+ s64 ret = 0;
+ int cpu;
+
+ if (!map || !map->elem_count)
+ return 0;
+
+ for_each_possible_cpu(cpu) {
+ pcount = per_cpu_ptr(map->elem_count, cpu);
+ ret += READ_ONCE(*pcount);
+ }
+ return ret;
+}
+
+__diag_pop();
+
+BTF_SET8_START(bpf_map_iter_kfunc_ids)
+BTF_ID_FLAGS(func, bpf_map_sum_elem_count, KF_TRUSTED_ARGS)
+BTF_SET8_END(bpf_map_iter_kfunc_ids)
+
+static const struct btf_kfunc_id_set bpf_map_iter_kfunc_set = {
+ .owner = THIS_MODULE,
+ .set = &bpf_map_iter_kfunc_ids,
+};
+
+static int init_subsystem(void)
+{
+ return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_map_iter_kfunc_set);
+}
+late_initcall(init_subsystem);
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 1/6] bpf: add percpu stats for bpf_map elements insertions/deletions Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 2/6] bpf: add a new kfunc to return current bpf_map elements count Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
2023-07-04 13:56 ` Hou Tao
2023-06-30 8:25 ` [v3 PATCH bpf-next 4/6] bpf: make preloaded map iterators to display map elements count Anton Protopopov
` (2 subsequent siblings)
5 siblings, 1 reply; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Initialize and utilize the per-cpu insertions/deletions counters for hash-based
maps. Non-trivial changes only apply to preallocated maps, for which the
{inc,dec}_elem_count functions are not called, as there is no need to count
elements to sustain proper map operations.
To increase/decrease the percpu counters for preallocated maps, we add raw
calls to the bpf_map_{inc,dec}_elem_count functions so that the impact is
minimal. For dynamically allocated maps we add corresponding calls to the
existing {inc,dec}_elem_count functions.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
kernel/bpf/hashtab.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 56d3da7d0bc6..faaef4fd3df0 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -581,8 +581,14 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
}
}
+ err = bpf_map_init_elem_count(&htab->map);
+ if (err)
+ goto free_extra_elements;
+
return &htab->map;
+free_extra_elements:
+ free_percpu(htab->extra_elems);
free_prealloc:
prealloc_destroy(htab);
free_map_locked:
@@ -804,6 +810,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
if (l == tgt_l) {
hlist_nulls_del_rcu(&l->hash_node);
check_and_free_fields(htab, l);
+ bpf_map_dec_elem_count(&htab->map);
break;
}
@@ -900,6 +907,8 @@ static bool is_map_full(struct bpf_htab *htab)
static void inc_elem_count(struct bpf_htab *htab)
{
+ bpf_map_inc_elem_count(&htab->map);
+
if (htab->use_percpu_counter)
percpu_counter_add_batch(&htab->pcount, 1, PERCPU_COUNTER_BATCH);
else
@@ -908,6 +917,8 @@ static void inc_elem_count(struct bpf_htab *htab)
static void dec_elem_count(struct bpf_htab *htab)
{
+ bpf_map_dec_elem_count(&htab->map);
+
if (htab->use_percpu_counter)
percpu_counter_add_batch(&htab->pcount, -1, PERCPU_COUNTER_BATCH);
else
@@ -920,6 +931,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
htab_put_fd_value(htab, l);
if (htab_is_prealloc(htab)) {
+ bpf_map_dec_elem_count(&htab->map);
check_and_free_fields(htab, l);
__pcpu_freelist_push(&htab->freelist, &l->fnode);
} else {
@@ -1000,6 +1012,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
if (!l)
return ERR_PTR(-E2BIG);
l_new = container_of(l, struct htab_elem, fnode);
+ bpf_map_inc_elem_count(&htab->map);
}
} else {
if (is_map_full(htab))
@@ -1224,7 +1237,8 @@ static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value
if (l_old) {
bpf_lru_node_set_ref(&l_new->lru_node);
hlist_nulls_del_rcu(&l_old->hash_node);
- }
+ } else
+ bpf_map_inc_elem_count(&htab->map);
ret = 0;
err:
@@ -1351,6 +1365,7 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
pcpu_init_value(htab, htab_elem_get_ptr(l_new, key_size),
value, onallcpus);
hlist_nulls_add_head_rcu(&l_new->hash_node, head);
+ bpf_map_inc_elem_count(&htab->map);
l_new = NULL;
}
ret = 0;
@@ -1437,9 +1452,10 @@ static long htab_lru_map_delete_elem(struct bpf_map *map, void *key)
l = lookup_elem_raw(head, hash, key, key_size);
- if (l)
+ if (l) {
+ bpf_map_dec_elem_count(&htab->map);
hlist_nulls_del_rcu(&l->hash_node);
- else
+ } else
ret = -ENOENT;
htab_unlock_bucket(htab, b, hash, flags);
@@ -1523,6 +1539,7 @@ static void htab_map_free(struct bpf_map *map)
prealloc_destroy(htab);
}
+ bpf_map_free_elem_count(map);
free_percpu(htab->extra_elems);
bpf_map_area_free(htab->buckets);
bpf_mem_alloc_destroy(&htab->pcpu_ma);
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-06-30 8:25 ` [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps Anton Protopopov
@ 2023-07-04 13:56 ` Hou Tao
2023-07-04 14:34 ` Anton Protopopov
0 siblings, 1 reply; 20+ messages in thread
From: Hou Tao @ 2023-07-04 13:56 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> maps. Non-trivial changes only apply to the preallocated maps for which the
> {inc,dec}_elem_count functions are not called, as there's no need in counting
> elements to sustain proper map operations.
>
> To increase/decrease percpu counters for preallocated maps we add raw calls to
> the bpf_map_{inc,dec}_elem_count functions so that the impact is minimal. For
> dynamically allocated maps we add corresponding calls to the existing
> {inc,dec}_elem_count functions.
>
> Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> ---
> kernel/bpf/hashtab.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 56d3da7d0bc6..faaef4fd3df0 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -581,8 +581,14 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> }
> }
>
> + err = bpf_map_init_elem_count(&htab->map);
> + if (err)
> + goto free_extra_elements;
Considering the per-cpu counter is not always needed, is it a good idea
to make elem_count optional by introducing a new map flag?
> +
> return &htab->map;
>
> +free_extra_elements:
> + free_percpu(htab->extra_elems);
> free_prealloc:
> prealloc_destroy(htab);
We need to check prealloc before calling prealloc_destroy(htab); otherwise,
for a non-preallocated percpu htab, prealloc_destroy() will trigger an
invalid memory dereference.
> free_map_locked:
> @@ -804,6 +810,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
> if (l == tgt_l) {
> hlist_nulls_del_rcu(&l->hash_node);
> check_and_free_fields(htab, l);
> + bpf_map_dec_elem_count(&htab->map);
> break;
> }
>
> @@ -900,6 +907,8 @@ static bool is_map_full(struct bpf_htab *htab)
>
> static void inc_elem_count(struct bpf_htab *htab)
> {
> + bpf_map_inc_elem_count(&htab->map);
> +
> if (htab->use_percpu_counter)
> percpu_counter_add_batch(&htab->pcount, 1, PERCPU_COUNTER_BATCH);
> else
> @@ -908,6 +917,8 @@ static void inc_elem_count(struct bpf_htab *htab)
>
> static void dec_elem_count(struct bpf_htab *htab)
> {
> + bpf_map_dec_elem_count(&htab->map);
> +
> if (htab->use_percpu_counter)
> percpu_counter_add_batch(&htab->pcount, -1, PERCPU_COUNTER_BATCH);
> else
> @@ -920,6 +931,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
> htab_put_fd_value(htab, l);
>
> if (htab_is_prealloc(htab)) {
> + bpf_map_dec_elem_count(&htab->map);
> check_and_free_fields(htab, l);
> __pcpu_freelist_push(&htab->freelist, &l->fnode);
> } else {
> @@ -1000,6 +1012,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
> if (!l)
> return ERR_PTR(-E2BIG);
> l_new = container_of(l, struct htab_elem, fnode);
> + bpf_map_inc_elem_count(&htab->map);
> }
> } else {
> if (is_map_full(htab))
> @@ -1224,7 +1237,8 @@ static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value
> if (l_old) {
> bpf_lru_node_set_ref(&l_new->lru_node);
> hlist_nulls_del_rcu(&l_old->hash_node);
> - }
> + } else
> + bpf_map_inc_elem_count(&htab->map);
> ret = 0;
>
> err:
> @@ -1351,6 +1365,7 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
> pcpu_init_value(htab, htab_elem_get_ptr(l_new, key_size),
> value, onallcpus);
> hlist_nulls_add_head_rcu(&l_new->hash_node, head);
> + bpf_map_inc_elem_count(&htab->map);
> l_new = NULL;
> }
> ret = 0;
> @@ -1437,9 +1452,10 @@ static long htab_lru_map_delete_elem(struct bpf_map *map, void *key)
>
> l = lookup_elem_raw(head, hash, key, key_size);
>
> - if (l)
> + if (l) {
> + bpf_map_dec_elem_count(&htab->map);
> hlist_nulls_del_rcu(&l->hash_node);
> - else
> + } else
> ret = -ENOENT;
We also need to decrease elem_count in
__htab_map_lookup_and_delete_batch() and
__htab_map_lookup_and_delete_elem() when is_lru_map is true. Maybe for the
LRU map we could just do bpf_map_dec_elem_count() in
htab_lru_push_free() and bpf_map_inc_elem_count() in prealloc_lru_pop().
>
> htab_unlock_bucket(htab, b, hash, flags);
> @@ -1523,6 +1539,7 @@ static void htab_map_free(struct bpf_map *map)
> prealloc_destroy(htab);
> }
>
> + bpf_map_free_elem_count(map);
> free_percpu(htab->extra_elems);
> bpf_map_area_free(htab->buckets);
> bpf_mem_alloc_destroy(&htab->pcpu_ma);
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-07-04 13:56 ` Hou Tao
@ 2023-07-04 14:34 ` Anton Protopopov
2023-07-06 2:01 ` Hou Tao
0 siblings, 1 reply; 20+ messages in thread
From: Anton Protopopov @ 2023-07-04 14:34 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Tue, Jul 04, 2023 at 09:56:36PM +0800, Hou Tao wrote:
> Hi,
>
> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> > Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> > maps. Non-trivial changes only apply to the preallocated maps for which the
> > {inc,dec}_elem_count functions are not called, as there's no need in counting
> > elements to sustain proper map operations.
> >
> > To increase/decrease percpu counters for preallocated maps we add raw calls to
> > the bpf_map_{inc,dec}_elem_count functions so that the impact is minimal. For
> > dynamically allocated maps we add corresponding calls to the existing
> > {inc,dec}_elem_count functions.
> >
> > Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> > ---
> > kernel/bpf/hashtab.c | 23 ++++++++++++++++++++---
> > 1 file changed, 20 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index 56d3da7d0bc6..faaef4fd3df0 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -581,8 +581,14 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> > }
> > }
> >
> > + err = bpf_map_init_elem_count(&htab->map);
> > + if (err)
> > + goto free_extra_elements;
> Considering the per-cpu counter is not always needed, is it a good idea
> to make the elem_count being optional by introducing a new map flag ?
A per-map flag or a static key? To me it looked like doing an unconditional
`inc` of a per-cpu variable is better than doing a check and then `inc`, or an
unconditional jump.
> > +
> > return &htab->map;
> >
> > +free_extra_elements:
> > + free_percpu(htab->extra_elems);
> > free_prealloc:
> > prealloc_destroy(htab);
> Need to check prealloc before calling prealloc_destroy(htab), otherwise
> for non-preallocated percpu htab prealloc_destroy() will trigger invalid
> memory dereference.
Thanks!
> > free_map_locked:
> > @@ -804,6 +810,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
> > if (l == tgt_l) {
> > hlist_nulls_del_rcu(&l->hash_node);
> > check_and_free_fields(htab, l);
> > + bpf_map_dec_elem_count(&htab->map);
> > break;
> > }
> >
> > @@ -900,6 +907,8 @@ static bool is_map_full(struct bpf_htab *htab)
> >
> > static void inc_elem_count(struct bpf_htab *htab)
> > {
> > + bpf_map_inc_elem_count(&htab->map);
> > +
> > if (htab->use_percpu_counter)
> > percpu_counter_add_batch(&htab->pcount, 1, PERCPU_COUNTER_BATCH);
> > else
> > @@ -908,6 +917,8 @@ static void inc_elem_count(struct bpf_htab *htab)
> >
> > static void dec_elem_count(struct bpf_htab *htab)
> > {
> > + bpf_map_dec_elem_count(&htab->map);
> > +
> > if (htab->use_percpu_counter)
> > percpu_counter_add_batch(&htab->pcount, -1, PERCPU_COUNTER_BATCH);
> > else
> > @@ -920,6 +931,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
> > htab_put_fd_value(htab, l);
> >
> > if (htab_is_prealloc(htab)) {
> > + bpf_map_dec_elem_count(&htab->map);
> > check_and_free_fields(htab, l);
> > __pcpu_freelist_push(&htab->freelist, &l->fnode);
> > } else {
> > @@ -1000,6 +1012,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
> > if (!l)
> > return ERR_PTR(-E2BIG);
> > l_new = container_of(l, struct htab_elem, fnode);
> > + bpf_map_inc_elem_count(&htab->map);
> > }
> > } else {
> > if (is_map_full(htab))
> > @@ -1224,7 +1237,8 @@ static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value
> > if (l_old) {
> > bpf_lru_node_set_ref(&l_new->lru_node);
> > hlist_nulls_del_rcu(&l_old->hash_node);
> > - }
> > + } else
> > + bpf_map_inc_elem_count(&htab->map);
> > ret = 0;
> >
> > err:
> > @@ -1351,6 +1365,7 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
> > pcpu_init_value(htab, htab_elem_get_ptr(l_new, key_size),
> > value, onallcpus);
> > hlist_nulls_add_head_rcu(&l_new->hash_node, head);
> > + bpf_map_inc_elem_count(&htab->map);
> > l_new = NULL;
> > }
> > ret = 0;
> > @@ -1437,9 +1452,10 @@ static long htab_lru_map_delete_elem(struct bpf_map *map, void *key)
> >
> > l = lookup_elem_raw(head, hash, key, key_size);
> >
> > - if (l)
> > + if (l) {
> > + bpf_map_dec_elem_count(&htab->map);
> > hlist_nulls_del_rcu(&l->hash_node);
> > - else
> > + } else
> > ret = -ENOENT;
> Also need to decrease elem_count for
> __htab_map_lookup_and_delete_batch() and
> __htab_map_lookup_and_delete_elem() when is_lru_map is true. Maybe for
> LRU map, we could just do bpf_map_dec_elem_count() in
> htab_lru_push_free() and do bpf_map_inc_elem_count() in prealloc_lru_pop().
Thanks. I will fix the logic and extend the selftest to test the batch ops as well.
> >
> > htab_unlock_bucket(htab, b, hash, flags);
> > @@ -1523,6 +1539,7 @@ static void htab_map_free(struct bpf_map *map)
> > prealloc_destroy(htab);
> > }
> >
> > + bpf_map_free_elem_count(map);
> > free_percpu(htab->extra_elems);
> > bpf_map_area_free(htab->buckets);
> > bpf_mem_alloc_destroy(&htab->pcpu_ma);
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-07-04 14:34 ` Anton Protopopov
@ 2023-07-06 2:01 ` Hou Tao
2023-07-06 12:25 ` Anton Protopopov
0 siblings, 1 reply; 20+ messages in thread
From: Hou Tao @ 2023-07-06 2:01 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 7/4/2023 10:34 PM, Anton Protopopov wrote:
> On Tue, Jul 04, 2023 at 09:56:36PM +0800, Hou Tao wrote:
>> Hi,
>>
>> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
>>> Initialize and utilize the per-cpu insertions/deletions counters for hash-based
>>> maps. Non-trivial changes only apply to the preallocated maps for which the
>>> {inc,dec}_elem_count functions are not called, as there's no need in counting
>>> elements to sustain proper map operations.
>>>
>>> To increase/decrease percpu counters for preallocated maps we add raw calls to
>>> the bpf_map_{inc,dec}_elem_count functions so that the impact is minimal. For
>>> dynamically allocated maps we add corresponding calls to the existing
>>> {inc,dec}_elem_count functions.
>>>
>>> Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
>>> ---
>>> kernel/bpf/hashtab.c | 23 ++++++++++++++++++++---
>>> 1 file changed, 20 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>>> index 56d3da7d0bc6..faaef4fd3df0 100644
>>> --- a/kernel/bpf/hashtab.c
>>> +++ b/kernel/bpf/hashtab.c
>>> @@ -581,8 +581,14 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
>>> }
>>> }
>>>
>>> + err = bpf_map_init_elem_count(&htab->map);
>>> + if (err)
>>> + goto free_extra_elements;
>> Considering the per-cpu counter is not always needed, is it a good idea
>> to make the elem_count being optional by introducing a new map flag ?
> Per-map-flag or a static key? For me it looked like just doing an unconditional
> `inc` for a per-cpu variable is better vs. doing a check then `inc` or an
> unconditional jump.
Sorry, I didn't make it clear that I was worried about the allocated
per-cpu memory. Previously I thought per-cpu memory was limited, but
after doing some experiments I found it is almost the same as kmalloc(),
which can use all available memory to fulfill an allocation request.
For a host with 72 CPUs, the memory overhead for 10k hash maps is about
6MB. The overhead is tiny compared with the total available memory, but
it is avoidable.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-07-06 2:01 ` Hou Tao
@ 2023-07-06 12:25 ` Anton Protopopov
2023-07-06 12:30 ` Hou Tao
0 siblings, 1 reply; 20+ messages in thread
From: Anton Protopopov @ 2023-07-06 12:25 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Thu, Jul 06, 2023 at 10:01:26AM +0800, Hou Tao wrote:
> Hi,
>
> On 7/4/2023 10:34 PM, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 09:56:36PM +0800, Hou Tao wrote:
> >> Hi,
> >>
> >> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> >>> Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> >>> maps. Non-trivial changes only apply to the preallocated maps for which the
> >>> {inc,dec}_elem_count functions are not called, as there's no need in counting
> >>> elements to sustain proper map operations.
> >>>
> >>> To increase/decrease percpu counters for preallocated maps we add raw calls to
> >>> the bpf_map_{inc,dec}_elem_count functions so that the impact is minimal. For
> >>> dynamically allocated maps we add corresponding calls to the existing
> >>> {inc,dec}_elem_count functions.
> >>>
> >>> Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> >>> ---
> >>> kernel/bpf/hashtab.c | 23 ++++++++++++++++++++---
> >>> 1 file changed, 20 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> >>> index 56d3da7d0bc6..faaef4fd3df0 100644
> >>> --- a/kernel/bpf/hashtab.c
> >>> +++ b/kernel/bpf/hashtab.c
> >>> @@ -581,8 +581,14 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> >>> }
> >>> }
> >>>
> >>> + err = bpf_map_init_elem_count(&htab->map);
> >>> + if (err)
> >>> + goto free_extra_elements;
> >> Considering the per-cpu counter is not always needed, is it a good idea
> >> to make the elem_count being optional by introducing a new map flag ?
> > Per-map-flag or a static key? For me it looked like just doing an unconditional
> > `inc` for a per-cpu variable is better vs. doing a check then `inc` or an
> > unconditional jump.
>
> Sorry I didn't make it clear that I was worried about the allocated
> per-cpu memory. Previous I thought the per-cpu memory is limited, but
> after did some experiments I found it was almost the same as kmalloc()
> which could use all available memory to fulfill the allocation request.
> For a host with 72-cpus, the memory overhead for 10k hash map is about
> ~6MB. The overhead is tiny compared with the total available memory, but
> it is avoidable.
So, in my first patch I only added new counters for preallocated maps. But
then the feedback was that we need generic percpu inc/dec counters, so I
added them by default. For me a percpu s64 looks cheap enough for a hash map...
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps
2023-07-06 12:25 ` Anton Protopopov
@ 2023-07-06 12:30 ` Hou Tao
0 siblings, 0 replies; 20+ messages in thread
From: Hou Tao @ 2023-07-06 12:30 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 7/6/2023 8:25 PM, Anton Protopopov wrote:
> On Thu, Jul 06, 2023 at 10:01:26AM +0800, Hou Tao wrote:
>> Hi,
>>
>> On 7/4/2023 10:34 PM, Anton Protopopov wrote:
>>> On Tue, Jul 04, 2023 at 09:56:36PM +0800, Hou Tao wrote:
SNIP
>>>> by introducing a new map flag ?
>>> Per-map-flag or a static key? For me it looked like just doing an unconditional
>>> `inc` for a per-cpu variable is better vs. doing a check then `inc` or an
>>> unconditional jump.
>> Sorry I didn't make it clear that I was worried about the allocated
>> per-cpu memory. Previous I thought the per-cpu memory is limited, but
>> after did some experiments I found it was almost the same as kmalloc()
>> which could use all available memory to fulfill the allocation request.
>> For a host with 72-cpus, the memory overhead for 10k hash map is about
>> ~6MB. The overhead is tiny compared with the total available memory, but
>> it is avoidable.
> So, in my first patch I've only added new counters for preallocated maps. But
> then the feedback was that we need a generic percpu inc/dec counters, so I
> added them by default. For me a percpu s64 looks cheap enough for a hash map...
Thanks for the explanation. Let's just allocate the per-cpu elem_count
in the hash map. If there are use cases that need to make it optional, we
can revise that later.
^ permalink raw reply [flat|nested] 20+ messages in thread
* [v3 PATCH bpf-next 4/6] bpf: make preloaded map iterators to display map elements count
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
` (2 preceding siblings ...)
2023-06-30 8:25 ` [v3 PATCH bpf-next 3/6] bpf: populate the per-cpu insertions/deletions counters for hashmaps Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats Anton Protopopov
2023-06-30 8:25 ` [v3 PATCH bpf-next 6/6] selftests/bpf: check that ->elem_count is non-zero for the hash map Anton Protopopov
5 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Add another column to the /sys/fs/bpf/maps.debug iterator to display
cur_entries, the current number of entries in the map as returned
by the bpf_map_sum_elem_count kfunc. Also fix formatting.
Example:
# cat /sys/fs/bpf/maps.debug
id name max_entries cur_entries
2 iterator.rodata 1 0
125 cilium_auth_map 524288 666
126 cilium_runtime_ 256 0
127 cilium_signals 32 0
128 cilium_node_map 16384 1344
129 cilium_events 32 0
...
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
kernel/bpf/preload/iterators/iterators.bpf.c | 9 +-
.../iterators/iterators.lskel-little-endian.h | 526 +++++++++---------
2 files changed, 275 insertions(+), 260 deletions(-)
diff --git a/kernel/bpf/preload/iterators/iterators.bpf.c b/kernel/bpf/preload/iterators/iterators.bpf.c
index 03af863314ea..b78968b63fab 100644
--- a/kernel/bpf/preload/iterators/iterators.bpf.c
+++ b/kernel/bpf/preload/iterators/iterators.bpf.c
@@ -73,6 +73,8 @@ static const char *get_name(struct btf *btf, long btf_id, const char *fallback)
return str + name_off;
}
+__s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym;
+
SEC("iter/bpf_map")
int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
{
@@ -84,9 +86,12 @@ int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
return 0;
if (seq_num == 0)
- BPF_SEQ_PRINTF(seq, " id name max_entries\n");
+ BPF_SEQ_PRINTF(seq, " id name max_entries cur_entries\n");
+
+ BPF_SEQ_PRINTF(seq, "%4u %-16s %10d %10lld\n",
+ map->id, map->name, map->max_entries,
+ bpf_map_sum_elem_count(map));
- BPF_SEQ_PRINTF(seq, "%4u %-16s%6d\n", map->id, map->name, map->max_entries);
return 0;
}
diff --git a/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h b/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h
index 70f236a82fe1..5b98ab02025e 100644
--- a/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h
+++ b/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
-/* THIS FILE IS AUTOGENERATED! */
+/* THIS FILE IS AUTOGENERATED BY BPFTOOL! */
#ifndef __ITERATORS_BPF_SKEL_H__
#define __ITERATORS_BPF_SKEL_H__
@@ -18,8 +18,6 @@ struct iterators_bpf {
int dump_bpf_map_fd;
int dump_bpf_prog_fd;
} links;
- struct iterators_bpf__rodata {
- } *rodata;
};
static inline int
@@ -68,7 +66,6 @@ iterators_bpf__destroy(struct iterators_bpf *skel)
iterators_bpf__detach(skel);
skel_closenz(skel->progs.dump_bpf_map.prog_fd);
skel_closenz(skel->progs.dump_bpf_prog.prog_fd);
- skel_free_map_data(skel->rodata, skel->maps.rodata.initial_value, 4096);
skel_closenz(skel->maps.rodata.map_fd);
skel_free(skel);
}
@@ -81,15 +78,6 @@ iterators_bpf__open(void)
if (!skel)
goto cleanup;
skel->ctx.sz = (void *)&skel->links - (void *)skel;
- skel->rodata = skel_prep_map_data((void *)"\
-\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
-\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\
-\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\
-\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\
-\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x0a\0", 4096, 98);
- if (!skel->rodata)
- goto cleanup;
- skel->maps.rodata.initial_value = (__u64) (long) skel->rodata;
return skel;
cleanup:
iterators_bpf__destroy(skel);
@@ -103,7 +91,7 @@ iterators_bpf__load(struct iterators_bpf *skel)
int err;
opts.ctx = (struct bpf_loader_ctx *)skel;
- opts.data_sz = 6056;
+ opts.data_sz = 6208;
opts.data = (void *)"\
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
@@ -138,190 +126,197 @@ iterators_bpf__load(struct iterators_bpf *skel)
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x9f\xeb\x01\0\
-\x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\xf9\x04\0\0\0\0\0\0\0\0\0\x02\x02\0\
+\x18\0\0\0\0\0\0\0\x80\x04\0\0\x80\x04\0\0\x31\x05\0\0\0\0\0\0\0\0\0\x02\x02\0\
\0\0\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\
\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\
\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\
-\0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\
-\0\0\x09\0\0\0\0\0\0\0\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\
-\0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\
-\x08\x0c\0\0\0\xd7\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\x94\x01\0\0\x03\0\0\x04\
-\x18\0\0\0\x9c\x01\0\0\x0e\0\0\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xa4\
-\x01\0\0\x0e\0\0\0\xa0\0\0\0\xb0\x01\0\0\0\0\0\x08\x0f\0\0\0\xb6\x01\0\0\0\0\0\
-\x01\x04\0\0\0\x20\0\0\0\xc3\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\
-\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xc8\x01\0\0\0\0\0\x01\x04\0\0\0\
-\x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x2c\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\
-\0\0\x03\0\0\0\0\0\0\0\x3f\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x18\0\
-\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x44\x02\0\0\x01\0\0\x0c\
-\x16\0\0\0\x90\x02\0\0\x01\0\0\x04\x08\0\0\0\x99\x02\0\0\x19\0\0\0\0\0\0\0\0\0\
-\0\0\0\0\0\x02\x1a\0\0\0\xea\x02\0\0\x06\0\0\x04\x38\0\0\0\x9c\x01\0\0\x0e\0\0\
-\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xf7\x02\0\0\x1b\0\0\0\xc0\0\0\0\x08\
-\x03\0\0\x15\0\0\0\0\x01\0\0\x11\x03\0\0\x1d\0\0\0\x40\x01\0\0\x1b\x03\0\0\x1e\
-\0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x1c\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\0\0\0\0\
-\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x65\x03\0\0\x02\0\0\x04\
-\x08\0\0\0\x73\x03\0\0\x0e\0\0\0\0\0\0\0\x7c\x03\0\0\x0e\0\0\0\x20\0\0\0\x1b\
-\x03\0\0\x03\0\0\x04\x18\0\0\0\x86\x03\0\0\x1b\0\0\0\0\0\0\0\x8e\x03\0\0\x21\0\
-\0\0\x40\0\0\0\x94\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\0\0\0\
-\0\0\0\0\0\x02\x24\0\0\0\x98\x03\0\0\x01\0\0\x04\x04\0\0\0\xa3\x03\0\0\x0e\0\0\
-\0\0\0\0\0\x0c\x04\0\0\x01\0\0\x04\x04\0\0\0\x15\x04\0\0\x0e\0\0\0\0\0\0\0\0\0\
-\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x8b\x04\0\0\0\0\0\x0e\x25\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\x9f\x04\
-\0\0\0\0\0\x0e\x27\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\
-\x20\0\0\0\xb5\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\
-\x1c\0\0\0\x12\0\0\0\x11\0\0\0\xca\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\
-\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xe1\x04\0\0\0\0\0\x0e\x2d\0\0\
-\0\x01\0\0\0\xe9\x04\0\0\x04\0\0\x0f\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\x28\
-\0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\0\0\
-\x11\0\0\0\xf1\x04\0\0\x01\0\0\x0f\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\x62\
-\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\
-\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\x70\
-\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x30\
-\x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\x6b\x65\x72\x6e\x65\
-\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\
-\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\
-\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\
-\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\
-\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\
-\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\
-\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\x69\x67\x6e\
-\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\x09\x73\x74\
-\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\
-\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\
-\x29\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\
-\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x30\
-\x3a\x32\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\
-\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\
-\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\
-\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\
-\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\
-\0\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\
-\x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\
-\x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\
-\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\
-\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\
-\x2d\x3e\x69\x64\x2c\x20\x6d\x61\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\
-\x70\x2d\x3e\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x29\x3b\0\x7d\0\x62\
-\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\
-\x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\
-\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\
-\x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\
-\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\
-\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\x78\0\x09\x61\x75\x78\x20\x3d\x20\
-\x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\
-\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\
-\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\
-\x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\
-\x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\
-\x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\
-\x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\
-\x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\
-\x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\
-\x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\
-\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\
-\x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\
-\0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\
-\x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\
-\x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\
-\x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\
-\x29\x2c\x20\x74\x79\x70\x65\x73\x20\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\
-\x09\x73\x74\x72\x20\x3d\x20\x62\x74\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\
-\x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\
-\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\
-\x5f\x52\x45\x41\x44\x28\x74\x2c\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\
-\x30\x3a\x32\x3a\x30\0\x09\x69\x66\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\
-\x3e\x3d\x20\x62\x74\x66\x2d\x3e\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\
-\x29\0\x09\x72\x65\x74\x75\x72\x6e\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\
-\x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\
-\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\
-\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\
-\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\
-\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\
-\x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x2d\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\
-\0\x04\0\0\0\x62\0\0\0\x01\0\0\0\x80\x04\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\
-\x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\
-\x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\
-\x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\
-\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
-\x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\
-\x25\x73\x20\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\
-\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1b\0\0\
-\0\0\0\x79\x11\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\
-\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\
-\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\
-\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\
-\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\
-\x7b\x1a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\
-\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\xb7\x03\0\0\x0e\0\0\0\
-\xb7\x05\0\0\x18\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\
-\0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x3c\x01\0\x01\0\0\0\x42\0\0\
-\0\x7b\0\0\0\x24\x3c\x01\0\x02\0\0\0\x42\0\0\0\xee\0\0\0\x1d\x44\x01\0\x03\0\0\
-\0\x42\0\0\0\x0f\x01\0\0\x06\x4c\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\x40\
-\x01\0\x05\0\0\0\x42\0\0\0\x1a\x01\0\0\x1d\x40\x01\0\x06\0\0\0\x42\0\0\0\x43\
-\x01\0\0\x06\x58\x01\0\x08\0\0\0\x42\0\0\0\x56\x01\0\0\x03\x5c\x01\0\x0f\0\0\0\
-\x42\0\0\0\xdc\x01\0\0\x02\x64\x01\0\x1f\0\0\0\x42\0\0\0\x2a\x02\0\0\x01\x6c\
-\x01\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\
-\0\x10\0\0\0\x02\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\
-\x28\0\0\0\x08\0\0\0\x3f\x01\0\0\0\0\0\0\x78\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\
-\x88\0\0\0\x0d\0\0\0\xea\0\0\0\0\0\0\0\xa8\0\0\0\x0d\0\0\0\x3f\x01\0\0\0\0\0\0\
-\x1a\0\0\0\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\
-\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\
-\0\0\0\0\0\x0a\0\0\0\x01\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
-\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x6d\
-\x61\x70\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\
-\0\0\0\0\x79\x12\x08\0\0\0\0\0\x15\x02\x3c\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\
-\x27\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\
-\0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\
-\x31\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\
-\x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\
-\x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\
-\x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\
-\0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\
-\0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\
-\xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\
-\0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\
-\xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\
-\xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\
-\xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\
-\xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\
-\x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\
-\xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x51\0\0\0\xb7\x03\0\0\x11\0\0\0\
-\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\
-\0\0\0\0\x17\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x80\x01\0\x01\0\0\0\x42\0\0\
-\0\x7b\0\0\0\x24\x80\x01\0\x02\0\0\0\x42\0\0\0\x60\x02\0\0\x1f\x88\x01\0\x03\0\
-\0\0\x42\0\0\0\x84\x02\0\0\x06\x94\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\
-\x84\x01\0\x05\0\0\0\x42\0\0\0\x9d\x02\0\0\x0e\xa0\x01\0\x06\0\0\0\x42\0\0\0\
-\x1a\x01\0\0\x1d\x84\x01\0\x07\0\0\0\x42\0\0\0\x43\x01\0\0\x06\xa4\x01\0\x09\0\
-\0\0\x42\0\0\0\xaf\x02\0\0\x03\xa8\x01\0\x11\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\
-\xb0\x01\0\x18\0\0\0\x42\0\0\0\x5a\x03\0\0\x06\x04\x01\0\x1b\0\0\0\x42\0\0\0\0\
-\0\0\0\0\0\0\0\x1c\0\0\0\x42\0\0\0\xab\x03\0\0\x0f\x10\x01\0\x1d\0\0\0\x42\0\0\
-\0\xc0\x03\0\0\x2d\x14\x01\0\x1f\0\0\0\x42\0\0\0\xf7\x03\0\0\x0d\x0c\x01\0\x21\
-\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x22\0\0\0\x42\0\0\0\xc0\x03\0\0\x02\x14\x01\0\
-\x25\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x28\0\0\0\x42\0\0\0\0\0\0\0\0\0\
-\0\0\x29\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x1e\x04\
-\0\0\x0d\x18\x01\0\x2d\0\0\0\x42\0\0\0\x4c\x04\0\0\x1b\x1c\x01\0\x2e\0\0\0\x42\
-\0\0\0\x4c\x04\0\0\x06\x1c\x01\0\x2f\0\0\0\x42\0\0\0\x6f\x04\0\0\x0d\x24\x01\0\
-\x31\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\xb0\x01\0\x40\0\0\0\x42\0\0\0\x2a\x02\0\0\
-\x01\xc0\x01\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\
-\0\0\0\0\0\x10\0\0\0\x14\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x14\0\0\0\x3e\0\0\0\
-\0\0\0\0\x28\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x30\0\0\0\x08\0\0\0\x3f\x01\0\0\
-\0\0\0\0\x88\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\x98\0\0\0\x1a\0\0\0\xea\0\0\0\0\
-\0\0\0\xb0\0\0\0\x1a\0\0\0\x52\x03\0\0\0\0\0\0\xb8\0\0\0\x1a\0\0\0\x56\x03\0\0\
-\0\0\0\0\xc8\0\0\0\x1f\0\0\0\x84\x03\0\0\0\0\0\0\xe0\0\0\0\x20\0\0\0\xea\0\0\0\
-\0\0\0\0\xf8\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\0\0\x20\x01\0\0\x24\0\0\0\x3e\0\0\0\
-\0\0\0\0\x58\x01\0\0\x1a\0\0\0\xea\0\0\0\0\0\0\0\x68\x01\0\0\x20\0\0\0\x46\x04\
-\0\0\0\0\0\0\x90\x01\0\0\x1a\0\0\0\x3f\x01\0\0\0\0\0\0\xa0\x01\0\0\x1a\0\0\0\
-\x87\x04\0\0\0\0\0\0\xa8\x01\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x42\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
-\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\
-\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x1a\0\
-\0\0\x01\0\0\0\0\0\0\0\x13\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\
-\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\
-\0\0\0\0";
- opts.insns_sz = 2216;
+\0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xb0\0\0\0\x03\0\0\x04\x18\0\0\0\xbe\0\
+\0\0\x09\0\0\0\0\0\0\0\xc2\0\0\0\x0b\0\0\0\x40\0\0\0\xcd\0\0\0\x0b\0\0\0\x80\0\
+\0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xd5\0\0\0\0\0\0\x07\0\0\0\0\xde\0\0\0\0\0\0\
+\x08\x0c\0\0\0\xe4\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\xae\x01\0\0\x03\0\0\x04\
+\x18\0\0\0\xb6\x01\0\0\x0e\0\0\0\0\0\0\0\xb9\x01\0\0\x11\0\0\0\x20\0\0\0\xbe\
+\x01\0\0\x0e\0\0\0\xa0\0\0\0\xca\x01\0\0\0\0\0\x08\x0f\0\0\0\xd0\x01\0\0\0\0\0\
+\x01\x04\0\0\0\x20\0\0\0\xdd\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\
+\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xe2\x01\0\0\0\0\0\x01\x04\0\0\0\
+\x20\0\0\0\0\0\0\0\x01\0\0\x0d\x14\0\0\0\x26\x05\0\0\x04\0\0\0\x2b\x02\0\0\0\0\
+\0\x08\x15\0\0\0\x31\x02\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\x01\x3b\x02\0\0\x01\0\
+\0\x0c\x13\0\0\0\0\0\0\0\0\0\0\x02\x18\0\0\0\x52\x02\0\0\x02\0\0\x04\x10\0\0\0\
+\x13\0\0\0\x03\0\0\0\0\0\0\0\x65\x02\0\0\x19\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\
+\x1c\0\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x17\0\0\0\x6a\x02\0\0\x01\0\
+\0\x0c\x1a\0\0\0\xb6\x02\0\0\x01\0\0\x04\x08\0\0\0\xbf\x02\0\0\x1d\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\x02\x1e\0\0\0\x10\x03\0\0\x06\0\0\x04\x38\0\0\0\xb6\x01\0\0\
+\x0e\0\0\0\0\0\0\0\xb9\x01\0\0\x11\0\0\0\x20\0\0\0\x1d\x03\0\0\x1f\0\0\0\xc0\0\
+\0\0\x2e\x03\0\0\x19\0\0\0\0\x01\0\0\x37\x03\0\0\x21\0\0\0\x40\x01\0\0\x41\x03\
+\0\0\x22\0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\
+\0\0\0\0\0\0\0\0\0\x02\x23\0\0\0\0\0\0\0\0\0\0\x02\x24\0\0\0\x8b\x03\0\0\x02\0\
+\0\x04\x08\0\0\0\x99\x03\0\0\x0e\0\0\0\0\0\0\0\xa2\x03\0\0\x0e\0\0\0\x20\0\0\0\
+\x41\x03\0\0\x03\0\0\x04\x18\0\0\0\xac\x03\0\0\x1f\0\0\0\0\0\0\0\xb4\x03\0\0\
+\x25\0\0\0\x40\0\0\0\xba\x03\0\0\x27\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x26\0\0\
+\0\0\0\0\0\0\0\0\x02\x28\0\0\0\xbe\x03\0\0\x01\0\0\x04\x04\0\0\0\xc9\x03\0\0\
+\x0e\0\0\0\0\0\0\0\x32\x04\0\0\x01\0\0\x04\x04\0\0\0\x3b\x04\0\0\x0e\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\x12\0\0\0\x30\0\0\0\xb1\x04\0\0\0\0\0\
+\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\x12\0\0\0\x1a\0\0\0\
+\xc5\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x20\0\0\0\
+\x12\0\0\0\x20\0\0\0\xdb\x04\0\0\0\0\0\x0e\x2d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\
+\0\0\0\0\x20\0\0\0\x12\0\0\0\x11\0\0\0\xf0\x04\0\0\0\0\0\x0e\x2f\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\x07\x05\0\0\0\0\0\x0e\
+\x31\0\0\0\x01\0\0\0\x0f\x05\0\0\x01\0\0\x0f\x04\0\0\0\x36\0\0\0\0\0\0\0\x04\0\
+\0\0\x16\x05\0\0\x04\0\0\x0f\x7b\0\0\0\x2a\0\0\0\0\0\0\0\x30\0\0\0\x2c\0\0\0\
+\x30\0\0\0\x1a\0\0\0\x2e\0\0\0\x4a\0\0\0\x20\0\0\0\x30\0\0\0\x6a\0\0\0\x11\0\0\
+\0\x1e\x05\0\0\x01\0\0\x0f\x04\0\0\0\x32\0\0\0\0\0\0\0\x04\0\0\0\x26\x05\0\0\0\
+\0\0\x0e\x06\0\0\0\x01\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\
+\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\
+\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\
+\x62\x70\x66\x5f\x6d\x61\x70\0\x30\x3a\x30\0\x2f\x68\x6f\x6d\x65\x2f\x61\x73\
+\x70\x73\x6b\x2f\x73\x72\x63\x2f\x62\x70\x66\x2d\x6e\x65\x78\x74\x2f\x6b\x65\
+\x72\x6e\x65\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\
+\x65\x72\x61\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\
+\x70\x66\x2e\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\
+\x65\x20\x2a\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\
+\x3e\x73\x65\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\
+\x73\x65\x71\0\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\
+\x75\x6d\0\x73\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\
+\x69\x67\x6e\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\
+\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\
+\x70\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\
+\x6d\x61\x70\x29\0\x30\x3a\x32\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\
+\x6e\x75\x6d\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\
+\x71\x5f\x6e\x75\x6d\x3b\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\
+\x3d\x3d\x20\x30\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\
+\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\
+\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\
+\x72\x69\x65\x73\x20\x20\x63\x75\x72\x5f\x65\x6e\x74\x72\x69\x65\x73\x5c\x6e\
+\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\0\x6d\
+\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\x73\
+\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\x52\
+\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\x5f\
+\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\
+\x75\x20\x25\x2d\x31\x36\x73\x20\x20\x25\x31\x30\x64\x20\x20\x20\x25\x31\x30\
+\x6c\x6c\x64\x5c\x6e\x22\x2c\0\x7d\0\x5f\x5f\x73\x36\x34\0\x6c\x6f\x6e\x67\x20\
+\x6c\x6f\x6e\x67\0\x62\x70\x66\x5f\x6d\x61\x70\x5f\x73\x75\x6d\x5f\x65\x6c\x65\
+\x6d\x5f\x63\x6f\x75\x6e\x74\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\
+\x66\x5f\x70\x72\x6f\x67\0\x70\x72\x6f\x67\0\x64\x75\x6d\x70\x5f\x62\x70\x66\
+\x5f\x70\x72\x6f\x67\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\
+\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x70\x72\x6f\x67\x20\x2a\x70\
+\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\x69\x66\
+\x20\x28\x21\x70\x72\x6f\x67\x29\0\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x61\x75\
+\x78\0\x09\x61\x75\x78\x20\x3d\x20\x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\x3b\0\
+\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\
+\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\
+\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\x64\x5c\x6e\x22\x29\
+\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x61\x75\x78\0\x61\x74\x74\x61\x63\
+\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\x65\0\x64\x73\x74\x5f\x70\x72\x6f\x67\
+\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x62\x74\x66\0\x09\x42\x50\x46\x5f\x53\
+\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\
+\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\
+\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\x30\x3a\x35\0\x09\x69\x66\x20\x28\x21\
+\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x69\
+\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\x70\x65\x5f\x69\x64\0\x30\0\x73\x74\x72\
+\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\0\x68\x64\x72\0\x62\x74\x66\x5f\x68\x65\
+\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\x65\x6e\0\x09\x74\x79\x70\x65\x73\x20\
+\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\x65\x73\x3b\0\x09\x62\x70\x66\x5f\x70\
+\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\x6b\x65\x72\x6e\x65\x6c\x28\x26\x74\
+\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\x29\x2c\x20\x74\x79\x70\x65\x73\x20\
+\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\x09\x73\x74\x72\x20\x3d\x20\x62\x74\
+\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\x3b\0\x62\x74\x66\x5f\x74\x79\x70\x65\
+\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\
+\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\x5f\x52\x45\x41\x44\x28\x74\x2c\x20\
+\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\x30\x3a\x32\x3a\x30\0\x09\x69\x66\
+\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3e\x3d\x20\x62\x74\x66\x2d\x3e\
+\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\x29\0\x09\x72\x65\x74\x75\x72\x6e\
+\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x3b\0\x30\x3a\x33\
+\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\
+\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\
+\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\
+\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\
+\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\x4e\x53\x45\0\x2e\x6b\x73\x79\x6d\
+\x73\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\x64\x75\x6d\
+\x6d\x79\x5f\x6b\x73\x79\x6d\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\xc9\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\0\x04\0\0\0\x7b\0\0\0\x01\0\0\0\
+\x80\0\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\
+\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x34\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\
+\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
+\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x20\x20\x63\x75\x72\x5f\x65\
+\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x20\x25\
+\x31\x30\x64\x20\x20\x20\x25\x31\x30\x6c\x6c\x64\x0a\0\x20\x20\x69\x64\x20\x6e\
+\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\
+\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\
+\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\
+\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1d\0\0\0\0\0\x79\x21\
+\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe0\xff\
+\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\
+\x30\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\
+\xe0\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\
+\x7b\x2a\xe8\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\x7b\x1a\xf0\xff\0\0\0\0\xbf\x71\
+\0\0\0\0\0\0\x85\x20\0\0\0\0\0\0\x7b\x0a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\
+\x07\x04\0\0\xe0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\
+\x30\0\0\0\xb7\x03\0\0\x1a\0\0\0\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\
+\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x88\0\0\0\
+\x1e\x44\x01\0\x01\0\0\0\x42\0\0\0\x88\0\0\0\x24\x44\x01\0\x02\0\0\0\x42\0\0\0\
+\xfb\0\0\0\x1d\x4c\x01\0\x03\0\0\0\x42\0\0\0\x1c\x01\0\0\x06\x54\x01\0\x04\0\0\
+\0\x42\0\0\0\x2b\x01\0\0\x1d\x48\x01\0\x05\0\0\0\x42\0\0\0\x50\x01\0\0\x06\x60\
+\x01\0\x07\0\0\0\x42\0\0\0\x63\x01\0\0\x03\x64\x01\0\x0e\0\0\0\x42\0\0\0\xf6\
+\x01\0\0\x02\x6c\x01\0\x21\0\0\0\x42\0\0\0\x29\x02\0\0\x01\x80\x01\0\0\0\0\0\
+\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\
+\x02\0\0\0\xf7\0\0\0\0\0\0\0\x20\0\0\0\x08\0\0\0\x27\x01\0\0\0\0\0\0\x70\0\0\0\
+\x0d\0\0\0\x3e\0\0\0\0\0\0\0\x80\0\0\0\x0d\0\0\0\xf7\0\0\0\0\0\0\0\xa0\0\0\0\
+\x0d\0\0\0\x27\x01\0\0\0\0\0\0\x1a\0\0\0\x23\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\
+\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\
+\x65\x72\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\x62\x70\x66\x5f\x6d\
+\x61\x70\x5f\x73\x75\x6d\x5f\x65\x6c\x65\x6d\x5f\x63\x6f\x75\x6e\x74\0\0\x47\
+\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x11\x08\0\0\0\0\
+\0\x15\x01\x3b\0\0\0\0\0\x79\x17\0\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\
+\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\
+\x18\x62\0\0\0\0\0\0\0\0\0\0\x4a\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\
+\0\x85\0\0\0\x7e\0\0\0\x7b\x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\
+\xff\0\0\0\0\xb7\x03\0\0\x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\
+\x71\x28\0\0\0\0\0\x79\x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\
+\0\0\x0f\x21\0\0\0\0\0\0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\
+\0\x03\0\0\0\x0f\x13\0\0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\
+\x01\0\0\xf8\xff\xff\xff\xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\
+\0\0\0\0\x79\xa3\xf8\xff\0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\
+\x01\0\0\xf4\xff\xff\xff\xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\
+\x04\0\0\0\x61\xa1\xf4\xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\
+\x0f\x16\0\0\0\0\0\0\xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\
+\0\0\0\0\x7b\x1a\xe0\xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\
+\x31\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\
+\xff\xff\xff\x79\xa1\xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x6a\0\0\0\xb7\
+\x03\0\0\x11\0\0\0\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\
+\x95\0\0\0\0\0\0\0\0\0\0\0\x1b\0\0\0\0\0\0\0\x42\0\0\0\x88\0\0\0\x1e\x94\x01\0\
+\x01\0\0\0\x42\0\0\0\x88\0\0\0\x24\x94\x01\0\x02\0\0\0\x42\0\0\0\x86\x02\0\0\
+\x1f\x9c\x01\0\x03\0\0\0\x42\0\0\0\xaa\x02\0\0\x06\xa8\x01\0\x04\0\0\0\x42\0\0\
+\0\xc3\x02\0\0\x0e\xb4\x01\0\x05\0\0\0\x42\0\0\0\x2b\x01\0\0\x1d\x98\x01\0\x06\
+\0\0\0\x42\0\0\0\x50\x01\0\0\x06\xb8\x01\0\x08\0\0\0\x42\0\0\0\xd5\x02\0\0\x03\
+\xbc\x01\0\x10\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x17\0\0\0\x42\0\0\0\
+\x80\x03\0\0\x06\x04\x01\0\x1a\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x1b\0\
+\0\0\x42\0\0\0\xd1\x03\0\0\x0f\x10\x01\0\x1c\0\0\0\x42\0\0\0\xe6\x03\0\0\x2d\
+\x14\x01\0\x1e\0\0\0\x42\0\0\0\x1d\x04\0\0\x0d\x0c\x01\0\x20\0\0\0\x42\0\0\0\
+\x45\x03\0\0\x02\xc4\x01\0\x21\0\0\0\x42\0\0\0\xe6\x03\0\0\x02\x14\x01\0\x24\0\
+\0\0\x42\0\0\0\x44\x04\0\0\x0d\x18\x01\0\x27\0\0\0\x42\0\0\0\x45\x03\0\0\x02\
+\xc4\x01\0\x28\0\0\0\x42\0\0\0\x44\x04\0\0\x0d\x18\x01\0\x2b\0\0\0\x42\0\0\0\
+\x44\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x72\x04\0\0\x1b\x1c\x01\0\x2d\0\
+\0\0\x42\0\0\0\x72\x04\0\0\x06\x1c\x01\0\x2e\0\0\0\x42\0\0\0\x95\x04\0\0\x0d\
+\x24\x01\0\x30\0\0\0\x42\0\0\0\x45\x03\0\0\x02\xc4\x01\0\x3f\0\0\0\x42\0\0\0\
+\x29\x02\0\0\x01\xd4\x01\0\0\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\
+\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x18\0\0\0\xf7\0\0\0\0\0\0\0\x20\0\0\0\x1c\0\0\
+\0\x3e\0\0\0\0\0\0\0\x28\0\0\0\x08\0\0\0\x27\x01\0\0\0\0\0\0\x80\0\0\0\x1e\0\0\
+\0\x3e\0\0\0\0\0\0\0\x90\0\0\0\x1e\0\0\0\xf7\0\0\0\0\0\0\0\xa8\0\0\0\x1e\0\0\0\
+\x78\x03\0\0\0\0\0\0\xb0\0\0\0\x1e\0\0\0\x7c\x03\0\0\0\0\0\0\xc0\0\0\0\x23\0\0\
+\0\xaa\x03\0\0\0\0\0\0\xd8\0\0\0\x24\0\0\0\xf7\0\0\0\0\0\0\0\xf0\0\0\0\x24\0\0\
+\0\x3e\0\0\0\0\0\0\0\x18\x01\0\0\x28\0\0\0\x3e\0\0\0\0\0\0\0\x50\x01\0\0\x1e\0\
+\0\0\xf7\0\0\0\0\0\0\0\x60\x01\0\0\x24\0\0\0\x6c\x04\0\0\0\0\0\0\x88\x01\0\0\
+\x1e\0\0\0\x27\x01\0\0\0\0\0\0\x98\x01\0\0\x1e\0\0\0\xad\x04\0\0\0\0\0\0\xa0\
+\x01\0\0\x1c\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x41\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\
+\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\
+\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x19\0\0\0\x01\0\0\0\0\0\0\0\
+\x12\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\
+\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0";
+ opts.insns_sz = 2456;
opts.insns = (void *)"\
\xbf\x16\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\x78\xff\xff\xff\xb7\x02\0\
\0\x88\0\0\0\xb7\x03\0\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x05\0\x14\0\0\0\0\0\x61\
@@ -331,79 +326,83 @@ iterators_bpf__load(struct iterators_bpf *skel)
\0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\
\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xbf\x70\0\0\
\0\0\0\0\x95\0\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\
-\x48\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\
-\0\0\x44\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\
-\0\0\0\0\x38\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\
-\x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\
-\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\
+\xe8\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\
+\0\0\xe4\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\
+\0\0\0\0\xd8\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\
+\x18\x61\0\0\0\0\0\0\0\0\0\0\xd0\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\
+\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xd0\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\
\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xd4\xff\0\0\0\0\x63\x7a\x78\xff\0\0\0\0\
-\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x0e\0\0\x63\x01\0\0\0\
+\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x0f\0\0\x63\x01\0\0\0\
\0\0\0\x61\x60\x1c\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\
-\x5c\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\
-\0\x50\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\
+\xfc\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\
+\0\xf0\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\
\xc5\x07\xc3\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x63\x71\0\0\0\0\0\
-\0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\
-\x0e\0\0\xb7\x02\0\0\x62\0\0\0\x61\x60\x04\0\0\0\0\0\x45\0\x02\0\x01\0\0\0\x85\
+\0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\
+\x0f\0\0\xb7\x02\0\0\x7b\0\0\0\x61\x60\x04\0\0\0\0\0\x45\0\x02\0\x01\0\0\0\x85\
\0\0\0\x94\0\0\0\x05\0\x01\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x18\x62\0\0\0\0\0\0\0\
-\0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\x63\
-\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\
-\0\0\x10\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x0e\0\0\
-\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\
-\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\
+\0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc0\x0f\0\0\x63\
+\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xb8\x0f\0\0\x18\x61\0\0\0\0\0\0\0\
+\0\0\0\xc8\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\
+\x18\x61\0\0\0\0\0\0\0\0\0\0\xd0\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\
+\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xc0\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\
\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x9f\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\
-\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\x63\
-\x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\
+\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x0f\0\0\x63\
+\x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\xe0\x0f\0\0\
\xb7\x03\0\0\x04\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x92\xff\
-\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\
-\x78\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\x18\
-\x61\0\0\0\0\0\0\0\0\0\0\x70\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\
-\0\0\0\x40\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x11\0\0\x7b\x01\0\0\0\0\0\0\
-\x18\x60\0\0\0\0\0\0\0\0\0\0\x48\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x11\0\
-\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x10\0\0\x18\x61\0\0\0\0\
-\0\0\0\0\0\0\xe8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\
-\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\
-\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x11\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\
-\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x84\x11\0\0\x63\x01\0\0\0\0\0\0\x79\x60\
-\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\
-\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x11\0\0\x63\x01\0\0\0\0\0\
-\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf8\x11\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\
+\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\
+\x20\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xf0\x0f\0\0\x18\
+\x61\0\0\0\0\0\0\0\0\0\0\x18\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\
+\0\0\0\x08\x11\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x60\x12\0\0\x7b\x01\0\0\0\0\0\0\
+\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x11\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x70\x12\0\
+\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xa0\x11\0\0\x18\x61\0\0\0\0\
+\0\0\0\0\0\0\x90\x12\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x12\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\
+\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x12\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\
+\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x2c\x12\0\0\x63\x01\0\0\0\0\0\0\x79\x60\
+\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x12\0\0\x7b\x01\0\0\0\0\0\0\x61\
+\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x58\x12\0\0\x63\x01\0\0\0\0\0\
+\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x12\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\
\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\
-\x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x68\x11\0\0\x63\x70\x6c\0\0\0\0\0\
-\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\
-\0\0\0\0\0\0\0\0\x68\x11\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\
-\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd8\x11\0\0\x61\x01\0\0\0\0\0\0\xd5\
-\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x4a\xff\0\0\
-\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x18\x61\0\
-\0\0\0\0\0\0\0\0\0\x10\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\
-\x18\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\
-\x60\0\0\0\0\0\0\0\0\0\0\x28\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x50\x17\0\0\
-\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x14\0\0\x18\x61\0\0\0\0\0\
-\0\0\0\0\0\x60\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x15\
-\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\
-\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x17\0\0\x7b\x01\0\0\0\0\
-\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x17\0\0\x63\x01\0\0\
-\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x1c\x17\0\0\x63\x01\
-\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\x7b\
-\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x48\x17\0\
-\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x17\0\0\xb7\x02\0\0\x12\
-\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\
-\0\0\0\0\0\xc5\x07\x13\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\x63\
-\x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\
-\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\
-\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x70\x17\0\0\x61\x01\
-\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\
-\x07\x01\xff\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\xd5\x01\
-\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\0\0\0\0\
-\x63\x06\x28\0\0\0\0\0\x61\xa0\x84\xff\0\0\0\0\x63\x06\x2c\0\0\0\0\0\x18\x61\0\
-\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\0\0\0\xb7\0\0\0\
-\0\0\0\0\x95\0\0\0\0\0\0\0";
+\x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x63\x70\x6c\0\0\0\0\0\
+\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\x18\x68\0\0\0\0\0\0\0\0\0\0\xa8\
+\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x12\0\0\xb7\x02\0\0\x17\0\0\0\xb7\x03\
+\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\
+\x07\x4d\xff\0\0\0\0\x75\x07\x03\0\0\0\0\0\x62\x08\x04\0\0\0\0\0\x6a\x08\x02\0\
+\0\0\0\0\x05\0\x0a\0\0\0\0\0\x63\x78\x04\0\0\0\0\0\xbf\x79\0\0\0\0\0\0\x77\x09\
+\0\0\x20\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\x63\x90\0\0\0\0\0\0\x55\
+\x09\x02\0\0\0\0\0\x6a\x08\x02\0\0\0\0\0\x05\0\x01\0\0\0\0\0\x6a\x08\x02\0\x40\
+\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\xb7\x03\0\
+\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\
+\0\0\x01\0\0\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\
+\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x80\x12\0\0\x61\x01\0\0\0\0\0\0\
+\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x2c\xff\
+\0\0\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x12\0\0\x18\
+\x61\0\0\0\0\0\0\0\0\0\0\xa8\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\
+\0\0\0\xd8\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x17\0\0\x7b\x01\0\0\0\0\0\0\
+\x18\x60\0\0\0\0\0\0\0\0\0\0\xe0\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe8\x17\0\
+\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x14\0\0\x18\x61\0\0\0\0\
+\0\0\0\0\0\0\xf8\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x78\
+\x16\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x18\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x10\x18\0\0\x7b\x01\0\0\
+\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x17\0\0\x63\x01\
+\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb4\x17\0\0\x63\
+\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x17\0\0\
+\x7b\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\
+\x17\0\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x18\0\0\xb7\x02\0\
+\0\x12\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\
+\x07\0\0\0\0\0\0\xc5\x07\xf5\xfe\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x17\0\
+\0\x63\x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\
+\0\x05\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x98\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\
+\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x08\x18\0\0\
+\x61\x01\0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\
+\0\0\xc5\x07\xe3\xfe\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\
+\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\
+\0\0\0\0\x63\x06\x28\0\0\0\0\0\x61\xa0\x84\xff\0\0\0\0\x63\x06\x2c\0\0\0\0\0\
+\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\0\0\x63\x06\x18\0\0\0\0\0\
+\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0";
err = bpf_load_and_run(&opts);
if (err < 0)
return err;
- skel->rodata = skel_finalize_map_data(&skel->maps.rodata.initial_value,
- 4096, PROT_READ, skel->maps.rodata.map_fd);
- if (!skel->rodata)
- return -ENOMEM;
return 0;
}
@@ -422,4 +421,15 @@ iterators_bpf__open_and_load(void)
return skel;
}
+__attribute__((unused)) static void
+iterators_bpf__assert(struct iterators_bpf *s __attribute__((unused)))
+{
+#ifdef __cplusplus
+#define _Static_assert static_assert
+#endif
+#ifdef __cplusplus
+#undef _Static_assert
+#endif
+}
+
#endif /* __ITERATORS_BPF_SKEL_H__ */
--
2.34.1
* [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
` (3 preceding siblings ...)
2023-06-30 8:25 ` [v3 PATCH bpf-next 4/6] bpf: make preloaded map iterators to display map elements count Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
2023-07-04 14:41 ` Hou Tao
2023-06-30 8:25 ` [v3 PATCH bpf-next 6/6] selftests/bpf: check that ->elem_count is non-zero for the hash map Anton Protopopov
5 siblings, 1 reply; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Add a new map test, map_percpu_stats.c, which checks the correctness of the
map's percpu element counters. For supported maps the test upserts a number
of elements, checks the correctness of the counters, then deletes all the
elements and checks that the sum of the counters drops back to zero.
The following map types are tested:
* BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
* BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
* BPF_MAP_TYPE_HASH
* BPF_MAP_TYPE_PERCPU_HASH
* BPF_MAP_TYPE_LRU_HASH
* BPF_MAP_TYPE_LRU_PERCPU_HASH
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
.../bpf/map_tests/map_percpu_stats.c | 336 ++++++++++++++++++
.../selftests/bpf/progs/map_percpu_stats.c | 24 ++
2 files changed, 360 insertions(+)
create mode 100644 tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
create mode 100644 tools/testing/selftests/bpf/progs/map_percpu_stats.c
diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
new file mode 100644
index 000000000000..5b45af230368
--- /dev/null
+++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
@@ -0,0 +1,336 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Isovalent */
+
+#include <errno.h>
+#include <unistd.h>
+#include <pthread.h>
+
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+#include <bpf_util.h>
+#include <test_maps.h>
+
+#include "map_percpu_stats.skel.h"
+
+#define MAX_ENTRIES 16384
+#define N_THREADS 37
+
+#define MAX_MAP_KEY_SIZE 4
+
+static void map_info(int map_fd, struct bpf_map_info *info)
+{
+ __u32 len = sizeof(*info);
+ int ret;
+
+ memset(info, 0, sizeof(*info));
+
+ ret = bpf_obj_get_info_by_fd(map_fd, info, &len);
+ CHECK(ret < 0, "bpf_obj_get_info_by_fd", "error: %s\n", strerror(errno));
+}
+
+static const char *map_type_to_s(__u32 type)
+{
+ switch (type) {
+ case BPF_MAP_TYPE_HASH:
+ return "HASH";
+ case BPF_MAP_TYPE_PERCPU_HASH:
+ return "PERCPU_HASH";
+ case BPF_MAP_TYPE_LRU_HASH:
+ return "LRU_HASH";
+ case BPF_MAP_TYPE_LRU_PERCPU_HASH:
+ return "LRU_PERCPU_HASH";
+ default:
+ return "<define-me>";
+ }
+}
+
+/* Map i -> map-type-specific-key */
+static void *map_key(__u32 type, __u32 i)
+{
+ static __thread __u8 key[MAX_MAP_KEY_SIZE];
+
+ *(__u32 *)key = i;
+ return key;
+}
+
+static __u32 map_count_elements(__u32 type, int map_fd)
+{
+ void *key = map_key(type, -1);
+ int n = 0;
+
+ while (!bpf_map_get_next_key(map_fd, key, key))
+ n++;
+ return n;
+}
+
+static void delete_all_elements(__u32 type, int map_fd)
+{
+ void *key = map_key(type, -1);
+ void *keys;
+ int n = 0;
+ int ret;
+
+ keys = calloc(MAX_MAP_KEY_SIZE, MAX_ENTRIES);
+ CHECK(!keys, "calloc", "error: %s\n", strerror(errno));
+
+ for (; !bpf_map_get_next_key(map_fd, key, key); n++)
+ memcpy(keys + n*MAX_MAP_KEY_SIZE, key, MAX_MAP_KEY_SIZE);
+
+ while (--n >= 0) {
+ ret = bpf_map_delete_elem(map_fd, keys + n*MAX_MAP_KEY_SIZE);
+ CHECK(ret < 0, "bpf_map_delete_elem", "error: %s\n", strerror(errno));
+ }
+}
+
+static bool is_lru(__u32 map_type)
+{
+ return map_type == BPF_MAP_TYPE_LRU_HASH ||
+ map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH;
+}
+
+struct upsert_opts {
+ __u32 map_type;
+ int map_fd;
+ __u32 n;
+};
+
+static void *patch_map_thread(void *arg)
+{
+ struct upsert_opts *opts = arg;
+ void *key;
+ int val;
+ int ret;
+ int i;
+
+ for (i = 0; i < opts->n; i++) {
+ key = map_key(opts->map_type, i);
+ val = rand();
+ ret = bpf_map_update_elem(opts->map_fd, key, &val, 0);
+ CHECK(ret < 0, "bpf_map_update_elem", "error: %s\n", strerror(errno));
+ }
+ return NULL;
+}
+
+static void upsert_elements(struct upsert_opts *opts)
+{
+ pthread_t threads[N_THREADS];
+ int ret;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(threads); i++) {
+ ret = pthread_create(&i[threads], NULL, patch_map_thread, opts);
+ CHECK(ret != 0, "pthread_create", "error: %s\n", strerror(ret));
+ }
+
+ for (i = 0; i < ARRAY_SIZE(threads); i++) {
+ ret = pthread_join(i[threads], NULL);
+ CHECK(ret != 0, "pthread_join", "error: %s\n", strerror(ret));
+ }
+}
+
+static __u32 read_cur_elements(int iter_fd)
+{
+ char buf[64];
+ ssize_t n;
+ __u32 ret;
+
+ n = read(iter_fd, buf, sizeof(buf)-1);
+ CHECK(n <= 0, "read", "error: %s\n", strerror(errno));
+ buf[n] = '\0';
+
+ errno = 0;
+ ret = (__u32)strtol(buf, NULL, 10);
+ CHECK(errno != 0, "strtol", "error: %s\n", strerror(errno));
+
+ return ret;
+}
+
+static __u32 get_cur_elements(int map_id)
+{
+ LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+ union bpf_iter_link_info linfo;
+ struct map_percpu_stats *skel;
+ struct bpf_link *link;
+ int iter_fd;
+ int ret;
+
+ opts.link_info = &linfo;
+ opts.link_info_len = sizeof(linfo);
+
+ skel = map_percpu_stats__open();
+ CHECK(skel == NULL, "map_percpu_stats__open", "error: %s", strerror(errno));
+
+ skel->bss->target_id = map_id;
+
+ ret = map_percpu_stats__load(skel);
+ CHECK(ret != 0, "map_percpu_stats__load", "error: %s", strerror(errno));
+
+ link = bpf_program__attach_iter(skel->progs.dump_bpf_map, &opts);
+ CHECK(!link, "bpf_program__attach_iter", "error: %s\n", strerror(errno));
+
+ iter_fd = bpf_iter_create(bpf_link__fd(link));
+ CHECK(iter_fd < 0, "bpf_iter_create", "error: %s\n", strerror(errno));
+
+ return read_cur_elements(iter_fd);
+}
+
+static void __test(int map_fd)
+{
+ __u32 n = MAX_ENTRIES - 1000;
+ __u32 real_current_elements;
+ __u32 iter_current_elements;
+ struct upsert_opts opts = {
+ .map_fd = map_fd,
+ .n = n,
+ };
+ struct bpf_map_info info;
+
+ map_info(map_fd, &info);
+ opts.map_type = info.type;
+
+ /*
+ * Upsert keys [0, n) under some competition: with random values from
+ * N_THREADS threads
+ */
+ upsert_elements(&opts);
+
+ /*
+ * The sum of percpu elements counters for all hashtable-based maps
+ * should be equal to the number of elements present in the map. For
+ * non-lru maps this number should be the number n of upserted
+ * elements. For lru maps some elements might have been evicted. Check
+ * that all numbers make sense
+ */
+ map_info(map_fd, &info);
+ real_current_elements = map_count_elements(info.type, map_fd);
+ if (!is_lru(info.type))
+ CHECK(n != real_current_elements, "map_count_elements",
+ "real_current_elements(%u) != expected(%u)\n", real_current_elements, n);
+
+ iter_current_elements = get_cur_elements(info.id);
+ CHECK(iter_current_elements != real_current_elements, "get_cur_elements",
+ "iter_current_elements=%u, expected %u (map_type=%s,map_flags=%08x)\n",
+ iter_current_elements, real_current_elements, map_type_to_s(info.type), info.map_flags);
+
+ /*
+ * Cleanup the map and check that all elements are actually gone and
+ * that the sum of percpu elements counters is back to 0 as well
+ */
+ delete_all_elements(info.type, map_fd);
+ map_info(map_fd, &info);
+ real_current_elements = map_count_elements(info.type, map_fd);
+ CHECK(real_current_elements, "map_count_elements",
+ "expected real_current_elements=0, got %u", real_current_elements);
+
+ iter_current_elements = get_cur_elements(info.id);
+ CHECK(iter_current_elements != 0, "get_cur_elements",
+ "iter_current_elements=%u, expected 0 (map_type=%s,map_flags=%08x)\n",
+ iter_current_elements, map_type_to_s(info.type), info.map_flags);
+
+ close(map_fd);
+}
+
+static int map_create_opts(__u32 type, const char *name,
+ struct bpf_map_create_opts *map_opts,
+ __u32 key_size, __u32 val_size)
+{
+ int map_fd;
+
+ map_fd = bpf_map_create(type, name, key_size, val_size, MAX_ENTRIES, map_opts);
+ CHECK(map_fd < 0, "bpf_map_create()", "error:%s (name=%s)\n",
+ strerror(errno), name);
+
+ return map_fd;
+}
+
+static int map_create(__u32 type, const char *name, struct bpf_map_create_opts *map_opts)
+{
+ return map_create_opts(type, name, map_opts, sizeof(int), sizeof(int));
+}
+
+static int create_hash(void)
+{
+ struct bpf_map_create_opts map_opts = {
+ .sz = sizeof(map_opts),
+ .map_flags = BPF_F_NO_PREALLOC,
+ };
+
+ return map_create(BPF_MAP_TYPE_HASH, "hash", &map_opts);
+}
+
+static int create_percpu_hash(void)
+{
+ struct bpf_map_create_opts map_opts = {
+ .sz = sizeof(map_opts),
+ .map_flags = BPF_F_NO_PREALLOC,
+ };
+
+ return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash", &map_opts);
+}
+
+static int create_hash_prealloc(void)
+{
+ return map_create(BPF_MAP_TYPE_HASH, "hash", NULL);
+}
+
+static int create_percpu_hash_prealloc(void)
+{
+ return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash_prealloc", NULL);
+}
+
+static int create_lru_hash(void)
+{
+ return map_create(BPF_MAP_TYPE_LRU_HASH, "lru_hash", NULL);
+}
+
+static int create_percpu_lru_hash(void)
+{
+ return map_create(BPF_MAP_TYPE_LRU_PERCPU_HASH, "lru_hash_percpu", NULL);
+}
+
+static void map_percpu_stats_hash(void)
+{
+ __test(create_hash());
+ printf("test_%s:PASS\n", __func__);
+}
+
+static void map_percpu_stats_percpu_hash(void)
+{
+ __test(create_percpu_hash());
+ printf("test_%s:PASS\n", __func__);
+}
+
+static void map_percpu_stats_hash_prealloc(void)
+{
+ __test(create_hash_prealloc());
+ printf("test_%s:PASS\n", __func__);
+}
+
+static void map_percpu_stats_percpu_hash_prealloc(void)
+{
+ __test(create_percpu_hash_prealloc());
+ printf("test_%s:PASS\n", __func__);
+}
+
+static void map_percpu_stats_lru_hash(void)
+{
+ __test(create_lru_hash());
+ printf("test_%s:PASS\n", __func__);
+}
+
+static void map_percpu_stats_percpu_lru_hash(void)
+{
+ __test(create_percpu_lru_hash());
+ printf("test_%s:PASS\n", __func__);
+}
+
+void test_map_percpu_stats(void)
+{
+ map_percpu_stats_hash();
+ map_percpu_stats_percpu_hash();
+ map_percpu_stats_hash_prealloc();
+ map_percpu_stats_percpu_hash_prealloc();
+ map_percpu_stats_lru_hash();
+ map_percpu_stats_percpu_lru_hash();
+}
diff --git a/tools/testing/selftests/bpf/progs/map_percpu_stats.c b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
new file mode 100644
index 000000000000..10b2325c1720
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Isovalent */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+__u32 target_id;
+
+__s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym;
+
+SEC("iter/bpf_map")
+int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
+{
+ struct seq_file *seq = ctx->meta->seq;
+ struct bpf_map *map = ctx->map;
+
+ if (map && map->id == target_id)
+ BPF_SEQ_PRINTF(seq, "%lld", bpf_map_sum_elem_count(map));
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.34.1
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-06-30 8:25 ` [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats Anton Protopopov
@ 2023-07-04 14:41 ` Hou Tao
2023-07-04 15:02 ` Anton Protopopov
0 siblings, 1 reply; 20+ messages in thread
From: Hou Tao @ 2023-07-04 14:41 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> Add a new map test, map_percpu_stats.c, which checks the correctness of the
> map's percpu element counters. For supported maps the test upserts a number
> of elements, checks the correctness of the counters, then deletes all the
> elements and checks that the sum of the counters drops back to zero.
>
> The following map types are tested:
>
> * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
> * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
> * BPF_MAP_TYPE_HASH
> * BPF_MAP_TYPE_PERCPU_HASH
> * BPF_MAP_TYPE_LRU_HASH
> * BPF_MAP_TYPE_LRU_PERCPU_HASH
A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
>
> Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> ---
> .../bpf/map_tests/map_percpu_stats.c | 336 ++++++++++++++++++
> .../selftests/bpf/progs/map_percpu_stats.c | 24 ++
> 2 files changed, 360 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> create mode 100644 tools/testing/selftests/bpf/progs/map_percpu_stats.c
>
> diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> new file mode 100644
> index 000000000000..5b45af230368
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> @@ -0,0 +1,336 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Isovalent */
> +
> +#include <errno.h>
> +#include <unistd.h>
> +#include <pthread.h>
> +
> +#include <bpf/bpf.h>
> +#include <bpf/libbpf.h>
> +
> +#include <bpf_util.h>
> +#include <test_maps.h>
> +
> +#include "map_percpu_stats.skel.h"
> +
> +#define MAX_ENTRIES 16384
> +#define N_THREADS 37
Why are 37 threads needed here? Would a smaller number of threads work as well?
> +
> +#define MAX_MAP_KEY_SIZE 4
> +
> +static void map_info(int map_fd, struct bpf_map_info *info)
> +{
> + __u32 len = sizeof(*info);
> + int ret;
> +
> + memset(info, 0, sizeof(*info));
> +
> + ret = bpf_obj_get_info_by_fd(map_fd, info, &len);
> + CHECK(ret < 0, "bpf_obj_get_info_by_fd", "error: %s\n", strerror(errno));
Please use ASSERT_OK instead.
> +}
> +
> +static const char *map_type_to_s(__u32 type)
> +{
> + switch (type) {
> + case BPF_MAP_TYPE_HASH:
> + return "HASH";
> + case BPF_MAP_TYPE_PERCPU_HASH:
> + return "PERCPU_HASH";
> + case BPF_MAP_TYPE_LRU_HASH:
> + return "LRU_HASH";
> + case BPF_MAP_TYPE_LRU_PERCPU_HASH:
> + return "LRU_PERCPU_HASH";
> + default:
> + return "<define-me>";
> + }
> +}
> +
> +/* Map i -> map-type-specific-key */
> +static void *map_key(__u32 type, __u32 i)
> +{
> + static __thread __u8 key[MAX_MAP_KEY_SIZE];
Why is a per-thread key necessary here? Could we just define it where the
key is needed?
> +
> + *(__u32 *)key = i;
> + return key;
> +}
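For context on the question above: `map_key()` returns a pointer into a static buffer, and `patch_map_thread()` calls it from N_THREADS threads concurrently, so without `__thread` the threads would clobber each other's key bytes. A minimal self-contained sketch of the pattern (`make_key()`, `worker()` and `run_threads()` are hypothetical stand-ins, not part of the patch):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define KEY_SIZE 4
#define N 8

/* Stand-in for map_key(): returns a pointer to a static buffer. With
 * __thread each thread gets its own copy of the buffer, so concurrent
 * callers cannot overwrite each other's key bytes. */
static void *make_key(unsigned int i)
{
	static __thread unsigned char key[KEY_SIZE];

	memcpy(key, &i, sizeof(i));
	return key;
}

static unsigned int seen[N];

static void *worker(void *arg)
{
	unsigned int id = (unsigned int)(unsigned long)arg;
	unsigned int *k = make_key(id);

	/* Without __thread, another thread's make_key() call could have
	 * overwritten *k between the call above and this load. */
	seen[id] = *k;
	return NULL;
}

/* Returns 0 iff every thread read back its own key. */
int run_threads(void)
{
	pthread_t t[N];
	unsigned long i;

	for (i = 0; i < N; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (i = 0; i < N; i++)
		pthread_join(t[i], NULL);
	for (i = 0; i < N; i++)
		if (seen[i] != i)
			return -1;
	return 0;
}
```

Defining the key buffer at each call site, as suggested, would avoid the shared-static question entirely at the cost of passing the buffer down explicitly.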
> +
> +static __u32 map_count_elements(__u32 type, int map_fd)
> +{
> + void *key = map_key(type, -1);
> + int n = 0;
> +
> + while (!bpf_map_get_next_key(map_fd, key, key))
> + n++;
> + return n;
> +}
> +
> +static void delete_all_elements(__u32 type, int map_fd)
> +{
> + void *key = map_key(type, -1);
> + void *keys;
> + int n = 0;
> + int ret;
> +
> + keys = calloc(MAX_MAP_KEY_SIZE, MAX_ENTRIES);
> + CHECK(!keys, "calloc", "error: %s\n", strerror(errno));
> +
> + for (; !bpf_map_get_next_key(map_fd, key, key); n++)
> + memcpy(keys + n*MAX_MAP_KEY_SIZE, key, MAX_MAP_KEY_SIZE);
> +
> + while (--n >= 0) {
> + ret = bpf_map_delete_elem(map_fd, keys + n*MAX_MAP_KEY_SIZE);
> + CHECK(ret < 0, "bpf_map_delete_elem", "error: %s\n", strerror(errno));
> + }
> +}
Please use ASSERT_xxx() to replace CHECK().
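To illustrate the suggested style: the real `ASSERT_OK()`/`ASSERT_EQ()` macros live in the selftests harness headers; the minimal stand-ins below are hypothetical, only meant to show how call sites read compared to `CHECK()` (expected value passed explicitly, failure recorded rather than formatted by hand):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical minimal stand-ins for the selftest ASSERT_* macros: they
 * record a failure, print a message, and evaluate to whether the check
 * passed so callers can bail out early if needed. */
static int error_cnt;

#define ASSERT_OK(res, name) ({						\
	long __res = (long)(res);					\
	if (__res)							\
		fprintf(stderr, "%s: unexpected error %ld\n",		\
			name, __res);					\
	error_cnt += (__res != 0);					\
	__res == 0;							\
})

#define ASSERT_EQ(actual, expected, name) ({				\
	long __act = (long)(actual), __exp = (long)(expected);		\
	if (__act != __exp)						\
		fprintf(stderr, "%s: %ld != %ld\n", name, __act, __exp); \
	error_cnt += (__act != __exp);					\
	__act == __exp;							\
})

/* With these, a CHECK(ret < 0, ...) call site becomes e.g.: */
static int demo(void)
{
	int ret = 0;	/* stands in for a bpf_map_delete_elem() result */
	int n = 5;	/* stands in for a counted number of elements */

	ASSERT_OK(ret, "bpf_map_delete_elem");
	ASSERT_EQ(n, 5, "element count");
	return error_cnt;
}
```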
> +
> +static bool is_lru(__u32 map_type)
> +{
> + return map_type == BPF_MAP_TYPE_LRU_HASH ||
> + map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH;
> +}
> +
> +struct upsert_opts {
> + __u32 map_type;
> + int map_fd;
> + __u32 n;
> +};
> +
> +static void *patch_map_thread(void *arg)
> +{
> + struct upsert_opts *opts = arg;
> + void *key;
> + int val;
> + int ret;
> + int i;
> +
> + for (i = 0; i < opts->n; i++) {
> + key = map_key(opts->map_type, i);
> + val = rand();
> + ret = bpf_map_update_elem(opts->map_fd, key, &val, 0);
> + CHECK(ret < 0, "bpf_map_update_elem", "error: %s\n", strerror(errno));
> + }
> + return NULL;
> +}
> +
> +static void upsert_elements(struct upsert_opts *opts)
> +{
> + pthread_t threads[N_THREADS];
> + int ret;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(threads); i++) {
> + ret = pthread_create(&i[threads], NULL, patch_map_thread, opts);
> + CHECK(ret != 0, "pthread_create", "error: %s\n", strerror(ret));
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(threads); i++) {
> + ret = pthread_join(i[threads], NULL);
> + CHECK(ret != 0, "pthread_join", "error: %s\n", strerror(ret));
> + }
> +}
> +
> +static __u32 read_cur_elements(int iter_fd)
> +{
> + char buf[64];
> + ssize_t n;
> + __u32 ret;
> +
> + n = read(iter_fd, buf, sizeof(buf)-1);
> + CHECK(n <= 0, "read", "error: %s\n", strerror(errno));
> + buf[n] = '\0';
> +
> + errno = 0;
> + ret = (__u32)strtol(buf, NULL, 10);
> + CHECK(errno != 0, "strtol", "error: %s\n", strerror(errno));
> +
> + return ret;
> +}
> +
> +static __u32 get_cur_elements(int map_id)
> +{
> + LIBBPF_OPTS(bpf_iter_attach_opts, opts);
> + union bpf_iter_link_info linfo;
> + struct map_percpu_stats *skel;
> + struct bpf_link *link;
> + int iter_fd;
> + int ret;
> +
> + opts.link_info = &linfo;
> + opts.link_info_len = sizeof(linfo);
> +
> + skel = map_percpu_stats__open();
> + CHECK(skel == NULL, "map_percpu_stats__open", "error: %s", strerror(errno));
> +
> + skel->bss->target_id = map_id;
> +
> + ret = map_percpu_stats__load(skel);
> + CHECK(ret != 0, "map_percpu_stats__load", "error: %s", strerror(errno));
> +
> + link = bpf_program__attach_iter(skel->progs.dump_bpf_map, &opts);
> + CHECK(!link, "bpf_program__attach_iter", "error: %s\n", strerror(errno));
> +
> + iter_fd = bpf_iter_create(bpf_link__fd(link));
> + CHECK(iter_fd < 0, "bpf_iter_create", "error: %s\n", strerror(errno));
> +
> + return read_cur_elements(iter_fd);
Need to do close(iter_fd), bpf_link__destroy(link) and
map_percpu_stats__destroy() before returning, otherwise there will be a
resource leak.
> +}
> +
> +static void __test(int map_fd)
> +{
> + __u32 n = MAX_ENTRIES - 1000;
> + __u32 real_current_elements;
> + __u32 iter_current_elements;
> + struct upsert_opts opts = {
> + .map_fd = map_fd,
> + .n = n,
> + };
> + struct bpf_map_info info;
> +
> + map_info(map_fd, &info);
> + opts.map_type = info.type;
> +
> + /*
> + * Upsert keys [0, n) under some competition: with random values from
> + * N_THREADS threads
> + */
> + upsert_elements(&opts);
> +
> + /*
> + * The sum of percpu elements counters for all hashtable-based maps
> + * should be equal to the number of elements present in the map. For
> + * non-lru maps this number should be the number n of upserted
> + * elements. For lru maps some elements might have been evicted. Check
> + * that all numbers make sense
> + */
> + map_info(map_fd, &info);
I think there is no need to call map_info() multiple times because the
needed type and id will not change after creation.
> + real_current_elements = map_count_elements(info.type, map_fd);
> + if (!is_lru(info.type))
> + CHECK(n != real_current_elements, "map_count_elements",
> + "real_current_elements(%u) != expected(%u)\n", real_current_elements, n);
Please use ASSERT_EQ() here. For LRU maps, should we check "n >=
real_current_elements" instead ?
> +
> + iter_current_elements = get_cur_elements(info.id);
> + CHECK(iter_current_elements != real_current_elements, "get_cur_elements",
> + "iter_current_elements=%u, expected %u (map_type=%s,map_flags=%08x)\n",
> + iter_current_elements, real_current_elements, map_type_to_s(info.type), info.map_flags);
> +
> + /*
> + * Cleanup the map and check that all elements are actually gone and
> + * that the sum of percpu elements counters is back to 0 as well
> + */
> + delete_all_elements(info.type, map_fd);
> + map_info(map_fd, &info);
> + real_current_elements = map_count_elements(info.type, map_fd);
> + CHECK(real_current_elements, "map_count_elements",
> + "expected real_current_elements=0, got %u", real_current_elements);
ASSERT_EQ
> +
> + iter_current_elements = get_cur_elements(info.id);
> + CHECK(iter_current_elements != 0, "get_cur_elements",
> + "iter_current_elements=%u, expected 0 (map_type=%s,map_flags=%08x)\n",
> + iter_current_elements, map_type_to_s(info.type), info.map_flags);
> +
ASSERT_NEQ
> + close(map_fd);
> +}
> +
> +static int map_create_opts(__u32 type, const char *name,
> + struct bpf_map_create_opts *map_opts,
> + __u32 key_size, __u32 val_size)
> +{
> + int map_fd;
> +
> + map_fd = bpf_map_create(type, name, key_size, val_size, MAX_ENTRIES, map_opts);
> + CHECK(map_fd < 0, "bpf_map_create()", "error:%s (name=%s)\n",
> + strerror(errno), name);
Please use ASSERT_GE instead.
> +
> + return map_fd;
> +}
> +
> +static int map_create(__u32 type, const char *name, struct bpf_map_create_opts *map_opts)
> +{
> + return map_create_opts(type, name, map_opts, sizeof(int), sizeof(int));
> +}
> +
> +static int create_hash(void)
> +{
> + struct bpf_map_create_opts map_opts = {
> + .sz = sizeof(map_opts),
> + .map_flags = BPF_F_NO_PREALLOC,
> + };
> +
> + return map_create(BPF_MAP_TYPE_HASH, "hash", &map_opts);
> +}
> +
> +static int create_percpu_hash(void)
> +{
> + struct bpf_map_create_opts map_opts = {
> + .sz = sizeof(map_opts),
> + .map_flags = BPF_F_NO_PREALLOC,
> + };
> +
> + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash", &map_opts);
> +}
> +
> +static int create_hash_prealloc(void)
> +{
> + return map_create(BPF_MAP_TYPE_HASH, "hash", NULL);
> +}
> +
> +static int create_percpu_hash_prealloc(void)
> +{
> + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash_prealloc", NULL);
> +}
> +
> +static int create_lru_hash(void)
> +{
> + return map_create(BPF_MAP_TYPE_LRU_HASH, "lru_hash", NULL);
> +}
> +
> +static int create_percpu_lru_hash(void)
> +{
> + return map_create(BPF_MAP_TYPE_LRU_PERCPU_HASH, "lru_hash_percpu", NULL);
> +}
> +
> +static void map_percpu_stats_hash(void)
> +{
> + __test(create_hash());
> + printf("test_%s:PASS\n", __func__);
> +}
> +
> +static void map_percpu_stats_percpu_hash(void)
> +{
> + __test(create_percpu_hash());
> + printf("test_%s:PASS\n", __func__);
> +}
> +
> +static void map_percpu_stats_hash_prealloc(void)
> +{
> + __test(create_hash_prealloc());
> + printf("test_%s:PASS\n", __func__);
> +}
> +
> +static void map_percpu_stats_percpu_hash_prealloc(void)
> +{
> + __test(create_percpu_hash_prealloc());
> + printf("test_%s:PASS\n", __func__);
> +}
> +
> +static void map_percpu_stats_lru_hash(void)
> +{
> + __test(create_lru_hash());
> + printf("test_%s:PASS\n", __func__);
> +}
> +
> +static void map_percpu_stats_percpu_lru_hash(void)
> +{
> + __test(create_percpu_lru_hash());
> + printf("test_%s:PASS\n", __func__);
After switching to subtests, the printf() can be removed.
> +}
> +
> +void test_map_percpu_stats(void)
> +{
> + map_percpu_stats_hash();
> + map_percpu_stats_percpu_hash();
> + map_percpu_stats_hash_prealloc();
> + map_percpu_stats_percpu_hash_prealloc();
> + map_percpu_stats_lru_hash();
> + map_percpu_stats_percpu_lru_hash();
> +}
Please use test__start_subtest() to create multiple subtests.
> diff --git a/tools/testing/selftests/bpf/progs/map_percpu_stats.c b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
> new file mode 100644
> index 000000000000..10b2325c1720
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
> @@ -0,0 +1,24 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Isovalent */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +__u32 target_id;
> +
> +__s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym;
> +
> +SEC("iter/bpf_map")
> +int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
> +{
> + struct seq_file *seq = ctx->meta->seq;
> + struct bpf_map *map = ctx->map;
> +
> + if (map && map->id == target_id)
> + BPF_SEQ_PRINTF(seq, "%lld", bpf_map_sum_elem_count(map));
> +
> + return 0;
> +}
> +
> +char _license[] SEC("license") = "GPL";
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-04 14:41 ` Hou Tao
@ 2023-07-04 15:02 ` Anton Protopopov
2023-07-04 15:23 ` Anton Protopopov
2023-07-05 3:03 ` Hou Tao
0 siblings, 2 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-07-04 15:02 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> Hi,
>
> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> > Add a new map test, map_percpu_stats.c, which checks the correctness of a
> > map's percpu element counters. For supported maps the test upserts a number
> > of elements, checks the correctness of the counters, then deletes all the
> > elements and checks again that the sum of the counters drops back to zero.
> >
> > The following map types are tested:
> >
> > * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
> > * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
> > * BPF_MAP_TYPE_HASH,
> > * BPF_MAP_TYPE_PERCPU_HASH,
> > * BPF_MAP_TYPE_LRU_HASH
> > * BPF_MAP_TYPE_LRU_PERCPU_HASH
>
> A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
I will add it.
> >
> > Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> > ---
> > .../bpf/map_tests/map_percpu_stats.c | 336 ++++++++++++++++++
> > .../selftests/bpf/progs/map_percpu_stats.c | 24 ++
> > 2 files changed, 360 insertions(+)
> > create mode 100644 tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> > create mode 100644 tools/testing/selftests/bpf/progs/map_percpu_stats.c
> >
> > diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> > new file mode 100644
> > index 000000000000..5b45af230368
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> > @@ -0,0 +1,336 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2023 Isovalent */
> > +
> > +#include <errno.h>
> > +#include <unistd.h>
> > +#include <pthread.h>
> > +
> > +#include <bpf/bpf.h>
> > +#include <bpf/libbpf.h>
> > +
> > +#include <bpf_util.h>
> > +#include <test_maps.h>
> > +
> > +#include "map_percpu_stats.skel.h"
> > +
> > +#define MAX_ENTRIES 16384
> > +#define N_THREADS 37
>
> Why are 37 threads needed here? Would a smaller number of threads work as well?
This was used to evict more elements from LRU maps when they are full.
> > +
> > +#define MAX_MAP_KEY_SIZE 4
> > +
> > +static void map_info(int map_fd, struct bpf_map_info *info)
> > +{
> > + __u32 len = sizeof(*info);
> > + int ret;
> > +
> > + memset(info, 0, sizeof(*info));
> > +
> > + ret = bpf_obj_get_info_by_fd(map_fd, info, &len);
> > + CHECK(ret < 0, "bpf_obj_get_info_by_fd", "error: %s\n", strerror(errno));
> Please use ASSERT_OK instead.
Ok, thanks, will do (and for all other similar cases you've mentioned below).
> > +}
> > +
> > +static const char *map_type_to_s(__u32 type)
> > +{
> > + switch (type) {
> > + case BPF_MAP_TYPE_HASH:
> > + return "HASH";
> > + case BPF_MAP_TYPE_PERCPU_HASH:
> > + return "PERCPU_HASH";
> > + case BPF_MAP_TYPE_LRU_HASH:
> > + return "LRU_HASH";
> > + case BPF_MAP_TYPE_LRU_PERCPU_HASH:
> > + return "LRU_PERCPU_HASH";
> > + default:
> > + return "<define-me>";
> > + }
> > +}
> > +
> > +/* Map i -> map-type-specific-key */
> > +static void *map_key(__u32 type, __u32 i)
> > +{
> > + static __thread __u8 key[MAX_MAP_KEY_SIZE];
>
> Why is a per-thread key necessary here? Could we just define it when
> the key is needed?
Thanks, I will remove it to simplify the code. (This was used to create keys
for non-hash maps, e.g., the LPM trie, in the first version of the patch.)
> > +
> > + *(__u32 *)key = i;
> > + return key;
> > +}
> > +
> > +static __u32 map_count_elements(__u32 type, int map_fd)
> > +{
> > + void *key = map_key(type, -1);
> > + int n = 0;
> > +
> > + while (!bpf_map_get_next_key(map_fd, key, key))
> > + n++;
> > + return n;
> > +}
> > +
> > +static void delete_all_elements(__u32 type, int map_fd)
> > +{
> > + void *key = map_key(type, -1);
> > + void *keys;
> > + int n = 0;
> > + int ret;
> > +
> > + keys = calloc(MAX_MAP_KEY_SIZE, MAX_ENTRIES);
> > + CHECK(!keys, "calloc", "error: %s\n", strerror(errno));
> > +
> > + for (; !bpf_map_get_next_key(map_fd, key, key); n++)
> > + memcpy(keys + n*MAX_MAP_KEY_SIZE, key, MAX_MAP_KEY_SIZE);
> > +
> > + while (--n >= 0) {
> > + ret = bpf_map_delete_elem(map_fd, keys + n*MAX_MAP_KEY_SIZE);
> > + CHECK(ret < 0, "bpf_map_delete_elem", "error: %s\n", strerror(errno));
> > + }
> > +}
>
> Please use ASSERT_xxx() to replace CHECK().
> > +
> > +static bool is_lru(__u32 map_type)
> > +{
> > + return map_type == BPF_MAP_TYPE_LRU_HASH ||
> > + map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH;
> > +}
> > +
> > +struct upsert_opts {
> > + __u32 map_type;
> > + int map_fd;
> > + __u32 n;
> > +};
> > +
> > +static void *patch_map_thread(void *arg)
> > +{
> > + struct upsert_opts *opts = arg;
> > + void *key;
> > + int val;
> > + int ret;
> > + int i;
> > +
> > + for (i = 0; i < opts->n; i++) {
> > + key = map_key(opts->map_type, i);
> > + val = rand();
> > + ret = bpf_map_update_elem(opts->map_fd, key, &val, 0);
> > + CHECK(ret < 0, "bpf_map_update_elem", "error: %s\n", strerror(errno));
> > + }
> > + return NULL;
> > +}
> > +
> > +static void upsert_elements(struct upsert_opts *opts)
> > +{
> > + pthread_t threads[N_THREADS];
> > + int ret;
> > + int i;
> > +
> > + for (i = 0; i < ARRAY_SIZE(threads); i++) {
> > + ret = pthread_create(&i[threads], NULL, patch_map_thread, opts);
> > + CHECK(ret != 0, "pthread_create", "error: %s\n", strerror(ret));
> > + }
> > +
> > + for (i = 0; i < ARRAY_SIZE(threads); i++) {
> > + ret = pthread_join(i[threads], NULL);
> > + CHECK(ret != 0, "pthread_join", "error: %s\n", strerror(ret));
> > + }
> > +}
> > +
> > +static __u32 read_cur_elements(int iter_fd)
> > +{
> > + char buf[64];
> > + ssize_t n;
> > + __u32 ret;
> > +
> > + n = read(iter_fd, buf, sizeof(buf)-1);
> > + CHECK(n <= 0, "read", "error: %s\n", strerror(errno));
> > + buf[n] = '\0';
> > +
> > + errno = 0;
> > + ret = (__u32)strtol(buf, NULL, 10);
> > + CHECK(errno != 0, "strtol", "error: %s\n", strerror(errno));
> > +
> > + return ret;
> > +}
> > +
> > +static __u32 get_cur_elements(int map_id)
> > +{
> > + LIBBPF_OPTS(bpf_iter_attach_opts, opts);
> > + union bpf_iter_link_info linfo;
> > + struct map_percpu_stats *skel;
> > + struct bpf_link *link;
> > + int iter_fd;
> > + int ret;
> > +
> > + opts.link_info = &linfo;
> > + opts.link_info_len = sizeof(linfo);
> > +
> > + skel = map_percpu_stats__open();
> > + CHECK(skel == NULL, "map_percpu_stats__open", "error: %s", strerror(errno));
> > +
> > + skel->bss->target_id = map_id;
> > +
> > + ret = map_percpu_stats__load(skel);
> > + CHECK(ret != 0, "map_percpu_stats__load", "error: %s", strerror(errno));
> > +
> > + link = bpf_program__attach_iter(skel->progs.dump_bpf_map, &opts);
> > + CHECK(!link, "bpf_program__attach_iter", "error: %s\n", strerror(errno));
> > +
> > + iter_fd = bpf_iter_create(bpf_link__fd(link));
> > + CHECK(iter_fd < 0, "bpf_iter_create", "error: %s\n", strerror(errno));
> > +
> > + return read_cur_elements(iter_fd);
>
> Need to do close(iter_fd), bpf_link__destroy(link) and
> map_percpu_stats__destroy() before returning, otherwise there will be a
> resource leak.
Yes, thanks.
> > +}
> > +
> > +static void __test(int map_fd)
> > +{
> > + __u32 n = MAX_ENTRIES - 1000;
> > + __u32 real_current_elements;
> > + __u32 iter_current_elements;
> > + struct upsert_opts opts = {
> > + .map_fd = map_fd,
> > + .n = n,
> > + };
> > + struct bpf_map_info info;
> > +
> > + map_info(map_fd, &info);
> > + opts.map_type = info.type;
> > +
> > + /*
> > + * Upsert keys [0, n) under some competition: with random values from
> > + * N_THREADS threads
> > + */
> > + upsert_elements(&opts);
> > +
> > + /*
> > + * The sum of percpu elements counters for all hashtable-based maps
> > + * should be equal to the number of elements present in the map. For
> > + * non-lru maps this number should be the number n of upserted
> > + * elements. For lru maps some elements might have been evicted. Check
> > + * that all numbers make sense
> > + */
> > + map_info(map_fd, &info);
>
> I think there is no need to call map_info() multiple times because the
> needed type and id do not change after creation.
Thanks, will fix. (This code is left over from the first version, when the
counter was returned by map_info().)
> > + real_current_elements = map_count_elements(info.type, map_fd);
> > + if (!is_lru(info.type))
> > + CHECK(n != real_current_elements, "map_count_elements",
> > + "real_current_elements(%u) != expected(%u)\n", real_current_elements, n);
> For the non-LRU map, please use ASSERT_EQ(). For the LRU map, should we
> check "n >= real_current_elements" instead?
> > +
> > + iter_current_elements = get_cur_elements(info.id);
> > + CHECK(iter_current_elements != real_current_elements, "get_cur_elements",
> > + "iter_current_elements=%u, expected %u (map_type=%s,map_flags=%08x)\n",
> > + iter_current_elements, real_current_elements, map_type_to_s(info.type), info.map_flags);
> > +
> > + /*
> > + * Cleanup the map and check that all elements are actually gone and
> > + * that the sum of percpu elements counters is back to 0 as well
> > + */
> > + delete_all_elements(info.type, map_fd);
> > + map_info(map_fd, &info);
> > + real_current_elements = map_count_elements(info.type, map_fd);
> > + CHECK(real_current_elements, "map_count_elements",
> > + "expected real_current_elements=0, got %u", real_current_elements);
>
> ASSERT_EQ
> > +
> > + iter_current_elements = get_cur_elements(info.id);
> > + CHECK(iter_current_elements != 0, "get_cur_elements",
> > + "iter_current_elements=%u, expected 0 (map_type=%s,map_flags=%08x)\n",
> > + iter_current_elements, map_type_to_s(info.type), info.map_flags);
> > +
> ASSERT_EQ
> > + close(map_fd);
> > +}
> > +
> > +static int map_create_opts(__u32 type, const char *name,
> > + struct bpf_map_create_opts *map_opts,
> > + __u32 key_size, __u32 val_size)
> > +{
> > + int map_fd;
> > +
> > + map_fd = bpf_map_create(type, name, key_size, val_size, MAX_ENTRIES, map_opts);
> > + CHECK(map_fd < 0, "bpf_map_create()", "error:%s (name=%s)\n",
> > + strerror(errno), name);
>
> Please use ASSERT_GE instead.
> > +
> > + return map_fd;
> > +}
> > +
> > +static int map_create(__u32 type, const char *name, struct bpf_map_create_opts *map_opts)
> > +{
> > + return map_create_opts(type, name, map_opts, sizeof(int), sizeof(int));
> > +}
> > +
> > +static int create_hash(void)
> > +{
> > + struct bpf_map_create_opts map_opts = {
> > + .sz = sizeof(map_opts),
> > + .map_flags = BPF_F_NO_PREALLOC,
> > + };
> > +
> > + return map_create(BPF_MAP_TYPE_HASH, "hash", &map_opts);
> > +}
> > +
> > +static int create_percpu_hash(void)
> > +{
> > + struct bpf_map_create_opts map_opts = {
> > + .sz = sizeof(map_opts),
> > + .map_flags = BPF_F_NO_PREALLOC,
> > + };
> > +
> > + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash", &map_opts);
> > +}
> > +
> > +static int create_hash_prealloc(void)
> > +{
> > + return map_create(BPF_MAP_TYPE_HASH, "hash", NULL);
> > +}
> > +
> > +static int create_percpu_hash_prealloc(void)
> > +{
> > + return map_create(BPF_MAP_TYPE_PERCPU_HASH, "percpu_hash_prealloc", NULL);
> > +}
> > +
> > +static int create_lru_hash(void)
> > +{
> > + return map_create(BPF_MAP_TYPE_LRU_HASH, "lru_hash", NULL);
> > +}
> > +
> > +static int create_percpu_lru_hash(void)
> > +{
> > + return map_create(BPF_MAP_TYPE_LRU_PERCPU_HASH, "lru_hash_percpu", NULL);
> > +}
> > +
> > +static void map_percpu_stats_hash(void)
> > +{
> > + __test(create_hash());
> > + printf("test_%s:PASS\n", __func__);
> > +}
> > +
> > +static void map_percpu_stats_percpu_hash(void)
> > +{
> > + __test(create_percpu_hash());
> > + printf("test_%s:PASS\n", __func__);
> > +}
> > +
> > +static void map_percpu_stats_hash_prealloc(void)
> > +{
> > + __test(create_hash_prealloc());
> > + printf("test_%s:PASS\n", __func__);
> > +}
> > +
> > +static void map_percpu_stats_percpu_hash_prealloc(void)
> > +{
> > + __test(create_percpu_hash_prealloc());
> > + printf("test_%s:PASS\n", __func__);
> > +}
> > +
> > +static void map_percpu_stats_lru_hash(void)
> > +{
> > + __test(create_lru_hash());
> > + printf("test_%s:PASS\n", __func__);
> > +}
> > +
> > +static void map_percpu_stats_percpu_lru_hash(void)
> > +{
> > + __test(create_percpu_lru_hash());
> > + printf("test_%s:PASS\n", __func__);
>
> After switching to subtests, the printf() can be removed.
> > +}
> > +
> > +void test_map_percpu_stats(void)
> > +{
> > + map_percpu_stats_hash();
> > + map_percpu_stats_percpu_hash();
> > + map_percpu_stats_hash_prealloc();
> > + map_percpu_stats_percpu_hash_prealloc();
> > + map_percpu_stats_lru_hash();
> > + map_percpu_stats_percpu_lru_hash();
> > +}
>
> Please use test__start_subtest() to create multiple subtests.
Thanks.
I will update this selftest in v4 with your comments addressed + batch ops
tests.
> > diff --git a/tools/testing/selftests/bpf/progs/map_percpu_stats.c b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
> > new file mode 100644
> > index 000000000000..10b2325c1720
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/map_percpu_stats.c
> > @@ -0,0 +1,24 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2023 Isovalent */
> > +
> > +#include "vmlinux.h"
> > +#include <bpf/bpf_helpers.h>
> > +#include <bpf/bpf_tracing.h>
> > +
> > +__u32 target_id;
> > +
> > +__s64 bpf_map_sum_elem_count(struct bpf_map *map) __ksym;
> > +
> > +SEC("iter/bpf_map")
> > +int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
> > +{
> > + struct seq_file *seq = ctx->meta->seq;
> > + struct bpf_map *map = ctx->map;
> > +
> > + if (map && map->id == target_id)
> > + BPF_SEQ_PRINTF(seq, "%lld", bpf_map_sum_elem_count(map));
> > +
> > + return 0;
> > +}
> > +
> > +char _license[] SEC("license") = "GPL";
>
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-04 15:02 ` Anton Protopopov
@ 2023-07-04 15:23 ` Anton Protopopov
2023-07-04 15:49 ` Anton Protopopov
2023-07-05 0:46 ` Hou Tao
2023-07-05 3:03 ` Hou Tao
1 sibling, 2 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-07-04 15:23 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Tue, Jul 04, 2023 at 03:02:32PM +0000, Anton Protopopov wrote:
> On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> > Hi,
> >
> > On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> > [...]
> > > +}
> > > +
> > > +void test_map_percpu_stats(void)
> > > +{
> > > + map_percpu_stats_hash();
> > > + map_percpu_stats_percpu_hash();
> > > + map_percpu_stats_hash_prealloc();
> > > + map_percpu_stats_percpu_hash_prealloc();
> > > + map_percpu_stats_lru_hash();
> > > + map_percpu_stats_percpu_lru_hash();
> > > +}
> >
> > Please use test__start_subtest() to create multiple subtests.
After looking at the code, I think I will leave the individual functions here,
as the test__start_subtest() function is only implemented in test_progs (not
test_maps), and adding it here looks out of scope for this patch.
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-04 15:23 ` Anton Protopopov
@ 2023-07-04 15:49 ` Anton Protopopov
2023-07-05 0:46 ` Hou Tao
1 sibling, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-07-04 15:49 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Tue, Jul 04, 2023 at 03:23:30PM +0000, Anton Protopopov wrote:
> On Tue, Jul 04, 2023 at 03:02:32PM +0000, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> > > Hi,
> > >
> > > On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> > > [...]
> > > > +}
> > > > +
> > > > +void test_map_percpu_stats(void)
> > > > +{
> > > > + map_percpu_stats_hash();
> > > > + map_percpu_stats_percpu_hash();
> > > > + map_percpu_stats_hash_prealloc();
> > > > + map_percpu_stats_percpu_hash_prealloc();
> > > > + map_percpu_stats_lru_hash();
> > > > + map_percpu_stats_percpu_lru_hash();
> > > > +}
> > >
> > > Please use test__start_subtest() to create multiple subtests.
>
> After looking at the code, I think I will leave the individual functions here,
> as the test__start_subtest() function is only implemented in test_progs (not
> test_maps), and adding it here looks out of scope for this patch.
Ah, sorry, it looks like the same holds for the ASSERT* macros as well, as they
are only used in test_progs. (I will still fix the checks where you commented on
specific values, like n <= cur_elems for LRUs.)
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-04 15:23 ` Anton Protopopov
2023-07-04 15:49 ` Anton Protopopov
@ 2023-07-05 0:46 ` Hou Tao
2023-07-05 15:41 ` Anton Protopopov
1 sibling, 1 reply; 20+ messages in thread
From: Hou Tao @ 2023-07-05 0:46 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 7/4/2023 11:23 PM, Anton Protopopov wrote:
> On Tue, Jul 04, 2023 at 03:02:32PM +0000, Anton Protopopov wrote:
>> On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
>>> Hi,
>>>
>>> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
>>> [...]
>>>> +}
>>>> +
>>>> +void test_map_percpu_stats(void)
>>>> +{
>>>> + map_percpu_stats_hash();
>>>> + map_percpu_stats_percpu_hash();
>>>> + map_percpu_stats_hash_prealloc();
>>>> + map_percpu_stats_percpu_hash_prealloc();
>>>> + map_percpu_stats_lru_hash();
>>>> + map_percpu_stats_percpu_lru_hash();
>>>> +}
>>> Please use test__start_subtest() to create multiple subtests.
> After looking at the code, I think I will leave the individual functions here,
> as the test__start_subtest() function is only implemented in test_progs (not
> test_maps), and adding it here looks out of scope for this patch.
> .
I see. But can we just add these tests to test_progs instead, which is
more flexible?
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-05 0:46 ` Hou Tao
@ 2023-07-05 15:41 ` Anton Protopopov
0 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-07-05 15:41 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Wed, Jul 05, 2023 at 08:46:09AM +0800, Hou Tao wrote:
> Hi,
>
> On 7/4/2023 11:23 PM, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 03:02:32PM +0000, Anton Protopopov wrote:
> >> On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> >>> Hi,
> >>>
> >>> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> >>> [...]
> >>>> +}
> >>>> +
> >>>> +void test_map_percpu_stats(void)
> >>>> +{
> >>>> + map_percpu_stats_hash();
> >>>> + map_percpu_stats_percpu_hash();
> >>>> + map_percpu_stats_hash_prealloc();
> >>>> + map_percpu_stats_percpu_hash_prealloc();
> >>>> + map_percpu_stats_lru_hash();
> >>>> + map_percpu_stats_percpu_lru_hash();
> >>>> +}
> >>> Please use test__start_subtest() to create multiple subtests.
> > After looking at the code, I think I will leave the individual functions here,
> > as the test__start_subtest() function is only implemented in test_progs (not
> > test_maps), and adding it here looks out of scope for this patch.
> > .
> I see. But can we just add these tests to test_progs instead, which is
> more flexible?
I think it makes more sense to port this test_progs flexibility to the
test_maps program. I can volunteer to do this (but not right away).
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-04 15:02 ` Anton Protopopov
2023-07-04 15:23 ` Anton Protopopov
@ 2023-07-05 3:03 ` Hou Tao
2023-07-05 15:34 ` Anton Protopopov
1 sibling, 1 reply; 20+ messages in thread
From: Hou Tao @ 2023-07-05 3:03 UTC (permalink / raw)
To: Anton Protopopov
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Hi,
On 7/4/2023 11:02 PM, Anton Protopopov wrote:
> On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
>> Hi,
>>
>> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
>>> Add a new map test, map_percpu_stats.c, which checks the correctness of a
>>> map's percpu element counters. For supported maps the test upserts a number
>>> of elements, checks the correctness of the counters, then deletes all the
>>> elements and checks again that the sum of the counters drops back to zero.
>>>
>>> The following map types are tested:
>>>
>>> * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
>>> * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
>>> * BPF_MAP_TYPE_HASH,
>>> * BPF_MAP_TYPE_PERCPU_HASH,
>>> * BPF_MAP_TYPE_LRU_HASH
>>> * BPF_MAP_TYPE_LRU_PERCPU_HASH
>> A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
We could also exercise the test for the LRU map with BPF_F_NO_COMMON_LRU.
>
SNIP
>>> diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
>>> new file mode 100644
>>> index 000000000000..5b45af230368
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
>>> @@ -0,0 +1,336 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +/* Copyright (c) 2023 Isovalent */
>>> +
>>> +#include <errno.h>
>>> +#include <unistd.h>
>>> +#include <pthread.h>
>>> +
>>> +#include <bpf/bpf.h>
>>> +#include <bpf/libbpf.h>
>>> +
>>> +#include <bpf_util.h>
>>> +#include <test_maps.h>
>>> +
>>> +#include "map_percpu_stats.skel.h"
>>> +
>>> +#define MAX_ENTRIES 16384
>>> +#define N_THREADS 37
>> Why are 37 threads needed here? Would a smaller number of threads work as well?
> This was used to evict more elements from LRU maps when they are full.
I see. But in my understanding, for the global LRU list the eviction (the
invocation of htab_lru_map_delete_node) only becomes possible when the number
of free elements is less than LOCAL_FREE_TARGET (128) * nr_running_cpus. Now
the number of free elements is 1000 as defined in __test(), the number of
vCPUs is 8 in my local VM setup (BPF CI also uses 8 vCPUs), and it is hard to
trigger the eviction because 8 * 128 is roughly equal to 1000. So I suggest
decreasing the number of free elements to 512 and the number of threads to 8,
or adjusting the number of running threads and free elements according to the
number of online CPUs.
* Re: [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats
2023-07-05 3:03 ` Hou Tao
@ 2023-07-05 15:34 ` Anton Protopopov
0 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-07-05 15:34 UTC (permalink / raw)
To: Hou Tao
Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
On Wed, Jul 05, 2023 at 11:03:25AM +0800, Hou Tao wrote:
> Hi,
>
> On 7/4/2023 11:02 PM, Anton Protopopov wrote:
> > On Tue, Jul 04, 2023 at 10:41:10PM +0800, Hou Tao wrote:
> >> Hi,
> >>
> >> On 6/30/2023 4:25 PM, Anton Protopopov wrote:
> >>> Add a new map test, map_percpu_stats.c, which checks the correctness of a
> >>> map's percpu element counters. For supported maps the test upserts a number
> >>> of elements, checks the correctness of the counters, then deletes all the
> >>> elements and checks again that the sum of the counters drops back to zero.
> >>>
> >>> The following map types are tested:
> >>>
> >>> * BPF_MAP_TYPE_HASH, BPF_F_NO_PREALLOC
> >>> * BPF_MAP_TYPE_PERCPU_HASH, BPF_F_NO_PREALLOC
> >>> * BPF_MAP_TYPE_HASH,
> >>> * BPF_MAP_TYPE_PERCPU_HASH,
> >>> * BPF_MAP_TYPE_LRU_HASH
> >>> * BPF_MAP_TYPE_LRU_PERCPU_HASH
> >> A test for BPF_MAP_TYPE_HASH_OF_MAPS is also needed.
> We could also exercise the test for the LRU map with BPF_F_NO_COMMON_LRU.
Thanks, added.
> >
> SNIP
> >>> diff --git a/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> new file mode 100644
> >>> index 000000000000..5b45af230368
> >>> --- /dev/null
> >>> +++ b/tools/testing/selftests/bpf/map_tests/map_percpu_stats.c
> >>> @@ -0,0 +1,336 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/* Copyright (c) 2023 Isovalent */
> >>> +
> >>> +#include <errno.h>
> >>> +#include <unistd.h>
> >>> +#include <pthread.h>
> >>> +
> >>> +#include <bpf/bpf.h>
> >>> +#include <bpf/libbpf.h>
> >>> +
> >>> +#include <bpf_util.h>
> >>> +#include <test_maps.h>
> >>> +
> >>> +#include "map_percpu_stats.skel.h"
> >>> +
> >>> +#define MAX_ENTRIES 16384
> >>> +#define N_THREADS 37
> >> Why are 37 threads needed here? Would a smaller number of threads work as well?
> > This was used to evict more elements from LRU maps when they are full.
>
> I see. But in my understanding, for the global LRU list the eviction (the
> invocation of htab_lru_map_delete_node) only becomes possible when the number
> of free elements is less than LOCAL_FREE_TARGET (128) * nr_running_cpus. Now
> the number of free elements is 1000 as defined in __test(), the number of
> vCPUs is 8 in my local VM setup (BPF CI also uses 8 vCPUs), and it is hard to
> trigger the eviction because 8 * 128 is roughly equal to 1000. So I suggest
> decreasing the number of free elements to 512 and the number of threads to 8,
> or adjusting the number of running threads and free elements according to the
> number of online CPUs.
Yes, makes sense. I've changed the test to use 8 threads and an offset of 512.
* [v3 PATCH bpf-next 6/6] selftests/bpf: check that ->elem_count is non-zero for the hash map
2023-06-30 8:25 [v3 PATCH bpf-next 0/6] bpf: add percpu stats for bpf_map Anton Protopopov
` (4 preceding siblings ...)
2023-06-30 8:25 ` [v3 PATCH bpf-next 5/6] selftests/bpf: test map percpu stats Anton Protopopov
@ 2023-06-30 8:25 ` Anton Protopopov
5 siblings, 0 replies; 20+ messages in thread
From: Anton Protopopov @ 2023-06-30 8:25 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf
Cc: Anton Protopopov
Previous commits populated the ->elem_count per-cpu pointer for hash maps.
Check that this pointer is non-NULL in an existing map.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
tools/testing/selftests/bpf/progs/map_ptr_kern.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/map_ptr_kern.c b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
index db388f593d0a..d6e234a37ccb 100644
--- a/tools/testing/selftests/bpf/progs/map_ptr_kern.c
+++ b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
@@ -33,6 +33,7 @@ struct bpf_map {
__u32 value_size;
__u32 max_entries;
__u32 id;
+ __s64 *elem_count;
} __attribute__((preserve_access_index));
static inline int check_bpf_map_fields(struct bpf_map *map, __u32 key_size,
@@ -111,6 +112,8 @@ static inline int check_hash(void)
VERIFY(check_default_noinline(&hash->map, map));
+ VERIFY(map->elem_count != NULL);
+
VERIFY(hash->n_buckets == MAX_ENTRIES);
VERIFY(hash->elem_size == 64);
--
2.34.1