From: David Sharp <dhsharp@google.com>
To: Vaibhav Nagarnaik <vnagarnaik@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>,
Ingo Molnar <mingo@redhat.com>,
Frederic Weisbecker <fweisbec@gmail.com>,
Michael Rubin <mrubin@google.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 4/5] trace: Make removal of ring buffer pages atomic
Date: Tue, 23 Aug 2011 12:16:50 -0700 [thread overview]
Message-ID: <CAJL_ektp-BP52Qqfu_bBDjUnXyX20PWzgNuQF_atB0J9q7n=Zw@mail.gmail.com> (raw)
In-Reply-To: <1314125758-5069-1-git-send-email-vnagarnaik@google.com>
On Tue, Aug 23, 2011 at 11:55 AM, Vaibhav Nagarnaik
<vnagarnaik@google.com> wrote:
> This patch adds the capability to remove pages from a ring buffer
> without destroying any existing data in it.
>
> This is done by removing the pages after the tail page. This makes sure
> that first all the empty pages in the ring buffer are removed. If the
> head page is one in the list of pages to be removed, then the page after
> the removed ones is made the head page. This removes the oldest data
> from the ring buffer and keeps the latest data around to be read.
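For anyone following along, the list surgery described here can be sketched in plain C. This is a toy userspace model, not the kernel's code: the node type, the 5-node ring, and the helper names are mine, standing in for struct buffer_page and the real list handling.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct buffer_page: a node in a
 * circular doubly-linked list of buffer pages. */
struct page_node {
	struct page_node *next, *prev;
	int id;
};

/* Unlink the nr nodes that follow 'tail' from a circular list and
 * return the first unlinked node. The caller frees the detached span
 * later, mirroring how the patch defers free_buffer_page() until
 * after recording is re-enabled. If the head page falls inside the
 * removed span, the page after the span becomes the new head, so the
 * oldest data is dropped and the newest kept. */
static struct page_node *unlink_after(struct page_node *tail, int nr,
				      struct page_node **head)
{
	struct page_node *first = tail->next;
	struct page_node *last = first;
	int head_in_span = (*head == first);

	for (int i = 1; i < nr; i++) {
		last = last->next;
		if (*head == last)
			head_in_span = 1;
	}
	/* splice the span out of the ring */
	tail->next = last->next;
	last->next->prev = tail;
	if (head_in_span)
		*head = tail->next;
	last->next = NULL;	/* terminate the detached span */
	return first;
}

static struct page_node nodes[5];

/* demo: 5-page ring, head on page 1, remove 2 pages after page 0 */
static int demo(void)
{
	struct page_node *head;

	for (int i = 0; i < 5; i++) {
		nodes[i].id = i;
		nodes[i].next = &nodes[(i + 1) % 5];
		nodes[i].prev = &nodes[(i + 4) % 5];
	}
	head = &nodes[1];	/* head page sits inside the removed span */
	unlink_after(&nodes[0], 2, &head);
	return head->id;	/* page after the span became the head */
}
```

Removing pages 1 and 2 leaves 0 linked to 3, with 3 as the new head, which is exactly the "oldest data removed, latest kept" behavior the changelog describes.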
>
> To do this in a non-racey manner, tracing is stopped for a very short
> time while the pages to be removed are identified and unlinked from the
> ring buffer. The pages are freed after the tracing is restarted to
> minimize the time needed to stop tracing.
>
> The context in which the pages from the per-cpu ring buffer are removed
> runs on the respective CPU. This minimizes the events not traced to only
> NMI trace contexts.
"interrupt contexts". We're not disabling interrupts.
>
> Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
> ---
> Changelog v3-v2:
> * Fix compile errors
>
>  kernel/trace/ring_buffer.c |  225 ++++++++++++++++++++++++++++++++------------
>  kernel/trace/trace.c       |   20 +----
>  2 files changed, 167 insertions(+), 78 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index bb0ffdd..f10e439 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -23,6 +23,8 @@
>  #include <asm/local.h>
>  #include "trace.h"
>
> +static void update_pages_handler(struct work_struct *work);
> +
>  /*
>   * The ring buffer header is special. We must manually up keep it.
>   */
> @@ -502,6 +504,8 @@ struct ring_buffer_per_cpu {
>         /* ring buffer pages to update, > 0 to add, < 0 to remove */
>         int                             nr_pages_to_update;
>         struct list_head                new_pages; /* new pages to add */
> +       struct work_struct              update_pages_work;
> +       struct completion               update_completion;
>  };
>
>  struct ring_buffer {
> @@ -1080,6 +1084,8 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int nr_pages, int cpu)
>         spin_lock_init(&cpu_buffer->reader_lock);
>         lockdep_set_class(&cpu_buffer->reader_lock, buffer->reader_lock_key);
>         cpu_buffer->lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
> +       INIT_WORK(&cpu_buffer->update_pages_work, update_pages_handler);
> +       init_completion(&cpu_buffer->update_completion);
>
>         bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
>                             GFP_KERNEL, cpu_to_node(cpu));
> @@ -1267,32 +1273,107 @@ void ring_buffer_set_clock(struct ring_buffer *buffer,
>
>  static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);
>
> +static inline unsigned long rb_page_entries(struct buffer_page *bpage)
> +{
> +       return local_read(&bpage->entries) & RB_WRITE_MASK;
> +}
> +
> +static inline unsigned long rb_page_write(struct buffer_page *bpage)
> +{
> +       return local_read(&bpage->write) & RB_WRITE_MASK;
> +}
> +
>  static void
> -rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
> +rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int nr_pages)
>  {
> -       struct buffer_page *bpage;
> -       struct list_head *p;
> -       unsigned i;
> +       unsigned int nr_removed;
> +       int page_entries;
> +       struct list_head *tail_page, *to_remove, *next_page;
> +       unsigned long head_bit;
> +       struct buffer_page *last_page, *first_page;
> +       struct buffer_page *to_remove_page, *tmp_iter_page;
>
> +       head_bit = 0;
>         spin_lock_irq(&cpu_buffer->reader_lock);
> -       rb_head_page_deactivate(cpu_buffer);
> -
> -       for (i = 0; i < nr_pages; i++) {
> -               if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages)))
> -                       goto out;
> -               p = cpu_buffer->pages->next;
> -               bpage = list_entry(p, struct buffer_page, list);
> -               list_del_init(&bpage->list);
> -               free_buffer_page(bpage);
> +       atomic_inc(&cpu_buffer->record_disabled);
> +       /*
> +        * We don't race with the readers since we have acquired the reader
> +        * lock. We also don't race with writers after disabling recording.
> +        * This makes it easy to figure out the first and the last page to be
> +        * removed from the list. We remove all the pages in between including
> +        * the first and last pages. This is done in a busy loop so that we
> +        * lose the least number of traces.
> +        * The pages are freed after we restart recording and unlock readers.
> +        */
> +       tail_page = &cpu_buffer->tail_page->list;
> +       /*
> +        * tail page might be on reader page, we remove the next page
> +        * from the ring buffer
> +        */
> +       if (cpu_buffer->tail_page == cpu_buffer->reader_page)
> +               tail_page = rb_list_head(tail_page->next);
> +       to_remove = tail_page;
> +
> +       /* start of pages to remove */
> +       first_page = list_entry(rb_list_head(to_remove->next),
> +                               struct buffer_page, list);
> +       for (nr_removed = 0; nr_removed < nr_pages; nr_removed++) {
> +               to_remove = rb_list_head(to_remove)->next;
> +               head_bit |= (unsigned long)to_remove & RB_PAGE_HEAD;
>         }
> -       if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages)))
> -               goto out;
>
> -       rb_reset_cpu(cpu_buffer);
> -       rb_check_pages(cpu_buffer);
> -
> -out:
> +       next_page = rb_list_head(to_remove)->next;
> +       /* now we remove all pages between tail_page and next_page */
> +       tail_page->next = (struct list_head *)((unsigned long)next_page |
> +                                               head_bit);
> +       next_page = rb_list_head(next_page);
> +       next_page->prev = tail_page;
> +       /* make sure pages points to a valid page in the ring buffer */
> +       cpu_buffer->pages = next_page;
> +       /* update head page */
> +       if (head_bit)
> +               cpu_buffer->head_page = list_entry(next_page,
> +                                               struct buffer_page, list);
> +       /*
> +        * change read pointer to make sure any read iterators reset
> +        * themselves
> +        */
> +       cpu_buffer->read = 0;
> +       /* pages are removed, resume tracing and then free the pages */
> +       atomic_dec(&cpu_buffer->record_disabled);
>         spin_unlock_irq(&cpu_buffer->reader_lock);
> +
> +       RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages));
> +
> +       /* last buffer page to remove */
> +       last_page = list_entry(rb_list_head(to_remove), struct buffer_page,
> +                               list);
> +       tmp_iter_page = first_page;
> +       do {
> +               to_remove_page = tmp_iter_page;
> +               rb_inc_page(cpu_buffer, &tmp_iter_page);
> +               /* update the counters */
> +               page_entries = rb_page_entries(to_remove_page);
> +               if (page_entries) {
> +                       /*
> +                        * If something was added to this page, it was full
> +                        * since it is not the tail page. So we deduct the
> +                        * bytes consumed in ring buffer from here.
> +                        * No need to update overruns, since this page is
> +                        * deleted from ring buffer and its entries are
> +                        * already accounted for.
> +                        */
> +                       local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
> +               }
> +               /*
> +                * We have already removed references to this list item, just
> +                * free up the buffer_page and its page
> +                */
> +               nr_removed--;
> +               free_buffer_page(to_remove_page);
> +       } while (to_remove_page != last_page);
> +
> +       RB_WARN_ON(cpu_buffer, nr_removed);
>  }
>
>  static void
> @@ -1303,6 +1384,12 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
>         struct list_head *p;
>         unsigned i;
>
> +       /* stop the writers while inserting pages */
> +       atomic_inc(&cpu_buffer->record_disabled);
> +
> +       /* Make sure all writers are done with this buffer. */
> +       synchronize_sched();
> +
>         spin_lock_irq(&cpu_buffer->reader_lock);
>         rb_head_page_deactivate(cpu_buffer);
>
> @@ -1319,17 +1406,21 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
>
>  out:
>         spin_unlock_irq(&cpu_buffer->reader_lock);
> +       atomic_dec(&cpu_buffer->record_disabled);
>  }
>
> -static void update_pages_handler(struct ring_buffer_per_cpu *cpu_buffer)
> +static void update_pages_handler(struct work_struct *work)
>  {
> +       struct ring_buffer_per_cpu *cpu_buffer = container_of(work,
> +                       struct ring_buffer_per_cpu, update_pages_work);
> +
>         if (cpu_buffer->nr_pages_to_update > 0)
>                 rb_insert_pages(cpu_buffer, &cpu_buffer->new_pages,
>                                 cpu_buffer->nr_pages_to_update);
>         else
>                 rb_remove_pages(cpu_buffer, -cpu_buffer->nr_pages_to_update);
> -       /* reset this value */
> -       cpu_buffer->nr_pages_to_update = 0;
> +
> +       complete(&cpu_buffer->update_completion);
>  }
>
>  /**
> @@ -1339,14 +1430,14 @@ static void update_pages_handler(struct ring_buffer_per_cpu *cpu_buffer)
>   *
>   * Minimum size is 2 * BUF_PAGE_SIZE.
>   *
> - * Returns -1 on failure.
> + * Returns 0 on success and < 0 on failure.
>   */
>  int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
>                        int cpu_id)
>  {
>         struct ring_buffer_per_cpu *cpu_buffer;
>         unsigned nr_pages;
> -       int cpu;
> +       int cpu, err = 0;
>
>         /*
>          * Always succeed at resizing a non-existent buffer:
> @@ -1361,21 +1452,28 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
>         if (size < BUF_PAGE_SIZE * 2)
>                 size = BUF_PAGE_SIZE * 2;
>
> -       atomic_inc(&buffer->record_disabled);
> -
> -       /* Make sure all writers are done with this buffer. */
> -       synchronize_sched();
> +       nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
>
> +       /*
> +        * Don't succeed if recording is disabled globally, as a reader might
> +        * be manipulating the ring buffer and is expecting a sane state while
> +        * this is true.
> +        */
> +       if (atomic_read(&buffer->record_disabled))
> +               return -EBUSY;
> +       /* prevent another thread from changing buffer sizes */
>         mutex_lock(&buffer->mutex);
> -       get_online_cpus();
> -
> -       nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
>
>         if (cpu_id == RING_BUFFER_ALL_CPUS) {
>                 /* calculate the pages to update */
>                 for_each_buffer_cpu(buffer, cpu) {
>                         cpu_buffer = buffer->buffers[cpu];
>
> +                       if (atomic_read(&cpu_buffer->record_disabled)) {
> +                               err = -EBUSY;
> +                               goto out_err;
> +                       }
> +
>                         cpu_buffer->nr_pages_to_update = nr_pages -
>                                                         cpu_buffer->nr_pages;
>
> @@ -1391,21 +1489,38 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
>                          */
>                         INIT_LIST_HEAD(&cpu_buffer->new_pages);
>                         if (__rb_allocate_pages(cpu_buffer->nr_pages_to_update,
> -                                               &cpu_buffer->new_pages, cpu))
> +                                               &cpu_buffer->new_pages, cpu)) {
>                                 /* not enough memory for new pages */
> -                               goto no_mem;
> +                               err = -ENOMEM;
> +                               goto out_err;
> +                       }
> +               }
> +
> +               /* fire off all the required work handlers */
> +               for_each_buffer_cpu(buffer, cpu) {
> +                       cpu_buffer = buffer->buffers[cpu];
> +                       if (!cpu_buffer->nr_pages_to_update)
> +                               continue;
> +                       schedule_work_on(cpu, &cpu_buffer->update_pages_work);
>                 }
>
>                 /* wait for all the updates to complete */
>                 for_each_buffer_cpu(buffer, cpu) {
>                         cpu_buffer = buffer->buffers[cpu];
> -                       if (cpu_buffer->nr_pages_to_update) {
> -                               update_pages_handler(cpu_buffer);
> -                               cpu_buffer->nr_pages = nr_pages;
> -                       }
> +                       if (!cpu_buffer->nr_pages_to_update)
> +                               continue;
> +                       wait_for_completion(&cpu_buffer->update_completion);
> +                       cpu_buffer->nr_pages = nr_pages;
> +                       /* reset this value */
> +                       cpu_buffer->nr_pages_to_update = 0;
>                 }
>         } else {
>                 cpu_buffer = buffer->buffers[cpu_id];
> +               if (atomic_read(&cpu_buffer->record_disabled)) {
> +                       err = -EBUSY;
> +                       goto out_err;
> +               }
> +
>                 if (nr_pages == cpu_buffer->nr_pages)
>                         goto out;
>
> @@ -1415,40 +1530,42 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
>                 INIT_LIST_HEAD(&cpu_buffer->new_pages);
>                 if (cpu_buffer->nr_pages_to_update > 0 &&
>                         __rb_allocate_pages(cpu_buffer->nr_pages_to_update,
> -                                               &cpu_buffer->new_pages, cpu_id))
> -                       goto no_mem;
> +                                       &cpu_buffer->new_pages, cpu_id)) {
> +                       err = -ENOMEM;
> +                       goto out_err;
> +               }
>
> -               update_pages_handler(cpu_buffer);
> +               schedule_work_on(cpu_id, &cpu_buffer->update_pages_work);
> +               wait_for_completion(&cpu_buffer->update_completion);
>
>                 cpu_buffer->nr_pages = nr_pages;
> +               /* reset this value */
> +               cpu_buffer->nr_pages_to_update = 0;
>         }
>
>  out:
> -       put_online_cpus();
>         mutex_unlock(&buffer->mutex);
> -
> -       atomic_dec(&buffer->record_disabled);
> -
>         return size;
>
> - no_mem:
> + out_err:
>         for_each_buffer_cpu(buffer, cpu) {
>                 struct buffer_page *bpage, *tmp;
> +
>                 cpu_buffer = buffer->buffers[cpu];
>                 /* reset this number regardless */
>                 cpu_buffer->nr_pages_to_update = 0;
> +
>                 if (list_empty(&cpu_buffer->new_pages))
>                         continue;
> +
>                 list_for_each_entry_safe(bpage, tmp, &cpu_buffer->new_pages,
>                                         list) {
>                         list_del_init(&bpage->list);
>                         free_buffer_page(bpage);
>                 }
>         }
> -       put_online_cpus();
>         mutex_unlock(&buffer->mutex);
> -       atomic_dec(&buffer->record_disabled);
> -       return -ENOMEM;
> +       return err;
>  }
>  EXPORT_SYMBOL_GPL(ring_buffer_resize);
>
> @@ -1487,21 +1604,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
>         return __rb_page_index(iter->head_page, iter->head);
>  }
>
> -static inline unsigned long rb_page_write(struct buffer_page *bpage)
> -{
> -       return local_read(&bpage->write) & RB_WRITE_MASK;
> -}
> -
>  static inline unsigned rb_page_commit(struct buffer_page *bpage)
>  {
>         return local_read(&bpage->page->commit);
>  }
>
> -static inline unsigned long rb_page_entries(struct buffer_page *bpage)
> -{
> -       return local_read(&bpage->entries) & RB_WRITE_MASK;
> -}
> -
>  /* Size is determined by what has been committed */
>  static inline unsigned rb_page_size(struct buffer_page *bpage)
>  {
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index bb3c867..736518f 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2936,20 +2936,10 @@ static int __tracing_resize_ring_buffer(unsigned long size, int cpu)
>
>  static ssize_t tracing_resize_ring_buffer(unsigned long size, int cpu_id)
>  {
> -       int cpu, ret = size;
> +       int ret = size;
>
>         mutex_lock(&trace_types_lock);
>
> -       tracing_stop();
> -
> -       /* disable all cpu buffers */
> -       for_each_tracing_cpu(cpu) {
> -               if (global_trace.data[cpu])
> -                       atomic_inc(&global_trace.data[cpu]->disabled);
> -               if (max_tr.data[cpu])
> -                       atomic_inc(&max_tr.data[cpu]->disabled);
> -       }
> -
>         if (cpu_id != RING_BUFFER_ALL_CPUS) {
>                 /* make sure, this cpu is enabled in the mask */
>                 if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask)) {
> @@ -2963,14 +2953,6 @@ static ssize_t tracing_resize_ring_buffer(unsigned long size, int cpu_id)
>                 ret = -ENOMEM;
>
>  out:
> -       for_each_tracing_cpu(cpu) {
> -               if (global_trace.data[cpu])
> -                       atomic_dec(&global_trace.data[cpu]->disabled);
> -               if (max_tr.data[cpu])
> -                       atomic_dec(&max_tr.data[cpu]->disabled);
> -       }
> -
> -       tracing_start();
>         mutex_unlock(&trace_types_lock);
>
>         return ret;
> --
> 1.7.3.1
>
>