maemo.org Bugzilla – Bug 5407
microfeed-providers-unstable's twitter support eats cpu
Last modified: 2009-10-18 23:48:23 UTC
STEPS TO REPRODUCE THE PROBLEM:
Install mauku, which pulls in microfeed-providers-unstable
Normal system operation with mauku installed
The /usr/lib/microfeed/bin/org.microfeed.Provider.Twitter process runs wild,
constantly eating 90% of the CPU, with knock-on effects on responsiveness.
(At least, it's happened every time I've done it.)
EXTRA SOFTWARE INSTALLED:
Various apps, including: All the account-plugin-foo's, bluemaemo, bounce
evolution, conboy, documents to go, drnoksnes, openssh, the facebook client,
fmradio, foreca weather, liqbase, gpodder, numpty physics, ogg support, osm2go,
qik, rfk, rootsh and xournal.
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-GB; rv:188.8.131.52)
Gecko/20091007 Ubuntu/9.10 (karmic) Firefox/3.5.3
*** Bug 5377 has been marked as a duplicate of this bug. ***
This has been reported many times, so confirming.
However, I am not able to reproduce the situation. Please provide more
background information about your usage of Mauku and detailed steps on how to
trigger the bug.
Could someone please attach to the provider process with GDB and check the
exact location of the loop with the backtrace command? Note that there may
also be several threads running.
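For reference, a minimal sketch of that procedure (assuming gdb is installed
on the device and you have permission to ptrace the process; the process name
is taken from the report above):

```shell
# Find the runaway provider process and dump a backtrace of every thread.
# Run as root (or as the owning user) on the device; requires gdb.
pid=$(pidof org.microfeed.Provider.Twitter)
gdb --batch -p "$pid" \
    -ex "set pagination off" \
    -ex "thread apply all bt"
```

Without the matching debug symbol packages the frames will mostly show up as
?? (), but the library names alone are often enough to locate the loop.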
(In reply to comment #2)
> This has been reported many times, so confirming.
> However, I am not able to reproduce the situation. Please provide more
> background information about your usage of Mauku and detailed steps on how to
> trigger the bug.
It happens to me frequently, but not always. I just use one Twitter account,
and there isn't a pattern to how I hit the bug. Just use the application
normally; after exiting Mauku, the microfeed process is eating the CPU. You
can see it with load-applet, if installed, or with top.
> Could someone please attach to the provider process with GDB and check the
> exact location of the loop with the backtrace command? Note that there may
> also be several threads running.
I'll try to get a backtrace today.
In my case I just launched Mauku and closed it. After a bit I noticed that my
n900 was very slow.
The backtrace I get is something like:
#0 default_mutex_lock (mutex_implementation=<value optimized out>) at
#1 0x4002e588 in microfeed_mutex_lock (mutex=<value optimized out>) at
#2 0x4002e670 in microfeed_thread_cleanup () at
#3 0x00009f68 in ?? ()
#4 0x00009f68 in ?? ()
#0 0x4002a3b0 in ?? () from /usr/lib/libmicrofeed-common-0.so.5
#1 0x4002e670 in microfeed_thread_cleanup () at
#2 0x00009f68 in ?? ()
#3 0x00009f68 in ?? ()
Sorry, but I don't have all the debug symbols installed, as this is my
personal device.
This might or might not be related, but I have also noticed that this bug
occurs for me when Mauku is unable to update the overview, either at all or
only partially. What I mean by this is that in the feed overview I see none,
or only some (two or three), updated tweets, and all the rest of the tweets
are several days old, even though that is of course not the case in my actual
Twitter feed.
But why is this bug still in a NEW state with priority LOW? Seems like a major
bug to me :)
Thank you, Marco, for the backtrace. It seems that the provider is shutting
down the main loop while another thread is still running. That leads to a
busy loop waiting for the thread to finish. The busy loop is a bad thing
(very bad) in itself, so there will be a quick fix for that. However, the
main loop should not exit while a thread is running, so I have to find the
root cause of the issue.
Fixed in libmicrofeed version 0.5.1. The microfeed-20091018 user package
should pull in the correct library. Please test Mauku after the update, and
possibly readjust your evaluation in extras-testing QA based on that (there
is no need to update Mauku).