Python/Harmattan/Performance Considerations for Python Apps
Based on Making Python faster (for fmms initially) and Python Qt startup time tips
See the python.org page and Khertan's writeup for general Python optimizations.
Profiling
Do not worry about performance unless you notice a problem. Then only optimize what you can justify with profiling.
To profile Python code, run it with
$ python -m cProfile -o .profile TOPLEVEL_SCRIPT.py
To analyze the results, open them in the pstats browser:
$ python -m pstats .profile
> sort cumulative
> stats 40
This sorts the results by the cumulative time spent in each function and all the functions it calls, then displays the top 40 entries.
See the python.org page for more information on profiling
Improving Performance
Interpreter Choice
Unladen Swallow
PEP 3146 - Merging of Unladen Swallow
Currently Unladen Swallow has not shown much performance benefit, but it does have a longer startup time and uses more memory.
Psyco / Cython
These compile a restricted subset of Python into a Python extension module.
Do these work on ARM?
ShedSkin
ShedSkin is a tool that converts a restricted subset of Python into C++, which can then be compiled and used as a module in Python. Tests have shown that great speed improvements are possible. You do not need any knowledge of C or of how to use gcc.
Main article: ShedSkin
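As a hedged illustration (not taken from the main article, and the exact flags may vary between ShedSkin versions), a ShedSkin-friendly module sticks to the restricted subset and gives the type inferencer an example call to work from:

# fib.py - restricted-subset Python that ShedSkin can translate to C++
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == '__main__':
    # ShedSkin infers types from example calls such as this one
    print fib(30)

Building it as an extension module is then something along the lines of:

$ shedskin -e fib.py && make

after which "import fib" in regular CPython picks up the compiled module.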
Delegating to C with CTypes/SWIG
??
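As a minimal ctypes sketch (the function chosen here is only illustrative): load a shared C library, declare the argument and return types, and call the C function directly from Python instead of running a Python-level loop:

import ctypes
import ctypes.util

# Load the system C library and call its strlen() directly.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print libc.strlen("hello world")   # -> 11

The same pattern applies to your own compiled library: point CDLL at the .so file, declare the signatures, and keep the hot inner loops on the C side.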
Startup
/usr/bin/python Startup
Preloaders such as PyLauncher exist that keep a Python process around with heavyweight imports like gtk already loaded. On application launch, the preloader process is forked.
Preloaders were favored back in the Maemo 4.1 days but have fallen out of favor lately. Concerns center around always keeping around an otherwise unused Python process with heavy pieces of code imported [1].
Parsing .py files
Stripping the Code
A major downside is that the code your users run is different from the code you develop with. This means any stack traces that users provide will be a bit more complicated to decipher.
Benchmarks from stripping code[2]:
- First test (normal code): 2104 lines of code, 580 blank lines, 215 code lines. Load time from icon click to fully loaded: 10.04 seconds.
- Second test (cleaned-up code): 2104 lines of code, 0 blank lines, 80 code lines. Load time from icon click to fully loaded: 9.25 seconds.
- Third test (cleaned up further): 1469 lines of code, 0 blank lines, 80 code lines. Load time from icon click to fully loaded: 8.40 seconds (5 tests, from 8.09 to 8.60).
Generating pyc/pyo files
Python serializes the compiled bytecode after importing a file to save on re-parsing. It saves these files next to the .py files, which means that if the user does not have write access, Python will not be able to cache them.
Generating pyc/pyo files should be done from the package's postinst/postrm scripts, per Debian Python Policy[3].
Approaches:
- py_compilefiles src/*.py [4]
- Python-support is very easy to use: basically just add dh_pysupport to debian/rules and python-support to Build-Depends and Depends. Just make sure that postinst has #DEBHELPER# somewhere [5]
- python -m compileall TOPLEVEL.py [6]
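For illustration only (this is not the exact postinst content mandated by the policy, and the src/ path is a placeholder), byte-compiling a source tree from Python itself can be done with the standard compileall module:

import compileall

# Byte-compile every .py file under src/ so that users without write
# access still get cached bytecode; force=True recompiles even if the
# cached files look up to date.
compileall.compile_dir('src', force=True)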
pyo Files
A decent description of pyo files [7]
- When the Python interpreter is invoked with the -O flag, optimized code is generated and stored in ‘.pyo’ files. The optimizer currently doesn't help much; it only removes assert statements. When -O is used, all bytecode is optimized; .pyc files are ignored and .py files are compiled to optimized bytecode.
- Passing two -O flags to the Python interpreter (-OO) will cause the bytecode compiler to perform optimizations that could in some rare cases result in malfunctioning programs. Currently only __doc__ strings are removed from the bytecode, resulting in more compact ‘.pyo’ files. Since some programs may rely on having these available, you should only use this option if you know what you're doing.
- A program doesn't run any faster when it is read from a ‘.pyc’ or ‘.pyo’ file than when it is read from a ‘.py’ file; the only thing that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which they are loaded.
- When a script is run by giving its name on the command line, the bytecode for the script is never written to a ‘.pyc’ or ‘.pyo’ file. Thus, the startup time of a script may be reduced by moving most of its code to a module and having a small bootstrap script that imports that module. It is also possible to name a ‘.pyc’ or ‘.pyo’ file directly on the command line.
- The module ‘compileall’ can create ‘.pyc’ files (or ‘.pyo’ files when -O is used) for all modules in a directory.
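Putting the last two points together, one possible invocation (the directory name is only a placeholder) that generates optimized .pyo files for a whole tree is:

$ python -O -m compileall /opt/myapp/src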
Delayed work
With Dialcentral, epage found that caching the results of a good number of re.compile calls at object creation time (which occurs in a background thread) saved a significant amount of startup time. Even profiling runs that waited until the background thread finished showed a speed-up. epage suspects this is due to fewer class-variable assignments and lookups, which might be more expensive than the equivalent operations on an instance.
Sadly this was done early in the Dialcentral release cycle, and no performance numbers are available to back up these claims. It was considered significant enough at the time to justify making the code slightly uglier.
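The Dialcentral code itself is not reproduced here, but a hedged sketch of the same idea, compiling the expensive regular expressions on a background thread at object creation so that startup is not blocked, could look like this (class and pattern names are made up):

import re
import threading

class Backend(object):
    def __init__(self):
        self._patterns = {}
        self._ready = threading.Event()
        worker = threading.Thread(target=self._compile_patterns)
        worker.setDaemon(True)
        worker.start()

    def _compile_patterns(self):
        # Hypothetical patterns; the real expressions are application specific.
        self._patterns['phone'] = re.compile(r'\d{3}-\d{4}')
        self._patterns['email'] = re.compile(r'[^@\s]+@[^@\s]+')
        self._ready.set()

    def find_phone_numbers(self, text):
        # Block only if a caller races ahead of the background thread.
        self._ready.wait()
        return self._patterns['phone'].findall(text)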
Perceived Startup Performance
hildon_gtk_window_take_screenshot takes advantage of user perception to make the user think the application has launched faster.
Responsiveness
Thread per Logical Unit
The One Ring has separate threads for its D-Bus logic and its networking logic. The separation is done through a worker thread that the D-Bus thread posts tasks to; results come back as callbacks in the D-Bus thread.
See AsyncLinearExecutor and some example code
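The actual AsyncLinearExecutor is linked above; as a rough sketch of the pattern (not the real implementation), a single worker thread consumes tasks from a queue and hands each result to a callback:

import Queue        # named 'queue' on Python 3
import threading

class Worker(object):
    """One thread per logical unit: callers post (task, callback) pairs."""

    def __init__(self):
        self._tasks = Queue.Queue()
        thread = threading.Thread(target=self._run)
        thread.setDaemon(True)
        thread.start()

    def submit(self, task, on_result):
        self._tasks.put((task, on_result))

    def _run(self):
        while True:
            task, on_result = self._tasks.get()
            on_result(task())

In a real glib-based application the result callback would typically be marshalled back into the main loop with gobject.idle_add rather than invoked directly on the worker thread.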
Splitting a call between multiple callbacks
epage's approach[8]:
import functools

def make_idler(func):
    """
    Decorator that makes a generator-function into a function
    that will continue execution on next call
    """
    a = []

    @functools.wraps(func)
    def decorated_func(*args, **kwds):
        if not a:
            a.append(func(*args, **kwds))
        try:
            a[0].next()
            return True
        except StopIteration:
            del a[:]
            return False

    return decorated_func
Example
@make_idler
def func():
    # ... long code ...
    yield
    # ... long code ...
    yield
    # ... long code ...
    yield
    # ... long code ...

# Run one chunk of func() per idle callback until the generator is exhausted.
gobject.idle_add(func)
Memory Usage
Use of __slots__ on frequently instantiated classes.
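A brief illustrative example (the class name is made up): declaring __slots__ replaces the per-instance __dict__ with fixed attribute storage, which saves memory when many small objects are created:

class Contact(object):
    # Without __slots__ every instance carries its own __dict__.
    __slots__ = ('name', 'number')

    def __init__(self, name, number):
        self.name = name
        self.number = number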
FAQ
Is Python slow?
The standard response is "it depends". For a graphical application that is not doing too much processing, a user will probably not notice that it is written in Python. Compare that to an experiment by epage writing a GStreamer video filter in Python, which at best ran at 2 seconds per frame.
Further reading
- PyQt Tips and Tricks - similar guide for PyQt