Welcome, testers! Bug reports and feature requests are managed with Freedesktop's Phabricator. You need a Phabricator account to file tasks and comment on them. Take a look at the existing list of bugs/feature requests to see if your problem has already been reported (hint: use Ctrl+F in your browser to search!).
- To report a bug/problem in the software, create a task and set Projects: Pitivi.
- To request a new feature/enhancement, create a task, set Projects: Pitivi and Priority: Enhancement.
- Everything - All the tasks (bug reports and feature requests).
- Patches - All the patches (diffs) attached to tasks that have not yet been merged.
- Pitivi Love - Bugs or feature requests that are considered easier for new contributors to tackle.
Providing debugging information
Sharing sample files (and "scenarios") for testing
In some cases we might ask you to share sample media files with us to debug a particular issue. If you don't have your own hosting space, we have an FTP account with unlimited space available for this purpose, provided by idmark.ca.
- Using an FTP client (such as FileZilla, available on most Linux distributions), connect to "idmark.ca" using the username "email@example.com" (@idmark.ca is part of the username). Ask us for the password on IRC.
- Your uploaded files will be in a private staging folder (only visible through FTP); once reviewed, we may move your uploaded files to http://pitivi.ecchi.ca/user-contributed-samples/ for ease of access.
In addition to samples, it is extremely helpful to provide "scenario" files. These files are automatically generated each time you use a project. Combined with your project files, they allow us to reproduce exactly the actions that triggered the bug. This makes triggering the issue on our machines a very easy and reliable process, which saves you a ton of time! Here's how to provide scenario files to facilitate the process:
- Use the “Select unused clips” feature to easily remove unused media from your project; this will help you save space (and upload time).
- Save your project, right before triggering the bug.
- Trigger the bug (make Pitivi crash or freeze).
- Get the last/newest scenario file from ~/.cache/pitivi/scenarios/
- Reopen your project, and use the “Export project as tarball...” menu item in the hamburger menu. Save the .xges_tar file somewhere. It will contain your project file and its associated media.
- Temporarily rename the .xges_tar file to .tar, add the scenario file to the tarball, rename it back to .xges_tar, and upload it. This lets us reproduce your issue and integrate it into our test suite so that it does not happen again in the future!
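The repackaging steps above can be sketched as follows. The filenames here are examples (substitute your actual project tarball and scenario file names), and stand-in files are created first so the sketch can be run as-is:

```shell
# Create stand-in files for the sketch; in practice these come from
# "Export project as tarball..." and ~/.cache/pitivi/scenarios/.
echo "project data" > project.xges
tar -cf myproject.xges_tar project.xges
echo "recorded actions" > sample.scenario

mv myproject.xges_tar myproject.tar     # rename so tar tools accept the file
tar -rf myproject.tar sample.scenario   # append the scenario file to the archive
mv myproject.tar myproject.xges_tar     # rename back before uploading
tar -tf myproject.xges_tar              # should list both files
```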
Stack traces for crashes
When reporting a crash, or when the application freezes in a deadlock, it is very helpful to provide a stack trace.
Running the Pitivi bundles
To get a backtrace, you will first need to install gdb and strace.
Then, get into the bundle "environment":
$ APP_IMAGE_TEST=1 ./pitivi-0.94-x86_64 # Update with your own version of Pitivi bundle!
And then run pitivi inside gdb:
$ PYTHONHOME=/usr strace -o strace.out.txt gdb $APPDIR/bin/python3.3 -ex "set environ PYTHONHOME=$PYTHONHOME" -ex "set environ LD_LIBRARY_PATH=$LD_LIBRARY_PATH" -ex "r $APPDIR/bin/pitivi"
Then reproduce the crash or the deadlock; once the application has frozen, press Ctrl+C and run the following to get the backtrace:
(gdb) bt                    # obtain the backtrace
(gdb) thread apply all bt   # backtrace for all threads (prefer this when reporting a bug, as it contains more information)
(gdb) quit
When building Pitivi or using packages from your distribution
See GNOME's Getting Traces instructions for some comprehensive documentation and tips on the subject.
For those of you who already know how to install the relevant debug packages etc, we provide you with some simple reminders below of commands that can be particularly useful in Pitivi's context.
When you want to "attach" to an existing Python process (useful for deadlocks, where the application will be hung instead of crashed):
gdb python3 THE_PITIVI_PROCESS_NUMBER
When you want to run Pitivi entirely in gdb from the start:
gdb python3
(gdb) set pagination 0      # avoids the need to press Enter to "scroll"
(gdb) run /usr/bin/pitivi   # the version installed system-wide
(gdb) run bin/pitivi        # the development version from inside the build tree
And then, you can either use "bt full" or "thread apply all bt" to get the backtrace.
When you need to know what’s going on inside Pitivi, you can launch it with a debug level. In loggable.py, there are six levels: ( ERROR, WARN, FIXME, INFO, DEBUG, LOG ) = range(1, 7). As such, if you want to see errors and warnings only, launch Pitivi with debug level 2.
...and if you want to see everything, use the highest debug level.
If that's "too much" and you want to focus on particular parts of the code, you can do so. For example, you can get output from the "Timeline" and "MediaLibraryWidget" classes only.
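As a rough sketch, these invocations might look as follows. The numeric-level and category-name syntax is an assumption extrapolated from the PITIVI_DEBUG examples below, and the category names are assumed to be the lowercased class names; verify against your Pitivi version:

```shell
# Assumed PITIVI_DEBUG syntax; check it against your Pitivi version.
PITIVI_DEBUG=2 bin/pitivi                                # errors and warnings only
PITIVI_DEBUG=5 bin/pitivi                                # everything up to DEBUG
PITIVI_DEBUG=timeline:5,medialibrarywidget:5 bin/pitivi  # only these two categories
```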
Here are various examples of commands you can use to generate detailed debug logs that include not only Pitivi's debug output, but also GStreamer's:
A basic log can be obtained by running:
PITIVI_DEBUG=*:5 GST_DEBUG=2 bin/pitivi > debug.log 2>&1
To get debugging information from Non-Linear Engine, you could use:
PITIVI_DEBUG=5 GST_DEBUG=3,nle*:5,python:5 bin/pitivi > debug.log 2>&1
The information most likely to be useful would probably be the debug info from GES in addition to Pitivi's:
PITIVI_DEBUG=5 GST_DEBUG=ges:5 bin/pitivi > debug.log 2>&1
Some additional tips:
- When using GST_DEBUG, the resulting logs will most likely be too big to be attached to a bug report directly. Instead, compress them (in gzip, bzip2 or lzma format) before attaching them to a bug report.
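As a quick sketch of the compression step (a stand-in log file is created here so the commands can be run as-is; bzip2 or xz work the same way):

```shell
# Create a stand-in debug log, then compress it with gzip before
# attaching it to the bug report (use bzip2/xz for those formats).
printf 'sample debug output\n' > debug.log
gzip -f debug.log       # replaces debug.log with debug.log.gz
ls -l debug.log.gz
```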
Python performance profiling
In the rare cases where a performance problem is caused by our UI code, you can profile Pitivi itself with this command (and yes, "JUMP_THROUGH_HOOPS" is needed in this case; it is an environment variable of bin/pitivi):
JUMP_THROUGH_HOOPS=1 python3 -m cProfile -s time -o pitivi_performance.profile bin/pitivi
The resulting "pitivi_performance.profile" file can then be processed to create a visual representation of where the most time was spent and which functions were called the most often in the code. See also Jeff's blog posts on profiling.
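As a sketch of such processing, the profile file can be read with Python's standard pstats module. A small profile is generated inline here so the example runs on its own; the real pitivi_performance.profile produced by the command above works the same way:

```python
import cProfile
import pstats

def busy():
    # Stand-in workload so this example produces a profile by itself.
    return sum(i * i for i in range(100_000))

# Record a profile and dump it to a file, as "python3 -m cProfile -o ..." does.
profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()
profiler.dump_stats("pitivi_performance.profile")

# Load the file and print the 5 entries with the most cumulative time.
stats = pstats.Stats("pitivi_performance.profile")
stats.sort_stats("cumulative").print_stats(5)
```

Tools such as gprof2dot or snakeviz can also render such a file graphically; pstats is simply what ships with Python.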