Kexi/Junior Jobs/Design and perform Kexi-LibreOffice benchmarks
Revision as of 14:29, 26 March 2014
Status: UNASSIGNED, Difficulty: MEDIUM
Proposed and mentored by Jstaniek (talk) 14:13, 26 March 2014 (UTC)
The Goal
Document pros and cons of Kexi vs LibreOffice Base using performance/stability/resource consumption benchmarks.
Rationale
For many reasons, opinions about software are often subjective. We would like to identify the following in Kexi:
- strengths,
- potential for further development,
- most important areas to improve.
The Task
In order to do a Kexi-LibreOffice comparison:
- Design performance and stability benchmarks
- Perform the benchmarks
Below, Kexi and LibreOffice Base are each referred to as a product.
Requirements
- One version of each product should be selected, and it should be as new as possible.
- Intersection of features and parameters present in both products should be covered by the benchmark.
- Include single-user usage scenarios only.
- Only include scenarios available through normal use of the products. For example, tests cannot require command-line use of the SQLite database.
- One database project should be designed for each product.
- In each database, a number of objects of each type (table, query) should be designed.
- Tables should be filled with an appropriate amount of data.
- Database projects built for the two products should be as equivalent as possible.
- Design documents and project files should be made available under the GNU FDL license.
- Design documents should be put inline on this wiki page, in a subsection.
- Tests should be reproducible given the same hardware, so relevant scripts or recipes should be provided.
- Benchmarks should be performed on at least two operating systems, each on different hardware architecture/model (e.g. with differences in hardware performance of disks, RAM, amount of memory).
- Both operating systems have to be Linux-based (because Kexi for other OSes may not be tested enough at the moment).
- User and system time (see http://en.wikipedia.org/wiki/Time_%28Unix%29#User_Time_vs_System_Time) should be measured whenever possible.
- Disable any unnecessary applications and services that may consume resources during the benchmark: HTTP servers, other databases, screen savers, other applications, etc.
- Unless concurrent behavior is tested, all tests should be performed sequentially without a need to run more than one product at a time.
- It may be necessary to alter a product's configuration or even its source code for the needs of this benchmark. Therefore, it is preferred to have Kexi built from source code so it can be easily adapted; this is not required for LibreOffice Base.
- A Release or ReleaseWithDebugInfo build (not Debug or DebugFull) or equivalent should be used to build the product(s).
- Memory-related benchmarks should be designed and performed too.
- Cold and warm start of the applications loading the tested projects should also be measured (user/system time).
- KDE's System Activity "Detailed Memory Information" should also be used for memory reports. See http://byte.kde.org/~bcooksley/seli-memory/desktop_benchmark.html for an example of how a memory usage comparison may look.
- Benchmark report should include:
- Reference to the benchmark design documents and test projects/data.
- Detailed hardware information (types of components, amount of resources).
- Operating system information (version, distribution, versions of relevant dependent packages, e.g. Qt or SQLite).
- Detailed information about both products: their relevant configuration, versions, source code patches (if applied).
- Result for each test for each product; the median of time (or memory or other resource) over N runs should be computed, N >= 10. The form should be a table and a chart; use the ODS format.
- Remarks about accuracy of the test and limitations.
- Proposals for future expansion of the benchmark should be documented, e.g. new features to benchmark.
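The user/system-time and median-of-N-runs requirements above can be sketched as follows. This is a minimal illustration, not part of the task itself; the benchmarked command ("true" here) is a placeholder for whatever command launches a test scenario in either product:

```python
import os
import statistics


def time_command(argv):
    """Run argv once; return (user_time, system_time) in seconds.

    os.wait4() reports the child's own CPU usage, so the parent's
    overhead is not included in the measurement.
    """
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)  # child: replace with the benchmarked command
    _, status, rusage = os.wait4(pid, 0)
    if status != 0:
        raise RuntimeError(f"{argv} exited with status {status}")
    return rusage.ru_utime, rusage.ru_stime


def benchmark(argv, runs=10):
    """Median user/system time over `runs` executions (requirement: N >= 10)."""
    samples = [time_command(argv) for _ in range(runs)]
    return (statistics.median(u for u, _ in samples),
            statistics.median(s for _, s in samples))


# Placeholder command; a real benchmark would substitute the product's
# command line for opening a test project.
user, system = benchmark(["true"], runs=10)
print(f"median user={user:.3f}s system={system:.3f}s")
```

For cold-start runs, the filesystem caches would additionally have to be dropped between executions (as root: `sync; echo 3 > /proc/sys/vm/drop_caches`); warm-start runs simply repeat the launch without dropping caches.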
Hints
- Study publicly explained benchmarks of database or desktop software to identify areas tested and methodologies.
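As a scriptable cross-check for the GUI-based memory reports mentioned in the requirements, per-process memory can also be read directly from /proc on Linux. This is a hypothetical sketch, not required by the task; Pss (proportional set size) divides shared pages fairly among the processes mapping them, making it a reasonable per-process figure:

```python
import os


def pss_kb(pid):
    """Sum the proportional set size (Pss) of a process, in kB.

    Reads /proc/<pid>/smaps, which lists Pss per memory mapping.
    """
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])
    return total


# Measured on this script itself for illustration; a real report would
# target the product's PID after it has loaded the test project.
print(f"Pss of this process: {pss_kb(os.getpid())} kB")
```

Sampling this value before and after loading a test project, for each product, would give comparable numbers for the memory-related benchmarks.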