From faff69968bea8ffefe843ef9b8016cf70766d10d Mon Sep 17 00:00:00 2001
From: Leena Miettinen
Date: Tue, 15 Oct 2019 16:16:46 +0200
Subject: Doc: Add best-practices info about creating benchmarks

From https://wiki.qt.io/Writing_Unit_Tests

Change-Id: Idc0bafb32690f443e1f49cd4f8efb653d3aa46a4
Reviewed-by: Edward Welbourne
---
 src/testlib/doc/src/qttestlib-manual.qdoc | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/src/testlib/doc/src/qttestlib-manual.qdoc b/src/testlib/doc/src/qttestlib-manual.qdoc
index a72ba1db91..a55671522b 100644
--- a/src/testlib/doc/src/qttestlib-manual.qdoc
+++ b/src/testlib/doc/src/qttestlib-manual.qdoc
@@ -317,10 +317,35 @@
     \section1 Creating a Benchmark

     To create a benchmark, follow the instructions for creating a test and then add a
-    QBENCHMARK macro to the test function that you want to benchmark.
+    \l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test function that
+    you want to benchmark. In the following code snippet, the macro is used:

     \snippet code/doc_src_qtestlib.cpp 12

+    A test function that measures performance should contain either a single
+    \c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
+    occurrences make no sense, because only one performance result can be
+    reported per test function, or per data tag in a data-driven setup.
+
+    Avoid changing the test code that forms (or influences) the body of a
+    \c QBENCHMARK macro, or the test code that computes the value passed to
+    \c setBenchmarkResult(). Differences in successive performance results
+    should ideally be caused only by changes to the product you are testing.
+    Changes to the test code can potentially result in a misleading
+    report of a change in performance. If you do need to change the test
+    code, make that clear in the commit message.
+
+    In a performance test function, the \c QBENCHMARK or \c setBenchmarkResult()
+    should be followed by a verification step using \l QCOMPARE(), \l QVERIFY(),
+    and so on. You can then flag a performance result as \e invalid if a code
+    path other than the intended one was measured. A performance analysis tool
+    can use this information to filter out invalid results.
+    For example, an unexpected error condition will typically cause the program
+    to bail out of the normal program execution prematurely, and thus falsely
+    show a dramatic performance increase.
+
+    \section2 Selecting the Measurement Back-end
+
     The code inside the QBENCHMARK macro will be measured, and possibly also repeated
     several times in order to get an accurate measurement. This depends on the selected
     measurement back-end. Several back-ends are available. They can be selected on the
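
To illustrate the guidance this patch adds, here is a minimal sketch of a
benchmark that pairs the \c QBENCHMARK macro with a verification step. The
class name \c TestBenchmark and the QString workload are assumptions for
illustration; they are not the contents of the snippet referenced above.

\code
// Sketch only: TestBenchmark and the string workload are illustrative.
void TestBenchmark::localeAwareCompare()
{
    QString str1 = QLatin1String("This is a test string");
    QString str2 = QLatin1String("This is a test string");

    int result = 1;
    QBENCHMARK {
        result = str1.localeAwareCompare(str2);
    }
    // Verify after measuring: if a code path other than the intended
    // one ran, the performance result can be flagged as invalid.
    QCOMPARE(result, 0);
}
\endcode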
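
The alternative mentioned above, \l QTest::setBenchmarkResult(), reports a
manually measured value instead of timing a \c QBENCHMARK body. The
following is a sketch under the assumption of a hypothetical workload
function \c doWork(); \l QElapsedTimer and the \c QTest::WalltimeMilliseconds
metric are existing Qt API, declared in the \c QElapsedTimer and \c QTest
headers.

\code
void TestBenchmark::manualResult()
{
    QElapsedTimer timer;
    timer.start();
    doWork(); // hypothetical workload under test, not part of Qt

    // Report the measured wall-clock time as this function's single
    // performance result, instead of measuring a QBENCHMARK body.
    QTest::setBenchmarkResult(timer.elapsed(), QTest::WalltimeMilliseconds);
}
\endcode

The advice about a trailing verification step applies here as well: check
the outcome of \c doWork() before or after reporting the result.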
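
The remark about one result per data tag refers to data-driven benchmarks,
where a \c{_data()} function supplies the inputs and one performance result
is reported per \l QTest::newRow() tag. A sketch; the column name, tags, and
workload are illustrative assumptions.

\code
void TestBenchmark::toUpper_data()
{
    QTest::addColumn<QString>("input");
    QTest::newRow("short") << QStringLiteral("abc");
    QTest::newRow("long") << QString(10000, QLatin1Char('x'));
}

void TestBenchmark::toUpper()
{
    QFETCH(QString, input);
    QString upper;
    QBENCHMARK {
        upper = input.toUpper();
    }
    // One performance result is reported for each data tag above.
    QCOMPARE(upper.size(), input.size());
}
\endcode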