author     Lars Knoll <lars.knoll@qt.io>              2020-01-17 14:33:53 +0100
committer  Lars Knoll <lars.knoll@qt.io>              2020-04-09 20:02:55 +0200
commit     5b7c3e31b538376f2b4733bd868b5875b504cdb3 (patch)
tree       e3e45f65f1bdc2db5dad3b25ec79bfe04320d9e6 /tests/auto/corelib/tools/qhash
parent     926a0886d1961a3f384d3e6c36919e6dd8055dce (diff)
New QHash implementation
A brand new QHash implementation, using a faster and more memory-efficient data
structure than the old QHash.
Instead of the node-based approach of the old QHash, the new implementation
uses a two-stage lookup table. The total number of buckets in the table is
divided into spans of 128 entries. Inside each span, an array of chars indexes
into a storage area for the span.
The storage area for each span is a simple array that gets (re-)allocated in
size increments of 16 items. This gives an average memory overhead of
8*sizeof(struct{ Key; Value; }) + 128*sizeof(char) + 16 for each span.
To give good performance and avoid too many collisions, the table keeps its
load factor between .25 and .5, growing and rehashing when the load factor
goes above .5.
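The two-stage scheme described above can be sketched as follows. This is a
simplified illustration, not Qt's actual code: the names Span and Entry, the
0xff "empty" marker, and the use of std::vector as a stand-in for the
16-item-step reallocated array are all invented for this sketch.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// One entry in the densely packed per-span storage.
struct Entry { int64_t key; int64_t value; };

// A span covers 128 consecutive buckets of the table. Each bucket holds one
// byte: either "empty" (0xff) or an offset into the span's entry array.
struct Span {
    static constexpr unsigned char Empty = 0xff;
    unsigned char offsets[128];
    std::vector<Entry> entries;  // stand-in for the array grown in steps of 16

    Span() { for (unsigned char &o : offsets) o = Empty; }

    // Stage two of the lookup: bucket (in [0, 128)) -> offset -> entry.
    Entry *lookup(std::size_t bucket) {
        unsigned char o = offsets[bucket];
        return o == Empty ? nullptr : &entries[o];
    }

    Entry &insert(std::size_t bucket, Entry e) {
        assert(offsets[bucket] == Empty);
        if (entries.size() == entries.capacity())
            entries.reserve(entries.capacity() + 16);  // grow in steps of 16
        offsets[bucket] = static_cast<unsigned char>(entries.size());
        entries.push_back(e);
        return entries.back();
    }
};
```

Because the offset table stores one byte per bucket, a span can address at
most 128 densely packed entries while keeping the index overhead to a single
char per bucket.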
This design keeps the memory overhead of the hash very small while at the
same time giving very good performance. The calculated overhead for a
QHash<int, int> comes to 1.7-3.3 bytes per entry, and to 2.2-4.3 bytes for
a QHash<ptr, ptr>.
The new implementation also completely splits the QHash and QMultiHash classes.
One behavioral change to note is that the new QHash implementation will not
provide stable references to nodes in the hash when the table needs to grow.
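Because entries now live in contiguous arrays that are reallocated as the
table grows, holding a reference across an insert is no longer safe. The
hazard can be illustrated without Qt, using a plain std::vector as a stand-in
for the contiguous entry storage (the function name is invented for this
sketch):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// With the old node-based QHash, a reference into the hash stayed valid
// across inserts. Contiguous storage can be reallocated when it grows, so
// such a reference may dangle. The safe pattern is to re-look-up after any
// potentially growing insert, as modelled here with a std::vector.
int64_t refetchAfterGrowth()
{
    std::vector<int64_t> storage;
    storage.push_back(7);
    int64_t *stale = &storage[0];   // like keeping a reference into the hash
    for (int i = 0; i < 1000; ++i)
        storage.push_back(i);       // growth reallocates; 'stale' may dangle
    (void)stale;                    // must not be dereferenced any more
    return storage[0];              // safe: fetched again after the growth
}
```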
Benchmarking using https://github.com/Tessil/hash-table-shootout shows
very nice performance compared to many different hash table implementations.
The numbers below are for a hash<int64, int64> with 1 million entries. They
scale nicely (mostly linearly, with some variation due to varying load
factors) to smaller and larger tables. All numbers are in seconds, measured
with gcc on Linux:
Hash table               random     random     random    random    reads     full
                         insertion  insertion  full      full      after     iteration
                                    (reserved) deletes   reads     deletes
------------------------------------------------------------------------------
std::unordered_map       0.3842     0.1969     0.4511    0.1300    0.1169    0.0708
google::dense_hash_map   0.1091     0.0846     0.0550    0.0452    0.0754    0.0160
google::sparse_hash_map  0.2888     0.1582     0.0948    0.1020    0.1348    0.0112
tsl::sparse_map          0.1487     0.1013     0.0735    0.0448    0.0505    0.0042
old QHash                0.2886     0.1798     0.5065    0.0840    0.0717    0.1387
new QHash                0.0940     0.0714     0.1494    0.0579    0.0449    0.0146
Numbers for hash<std::string, int64>, with the string having 15 characters:
Hash table               random     random     random    random    reads
                         insertion  insertion  full      full      after
                                    (reserved) deletes   reads     deletes
--------------------------------------------------------------------
std::unordered_map       0.4993     0.2563     0.5515    0.2950    0.2153
google::dense_hash_map   0.2691     0.1870     0.1547    0.1125    0.1622
google::sparse_hash_map  0.6979     0.3304     0.1884    0.1822    0.2122
tsl::sparse_map          0.4066     0.2586     0.1929    0.1146    0.1095
old QHash                0.3236     0.2064     0.5986    0.2115    0.1666
new QHash                0.2119     0.1652     0.2390    0.1378    0.0965
Memory usage numbers (in MB for a table with 1M entries) also look very nice:
Hash table               Key   int64      std::string (15 chars)
                         Value int64      int64
---------------------------------------------------------
std::unordered_map             44.63      75.35
google::dense_hash_map         32.32      80.60
google::sparse_hash_map        18.08      44.21
tsl::sparse_map                20.44      45.93
old QHash                      53.95      69.16
new QHash                      23.23      51.32
Fixes: QTBUG-80311
Change-Id: I5679734144bc9bca2102acbe725fcc2fa89f0dff
Reviewed-by: Thiago Macieira <thiago.macieira@intel.com>
Diffstat (limited to 'tests/auto/corelib/tools/qhash')
-rw-r--r--  tests/auto/corelib/tools/qhash/tst_qhash.cpp  45
1 file changed, 30 insertions(+), 15 deletions(-)
diff --git a/tests/auto/corelib/tools/qhash/tst_qhash.cpp b/tests/auto/corelib/tools/qhash/tst_qhash.cpp
index 2a18f8d3e6..b987adaa3f 100644
--- a/tests/auto/corelib/tools/qhash/tst_qhash.cpp
+++ b/tests/auto/corelib/tools/qhash/tst_qhash.cpp
@@ -62,7 +62,6 @@ private slots:
     void keyIterator();
     void keyValueIterator();
     void keys_values_uniqueKeys(); // slightly modified from tst_QMap
-    void noNeedlessRehashes();
     void const_shared_null();
     void twoArguments_qHash();
@@ -70,6 +69,8 @@ private slots:
     void eraseValidIteratorOnSharedHash();
     void equal_range();
     void insert_hash();
+
+    void badHashFunction();
 };
 
 struct IdentityTracker {
@@ -1325,20 +1326,6 @@ void tst_QHash::keys_values_uniqueKeys()
     QVERIFY(sorted(hash.values()) == sorted(QList<int>() << 2 << 1 << 4 << -2));
 }
 
-void tst_QHash::noNeedlessRehashes()
-{
-    QHash<int, int> hash;
-    for (int i = 0; i < 512; ++i) {
-        int j = (i * 345) % 512;
-        hash.insert(j, j);
-        int oldCapacity = hash.capacity();
-        hash[j] = j + 1;
-        QCOMPARE(oldCapacity, hash.capacity());
-        hash.insert(j, j + 1);
-        QCOMPARE(oldCapacity, hash.capacity());
-    }
-}
-
 void tst_QHash::const_shared_null()
 {
     QHash<int, QString> hash2;
@@ -1663,5 +1650,33 @@ void tst_QHash::insert_hash()
     }
 }
 
+struct BadKey {
+    int k;
+    BadKey(int i) : k(i) {}
+    bool operator==(const BadKey &other) const
+    {
+        return k == other.k;
+    }
+};
+
+size_t qHash(BadKey, size_t seed)
+{
+    return seed;
+}
+
+void tst_QHash::badHashFunction()
+{
+    QHash<BadKey, int> hash;
+    for (int i = 0; i < 10000; ++i)
+        hash.insert(i, i);
+
+    for (int i = 0; i < 10000; ++i)
+        QCOMPARE(hash.value(i), i);
+
+    for (int i = 10000; i < 20000; ++i)
+        QVERIFY(!hash.contains(i));
+
+}
+
 QTEST_APPLESS_MAIN(tst_QHash)
 #include "tst_qhash.moc"