| Commit message | Author | Age | Files | Lines |
This is a follow-up to change 413358.
In particular, if some results are skipped because of
visibility constraints, more changes need to be requested
from the index, and pageResultSize needs to be reset so
that we know how many results are returned by each
query.
Without this change, querying for more changes could end
up in an infinite loop, as exposed by a test in 3.6 [1]
[1]: https://gerrit.googlesource.com/gerrit/+/refs/heads/stable-3.6/javatests/com/google/gerrit/server/query/change/FakeQueryChangesTest.java#126
Release-Notes: skip
Change-Id: I566010c6f5bfcb4fbc003bc6693aa25d4dd44a81
If the first query to the index did not fill the full limit the caller
wants, we paginate to obtain more results. However, subsequent queries
are not needed if the previous query returned fewer results
than the limit.
This is a follow-up to change 413358 [1], which reintroduced the
issue fixed in change 344695.
[1]: https://gerrit-review.googlesource.com/c/gerrit/+/413358/comment/85bca522_2b3552b2/
Bug: Issue 330195358
Release-Notes: Fix pagination to stop querying when there are no more results
Change-Id: Ie326f3628f5312a83bf83177a3fa5134f042b59a
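The termination condition described above can be sketched as follows. This is a toy illustration of the principle, not Gerrit's actual `PaginatingSource` code: a page that comes back shorter than the page size means the index is exhausted, so issuing another query is pointless (and, combined with a stale result counter, can loop forever).

```java
import java.util.List;

public class PaginationSketch {
    // Hypothetical stand-in for one index query returning at most pageSize results.
    static List<Integer> queryPage(List<Integer> index, int start, int pageSize) {
        int from = Math.min(start, index.size());
        int to = Math.min(start + pageSize, index.size());
        return index.subList(from, to);
    }

    // Keep paginating only while the previous page came back full: a short page
    // means the index has no more results, so further queries are unnecessary.
    static int readAll(List<Integer> index, int pageSize) {
        int fetched = 0;
        int queries = 0;
        while (true) {
            List<Integer> page = queryPage(index, fetched, pageSize);
            queries++;
            fetched += page.size();
            if (page.size() < pageSize) {
                break; // previous query under-filled the page: stop querying
            }
        }
        return queries;
    }

    public static void main(String[] args) {
        // 5 results with page size 10: a single query suffices, no extra round trips.
        int queries = readAll(List.of(1, 2, 3, 4, 5), 10);
        System.out.println(queries);
    }
}
```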
Verify that the OpenSSH key algorithm matches the one
associated with the public key. Throw a new specific
InvalidKeyAlgorithmException if the two algorithms do not match.
This is a breaking change because invalid OpenSSH keys
have been tolerated since the very beginning, when Shawn introduced
SSH support in Change 6610.
Attempting to store new invalid SSH keys now fails and results
in an HTTP 400 error in response to the add-keys REST API.
Make SshKeyCacheImpl tolerant of existing keys stored
in the accounts' profiles; otherwise Gerrit may start flagging
previously stored keys as invalid, resulting
in random authentication failures for existing users.
Existing invalid keys are reported in the error_log with the
associated exceptions and automatically fixed by removing the
invalid key from the account's profile and adjusting the key
algorithm to the one associated with the public key.
Bug: Issue 330758152
Release-Notes: Breaking change: validate and reject SSH keys with invalid or mismatched algorithm
Change-Id: I83c89a786f70aa3b88744a70f10012415f45f284
(cherry picked from commit 6eac4fe62a6a081c5c9395f8874bdc49615eea0d)
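Conceptually, the mismatch being detected is between the algorithm declared in the key line's prefix (e.g. `ssh-rsa`) and the algorithm name embedded inside the Base64 blob itself: the OpenSSH wire format starts with a 4-byte big-endian length followed by the algorithm string. The sketch below illustrates that check with a hand-built blob; it is not Gerrit's actual validation code.

```java
import java.nio.ByteBuffer;
import java.util.Base64;

public class SshKeyAlgoCheck {
    // Extract the algorithm name embedded in the Base64 blob: the OpenSSH wire
    // format begins with a 4-byte big-endian length, then the algorithm string.
    static String embeddedAlgorithm(String base64Blob) {
        byte[] blob = Base64.getDecoder().decode(base64Blob);
        ByteBuffer buf = ByteBuffer.wrap(blob);
        int len = buf.getInt();
        byte[] name = new byte[len];
        buf.get(name);
        return new String(name);
    }

    public static void main(String[] args) {
        // Build a blob whose embedded algorithm is "ssh-ed25519" for the demo.
        ByteBuffer buf = ByteBuffer.allocate(4 + 11);
        buf.putInt(11).put("ssh-ed25519".getBytes());
        String blob = Base64.getEncoder().encodeToString(buf.array());

        // A key line declaring "ssh-rsa" but embedding "ssh-ed25519" is invalid.
        String declared = "ssh-rsa";
        boolean mismatch = !declared.equals(embeddedAlgorithm(blob));
        System.out.println(mismatch);
    }
}
```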
SshKeyUtil has never validated the SSH key algorithm
specified as a prefix of the Base64-encoded key.
Whilst the behaviour has been the same since 2008, it is
nonetheless buggy: the algorithm should be validated to prevent
the storage of invalid keys.
TODO: Mark the SSH key validation test as disabled to allow the
build to succeed. The test can be re-enabled once the validation
has been amended to verify the key algorithm.
Bug: Issue 330758152
Release-Notes: skip
Change-Id: I42b1c6474fa876828e5353e81e97b21b981665f9
(cherry picked from commit 4f3ff5abdb18ad34078d8dd2f0ad1d4e610957d1)
Move the comment about restarting the DataSource for back-filling
results skipped for visibility. The comment is in fact valid for
any PaginationType specified.
Address the comment in change 413358 [1].
[1]: https://gerrit-review.googlesource.com/c/gerrit/+/413358/comment/d7a90bb0_0f8ebc9f/
Release-Notes: skip
Change-Id: I9826b8be4005a946c7952ae89eb81f8c191997e3
The introduction of paginationType NONE in
change 379354 affected the API pagination.
In particular, if some results are skipped because of
visibility constraints, more changes need to be
requested from the index, until we have filled the
full limit the caller wants or there are no more
changes [1].
[1]: https://gerrit-review.googlesource.com/c/gerrit/+/379354/8/java/com/google/gerrit/index/query/PaginatingSource.java#72
Bug: Issue 328958027
Release-Notes: Fix NONE paginationType filtering of changes
Change-Id: I3ccff7f8ba5a6d3903f9acbe3f5c439c55225ce2
When querying Lucene, if we have not filled the full
limit the caller wants, `PaginatingSource.read()`
needs to restart the source and continue. This should happen
regardless of the paginationType.
Bug: Issue 328958027
Release-Notes: skip
Change-Id: I02bc97eeecde2149e374dd036b87dd4c06034f27
* stable-3.3:
Make the indexing operation fail upon StorageException(s)
ConsistencyCheckerIT: Delete calls to index broken changes that fail silently
RefAdvertisementIT: Don't call reindex that would fail silently
Release-Notes: skip
Change-Id: I775de4baa70920c515aa040ff2222ec8052dd366
* stable-3.2:
Make the indexing operation fail upon StorageException(s)
ConsistencyCheckerIT: Delete calls to index broken changes that fail silently
RefAdvertisementIT: Don't call reindex that would fail silently
Release-Notes: skip
Change-Id: Iecdf1e4ce450c8fab5244338ce46e6e4d95bf959
Change 270450 caused the blanking of the Lucene document
upon reindexing if any field caused a StorageException.
Whilst the overall intention was good, the implementation
caused the Lucene index replace operation to continue with
a Document without any fields instead of making the whole
operation fail.
StorageExceptions are thrown when the underlying storage,
be it a filesystem or anything else, returns errors.
Whether the failure is permanent or temporary (e.g. concurrent
GCs, repacking or pruning may cause sporadic StorageExceptions),
returning a blank Lucene document was incorrect: instead
of failing the operation, it caused the change entry to be
completely removed from the index.
Let the StorageException fail the indexing operation, so that
existing entries may continue to exist, allowing the caller
to retry the operation.
The previous implementation, returning an empty Document, did
not allow any retry: once the change entry was removed
from the index, it could not be discovered and reindexed anymore,
for example by a `gerrit index changes <changenum>`.
Tested manually, applying a random StorageException thrown
during the fetching of the ChangeField extensions:

  public static Set<String> getExtensions(ChangeData cd) {
    if (new Random().nextBoolean()) {
      throw new StorageException("Simulated storage exception");
    }
    return extensions(cd).collect(toSet());
  }
Before this change, every time the indexing of a change threw a
StorageException, the change disappeared from the index. Eventually,
all changes would be eliminated, and only an off-line reindex
or a reindex of all the changes in the project would recover them.
After this change, some of the indexing operations are
successful and others fail; however, retrying the reindexing
operation reindexes all of them.
Even if the above test case looks strange at first sight,
it simulates a real-life scenario where a low percentage
of indexing operations (0.1%) may fail because of sporadic
StorageExceptions. Before this change, some index entries
went missing on a daily basis (5 to 10 changes per day),
whilst after this change all indexing operations can be
retried and do not result in any loss of index entries.
Bug: Issue 314113030
Release-Notes: Fail the change reindex operation upon StorageException(s)
Change-Id: Ia121f47f7a68c290849a22dea657804743a26b0d
These calls used to fail silently and are of no further use.
This change is part of a series that allows for the fake change index
to run in integration tests. The fake index throws an exception in
case reindexing fails, which made this test fail.
Release-Notes: skip
Change-Id: I9c9d3f7613bca057bf0963442077bb6c0b3dab51
receivePackOmitsMissingObject wants to test a scenario where we
have a change ref, but the patch set ref / commit was deleted or
is otherwise gone.
It did that by adding a new patch set pointing to a non-existent
SHA1 and triggering a reindex of that change. However, that reindex
fails silently because we submit the reindex task to another
executor and provide no feedback to the caller.
This commit modifies the test and just deletes the patch set ref
which makes for the same scenario but without the reindex trouble.
This change is part of a series that allows for the fake change index
to run in integration tests. The fake index throws an exception in
case reindexing fails, which made this test fail.
Release-Notes: skip
Change-Id: Icf79795a343a6abca28c6504b279eb0a5c84fad2
* stable-3.3:
Update Jetty to 9.4.53.v20231009 for security updates
Release-Notes: skip
Change-Id: Iebbfe9ea1fc69e91e74208e30ea9a1b0a71ab1bd
* stable-3.2:
Update Jetty to 9.4.53.v20231009 for security updates
Release-Notes: skip
Change-Id: If1d4dfc04d9407f0bab937a996d9321358a7b6a8
Jetty 9.4.53.v20231009 includes the following two security fixes:
- CVE-2023-36478 [1] - HTTP/2 HPACK integer overflow, disclosed on
  the 10th of October
- CVE-2023-44487 [2] - HTTP/2 Rapid Reset / Stream Cancellation Attack
[1] https://nvd.nist.gov/vuln/detail/CVE-2023-36478
[2] https://nvd.nist.gov/vuln/detail/CVE-2023-44487
Release-Notes: Update Jetty to 9.4.53.v20231009 with critical security fixes
Change-Id: Ie93fbcb8b35d9e4997dc0578893a8856b56b173c
(cherry picked from commit c49057e05d35ff2ad1a7307aa9168b84ae7588db)
Since we need to keep 3.4 compatible with Java 8, we can't update it
past the JGit stable-5.13 branch as 6.0+ is Java 11 only.
* modules/jgit 035e0e23f25..5aa8a7e27 (7):
f6928f573 Revert "RefDirectory: Throw exception if CAS of packed ref list fails"
2fd050c56 GcConcurrentTest: @Ignore flaky testInterruptGc
2aaa56113 Fix CommitTemplateConfigTest
61d4e3134 [bazel] Skip ConfigTest#testCommitTemplatePathInHomeDirecory
f3a3a2e87 Demote severity of some error prone bug patterns to warnings
b9b90d1d3 Add missing since tag for SshBasicTestBase
4f662a26f Add missing since tag for SshTestHarness#publicKey2
631858183 Silence API errors
96d9e3eb1 (origin/stable-5.9) Prevent infinite loop rescanning the pack list on PackMismatchException
f371bfb3e Remove blank in maven.config
f73efea21 Remove blank in maven.config
23b9693a7 DirCache: support option index.skipHash
3212c8fa3 GC: Close File.lines stream
d9f75e8bb If tryLock fails to get the lock another gc has it
1691e3877 Fix GcConcurrentTest#testInterruptGc
49f527386 Don't swallow IOException in GC.PidLock#lock
a6da439b4 Check if FileLock is valid before using or releasing it
8eee800fb Acquire file lock "gc.pid" before running gc
380f091fa Silence API errors introduced by 9424052f
9424052f2 Add pack options to preserve and prune old pack files
ed2cbd9e8 Allow to perform PackedBatchRefUpdate without locking loose refs
e55bad514 Document option "core.sha1Implementation" introduced in 59029aec
21e902dd7 Shortcut during git fetch for avoiding looping through all local refs
765083200 FetchCommand: fix fetchSubmodules to work on a Ref to a blob
8040936f8 Silence API warnings introduced by I466dcde6
ad977f157 Allow the exclusions of refs prefixes from bitmap
e4529cd39 PackWriterBitmapPreparer: do not include annotated tags in bitmap
611412a05 BatchingProgressMonitor: avoid int overflow when computing percentage
cd3fc7a29 Speedup GC listing objects referenced from reflogs
2011fe06d FileSnapshotTest: Add more MISSING_FILE coverage
aa9f736c3 Silence API warnings
48316309b [benchmarks] Remove profiler configuration
c20e9676c Add SHA1 benchmark
4fac79658 [benchmarks] Set version of maven-compiler-plugin to 3.8.1
8d53a1433 Fix running JMH benchmarks
59029aec3 Add option to allow using JDK's SHA1 implementation
924491d4d Ignore IllegalStateException if JVM is already shutting down
Release-Notes: Bump JGit to v5.13.2.202306221912-r
Change-Id: I4cc1acbf1761336fa4556450791b814b275c3370
Add a meaningful string representation for the
PeriodicProjectListCacheWarmer class so that its task entry
is readable in the gerrit show-queue output.
Change-Id: I8138f19c4354911d76233cd1fe7a7f89e6c4262d
Release-Notes: skip
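The mechanism is simply overriding toString() on the task class; without it, a queue listing falls back to Object's default representation (class name plus hash code). A minimal sketch, with a hypothetical display string since the actual one is not quoted here:

```java
public class ToStringSketch {
    // Hypothetical task: without toString(), show-queue would print something
    // like "PeriodicProjectListCacheWarmer@1a2b3c4d" from Object's default.
    static class PeriodicProjectListCacheWarmer implements Runnable {
        @Override
        public void run() {
            // would refresh the project_list cache periodically
        }

        @Override
        public String toString() {
            return "project_list cache warmer";
        }
    }

    public static void main(String[] args) {
        String display = new PeriodicProjectListCacheWarmer().toString();
        System.out.println(display);
    }
}
```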
Add a NONE option to disable index backend pagination,
allowing the user to decide between performance
and data consistency.
This option needs to be honoured by the indexing backend,
and this change introduces the corresponding implementation
for Lucene.
We should also create the corresponding change in the
elasticsearch libModule to throw an IAE highlighting
that the backend doesn't support the NONE pagination type.
API pagination is not affected by this change and is
always preserved.
With pagination, we can encounter problems in the result
set when moving from one page of the index results to
the next:
* duplicated entries (when new changes are created)
* missing entries (when changes are removed or set to
  private)
* wrong entry status (when changes switch their state
  while pages are being fetched)
Having duplicate or missing results can have side effects,
depending on the use-case of the consumer of the data.
For example, since indexes are used to populate some LoadingCaches
(SearchingChangeCache [1]), inconsistent results returned by
the indexing backend may cause the cache loading function
to fail or keep an inconsistent state which doesn't reflect
the underlying data.
For this case, a workaround [2] has been put in place in master
and 3.8; however, it will not work for the missing
or inconsistent entries.
[1]: https://gerrit.googlesource.com/gerrit/+/refs/heads/stable-3.6/java/com/google/gerrit/server/git/SearchingChangeCacheImpl.java#157
[2]: https://gerrit-review.googlesource.com/c/gerrit/+/349995
Bug: Issue 287484350
Release-Notes: Introduce NONE pagination type and its support in Lucene indexing backend
Change-Id: I40f87895d9dac951ae30c5562b2ddbf28b34a41a
Co-Authored-With: Fabio Ponciroli <ponch78@gmail.com>
(cherry picked from commit a0da72341b8159c5341af711e6b8991ef1aa7875)
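The duplicated-entries anomaly listed above is easy to reproduce with plain offset-based paging over a mutating result set. This toy sketch (not Gerrit code) shows a change landing at the head of the "index" between two page fetches, shifting the offsets so an entry is returned twice:

```java
import java.util.ArrayList;
import java.util.List;

public class PagingAnomalySketch {
    // Offset-based page fetch over a live, ordered "index" of change IDs.
    static List<String> page(List<String> index, int offset, int size) {
        int from = Math.min(offset, index.size());
        int to = Math.min(offset + size, index.size());
        return new ArrayList<>(index.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> index = new ArrayList<>(List.of("c1", "c2", "c3", "c4"));
        List<String> results = new ArrayList<>(page(index, 0, 2)); // [c1, c2]

        // A new change lands at the head between the two page fetches...
        index.add(0, "c0");

        // ...so offset 2 now points at c2 again: a duplicated entry.
        results.addAll(page(index, 2, 2)); // [c2, c3]
        System.out.println(results);
    }
}
```

Deleting an entry between fetches causes the mirror-image problem: a result is silently skipped.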
GerritServer had all the logic to start the daemon in replica mode only
for the in-memory repository configuration. When a test used
local disk, it ignored the GerritConfig annotation
and the restartAsSlave() method call.
Make sure that GerritServer has at least the SSHD service started
when the test has the @NoHttpd annotation, because otherwise the
local-disk-based tests could unknowingly start a server without
any protocol daemon running on it.
Align the behaviour of the in-memory and disk-based GerritServer
instantiation, so that tests on a Gerrit replica pass or
fail regardless of the underlying data storage.
Also, delay the evaluation of the replica flag provider
GerritIsReplicaProvider so that existing code can determine the
correct Gerrit replica status in both runtime and test
configurations.
Bug: Issue 290794206
Release-Notes: Fix acceptance tests Gerrit daemon startup in replica mode in conjunction with @UseLocalDisk
Change-Id: I76c1612752f2cab995a4abc5c45fc29c640812c8
Release-Notes: skip
Change-Id: I852f359c30f8f8713074f75f293a6d54856af9a2
Release-Notes: skip
Change-Id: Ifba886e1476f00824fd707364ee370d49c9d1174
Release-Notes: skip
Change-Id: I241897a2d71eceda0f5d26c7bfccbf13c514f5a6
* stable-3.3:
Introduce cache.threads option to enable a custom cache executor
GitwebServlet: Fix project root computation
Release-Notes: skip
Change-Id: Ia75d4de94de85d24dae3a27f0916c066a861f7da
* stable-3.2:
Introduce cache.threads option to enable a custom cache executor
GitwebServlet: Fix project root computation
Release-Notes: skip
Change-Id: I342a210680ff43796f9fa4403f1b70f78e0ddb3e
Since the introduction of Caffeine as an alternative to Guava
in Change 244612, the execution of the cache event listeners moved
to a background thread run by the ForkJoinPool's common pool [1].
This subtle difference has caused issues for plugins that
register listeners, such as high-availability and multi-site:
the consequences have been quite serious because of the inability
to understand whether an eviction was caused by a forwarded cache
eviction event or by an execution of a Gerrit API or other internals
that removed the entry.
The use of the JVM common pool has several disadvantages and, under
certain conditions [2], it may even lead to deadlocks or unexpected
blockages.
Introduce the cache.threads option to decouple the cache background
thread execution and allow configuring an explicit, separate
thread pool which can be tuned independently of the rest of the
JVM common threads.
Also allow restoring the plugins' legacy cache-listener behaviour
without losing the ability to leverage the performance of the
Caffeine cache vs. the traditional Guava one.
By default, this change is a NO-OP because it preserves the current
background execution of the Caffeine cache's tasks.
Introduce the DefaultMemoryCacheFactoryTest class from stable-3.4 to
avoid further conflicts when merging upstream.
References:
[1] https://github.com/ben-manes/caffeine/wiki/Guava#asynchronous-notifications
[2] https://dzone.com/articles/be-aware-of-forkjoinpoolcommonpool
Release-Notes: introduce cache.threads option to allow custom executors for Caffeine caches
Bug: Issue 16565
Change-Id: I204abd1bdbf2bbed5b3d982d14cbc5549ac96ace
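The decoupling boils down to dispatching cache callbacks on a dedicated pool rather than `ForkJoinPool.commonPool()`. A minimal stdlib sketch, assuming a pool sized by a hypothetical cache.threads setting; with Caffeine the pool would be supplied through `Caffeine.newBuilder().executor(...)`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class CacheExecutorSketch {
    public static void main(String[] args) throws Exception {
        // A dedicated pool, like one a cache.threads = 2 setting might create,
        // instead of relying on the JVM-wide ForkJoinPool.commonPool().
        ExecutorService cacheExecutor =
            Executors.newFixedThreadPool(2, r -> new Thread(r, "CacheExecutor"));

        // Simulate a removal listener being dispatched on the dedicated pool:
        // the callback now runs off the common pool and can be tuned separately.
        AtomicReference<String> listenerThread = new AtomicReference<>();
        cacheExecutor
            .submit(() -> listenerThread.set(Thread.currentThread().getName()))
            .get();

        System.out.println(listenerThread.get());
        cacheExecutor.shutdown();
    }
}
```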
If9da362140 introduced a regression in the project root computation.
Also add test coverage for the Gitweb servlet so that a similar
regression can be avoided in the future.
Test Plan:
bazel test javatests/com/google/gerrit/httpd/...
Bug: Issue 16449
Release-Notes: Fix project root computation in Gitweb servlet
Change-Id: Ia1a7bdea55db4deb55a3b82a292890e7febbe675
JGit's loading of the default SSH session factory may yield a
non-deterministic instance if more than one factory is available.
See the current logic below:

  private static SshSessionFactory loadSshSessionFactory() {
    ServiceLoader<SshSessionFactory> loader =
        ServiceLoader.load(SshSessionFactory.class);
    Iterator<SshSessionFactory> iter = loader.iterator();
    if (iter.hasNext()) {
      return iter.next();
    }
    return null;
  }

If the first service is the Apache Mina SSHD session factory, then
it would never be possible to select JSCH through clientImplementation
in gerrit.config.
Manage the JSCH option explicitly to ensure that the desired session
factory is always instantiated when requested.
Release-Notes: Fix the ability to configure JSCH as client implementation in gerrit.config
Change-Id: Ifafd717e16b5fc1853553bd1b6e3c60bb048b57e
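The fix amounts to selecting the implementation from configuration instead of trusting ServiceLoader iteration order. A hypothetical sketch of that selection (the method and the string mapping are illustrative, not Gerrit's actual code):

```java
public class SshFactorySelection {
    // Hypothetical: resolve the session factory from gerrit.config's
    // clientImplementation value rather than taking whichever provider
    // ServiceLoader happens to return first on this classpath.
    static String selectFactory(String clientImplementation) {
        switch (clientImplementation) {
            case "JSCH":
                return "JschConfigSessionFactory";
            case "APACHE":
                return "SshdSessionFactory";
            default:
                throw new IllegalArgumentException(
                    "Unsupported clientImplementation: " + clientImplementation);
        }
    }

    public static void main(String[] args) {
        // JSCH is honoured even when Apache Mina SSHD is also on the classpath.
        String factory = selectFactory("JSCH");
        System.out.println(factory);
    }
}
```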
* stable-3.3:
Disable printing cache stats by default on Init/Reindex
Release-Notes: skip
Change-Id: I93404010eb9dfe0b9c2cb49f57091efd815bcb18
* stable-3.2:
Disable printing cache stats by default on Init/Reindex
Release-Notes: skip
Change-Id: Ia8356e393bd8cd8601e4805a0a7106b7a047b593
Printing cache statistics can be slow, particularly when the
caches are large. For example, when the caches are ~400G,
printing cache stats takes over 30 mins.
Release-Notes: Printing cache stats is disabled by default for Init and Reindex
Change-Id: I848d6dfef0be9dee9ebac8a597dd7e617cbe30e4
In https://gerrit-review.googlesource.com/c/gerrit/+/293978, the eTag was
removed from GetRevisionActions, but not from RevisionResource. This
resulted in the UI showing a stale 'submit' action on changes in the same
submission chain. The UI now uses the RevisionResource eTag, which does
not include MergeSuperSet (all related changes).
In this change:
- Return a non-cacheable result from GetRevisionActions, so it does not
  return the RevisionResource eTag.
- Add an experiment flag to remove the revision eTag. The computation of
  the eTag can be expensive; we can run experiments to see if it can be
  dropped permanently.
Bug: Issue 14686
Bug: Issue 14779
Bug: Issue 16030
Release-Notes: Fix Issue 14779: Caching issue with "Submit including parents"
Change-Id: I65a3277d3cb0a116a56f6dcaed205c7f808443c3
* stable-3.3:
Predicate: Add safety check against not(any())
Add an index-safe predicate that matches all changes
Release-Notes: skip
Change-Id: I0860eb42f74b184da853a064d13dcf62917fa58c
* stable-3.2:
Predicate: Add safety check against not(any())
Add an index-safe predicate that matches all changes
Release-Notes: skip
Change-Id: Ie1006ba75c63ef445771acea26dcd371eb750156
* stable-3.1:
Predicate: Add safety check against not(any())
Add an index-safe predicate that matches all changes
Release-Notes: skip
Change-Id: I56708f2d9a3218f2981e65b867003d0ea714407a
* stable-3.0:
Predicate: Add safety check against not(any())
Add an index-safe predicate that matches all changes
Release-Notes: skip
Change-Id: Id37339b406675721a4bac3411c6e2874986954bb
* stable-2.16:
Predicate: Add safety check against not(any())
Add an index-safe predicate that matches all changes
Release-Notes: skip
Change-Id: Ia64e074194cd561bc55a6f0e7976633aaa9bb2ff
It's tempting to use not(any()) as a way to say "match no results", but
it is very unsafe. This simple check won't catch all misuses, but it
would have prevented Iad564dd47 from causing production problems.
Fix one obvious caller that now trips this check. This was a real
problem: a bare [is:watched] query on a large site for a user with no
project watches required iterating all changes to return an empty
result. We have real instances of this causing extremely long requests
in our server logs.
Release-Notes: skip
Change-Id: Ib43f7466db5db315bf87558a37f18a1832ec641f
(cherry picked from commit 02bab2161cd7ce7f682a28dad496a4465cb40844)
Iad564dd47 was problematic in the error case because it returned
Predicate.not(Predicate.any()). The "any" predicate is not an index
predicate, which produces at least two problems:
* When used with another non-index predicate, it
  might throw "no ChangeDataSource" from AndSource/OrSource.
* More dangerously, when combined with an index predicate, it is used as
  a post-filtering predicate. A post-filtering predicate that matches no
  results means it gets applied to every change in the index before
  returning an empty set, which is bad.
Instead, reuse a handy existing index predicate that is designed to
return no results efficiently from the index:
ChangeStatusPredicate.NONE. Expose this from a more general location,
ChangeIndexPredicate#none().
Release-Notes: skip
Change-Id: Ic9a7e277070cff7768ec91c1972cf8e9f6deacb1
(cherry picked from commit d9679cdb605b8a10977e1fc821f57dd5749b1dd5)
* ChangeIndexRewriterTest: OrPredicate is a raw type. References to
generic type OrPredicate<T> should be parameterized
* ChangeStatusPredicate: The enum constant NEW should have a
corresponding case label in this enum switch on Change.Status.
* AndSource: Type safety: Unchecked cast from Predicate<T> to
DataSource<T>
Release-Notes: skip
Signed-off-by: Edwin Kempin <ekempin@google.com>
Change-Id: I01413362ffd402f8f20ec929cd86610f51f50050
(cherry picked from commit fec2b0ce0d51cbb59106e9b055272377b1e6173d)
Release-Notes: skip
Change-Id: I69d1c634bfcbc67dfcaa5457186e126d1f87e9b6
Release-Notes: skip
Change-Id: I98a1fc487236d2bc61d3eccaeef4d7555dd83168
During the review of I5906a1f2, PassthroughLoadingCache gained a
bulk-loading mechanism in the getAll() method. However, the new
flavour of loading was not covered by additional tests and
failed to honour the method's contract because of the cast of
a regular Map to an ImmutableMap.
Rectify the situation and add one extra test to document the
expected behaviour.
Release-Notes: Fix bulk loading of entries for disabled caches
Change-Id: I7a24deebeef7f6ae853d11c1ee458b53296e1a44
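The bug class is worth spelling out: casting a mutable Map to an immutable type does not make it immutable, whereas making a genuine immutable copy does. A stdlib sketch of a pass-through getAll() (illustrative, using `Map.copyOf` in place of Guava's ImmutableMap; not the actual Gerrit signature):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PassthroughGetAllSketch {
    // Hypothetical pass-through bulk load: invoke the loader for every key
    // and return a genuinely immutable copy, rather than casting a mutable Map.
    static <K, V> Map<K, V> getAll(Iterable<? extends K> keys, Function<K, V> loader) {
        Map<K, V> loaded = new HashMap<>();
        for (K key : keys) {
            loaded.put(key, loader.apply(key));
        }
        return Map.copyOf(loaded); // truly immutable; a plain cast would not be
    }

    public static void main(String[] args) {
        Map<String, Integer> result = getAll(List.of("a", "bb"), String::length);
        System.out.println(result);
    }
}
```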
Release-Notes: skip
Change-Id: I482b71c301a10ab3b3c072138b817d6ffc8e739b
Release-Notes: skip
Change-Id: I1dc9dc2002736a3c6dadcb8b6559084714b72632
Release-Notes: skip
Change-Id: Ic7c08ea82a5800df2a93eb946183003de8e6aa56
The Gerrit documentation states that when the memoryLimit of a cache
is zero, the cache is disabled. However, the code told a different
story, because a Guava/Caffeine cache was always created,
with a potential impact on performance due to the inherent locking
of the underlying ConcurrentHashMap.
The ConcurrentHashMap always locks additions and removals on
the same key, and the only way to avoid the locking is to avoid
instantiating the ConcurrentHashMap at all.
Why would someone ever want to disable a cache?
Disabling the cache typically isn't a good idea; however, there
are situations where:
1. The rate of access of the cached value for K is low and the
   load time is high
2. The rate of eviction of the cached value for K is high
Issue 16379 is one example of the above condition; however,
there are more instances of the same problem.
The eviction done at 2. for K would then cause all the incoming threads
to wait until the load at 1. for K is done.
Introduce a simple LoadingCache implementation called
PassthroughLoadingCache that *really* just delegates the
calls to the underlying loader without involving any caching
or locking.
This change also makes the memoryLimit behaviour consistent with
diskLimit: setting it to zero disables the cache implementation.
Bug: Issue 16379
Release-Notes: Increase overall performance of disabled caches by avoiding any locking due to Guava/Caffeine caches with zero weight.
Change-Id: I5906a1f250014d96aaadcda0f64a37a78030d047
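The core idea of the pass-through cache can be sketched in a few lines: every get() goes straight to the loader, so there is no backing map and therefore no per-key locking or eviction stalls. This is a simplified illustration, not the actual Gerrit class:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class PassthroughCacheSketch {
    // Hypothetical pass-through "cache": no ConcurrentHashMap is ever created,
    // so there is nothing to lock and nothing to evict.
    static class PassthroughLoadingCache<K, V> {
        private final Function<K, V> loader;

        PassthroughLoadingCache(Function<K, V> loader) {
            this.loader = loader;
        }

        V get(K key) {
            return loader.apply(key); // delegate straight to the loader
        }
    }

    public static void main(String[] args) {
        AtomicInteger loads = new AtomicInteger();
        PassthroughLoadingCache<String, String> cache =
            new PassthroughLoadingCache<>(k -> k + "#" + loads.incrementAndGet());

        // Unlike a real cache, repeated gets re-invoke the loader every time.
        cache.get("x");
        cache.get("x");
        System.out.println(loads.get());
    }
}
```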
ChangeIndexRewriter processes the predicate tree to group all index
predicates into one at each level. The rewriter starts from the root
node and traverses all the child nodes, recursively marking each
child predicate as indexed or non-indexed. It then partitions all the
indexed predicates together and also rewrites And/OrPredicates to
either a source or a cardinal predicate. Before this change, the
rewriter short-circuited the partitioning and skipped the rewrite
logic when there was only one indexed child. Due to this behaviour,
query [2] was not rewritten to an OrCardinal predicate, since
(status:open OR status:merged) is one indexed child at its level
(as in [3]), whereas query [1] was rewritten to an AndCardinal.
[1] status:open age:1d issue:123
[2] (status:open OR status:merged) issue:123
[3]        root
          /    \
        Or      issue:123
       /  \
    open    merged
Fix the index rewriter to run the rewrite logic, rewriting
Or/AndPredicates to the respective cardinal predicates when it
short-circuits. This helps AndSource to choose the right source more
often. Also fix a few tests to assert on OrCardinalPredicate rather
than OrPredicate.
Unfortunately, there are currently no operators in core that are data
sources other than the index. We at Qualcomm have a data source
operator in a plugin which takes a bug number and queries the bug
tracker's DB for changes related to that bug number. Suppose this
operator (say issue:123) returns around 15 changes. On a 3.4 test site
against LUCENE containing ~20K open changes and ~2.8M merged changes,
the performance of the query "(status:open OR status:merged) issue:123":
before: 6m52s, 6m40s, 6m36s
after: 0.07s, 0.07s, 0.07s
Release-Notes: AndSource chooses right source more often
Change-Id: I87311359e28795d5f461b233cecd82ecaf53ee57