Commit message
* origin/stable-3.3:
Do not retry replication when local repository not found
Change-Id: I6a8d0650ca24a4aac86fdafc819b028cf8864332
When many repositories are created and deleted in quick succession, it
may happen that a repository is deleted before its replication task
starts. Such a replication task keeps retrying, possibly indefinitely,
but has no chance to succeed.
Another scenario where this issue can occur is when a repository is
created but a replica is unreachable for some time. If the repository
is deleted before the replica becomes reachable again, the replication
task keeps retrying even though the local repository no longer exists.
When handling the RepositoryNotFoundException in PushOne, set the
retrying flag to false. This ensures that the replication task is
not retried and finishes.
Bug: Issue 15804
Change-Id: Ia55c5ec1c961f4c2aec9ecee8056f22b436e9fda
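A minimal sketch of the fix described above. The class and method names here are invented for illustration (the real logic lives in the plugin's PushOne class), and a local exception class stands in for JGit's RepositoryNotFoundException:

```java
// Hypothetical sketch: when the local repository is gone, clear the
// retrying flag so the task finishes instead of being rescheduled forever.
public class PushTaskSketch {
  // Stand-in for org.eclipse.jgit.errors.RepositoryNotFoundException.
  static class RepositoryNotFoundException extends Exception {}

  private boolean retrying = true; // task was queued for retry
  private boolean finished = false;

  void runOnce(boolean repoExists) {
    try {
      if (!repoExists) {
        throw new RepositoryNotFoundException();
      }
      // ... push refs to the remote ...
      retrying = false;
      finished = true;
    } catch (RepositoryNotFoundException e) {
      // The fix: stop retrying; the repository will never reappear.
      retrying = false;
      finished = true;
    }
  }

  boolean isRetrying() { return retrying; }
  boolean isFinished() { return finished; }

  public static void main(String[] args) {
    PushTaskSketch task = new PushTaskSketch();
    task.runOnce(false); // repository was deleted before the task started
    // prints retrying=false finished=true
    System.out.println("retrying=" + task.isRetrying() + " finished=" + task.isFinished());
  }
}
```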
When using 'replication start --wait' we need
ReplicationState.waitForReplication() to return when the task we're
waiting on has been canceled, either through an admin action or because
the replication distributor determined that another node already
completed it.
Add a couple of tests for PushAll that confirm this behavior was
previously broken and is now fixed.
Change-Id: I36320ae079af5d7673e05d20ddc94b42a9b04347
* stable-3.3:
Doc: make explicit that remoteNameStyle is for non-Gerrit repos
Doc: remoteNameStyle might result in repo name clashes
Change-Id: I1b2e9c5fd408b8f8bd1a3ef3104182c6f6474559
* stable-3.2:
Doc: make explicit that remoteNameStyle is for non-Gerrit repos
Doc: remoteNameStyle might result in repo name clashes
Change-Id: I32d598a36fe20c469528eca8d4c10d8775f7a3c4
When using remoteNameStyle in the replication.config settings,
the Gerrit admin needs to be warned about the risks of using styles
that are ambiguous and may map two source repositories to
the same target repository on the replica.
Example: set remoteNameStyle to basenameOnly
/foo/my-repo.git => pushed to my-repo
/bar/my-repo.git => pushed to my-repo
When two commits are pushed to the same branch on the two repos
/foo/my-repo.git and /bar/my-repo.git, the replication plugin
would push them to the same target repo my-repo, causing clashes
and losing commits (depending on which one is pushed first).
The risk needs to be highlighted so that the Gerrit admin
can check that the mapping is unambiguous.
Bug: Issue 15315
Change-Id: Iba42907bceb8d1c27d739f3b0cded4a1d7400686
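A hypothetical replication.config fragment illustrating the clash described above (the remote name and URL are invented for the example):

```
# With basenameOnly, the directory part of the project name is dropped,
# so distinct source repositories can collide on the replica.
[remote "backup"]
  url = ssh://mirror.example.com/git/${name}.git
  remoteNameStyle = basenameOnly
# /foo/my-repo.git -> pushed to my-repo
# /bar/my-repo.git -> pushed to my-repo  (same target: clash)
```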
The documentation already specifies non-Gerrit examples of the use
of remoteNameStyle; however, it does not say that if the remote
repository is backed by Gerrit, the *ONLY* supported option is
"slashes", otherwise the consequences could be catastrophic.
Two Gerrit servers (e.g. primary and replica) need to have
fully aligned repository names, as the names are also referenced
in the inherited ACLs. A repository name mapping may
disrupt ACL evaluation and make the remote Gerrit replica
unusable.
Bug: Issue 15318
Change-Id: I4d9447a4d0366a98037470c0cceda36f7a1b8a25
ProjectDeletion events were not registered event types.
This caused failures when EventGson tried to serialize/deserialize
objects having those events as fields, throwing the JsonParseException:
```
Unknown event type: project-deletion-replication-scheduled
```
Register the ProjectDeletion event types, similarly to what was already
done for the ref replication events (RefReplicatedEvent,
RefReplicationDoneEvent, RefReplicationScheduledEvent).
This change cherry-picks change [1].
[1] https://gerrit-review.googlesource.com/c/plugins/replication/+/308383
Bug: Issue 14628
Change-Id: I7471e9a0f8ea8ec27d5800f785d1c7006b35055c
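A self-contained sketch of why an unregistered type fails. This is a plain-Java stand-in for Gerrit's EventTypes registry, not the actual Gerrit classes; the class and method names are illustrative:

```java
// Minimal type registry mimicking the behavior described in the commit:
// deserialization of an event type that was never registered fails fast.
import java.util.HashMap;
import java.util.Map;

public class EventRegistrySketch {
  private static final Map<String, Class<?>> TYPES = new HashMap<>();

  static void register(String type, Class<?> clazz) {
    TYPES.put(type, clazz);
  }

  static Class<?> lookup(String type) {
    Class<?> clazz = TYPES.get(type);
    if (clazz == null) {
      // Gerrit's event deserializer throws JsonParseException here.
      throw new IllegalStateException("Unknown event type: " + type);
    }
    return clazz;
  }

  // Illustrative event class, not the plugin's real one.
  static class ProjectDeletionReplicationScheduledEvent {}

  public static void main(String[] args) {
    // The fix: register the project-deletion event types up front.
    register("project-deletion-replication-scheduled",
        ProjectDeletionReplicationScheduledEvent.class);
    // prints ProjectDeletionReplicationScheduledEvent
    System.out.println(
        lookup("project-deletion-replication-scheduled").getSimpleName());
  }
}
```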
Depends-On: https://gerrit-review.googlesource.com/c/gerrit/+/301238
Change-Id: I3e0569730b89f80c1209b4370ddb1c8367375e86
When the distributor runs, it now stores a snapshot of the pending
pushes and then removes from this snapshot all the RefUpdates
that were found while adding pending persisted tasks. The
remaining pushes in the snapshot can then be pruned without
an existence check, since they are no longer stored
persistently (and thus no longer need to be executed). This
effectively makes pruning I/O-less, reducing the load the
distributor puts on disk I/O.
Change-Id: I0916a57b302fd7d207fd31ec26df65d262a76124
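The snapshot-and-subtract logic above can be sketched with plain sets. This is a simplification under the assumption that tasks are identified by simple string ids; the real code operates on ReplicationTasksStorage tasks:

```java
// Sketch of the pruning described above: snapshot the in-queue pushes,
// subtract everything still persisted in storage, and prune the rest
// without any per-task existence check on disk.
import java.util.HashSet;
import java.util.Set;

public class PruneSketch {
  static Set<String> prunable(Set<String> pendingInQueue, Set<String> persistedWaiting) {
    Set<String> snapshot = new HashSet<>(pendingInQueue); // snapshot at distributor run
    snapshot.removeAll(persistedWaiting); // updates seen while adding persisted tasks
    return snapshot; // no longer stored persistently -> safe to prune
  }

  public static void main(String[] args) {
    Set<String> pending = Set.of("refs/heads/a", "refs/heads/b");
    Set<String> persisted = Set.of("refs/heads/a");
    // prints [refs/heads/b]
    System.out.println(prunable(pending, persisted));
  }
}
```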
To test if distributor can prune, we add an event to storage and
then delete the waiting task from storage to simulate it being
started by another node. We then assert that the project task
gets pruned from the work queue by the time the next distribution
cycle completes.
Change-Id: Ifeed8444986be03bddf443fed94170c8ee5ae72c
This change adds a basic IT test for the Distributor: put an event in
storage and assert that the corresponding replication work is done.
Change-Id: I7753af4bdcb6fb6675edc020bfd0f56edf1ae69b
* stable-3.3:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: Ib2216e3b06ea62cb06c22ad955a8c252f3bacccc
* stable-3.2:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: I03b5bbfcfca75a3ee54e782e4b64f19b1100e2eb
* stable-3.1:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: I6dbeaa0d21545a1903bdb11c5de5d9e8f72079c5
* stable-3.0:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: Id9ce63cd6112b3c8b16f9daafe3a8a982521baa9
* stable-2.16:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: Id987043c8a26bd3f69fb4bd5b84591ae20cb83ba
Previously, when giving up after retrying due to too many lock failures,
a 'replication start --wait' command would wait indefinitely if it was
waiting on the push that gave up. Fix this by calling retryDone() after
giving up, which triggers the ReplicationStatus to reflect a failure,
allowing the wait to complete.
Change-Id: I0debade83612eb7ce51bab0191ab99464a6e7cd3
The Destination.notifyFinished method calls finish on
ReplicationTasksStorage.Task objects which are not scheduled for retry.
The issue is that for rescheduled tasks PushOne.isRetrying
always returns true, even if the task has already been replicated.
This creates a situation where tasks scheduled for retry are
never cleaned up.
Bug: Issue 12754
Change-Id: I4b10c2752da6aa7444f57c3ce4ab70eb00c3f14e
Running a recent Error Prone version flags the error pattern
FloggerFormatString:
GerritSshApi.java:69: error: [FloggerFormatString] missing argument for
format specifier '%s'
logger.atInfo().log(
^
(see https://errorprone.info/bugpattern/FloggerFormatString)
Destination.java:466: error: [FloggerFormatString] extra format
arguments: used 0, provided 2
repLog.atFine().log("scheduling deletion of project {} at {}", project, uri);
^
(see https://errorprone.info/bugpattern/FloggerFormatString)
Change-Id: I01ea76f6673cb445924c72d40cef9e4ba57e2e6f
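The second diagnostic above comes from SLF4J-style `{}` placeholders being passed to Flogger, which expects printf-style specifiers. A small illustration using `String.format` (the variable names are taken from the quoted diagnostic; the surrounding logger setup is omitted):

```java
// Illustrates the FloggerFormatString fix: Flogger's log() substitutes
// printf-style specifiers such as %s, one per trailing argument, and
// silently ignores SLF4J-style {} braces.
public class FormatSketch {
  public static void main(String[] args) {
    String project = "my-repo";
    String uri = "ssh://replica/my-repo.git";
    // Broken (braces are not format specifiers; args are never substituted):
    //   repLog.atFine().log("scheduling deletion of project {} at {}", project, uri);
    // Fixed (printf-style %s, matching the two arguments):
    // prints scheduling deletion of project my-repo at ssh://replica/my-repo.git
    System.out.println(String.format(
        "scheduling deletion of project %s at %s", project, uri));
  }
}
```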
allPushTaksCompleted should be allPushTasksCompleted
Change-Id: Ifc7fee0feecf6f8d768da8a050c746996cc7aa11
Notify the scheduling and the result of the execution of a project
deletion on remote targets.
This is useful for consumers who are interested in understanding not
just when a project deletion is initiated, but also when it is
completed.
The same event notification was applied to ref updates through the
propagation of RefReplicationScheduled and RefReplicationDone events,
but it was never applied to project deletions.
Feature: Issue 13894
Change-Id: I9b8197e67f4eddcc51c408c2db4c5991487a3d5e
RefReplicatedEvent and ReplicationScheduledEvent have a targetNode field
that is populated using a method intended for human-readable SSH command
output. Deprecate that field and add a new targetUri that is the ASCII
string representation of the URI. This new field is more suitably named
and populated for consumers.
This will not break existing users of targetNode until a later change
removes that field. The only known user of the targetNode field is the
Jenkins Gerrit Trigger plugin [1][2].
[1] https://github.com/jenkinsci/gerrit-trigger-plugin
[2] https://github.com/sonyxperiadev/gerrit-events/blob/d15f38adc7ea90a98486dbe0df01d31335aaa3af/src/main/java/com/sonymobile/tools/gerrit/gerritevents/dto/events/RefReplicated.java
Change-Id: If07f7103a1f9cf9e49e5eef4c91e9de1b5e46963
In change [1], we started avoiding the start of replication tasks that
are not present in the ../waiting dir. But the replication commands kept
waiting, because the state was not updated for the work that was
avoided. This change marks the neglected refs as not attempted and, in
turn, notifies the ReplicationState object of that update.
This fix only affects the correctness of an internal state.
[1] Ifbb7018ec1d960015626c089a4dadf6b0247d278
Change-Id: Iff31d62ebbbfb88754b43b60344e05dd4b0a1f6d
In cases where the replication start command is triggered with --now,
the logs incorrectly show that replication is scheduled with the default
delay. This change fixes that.
In cases where a push is consolidated with an existing pending push,
logs are added to specifically mention it.
Change-Id: I296a2ec5772eb60b38dd47502fa3bd5d247d317a
Fix formatting missed in 2a7d9793041c2b1045cd814decd8096316859807.
Change-Id: I2f13e38605b5f175e6674e5073840b08805d444d
* stable-3.3:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I9c9f0c91414414bebe5a9530cca736e1be4a7ad7
* stable-3.2:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I7183e546b46e17530024cf4368edbd1d32216549
* stable-3.1:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I923730a525fbffb4c304ab0d23b088f5e8bfa307
* stable-3.0:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I0be6a13344043a48f2fc4a0367559f5b5f1fbca9
* stable-2.16:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I90a3e17e2f49d07707409ba390c0a6dd0501b512
Modify the fields in the ReplicationState class to be volatile and
AtomicIntegers so that changes to them are visible to other threads.
Otherwise, modifications made by one thread to these fields may not be
reflected immediately, depending on CPU caching, resulting in an
incorrect state.
Change-Id: I76512b17c19cc68e4f1e6a5223899f9a184bb549
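A minimal sketch of the pattern, assuming a simplified state object (field and method names here are invented; the real fields live in ReplicationState):

```java
// Counters shared across replication threads become AtomicIntegers, and
// boolean flags become volatile, so one thread's writes are visible to others.
import java.util.concurrent.atomic.AtomicInteger;

public class StateSketch {
  private final AtomicInteger totalPushTasksCount = new AtomicInteger(0);
  private volatile boolean allPushTasksFinished = false; // visibility flag

  void taskScheduled() { totalPushTasksCount.incrementAndGet(); } // atomic
  void markFinished() { allPushTasksFinished = true; }
  int count() { return totalPushTasksCount.get(); }
  boolean finished() { return allPushTasksFinished; }

  public static void main(String[] args) throws InterruptedException {
    StateSketch state = new StateSketch();
    Thread[] threads = new Thread[4];
    for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread(() -> {
        for (int j = 0; j < 1000; j++) state.taskScheduled();
      });
      threads[i].start();
    }
    for (Thread t : threads) t.join();
    state.markFinished();
    // prints 4000 true (no increments lost to racing threads)
    System.out.println(state.count() + " " + state.finished());
  }
}
```

With a plain `int` field, the concurrent increments above could lose updates; `incrementAndGet()` makes each read-modify-write atomic.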
Improve the package hierarchy by introducing an .events. subpackage
containing all event classes.
Change-Id: Ib9b1bde342ea24f3c5b836af642a75ecff56756d
Ahead of some refactoring, ensure there's at least some minimal test
coverage.
Change-Id: If7c6cd66b19d3b98ab0be2a5cbd6298b3d41a865
* stable-3.3:
Document that authGroup must have Access Database
Change-Id: I0b1db88885986487512d72695952e64a679c1620
* stable-3.2:
Document that authGroup must have Access Database
Change-Id: I6d7292dd7e604edbf4e2fd6b3c1615f43c1d1df4
* stable-3.1:
Document that authGroup must have Access Database
Change-Id: I09378f4288fd1335932bdf120bba8418fc8f51c7
Bug: Issue 13786
Change-Id: Iaf65252b25b9c40e5cfd1ac25d55fbf70536f83e
This change adds refs to the show-queue output, making it convenient to
see which refs are being replicated by each task.
By default, the show-queue output is limited to showing 2 refs. The
value 2 is chosen because whenever a new patchset is created there are
two refs to be replicated (the change ref and the meta ref); the refs
need to be limited, since the output becomes inconvenient when too many
refs are being replicated.
A Gerrit admin can override this behavior by setting the
"maxRefsToShow" option in the replication config file; to show all
refs, set "maxRefsToShow" to zero.
Sample show-queue output:
(retry 1) push aaa.com:/git/All-Projects.git [..all..]
(retry 1) push aaa.com:/git/test.git [refs/meta/config refs/heads/b1]
(retry 1) push aaa.com:/git/test.git [refs/heads/b1 refs/heads/b2 (+1)]
(retry 1) push aaa.com:/git/test.git [refs/heads/b1 refs/heads/b2 (+2)]
Change-Id: Iaf7b32a0ac5f029671757658174cfde4e07f365c
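The truncation shown in the sample output can be sketched as a small helper. The method name and class are hypothetical; only the output shape follows the sample above:

```java
// Sketch of the show-queue ref display: list at most maxRefsToShow refs
// and append "(+N)" for the hidden remainder; 0 means show all refs.
import java.util.List;

public class RefsDisplaySketch {
  static String format(List<String> refs, int maxRefsToShow) {
    int limit = (maxRefsToShow == 0) ? refs.size() : Math.min(maxRefsToShow, refs.size());
    StringBuilder sb = new StringBuilder("[");
    for (int i = 0; i < limit; i++) {
      if (i > 0) sb.append(' ');
      sb.append(refs.get(i));
    }
    int hidden = refs.size() - limit;
    if (hidden > 0) sb.append(" (+").append(hidden).append(')');
    return sb.append(']').toString();
  }

  public static void main(String[] args) {
    List<String> refs = List.of("refs/heads/b1", "refs/heads/b2", "refs/heads/b3");
    // prints [refs/heads/b1 refs/heads/b2 (+1)]
    System.out.println(format(refs, 2));
    // prints [refs/heads/b1 refs/heads/b2 refs/heads/b3]
    System.out.println(format(refs, 0));
  }
}
```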
* stable-3.3:
Split integration tests to different targets
Revert "Adjust to changes in Gerrit core"
Don't check read permission when authgroup isn't set
Change-Id: I57a660d4e851e61455c9118a49b35af6a708b96d
* stable-3.2:
Split integration tests to different targets
Don't check read permission when authgroup isn't set
Change-Id: I4a1e1be5c4323de1554091786c55ca9a84d391e5
* stable-3.1:
Split integration tests to different targets
Don't check read permission when authgroup isn't set
Change-Id: Ic5c8f0468869476a01923b4d374f0188c271daf2
Running all integration tests as part of one single 'replication_it'
target does not cope well with the addition of extra tests, because the
target is bound to take longer and longer, eventually hitting any test
timeout threshold.
Splitting the integration tests into different targets avoids timeout
failures and also provides additional benefits, such as:
- Better understanding of test failures
- More efficient utilization of bazel build outputs and remote caching,
effectively making test execution faster.
Bug: Issue 13909
Change-Id: Ifc6cce9996d3a8a23ec2a66c377978205fb6680f
It's unnecessary to check read permission when authGroup isn't set,
since in that case the user is a RemoteSiteUser, which is an
InternalUser that has read access to everything.
Change-Id: Ie6985250b0acb50c08fdcae75cc608222b1add35
Revert submission 283559-currentuser-remove-cache-key
Reason for revert: Causes a latency regression for some hosts
Reverted Changes:
I76bfd3ebc:Adjust to changes in Gerrit core
If7ccfd9a4:Remove unused CurrentUser#cacheKey method
I1378ad083:Remove PerThreadCache
Change-Id: I84965f655d62c258c226ad5d585cee24dea047cc
(cherry picked from commit a6a6ec5982e41a0ee9bfe24a46be96d4f13fcaaa)
* stable-3.3:
Rely on WaitUtil moved to the acceptance framework
Change-Id: I80703fcb562ea8f3952cb07809f887a5a27fd5ce