A waiting task for a non-existent repository can come into existence
when the repository gets deleted before the waiting task is scheduled
and becomes a running task.
Such a task cannot be (re)scheduled, because that code path checks for
repository existence. However, the task would remain in the waiting
queue, and rescheduling would be retried again and again with no chance
of finishing, as the only way for a task to finish was to run it.
This change allows a waiting task to be finished when its repository
doesn't exist. The ReplicationTasksStorage now tries to delete task
file(s) from both the running and waiting directories.
Change-Id: Ibbdd5023e2a008484215da02403c9935d21fbf13
When many repositories are created and deleted in quick succession, it
may happen that a repository is deleted before its replication task
starts. Such a replication task will keep retrying, possibly
indefinitely, but has no chance to succeed.
Another scenario where this issue can occur is when a repository is
created but a replica is unreachable for some time. If the repository
is deleted before the replica becomes reachable again, the replication
task will keep retrying, but the local repository will no longer exist.
When handling the RepositoryNotFoundException in PushOne, set the
retrying flag to false. This ensures that the replication task is
not retried and gets finished.
Bug: Issue 15804
Change-Id: Ia55c5ec1c961f4c2aec9ecee8056f22b436e9fda
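As a rough illustration of the fix described above (class and method names here are simplified placeholders, not the plugin's actual PushOne internals), a push task that detects a missing local repository can clear its retry flag so the queue can finish it instead of rescheduling it forever:

```java
// Hypothetical sketch of the retry-flag behavior; not the plugin's code.
public class PushTask {
  private boolean retrying = true;
  private boolean finished = false;
  private final boolean repoExists;

  PushTask(boolean repoExists) {
    this.repoExists = repoExists;
  }

  void runOnce() {
    if (!repoExists) {
      // Analogous to handling RepositoryNotFoundException in PushOne:
      // give up rather than rescheduling a push that can never succeed.
      retrying = false;
      finished = true;
      return;
    }
    finished = true; // push succeeded
  }

  boolean isRetrying() {
    return retrying;
  }

  boolean isFinished() {
    return finished;
  }

  public static void main(String[] args) {
    PushTask task = new PushTask(false); // repository was deleted
    task.runOnce();
    System.out.println("retrying=" + task.isRetrying()
        + " finished=" + task.isFinished());
  }
}
```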
* stable-3.2:
Doc: make explicit that remoteNameStyle is for non-Gerrit repos
Doc: remoteNameStyle might result in a repo name clashes
Change-Id: I32d598a36fe20c469528eca8d4c10d8775f7a3c4
When using remoteNameStyle in the replication.config settings,
the Gerrit admin needs to be warned about the risks of using styles
that may be ambiguous and associate two source repositories with
the same target repository on the replica.
Example: set remoteNameStyle to basenameOnly
/foo/my-repo.git => pushed to my-repo
/bar/my-repo.git => pushed to my-repo
When two commits are pushed to the same branch on the two repos
/foo/my-repo.git and /bar/my-repo.git, the replication plugin
would push them to the same target repo my-repo, causing clashes
and losing commits (depending on which one is pushed first).
The risk needs to be highlighted so that the Gerrit admin
can check that the mapping remains unambiguous.
Bug: Issue 15315
Change-Id: Iba42907bceb8d1c27d739f3b0cded4a1d7400686
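A hedged sketch of a replication.config fragment that triggers the clash described above (the remote name and URL are invented for illustration):

```
[remote "replica"]
  url = git://replica.example.com/${name}.git
  remoteNameStyle = basenameOnly
```

With this setting, ${name} is reduced to the basename, so both /foo/my-repo.git and /bar/my-repo.git resolve to the same target URL.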
The documentation already gives non-Gerrit examples of using
remoteNameStyle; however, it does not say that if the remote
repository is backed by Gerrit, the *ONLY* supported option is
"slashes", otherwise the consequences could be catastrophic.
Two Gerrit servers (e.g. primary and replica) need to have
fully aligned repository names, as the names are also referenced
in the inherited ACLs. A repository name mapping may disrupt
ACL evaluation and make the remote Gerrit replica unusable.
Bug: Issue 15318
Change-Id: I4d9447a4d0366a98037470c0cceda36f7a1b8a25
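For a Gerrit-backed replica, the safe configuration keeps the full project name (a minimal sketch; the URL is invented):

```
[remote "gerrit-replica"]
  url = git://replica.example.com/${name}.git
  remoteNameStyle = slashes
```

With "slashes", project names on the replica match the primary exactly, so inherited ACL references remain valid.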
* stable-3.2:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: I03b5bbfcfca75a3ee54e782e4b64f19b1100e2eb
* stable-3.1:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: I6dbeaa0d21545a1903bdb11c5de5d9e8f72079c5
* stable-3.0:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: Id9ce63cd6112b3c8b16f9daafe3a8a982521baa9
* stable-2.16:
Call retryDone() when giving up after lock failures
Fix issue with task cleanup after retry
Change-Id: Id987043c8a26bd3f69fb4bd5b84591ae20cb83ba
Previously, when giving up after retrying due to too many lock failures,
a 'replication start --wait' command would wait indefinitely if it was
waiting on the push that gave up. Fix this by calling retryDone() after
giving up, which triggers the ReplicationStatus to reflect a failure,
allowing the wait to complete.
Change-Id: I0debade83612eb7ce51bab0191ab99464a6e7cd3
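A minimal sketch of the waiting behavior described above (hypothetical class, not the plugin's ReplicationStatus): a '--wait'-style caller blocks until the push either succeeds or explicitly reports that it has given up. Without the give-up notification (the retryDone() call), the latch would never be released and the wait would hang indefinitely.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical model of "wait until the push finishes or gives up".
public class ReplicationWait {
  private final CountDownLatch done = new CountDownLatch(1);
  private volatile boolean failed = false;

  void notifySucceeded() {
    done.countDown();
  }

  // Analogous to retryDone() marking the push as failed after giving up.
  void notifyGaveUp() {
    failed = true;
    done.countDown();
  }

  // Blocks until one of the notifications arrives; returns success flag.
  boolean await() throws InterruptedException {
    done.await();
    return !failed;
  }

  public static void main(String[] args) throws InterruptedException {
    ReplicationWait wait = new ReplicationWait();
    // The push gives up after repeated lock failures on another thread.
    new Thread(wait::notifyGaveUp).start();
    System.out.println("succeeded=" + wait.await());
  }
}
```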
The Destination.notifyFinished method calls finish on
ReplicationTasksStorage.Task objects that are not scheduled for retry.
The issue is that for rescheduled tasks, PushOne.isRetrying always
returns true, even if the task has already been replicated.
This creates a situation where tasks scheduled for retry are
never cleaned up.
Bug: Issue 12754
Change-Id: I4b10c2752da6aa7444f57c3ce4ab70eb00c3f14e
* stable-3.2:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I7183e546b46e17530024cf4368edbd1d32216549
* stable-3.1:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I923730a525fbffb4c304ab0d23b088f5e8bfa307
* stable-3.0:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I0be6a13344043a48f2fc4a0367559f5b5f1fbca9
* stable-2.16:
Use volatile and AtomicIntegers to be thread safe
Change-Id: I90a3e17e2f49d07707409ba390c0a6dd0501b512
Modify the fields in the ReplicationState class to be volatile or
AtomicIntegers so that changes to them are visible to other threads.
Without this, modifications made by one thread may not be reflected
to other threads immediately, depending on CPU caching, resulting in
incorrect state.
Change-Id: I76512b17c19cc68e4f1e6a5223899f9a184bb549
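A minimal illustration (not the plugin's actual fields) of why such counters need AtomicInteger: concurrent increments of a plain int can be lost or remain invisible to other threads, while AtomicInteger.incrementAndGet() is both atomic and safely published.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates thread-safe counting with AtomicInteger.
public class CounterDemo {
  static final AtomicInteger safeCount = new AtomicInteger();

  static int countWithThreads(int threads, int perThread)
      throws InterruptedException {
    safeCount.set(0);
    Thread[] workers = new Thread[threads];
    for (int i = 0; i < threads; i++) {
      workers[i] = new Thread(() -> {
        for (int j = 0; j < perThread; j++) {
          safeCount.incrementAndGet(); // atomic, visible to all threads
        }
      });
      workers[i].start();
    }
    for (Thread t : workers) {
      t.join();
    }
    return safeCount.get();
  }

  public static void main(String[] args) throws InterruptedException {
    // With a plain 'int' and 'count++' this total could come up short.
    System.out.println(countWithThreads(4, 10_000)); // prints 40000
  }
}
```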
* stable-3.2:
Document that authGroup must have Access Database
Change-Id: I6d7292dd7e604edbf4e2fd6b3c1615f43c1d1df4
* stable-3.1:
Document that authGroup must have Access Database
Change-Id: I09378f4288fd1335932bdf120bba8418fc8f51c7
Bug: Issue 13786
Change-Id: Iaf65252b25b9c40e5cfd1ac25d55fbf70536f83e
* stable-3.2:
Split integration tests to different targets
Don't check read permission when authgroup isn't set
Change-Id: I4a1e1be5c4323de1554091786c55ca9a84d391e5
* stable-3.1:
Split integration tests to different targets
Don't check read permission when authgroup isn't set
Change-Id: Ic5c8f0468869476a01923b4d374f0188c271daf2
Running all integration tests as part of one single 'replication_it'
target does not cope well with the addition of extra tests, because the
target is bound to take longer and longer, eventually hitting the test
timeout threshold.
Splitting the integration tests into different targets avoids timeout
failures and also provides additional benefits, such as:
- Better understanding of test failures
- More efficient utilization of Bazel build outputs and remote caching,
effectively making test execution faster.
Bug: Issue 13909
Change-Id: Ifc6cce9996d3a8a23ec2a66c377978205fb6680f
It's unnecessary to check read permission when authGroup isn't set,
since in that case the user is a RemoteSiteUser, which is an
InternalUser that has read access to everything.
Change-Id: Ie6985250b0acb50c08fdcae75cc608222b1add35
Revert submission 283559-currentuser-remove-cache-key
Reason for revert: Causes a latency regression for some hosts
Reverted Changes:
I76bfd3ebc:Adjust to changes in Gerrit core
If7ccfd9a4:Remove unused CurrentUser#cacheKey method
I1378ad083:Remove PerThreadCache
Change-Id: I84965f655d62c258c226ad5d585cee24dea047cc
(cherry picked from commit a6a6ec5982e41a0ee9bfe24a46be96d4f13fcaaa)
WaitUtil has been moved to the acceptance framework
in Gerrit core.
Depends-On: https://gerrit-review.googlesource.com/c/gerrit/+/291229
Change-Id: I3a31335c7878a9e5b9082d6685b860e8e6c42325
* stable-3.2:
Fix replication to retry on lock errors
Change-Id: Iab364714135d693e011e8abbf7782ae620d009c4
* stable-3.1:
Fix replication to retry on lock errors
Change-Id: Icacd9095feaefd240803405c5b0a16cc0b3a9ed8
* stable-3.0:
Fix replication to retry on lock errors
Change-Id: Ib4b2c1fcac5da6551f72bce68a101b93e9b43b19
* stable-2.16:
Fix replication to retry on lock errors
Change-Id: I6e262d2c22d2dcd49b341b3c752d6d8b6c93b32c
Versions of Git released since 2014 create a new status,
"failed to update ref", which replaces the two statuses "failed to lock"
and "failed to write". So we now see the newer status when the remote
is unable to lock a ref.
See the Git commit:
https://github.com/git/git/commit/6629ea2d4a5faa0a84367f6d4aedba53cb0f26b4
The 'lockErrorMaxRetries' config is not removed as part of this change,
so that users who currently have it configured do not run into
unexpected retry behavior when they upgrade to a newer version of the
plugin. The "failed to lock" check is also kept for users still running
a version of Git older than 2014.
Change-Id: I9b3b15bebd55df30cbee50a0e0c2190d04f2f443
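A minimal sketch of the resulting check (a hypothetical helper; the plugin's actual matching logic may differ): both the pre-2014 "failed to lock" status and the newer "failed to update ref" status are treated as lock errors eligible for retry.

```java
// Hypothetical lock-error classification covering old and new Git statuses.
public class LockErrorCheck {
  static boolean isLockError(String status) {
    // Older Git (pre-2014) reported "failed to lock"; newer Git reports
    // "failed to update ref" for the same condition.
    return status.startsWith("failed to lock")
        || status.startsWith("failed to update ref");
  }

  public static void main(String[] args) {
    System.out.println(isLockError("failed to update ref")); // true
    System.out.println(isLockError("failed to lock"));       // true
    System.out.println(isLockError("non-fast-forward"));     // false
  }
}
```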
This dependency was added in Id12780948a4 to support remoteNameStyle
"basenameOnly". The only reason for the dependency is to translate a
project name like "foo/bar/myrepo" to "myrepo".
It seems overkill to add 169 KB to the plugin distribution for one
single method.
Another disadvantage is that the commons-io version used for this is
2.2, from 2012. If a Gerrit site uses other plugins in addition to the
replication plugin, this can easily lead to classpath collisions when
different versions of the commons-io library are included as transitive
dependencies of different plugins.
To rectify this, use a replacement method from the Guava library.
Change-Id: Id254dc38831832a9855bd204e4c2129ec64b88ae
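The commons-io call being replaced is essentially a basename computation. A plain-Java equivalent is shown below (a hypothetical helper, used here instead of the Guava method so the sketch has no external dependencies):

```java
// Strips the directory part and the file extension from a path,
// e.g. "foo/bar/myrepo.git" -> "myrepo".
public class BaseName {
  static String baseName(String path) {
    String name = path.substring(path.lastIndexOf('/') + 1);
    int dot = name.lastIndexOf('.');
    return dot == -1 ? name : name.substring(0, dot);
  }

  public static void main(String[] args) {
    System.out.println(baseName("foo/bar/myrepo.git")); // prints myrepo
    System.out.println(baseName("myrepo"));             // prints myrepo
  }
}
```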
* stable-3.2:
CreateProjectTask: Apply google-java formatting
Change-Id: I34dd8301c2b9fcb2a8e10964c5b8d5d448227b6a
Change-Id: If60ee5fa5297a11f8f685fefb61ad80e8f3c2990
* stable-3.2:
CreateProjectTask.java: use interface instead of implementation
ReplicationQueue: Remove unused isPersisted param
PushOne: Don't call delta.add(ref) twice
Change-Id: Idcffd9873cbc70c6aad9ed9a0e76233494c93444
The ReplicationDestinations interface should be used in
CreateProjectTask so that the specific implementation can be provided
by Guice (dependency injection).
Change-Id: If7cb21adff5c3feeeea7568c504e8e37d5c08f9e
Since Ie83763e4a9fe13522f356b569fc2360fa5883224, all callers set this
to false.
Change-Id: I38a8a31853f5d2bc3b292b49bd050bc34f6408fe
This is already called as part of the condition; it doesn't need to be
called again inside the body.
Change-Id: Ieb11f738534ed01d09125ac6ef325ee472cb0b44
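A minimal sketch of the pattern being fixed (using a plain Set as a stand-in for the plugin's delta collection): Set.add() both mutates the set and reports whether the element was new, so calling it in the condition is enough, and repeating it in the body would be redundant.

```java
import java.util.HashSet;
import java.util.Set;

// Shows that one delta.add(ref) call in the 'if' condition suffices.
public class AddOnce {
  public static void main(String[] args) {
    Set<String> delta = new HashSet<>();
    String ref = "refs/heads/master";
    if (delta.add(ref)) { // adds the ref AND returns true if it was new
      // No second delta.add(ref) needed here; the ref is already in.
      System.out.println("scheduled " + ref);
    }
    System.out.println(delta.size()); // prints 1
  }
}
```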
Instead of throwing a NullPointerException when we read a task file
that turns out to be empty, return an empty Optional.
Running the IT tests 1000 times produced this as the only failure (and
only once).
Change-Id: I3e7392dfb179795348d7f4a207102aa867aed85b
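A hypothetical sketch of the change described above (the method name and parsing are simplified placeholders): when a task file turns out to be empty, return Optional.empty() instead of letting a null propagate and fail later with a NullPointerException.

```java
import java.util.Optional;

// Parses a task file's contents, tolerating empty files.
public class TaskFileReader {
  static Optional<String> parseTask(String fileContents) {
    if (fileContents == null || fileContents.isEmpty()) {
      return Optional.empty(); // empty file: no task, but no NPE either
    }
    return Optional.of(fileContents.trim());
  }

  public static void main(String[] args) {
    System.out.println(parseTask(""));          // prints Optional.empty
    System.out.println(parseTask("push task")); // prints Optional[push task]
  }
}
```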
* stable-3.2: (23 commits)
Move shouldCleanupBothTasksAndLocks* ITs to ReplicationStorageIT
ReplicationStorageIT: Wait for all pushes without order
Replication*IT: Share getRef method
ReplicationFanoutIT: Share setReplicationDestination
ReplicationFanoutIT: Split shouldReplicateNewBranch tests
ReplicationFanoutIT: Remove generic waitUntil helper
ReplicationFanoutIT: Inherit from ReplicationDaemon
ReplicationFanoutIT: Refactor setRemoteReplicationDestination
ReplicationFanoutIT: Rename setReplicationDestination
ReplicationFanoutIT: Cleanup shouldCreateIndividualReplicationTasksForEveryRemoteUrlPair
Move shouldCleanupTasksAfterNewProjectReplication test
Fix documentation issue
Move storage portion of replicateBranchDeletion ITs
Refactor Replication*IT tests to share a base class
ReplicationIT: Add shouldMatch* e2e tests
ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT
ReplicationTasksStorage: Add multi-primary unit tests
ReplicationTasksStorage: Add multi-primary unit tests
ReplicationStorageIT: Add shouldFire*ChangeRefs tests
Move storage-based ITs into ReplicationStorageIT
...
Change-Id: I81a167ccb77738984069d9433fde75ee7cf06c8e
These tests are focused on verifying storage functionality. Improve them
slightly to use the best practices from ReplicationStorageIT.
Change-Id: I66cf87e63c88f040d328793012a4dbf4de7e031e
* stable-3.1:
ReplicationStorageIT: Wait for all pushes without order
Replication*IT: Share getRef method
ReplicationFanoutIT: Share setReplicationDestination
ReplicationFanoutIT: Split shouldReplicateNewBranch tests
ReplicationFanoutIT: Remove generic waitUntil helper
ReplicationFanoutIT: Inherit from ReplicationDaemon
ReplicationFanoutIT: Refactor setRemoteReplicationDestination
ReplicationFanoutIT: Rename setReplicationDestination
ReplicationFanoutIT: Cleanup shouldCreateIndividualReplicationTasksForEveryRemoteUrlPair
Move shouldCleanupTasksAfterNewProjectReplication test
Move storage portion of replicateBranchDeletion ITs
Refactor Replication*IT tests to share a base class
ReplicationIT: Add shouldMatch* e2e tests
ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT
ReplicationTasksStorage: Add multi-primary unit tests
ReplicationTasksStorage: Add multi-primary unit tests
ReplicationStorageIT: Add shouldFire*ChangeRefs tests
Move storage-based ITs into ReplicationStorageIT
ReplicationTasksStorage.Task: Add multi-primary unit tests
ReplicationQueue: Remove unused method
Cleanup specific to stable-3.2 will be done in follow-up changes.
Change-Id: Ib938c661158e8f7a3434010187b87c79e81a01b8
* stable-3.0:
ReplicationStorageIT: Wait for all pushes without order
ReplicationTasksStorage: Add multi-primary unit tests
Change-Id: I3961368f7bcf7d4aa923d07f7f89beeaaeb307d3
| | | | | |
* stable-2.16:
ReplicationStorageIT: Wait for all pushes without order
ReplicationTasksStorage: Add multi-primary unit tests
Change-Id: I1d749621c189ee2e49f092ddc7558f83e508411f
Some tests don't have a predefined order in which events will be
replicated. Using a timeout based on a single replication event is
flawed when we don't know the expected order. Instead, use a timeout for
the group of events and ignore the order.
For two events replicating to a single remote with a single thread, we
expect the complete replication to take twice as long. Two events
replicating to two remotes will each use one thread and therefore take
no longer than the single-remote case.
Change-Id: Ieb21b7eee32105eab5b5a15a35159bb4a837e363
These tests exercise replication scenarios in a multi-primary setup,
using the API calls in the ReplicationTasksStorage class, in the same
way as in a single-primary setup.
They ensure that replication compatibility in a multi-primary setup
is not broken.
Change-Id: I375b731829f3c0640d3a7a98635e1e5c526908ca
stable-3.1
These tests exercise replication scenarios in a multi-primary setup,
using the API calls in the ReplicationTasksStorage.Task class, in the
same way as in a single-master setup.
They ensure that replication compatibility in a multi-primary setup
is not broken.
Change-Id: I980e8286bf11d31c6ab89e49ef065fdde1118181
This helper is common to a couple of test classes, so share it.
Change-Id: I5839c31ad734c384e812e9e1c7bcba8ba05c23cc