* Merge branch 'stable-3.0' into stable-3.1 [v3.1.16, v3.1.15, v3.1.14, v3.1.13, upstream/stable-3.1] (Nasser Grainawi, 2021-02-25, 2 files, -16/+18)

      * stable-3.0:
        Call retryDone() when giving up after lock failures
        Fix issue with task cleanup after retry

      Change-Id: Id9ce63cd6112b3c8b16f9daafe3a8a982521baa9
| * Merge branch 'stable-2.16' into stable-3.0 [upstream/stable-3.0] (Nasser Grainawi, 2021-02-25, 4 files, -14/+39)

      * stable-2.16:
        Call retryDone() when giving up after lock failures
        Fix issue with task cleanup after retry

      Change-Id: Id987043c8a26bd3f69fb4bd5b84591ae20cb83ba
| | * Call retryDone() when giving up after lock failures [v2.16.28, upstream/stable-2.16] (Martin Fick, 2021-02-24, 1 file, -0/+1)

      Previously, when giving up after retrying due to too many lock failures, a
      'replication start --wait' command would wait indefinitely if it was waiting
      on the push that gave up. Fix this by calling retryDone() after giving up,
      which triggers the ReplicationStatus to reflect a failure and allows the
      wait to complete.

      Change-Id: I0debade83612eb7ce51bab0191ab99464a6e7cd3
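A minimal hedged sketch of the shape of that fix; the interface and class names below are illustrative, not the plugin's actual code:

    // Hypothetical sketch: once the lock-failure retry budget is exhausted,
    // report the end of the retry cycle so that a waiting
    // 'replication start --wait' caller is released with a failure status.
    interface ReplicationStatus {
      void retryDone();                     // releases waiters when retries stop
      void markFailed(String reason);
    }

    class LockFailureRetryPolicy {
      private final int maxRetries;
      private int attempts;

      LockFailureRetryPolicy(int maxRetries) {
        this.maxRetries = maxRetries;
      }

      /** Returns true if the push should be retried, false if we gave up. */
      boolean onLockFailure(ReplicationStatus status) {
        attempts++;
        if (attempts > maxRetries) {
          status.retryDone();               // the call this commit adds
          status.markFailed("too many lock failures");
          return false;
        }
        return true;                        // caller reschedules the push
      }
    }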
| | * Fix issue with task cleanup after retry (Marcin Czech, 2021-02-24, 4 files, -14/+38)

      The Destination.notifyFinished method calls finish on
      ReplicationTasksStorage.Task objects which are not scheduled for retry. The
      issue is that for rescheduled tasks PushOne.isRetrying always returns true,
      even if the task has already been replicated. That creates a situation where
      tasks scheduled for retry are never cleaned up.

      Bug: Issue 12754
      Change-Id: I4b10c2752da6aa7444f57c3ce4ab70eb00c3f14e
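A hypothetical illustration of the cleanup logic involved; the Task shape below is a simplified stand-in, not the plugin's real class:

    import java.util.ArrayList;
    import java.util.List;

    // Tasks flagged as "retrying" are skipped by cleanup, so a flag that is
    // never cleared after a successful retry leaves its storage entry behind.
    class TaskCleanupSketch {
      static class Task {
        final String ref;
        volatile boolean retrying;          // must be reset once the retry finishes

        Task(String ref) {
          this.ref = ref;
        }
      }

      /** Returns the tasks eligible for finish(), i.e. not awaiting a retry. */
      static List<Task> finishable(List<Task> tasks) {
        List<Task> done = new ArrayList<>();
        for (Task t : tasks) {
          if (!t.retrying) {
            done.add(t);                    // the plugin would call storage finish here
          }
        }
        return done;
      }
    }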
* | | Merge branch 'stable-3.0' into stable-3.1 (Kaushik Lingarkar, 2021-01-25, 1 file, -15/+16)

      * stable-3.0:
        Use volatile and AtomicIntegers to be thread safe

      Change-Id: I0be6a13344043a48f2fc4a0367559f5b5f1fbca9
| * | Merge branch 'stable-2.16' into stable-3.0 (Kaushik Lingarkar, 2021-01-25, 1 file, -15/+16)

      * stable-2.16:
        Use volatile and AtomicIntegers to be thread safe

      Change-Id: I90a3e17e2f49d07707409ba390c0a6dd0501b512
| | * Use volatile and AtomicIntegers to be thread safe [v2.16.27] (Adithya Chakilam, 2021-01-15, 1 file, -15/+16)

      Modify the fields in the ReplicationState class to be volatile or
      AtomicIntegers so that changes to them are visible to other threads.
      Without this, modifications made by one thread may not be reflected
      immediately in other threads, depending on CPU caching, resulting in
      incorrect state.

      Change-Id: I76512b17c19cc68e4f1e6a5223899f9a184bb549
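A minimal Java sketch of the pattern; the field and method names are illustrative and ReplicationState's real fields may differ:

    import java.util.concurrent.atomic.AtomicInteger;

    // Counters become AtomicIntegers and simple flags become volatile, so
    // updates made by replication worker threads are visible to the thread
    // that reports status and decides when the overall push is done.
    class ReplicationStateSketch {
      private final AtomicInteger totalPushTasks = new AtomicInteger();
      private final AtomicInteger finishedPushTasks = new AtomicInteger();
      private volatile boolean allScheduled;

      void markAllPushTasksScheduled() {
        allScheduled = true;
      }

      void increasePushTaskCount() {
        totalPushTasks.incrementAndGet();
      }

      /** Returns true when the last outstanding push task has finished. */
      boolean notifyOneTaskFinished() {
        // Atomic increment avoids lost updates from concurrent pushes.
        return allScheduled
            && finishedPushTasks.incrementAndGet() == totalPushTasks.get();
      }
    }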
* | | Document that authGroup must have Access Database [v3.1.12] (Sven Selberg, 2021-01-14, 1 file, -0/+6)

      Bug: Issue 13786
      Change-Id: Iaf65252b25b9c40e5cfd1ac25d55fbf70536f83e
* | | Split integration tests to different targets (Antonio Barone, 2021-01-07, 1 file, -6/+4)

      Running all integration tests as part of one single 'replication_it' target
      does not cope well with the addition of extra tests, because it is bound to
      take longer and longer, eventually hitting any test timeout threshold.

      Splitting integration tests into different targets avoids timeout failures
      and also provides additional benefits, such as:
      - Better understanding of test failures
      - More efficient utilization of Bazel build outputs and remote caching,
        effectively making test execution faster.

      Bug: Issue 13909
      Change-Id: Ifc6cce9996d3a8a23ec2a66c377978205fb6680f
* | | Don't check read permission when authgroup isn't set (Sven Selberg, 2020-12-21, 1 file, -0/+3)

      It's unnecessary to check read permission when authGroup isn't set, since in
      that case the user is a RemoteSiteUser, which is an InternalUser with read
      access to everything.

      Change-Id: Ie6985250b0acb50c08fdcae75cc608222b1add35
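A hedged sketch of the resulting short-circuit; the names below are illustrative, not the plugin's actual code:

    import java.util.function.BooleanSupplier;

    // When no authGroup is configured, the pushing identity is effectively an
    // internal user that can read everything, so an explicit ACL check adds
    // nothing and can be skipped.
    class ReadPermissionSketch {
      static boolean canRead(boolean authGroupConfigured, BooleanSupplier aclCheck) {
        if (!authGroupConfigured) {
          return true;                      // internal user: implicitly readable
        }
        return aclCheck.getAsBoolean();     // otherwise consult the real ACLs
      }
    }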
* | | Merge branch 'stable-3.0' into stable-3.1 [v3.1.11] (Nasser Grainawi, 2020-12-07, 4 files, -27/+60)

      * stable-3.0:
        Fix replication to retry on lock errors

      Change-Id: Ib4b2c1fcac5da6551f72bce68a101b93e9b43b19
| * | Merge branch 'stable-2.16' into stable-3.0 (Nasser Grainawi, 2020-12-07, 4 files, -27/+60)

      * stable-2.16:
        Fix replication to retry on lock errors

      Change-Id: I6e262d2c22d2dcd49b341b3c752d6d8b6c93b32c
| | * Fix replication to retry on lock errors [v3.0.16, v2.16.26] (Kaushik Lingarkar, 2020-12-02, 4 files, -27/+60)

      Versions of Git released since 2014 use a new status, "failed to update
      ref", which replaces the two statuses "failed to lock" and "failed to
      write". So, we now see the newer status when the remote is unable to lock a
      ref. See the Git commit:
      https://github.com/git/git/commit/6629ea2d4a5faa0a84367f6d4aedba53cb0f26b4

      The 'lockErrorMaxRetries' config is not removed as part of this change, so
      that folks who currently have it configured don't run into unexpected retry
      behavior when they upgrade to a newer version of the plugin. Also, the
      "failed to lock" check is not removed, for folks still using a version of
      Git older than 2014.

      Change-Id: I9b3b15bebd55df30cbee50a0e0c2190d04f2f443
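The check this implies, as a hedged sketch; the plugin's actual message handling may differ:

    // Decide whether a failed push should be treated as a lock error and
    // retried. Newer Git servers report "failed to update ref"; servers older
    // than ~2014 still report "failed to lock", so both messages are checked.
    class LockErrorDetector {
      static boolean isLockError(String remoteMessage) {
        if (remoteMessage == null) {
          return false;
        }
        return remoteMessage.contains("failed to update ref")
            || remoteMessage.contains("failed to lock");
      }
    }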
* | | Merge branch 'stable-3.0' into stable-3.1 (Nasser Grainawi, 2020-10-30, 3 files, -5/+47)

      * stable-3.0:
        ReplicationStorageIT: Wait for all pushes without order
        ReplicationTasksStorage: Add multi-primary unit tests

      Change-Id: I3961368f7bcf7d4aa923d07f7f89beeaaeb307d3
| * | Merge branch 'stable-2.16' into stable-3.0 (Nasser Grainawi, 2020-10-30, 4 files, -6/+116)

      * stable-2.16:
        ReplicationStorageIT: Wait for all pushes without order
        ReplicationTasksStorage: Add multi-primary unit tests

      Change-Id: I1d749621c189ee2e49f092ddc7558f83e508411f
| | * ReplicationStorageIT: Wait for all pushes without order (Nasser Grainawi, 2020-10-30, 2 files, -4/+38)

      Some tests don't have a predefined order for which events will be replicated
      first. Using a timeout based on a single replication event is flawed when we
      don't know the expected order. Instead, use a timeout for the group of
      events and ignore the order.

      For two events replicating to a single remote with a single thread, we
      expect the complete replication to take twice as long. Two events
      replicating to two remotes will use one thread each and therefore not take
      any longer than the single remote case.

      Change-Id: Ieb21b7eee32105eab5b5a15a35159bb4a837e363
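A small sketch of that timeout arithmetic; the helper name is hypothetical:

    import java.time.Duration;

    // With a single replication thread and one remote, N events replicate
    // sequentially, so the test waits N times the single-event budget instead
    // of racing on which event happens to finish first.
    class GroupTimeoutSketch {
      static Duration forEvents(int eventCount, Duration perEventTimeout) {
        return perEventTimeout.multipliedBy(Math.max(1, eventCount));
      }
    }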
| | * Merge "ReplicationTasksStorage: Add multi-primary unit tests" into stable-2.16v2.16.23Martin Fick2020-10-282-2/+78
| | |\
| | | * ReplicationTasksStorage: Add multi-primary unit tests (Adithya Chakilam, 2020-10-26, 2 files, -2/+78)

      These tests examine the replication scenarios under a multi-primary setup,
      making use of the API calls present in the ReplicationTasksStorage class,
      similarly to what is done in the single-primary setup. They ensure that
      replication compatibility in a multi-primary setup is not broken.

      Change-Id: I375b731829f3c0640d3a7a98635e1e5c526908ca
* | | | Merge "ReplicationTasksStorage.Task: Add multi-primary unit tests" into ↵Martin Fick2020-10-302-6/+200
|\ \ \ \ | | | | | | | | | | | | | | | stable-3.1
| * | | | ReplicationTasksStorage.Task: Add multi-primary unit tests (Adithya Chakilam, 2020-10-13, 2 files, -6/+200)

      These tests examine the replication scenarios under a multi-primary setup,
      making use of the API calls present in the ReplicationTasksStorage.Task
      class, similarly to what is done in the single-master setup. They ensure
      that replication compatibility in a multi-primary setup is not broken.

      Change-Id: I980e8286bf11d31c6ab89e49ef065fdde1118181
* | | | | Replication*IT: Share getRef method (Nasser Grainawi, 2020-10-28, 3 files, -8/+4)

      This helper is common to a couple test classes, so share it.

      Change-Id: I5839c31ad734c384e812e9e1c7bcba8ba05c23cc
* | | | | ReplicationFanoutIT: Share setReplicationDestination (Nasser Grainawi, 2020-10-28, 2 files, -31/+34)

      Refactor setReplicationDestination to share the per-remote-file and single
      replication config file implementations more.

      Change-Id: Ic0a98ccf0f7703f14c01856a42b8a70e3d20aa8b
* | | | | ReplicationFanoutIT: Split shouldReplicateNewBranch tests (Nasser Grainawi, 2020-10-28, 1 file, -42/+36)

      Split these into storage-based and e2e tests so that the storage tests can
      be reliably verified by using a replicationDelay large enough that task
      state on disk doesn't change during the tests. Keep them all in
      ReplicationFanoutIT for now, since the setup for these tests is unique to
      that class.

      Also remove the unnecessary cleanup of tasks.

      Change-Id: I36e0a4affe1f5d1330ea27a496fd8ba295176763
* | | | | ReplicationFanoutIT: Remove generic waitUntil helper (Nasser Grainawi, 2020-10-28, 1 file, -13/+4)

      Using a non-specific timeout is a bad pattern. Tests should pick a timeout
      appropriate to the action being tested.

      Change-Id: I69a7e469df1dc532af6a777ac47d89852091797e
* | | | | ReplicationFanoutIT: Inherit from ReplicationDaemon (Nasser Grainawi, 2020-10-28, 4 files, -65/+34)

      Reduces duplication across the replication IT classes. More dedup is
      possible with the helper methods, but leave that for a future change.

      Change-Id: Iddd6dca9a4fe84b065954cd4dcec7289d7ed68a2
* | | | | ReplicationFanoutIT: Refactor setRemoteReplicationDestination (Nasser Grainawi, 2020-10-28, 1 file, -13/+14)

      Simplify callers by providing a way to set the replicationDelay up front.

      Change-Id: I28cea83559aa1eb379ec2ff962a0beaf25fe4ca6
* | | | | ReplicationFanoutIT: Rename setReplicationDestination (Nasser Grainawi, 2020-10-28, 1 file, -9/+11)

      Distinguish the methods that set the per-remote config files vs the methods
      that set the global replication.config. This helps lead up to
      ReplicationFanoutIT inheriting from ReplicationDaemon.

      Change-Id: I6139d2dbde15c0b0449d7d7801c169253bc7449d
* | | | | ReplicationFanoutIT: Cleanup shouldCreateIndividualReplicationTasksForEveryRemoteUrlPair (Nasser Grainawi, 2020-10-28, 1 file, -6/+10)

      Remove some dead code, use Integer.MAX_VALUE for the replicationDelay so
      that tasks stay in the waiting/ area of storage for the entire test, and use
      a dedicated listWaitingTasks() to show it only depends on tasks in that
      state.

      Change-Id: I0035a4edc656ed4833249322c45204124a66e20d
* | | | | Merge changes I0ef708ab,I81d27fd4 into stable-3.1 (Nasser Grainawi, 2020-10-28, 4 files, -300/+520)

      * changes:
        Move shouldCleanupTasksAfterNewProjectReplication test
        Merge branch 'stable-3.0' into stable-3.1
| * | | | | Move shouldCleanupTasksAfterNewProjectReplication test (Nasser Grainawi, 2020-10-28, 3 files, -61/+32)

      This test is focused on the storage level, so move it to
      ReplicationStorageIT. Slightly improve it to use the new best practices for
      specifying test timeouts.

      Change-Id: I0ef708ab7813ee09d6f115d3151d2d12b9984a80
| * | | | | Merge branch 'stable-3.0' into stable-3.1 (Nasser Grainawi, 2020-10-28, 4 files, -246/+495)

      * stable-3.0:
        Move storage portion of replicateBranchDeletion ITs
        Refactor Replication*IT tests to share a base class
        ReplicationIT: Add shouldMatch* e2e tests
        ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT
        ReplicationStorageIT: Add shouldFire*ChangeRefs tests
        Move storage-based ITs into ReplicationStorageIT
        ReplicationQueue: Remove unused method

      This change does not try to reimpose the breakdown of tests that was done in
      3.0. That will be done in follow-up change(s) to improve reviewability of
      this change.

      Change-Id: I81d27fd47da8eecad3aca36d8e6400679fb564a3
| | * | | | Move storage portion of replicateBranchDeletion ITs (Nasser Grainawi, 2020-10-27, 3 files, -74/+65)

      All other ITs split e2e and storage tests on stable-2.16, so this change
      only updates the new replicateBranchDeletion tests that were added in
      stable-3.0. The e2e check for whether the destination branch is removed
      stays in ReplicationIT, and the check that a task is created in storage when
      the branch delete API is invoked moves to ReplicationStorageIT. This split
      allows the best practices for verifying e2e and storage to be applied
      independently.

      Change-Id: Iec7ee090bd614e3442b1f9cb454437c9e05290be
| | * | | | Merge branch 'stable-2.16' into stable-3.0 (Nasser Grainawi, 2020-10-27, 5 files, -190/+404)

      * stable-2.16:
        Refactor Replication*IT tests to share a base class
        ReplicationIT: Add shouldMatch* e2e tests
        ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT
        ReplicationStorageIT: Add shouldFire*ChangeRefs tests
        Move storage-based ITs into ReplicationStorageIT
        ReplicationQueue: Remove unused method

      This change does not try to reimpose the breakdown of tests that was done in
      2.16. That will be done in follow-up change(s) to improve reviewability of
      this change.

      Change-Id: I83202997610c5ad0d8849cb477ca36db8df760f5
| | | * | | Refactor Replication*IT tests to share a base class (Nasser Grainawi, 2020-10-26, 3 files, -177/+135)

      These classes have very similar setups and duplicate helper methods. Improve
      maintainability by reducing the duplication.

      ReplicationQueueIT is not modified because it is merged into ReplicationIT
      on stable-3.0.

      Change-Id: Ibc22ae4d0db2d09009f65c0e745f1095c67827ba
| | | * | | ReplicationIT: Add shouldMatch* e2e tests (Nasser Grainawi, 2020-10-26, 1 file, -0/+72)

      These new tests create a branch in a way that does not trigger replication,
      so that scheduleFullSync() is responsible for replicating the update. In
      this way, the tests verify the destination receives the update because
      scheduleFullSync() matched the given URI.

      Change-Id: I4ae15d0301a308a12cbca3684915e89ca421e02f
| | | * | | ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT (Nasser Grainawi, 2020-10-26, 3 files, -89/+45)

      These tests are focused on verifying storage, so they belong in
      ReplicationStorageIT. Improve them to better verify storage correctness by
      switching the 'now' parameter to false, so that replicationDelay is honored,
      and by following the ReplicationStorageIT pattern of using a very long
      delay. These improvements make the tests much more stable. The tests also
      improve the ref matching slightly by comparing to the PushOne.ALL_REFS
      constant.

      Also removes the disableDeleteForTesting flag as there are no users of it
      now. A later change can add ReplicationIT e2e tests for these use cases.

      Change-Id: Iaa14a7429a40fb62325259efa1c7d7637deef95a
| | | * | | ReplicationStorageIT: Add shouldFire*ChangeRefs tests (Nasser Grainawi, 2020-10-26, 1 file, -0/+45)

      Copy the shouldFire*IncompleteUri tests as shouldFire*ChangeRefs to fill a
      gap in test coverage.

      Change-Id: Ia8df64a8574b776e6a9f7201c0862f1e6794687e
| | | * | | Move storage-based ITs into ReplicationStorageIT (Nasser Grainawi, 2020-10-26, 2 files, -86/+224)

      Tests in ReplicationStorageIT utilize very long replication delays such that
      tasks are never expected to complete during the test. This allows test
      writers to assume the task files are still there.

      Refactor tests from ReplicationIT into ReplicationStorageIT and focus them
      on verifying storage correctness. This is mostly a direct copy, except that
      shouldFirePendingOnlyToStoredUri gets renamed and split into two tests: one
      that validates tasks are fired, and another that validates replication
      completes to the expected destinations. This split is necessary because of
      the very long delay methodology mentioned above.

      Code sharing between ReplicationIT and ReplicationStorageIT will be improved
      in a later commit.

      Change-Id: I41179c20a10354953cff3628368dfd5f910cc940
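A hedged sketch of that methodology; the helper and directory names here illustrate the idea and are not the actual test API:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // With replicationDelay set far beyond the test duration, scheduled tasks
    // never leave the waiting/ area of task storage, so the test can safely
    // inspect them without racing against the replication threads.
    class WaitingTasksSketch {
      static final int EFFECTIVELY_FOREVER_SECS = Integer.MAX_VALUE;

      static List<Path> listWaitingTasks(Path storageRoot) throws Exception {
        try (Stream<Path> files = Files.list(storageRoot.resolve("waiting"))) {
          return files.collect(Collectors.toList());
        }
      }
    }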
| | | * | | ReplicationQueue: Remove unused method (Nasser Grainawi, 2020-10-12, 1 file, -6/+0)

      And drop the misleading @VisibleForTesting annotation from the method the
      removed method was wrapping. scheduleFullSync() is public so that PushAll
      can call it.

      Change-Id: I0139e653654fcaf20de68dddfb5ea85560a323d0
* | | | | | Merge "ReplicationTasksStorage: Add multi-primary unit tests" into stable-3.1Adithya Chakilam2020-10-282-3/+198
|\ \ \ \ \ \ | |/ / / / / |/| | | | |
| * | | | | ReplicationTasksStorage: Add multi-primary unit tests (Adithya Chakilam, 2020-10-27, 2 files, -3/+198)

      These tests examine the replication scenarios under a multi-primary setup,
      making use of the API calls present in the ReplicationTasksStorage class,
      similarly to what is done in the single-primary setup. They ensure that
      replication compatibility in a multi-primary setup is not broken.

      Change-Id: Ib2d0017c4d2ac3f4cfc7262b68b09a3a357e1337
* | | | | | Merge branch 'stable-3.0' into stable-3.1 (Nasser Grainawi, 2020-10-15, 2 files, -15/+23)

      * stable-3.0:
        ReplicationIT: Remove unnecessary storage inspection
        ReplicationIT: Fix invalid replicationDelay setting
        Split replication plugins tests in two groups

      Change-Id: I9dfa3abd4907d74415bff6b77fc9ae49b9f6735f
| * | | | | Merge branch 'stable-2.16' into stable-3.0 (Nasser Grainawi, 2020-10-15, 2 files, -15/+23)

      * stable-2.16:
        ReplicationIT: Remove unnecessary storage inspection
        ReplicationIT: Fix invalid replicationDelay setting
        Split replication plugins tests in two groups

      Change-Id: I2d27b715a2bfc9832ee559556d1c8acfe671d893
| | * | | | ReplicationIT: Remove unnecessary storage inspection (Nasser Grainawi, 2020-10-12, 1 file, -8/+0)

      Integration tests shouldn't need to rely on inspecting the underlying
      ReplicationTasksStorage layer(s). All of these tests already verify the
      expected end result.

      This leaves 4 tests that currently completely rely on inspecting the task
      storage to verify the expected result. Those tests need further improvement
      to decouple from the storage layer.

      Change-Id: I029d63ce7d07414d9bf5d9290d556378beedcabf
| | * | | | ReplicationIT: Fix invalid replicationDelay setting (Nasser Grainawi, 2020-10-12, 1 file, -7/+10)

      Setting config values for a remote in replication.config, rather than in the
      remote's own config file, results in the replication.config values being
      ignored. Fix this by setting the values in each remote's config file.

      This test had delays added to avoid any flakiness, but the delays weren't
      working because of this issue. While the test generally passes, the delay
      makes it safer from races.

      Change-Id: Idcdf5f07b3fc91724068ec6216527665c4a48bb3
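A hedged sketch of writing a per-remote config file with JGit; the site layout, section, and key names here are illustrative assumptions, not verified plugin paths:

    import java.io.File;
    import org.eclipse.jgit.storage.file.FileBasedConfig;
    import org.eclipse.jgit.util.FS;

    // Write the remote's settings into its own config file (assumed here to be
    // etc/replication/<remote>.config) instead of replication.config, so they
    // are not silently ignored when per-remote config files are in effect.
    class RemoteConfigWriterSketch {
      static void setReplicationDelay(File sitePath, String remoteName, int delaySecs)
          throws Exception {
        File remoteCfgFile =
            new File(new File(sitePath, "etc/replication"), remoteName + ".config");
        FileBasedConfig cfg = new FileBasedConfig(remoteCfgFile, FS.DETECT);
        cfg.load();
        cfg.setInt("remote", null, "replicationDelay", delaySecs);
        cfg.save();
      }
    }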
| | * | | | Split replication plugins tests in two groups (Luca Milanesio, 2020-10-08, 1 file, -0/+12)

      Run unit tests and integration tests in parallel by splitting them into two
      separate tasks. This also makes it possible to identify which group of tests
      is flaky, because Bazel would flag one or the other in case of instability.

      Change-Id: I21f969a17e3653dfc5ab93d71cc6955024fc2d8f
* | | | | | Merge branch 'stable-3.0' into stable-3.1 (Marco Miller, 2020-10-02, 1 file, -1/+8)

      * stable-3.0:
        Make the shouldReplicateNewProject test more reliable

      Change-Id: I40ecf25a108f2dfd0926b3fb6ba166a77cf0f039
| * | | | | Merge branch 'stable-2.16' into stable-3.0 [v3.0.13] (Marco Miller, 2020-10-01, 1 file, -1/+8)

      * stable-2.16:
        Make the shouldReplicateNewProject test more reliable

      Change-Id: I447043d502987070bc395936484a1cb23a5ddabc
| | * | | | Make the shouldReplicateNewProject test more reliable (Martin Fick, 2020-09-28, 1 file, -1/+8)

      The ReplicationIT shouldReplicateNewProject was failing regularly on my
      machine. Improve the timeout for this test so that it explicitly includes
      the time needed to wait for the project to be created, not just the
      scheduling and retry times.

      Change-Id: Ibf3cc3506991b222ded3ee4ddfbd7e2d60341d60
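A small sketch of the timeout arithmetic described; the phase names are illustrative:

    import java.time.Duration;

    // The wait budget is the sum of every phase the test actually sits
    // through, not just the replication scheduling delay and retry window.
    class ProjectReplicationTimeoutSketch {
      static Duration total(
          Duration projectCreationTime, Duration replicationDelay, Duration retryDelay) {
        return projectCreationTime.plus(replicationDelay).plus(retryDelay);
      }
    }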
* | | | | | Remove disableDeleteForTesting flag (Nasser Grainawi, 2020-10-01, 3 files, -19/+0)

      Instrumenting the actual code to make tests work is generally a bad practice
      that can let real issues slip into production code. Removing this flag "Just
      Works" after fixing one ReplicationFanoutIT test with buggy replicationDelay
      settings (the parent change of this one).

      Change-Id: Ia93192eeff0fb76c5c100de597a017b8b1f86025