| Commit message | Author | Age | Files | Lines |
Change-Id: Icc689699eff3eb06a6b10e8221feab87e38b11e0
Change-Id: I67be710b6fda2069e798964ec81ad9add637bab5
Gitiles has a special FilteredRepository wrapper that
carefully hides refs based on the project's ACLs.
There is, however, an optimisation that skips the filtering
when a user has READ permission on every ACL pattern.
When the target repository is All-Users, the optimisation
turns into a security issue because it allows seeing everything
that belongs to everyone:
- draft comments
- PII of all users
- external ids
- draft edits
Block Gitiles, or any other part of Gerrit, from abusing this
power when the target repository is All-Users, where nobody
can be authorised to skip the ACL evaluation.
Cover the additional special case of All-Users project
access with two explicit tests, one positive and one negative,
so that the security check is covered.
Bug: Issue 13621
Change-Id: Ia6ea1a9fd5473adff534204aea7d8f25324a45b7
(cherry picked from commit 45071d6977932bca5a1427c8abad24710fed2e33)
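The resulting guard can be pictured with a minimal Python sketch (the function and parameter names here are hypothetical illustrations, not Gerrit's actual API, which is Java):

```python
ALL_USERS = "All-Users"

def can_skip_ref_filtering(project, has_read_on_all_patterns):
    # The fast path that skips per-ref ACL filtering must never apply
    # to All-Users, which holds draft comments, draft edits, external
    # ids and PII of all users; other repositories may use it only
    # when READ is granted on every ACL pattern.
    if project == ALL_USERS:
        return False
    return has_read_on_all_patterns
```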
Change-Id: Icc90a7b68e2764cbdb677c7a7f2261c7cf015e7c
Change-Id: If3ea98f0db8ef6b102ce3775e19a64739b883f8e
This change fixes a misconception that leads to data being accessible
through Gerrit APIs that should be locked down.
Gerrit had two components for determining if a Git ref is visible to a
user: (Default)RefFilter and PermissionBackend#ForRef (ex RefControl).
The former was always capable of providing correct results for all refs.
The latter only had logic to decide if a Git ref is visible according to
the Gerrit READ permissions. This covers all refs under refs/heads as
well as any other ref that isn't a database ref or a Git tag; the
component was unaware of Git tags and database references. Hence, when
asked for a database reference such as refs/changes/xx/yyyyxx/meta, the
logic would allow access if the user had READ permission on any of the
ref prefixes, such as the default "read refs/*" for Anonymous Users.
That is problematic, because it bypasses documented behavior [1] where
a user should only have access to a change if they can see the destination
ref. The same goes for other database references.
This change fixes the problem. It is intentionally kept to a minimally
invasive code change so that it's easier to backport it.
Add tests to assert the correct behavior. These tests would fail before
this fix. We have included them in this change to be able to backport
just a single commit.
[1] https://gerrit-review.googlesource.com/Documentation/access-control.html
Change-Id: Ice3a756cf573dd9b38e3f198ccc44899ccf65f75
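The documented rule [1] can be pictured with a small Python sketch (all names here are illustrative stand-ins, not Gerrit's actual API):

```python
def ref_visible(ref, readable_prefixes, destination_of, branch_visible):
    # Database refs of a change (e.g. refs/changes/34/1234/meta) are
    # visible only when the change's destination branch is visible,
    # never merely because a READ rule like 'read refs/*' matches the
    # ref by prefix.
    if ref.startswith("refs/changes/"):
        return branch_visible(destination_of(ref))
    # Ordinary refs keep the plain prefix-based READ check.
    return any(ref.startswith(p.rstrip("*")) for p in readable_prefixes)
```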
* Update plugins/replication from branch 'stable-2.16'
to 8fdb0f9ac0a7f68b3f942cb4a9fd4c94e488ab57
- ReplicationStorageIT: Wait for all pushes without order
Some tests don't have a predefined order for which events will be
replicated first. Using a timeout based on a single replication event is
flawed when we don't know the expected order. Instead, use a timeout for
the group of events and ignore the order.
For two events replicating to a single remote with a single thread, we
expect the complete replication to take twice as long. Two events
replicating to two remotes will use one thread each and therefore not
take any longer than the single remote case.
Change-Id: Ieb21b7eee32105eab5b5a15a35159bb4a837e363
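The timing argument above can be sketched in Python (a simplification with hypothetical names; the real tests use scheduled executors and wall-clock waits, not this formula):

```python
import math

def group_timeout(per_event_timeout, events, remotes, threads_per_remote=1):
    # Events bound for the same remote share that remote's worker
    # threads, so the wait scales with the deepest per-remote queue;
    # events spread over distinct remotes replicate in parallel.
    events_per_remote = math.ceil(events / remotes)
    waves = math.ceil(events_per_remote / threads_per_remote)
    return waves * per_event_timeout
```

For two events on a single remote with one thread this doubles the timeout; two events on two remotes keep the single-event timeout, matching the reasoning above.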
Change-Id: Ie3b33382fe2b8d64894f89afc25061ecd17ece90
Change-Id: Ia533bb65648b3799fc742ec982058e11712ac78e
Without escaping, '<=' is rendered as ⇐ by AsciiDoc.
Change-Id: I2223cca45f80c2aaee76d1e84c2de34e966d7620
stable-2.16
dk.brics regexp syntax reference [1] doesn't contain examples.
[1] https://www.brics.dk/automaton/doc/index.html?dk/brics/automaton/RegExp.html
Change-Id: I9be2a3e4f1f387ec17f1702831a9bbebc85585be
* Update plugins/replication from branch 'stable-2.16'
to 53e083fd0f17d1403b4d150e66655907c1ea139d
- Merge "ReplicationTasksStorage: Add multi-primary unit tests" into stable-2.16
- ReplicationTasksStorage: Add multi-primary unit tests
These tests exercise replication scenarios in a multi-primary setup,
using the API calls of the ReplicationTasksStorage class in the same
way as the single-primary tests.
They ensure that replication compatibility in a multi-primary setup is
not broken.
Change-Id: I375b731829f3c0640d3a7a98635e1e5c526908ca
* changes:
Fix tests for stable-2.16 branch
Remove generation for c.g.gwtexpui.* JavaDoc
Fetch JGit JavaDoc from archive.eclipse.org
Add the 'manual' tag to wct test_suite templates,
so it is excluded from bazel test //...
Change-Id: I73fdddc9c08eeaacff9401ea9531c95e6a782ced
(cherry picked from commit ae42cd00bdfa8a34e75c563b62f0151a561cc82b)
The JavaDoc for com.google.gwtexpui.* cannot be generated
because the source files are not accessible anymore.
Failing to generate the JavaDocs caused the Gerrit build to
fail with 'No source files for package com.google.gwtexpui...'.
Change-Id: Ie36e650962636813d8f9f615e495a980b7280420
Change-Id: I363ad0df632fdb25236b3d0a0c06fb15dbf8acf2
* Update plugins/replication from branch 'stable-2.16'
to 4cb59f096b84f4369f62c8645db326c61826be79
- Refactor Replication*IT tests to share a base class
These classes have very similar setups and duplicate helper methods.
Improve maintainability by reducing the duplication.
ReplicationQueueIT is not modified because it is merged into
ReplicationIT on stable-3.0.
Change-Id: Ibc22ae4d0db2d09009f65c0e745f1095c67827ba
- ReplicationIT: Add shouldMatch* e2e tests
These new tests utilize creating a branch in a way that does not trigger
replication so that scheduleFullSync() is responsible for replicating
the update. In this way, the tests verify the destination receives the
update because scheduleFullSync() matched the given URI.
Change-Id: I4ae15d0301a308a12cbca3684915e89ca421e02f
- ReplicationStorageIT: Move shouldMatch* tests from ReplicationIT
These tests are focused on verifying storage, so they belong in
ReplicationStorageIT. Improve these tests to better verify storage
correctness by switching the 'now' parameter to false such that
replicationDelay is honored and follow the ReplicationStorageIT
pattern using a very long delay. These improvements make these tests
much more stable.
The tests improve the ref matching slightly by comparing to the
PushOne.ALL_REFS constant.
Also removes the disableDeleteForTesting flag as there are no users of
it now.
A later change can add ReplicationIT e2e tests for these use cases.
Change-Id: Iaa14a7429a40fb62325259efa1c7d7637deef95a
- ReplicationStorageIT: Add shouldFire*ChangeRefs tests
Copy the shouldFire*IncompleteUri tests as shouldFire*ChangeRefs to
fill a gap in test coverage.
Change-Id: Ia8df64a8574b776e6a9f7201c0862f1e6794687e
- Move storage-based ITs into ReplicationStorageIT
Tests in ReplicationStorageIT utilize very long replication delays such
that tasks are never expected to complete during the test. This allows
test writers to assume the task files are still there.
Refactor tests from ReplicationIT into ReplicationStorageIT and focus
them on verifying storage correctness. This is mostly a direct copy
except that shouldFirePendingOnlyToStoredUri gets renamed and split into
two tests. One that validates tasks are fired and another that validates
replication completes to the expected destinations. This split is
necessary because of the very long delay methodology mentioned above.
Code sharing between ReplicationIT and ReplicationStorageIT will be
improved in a later commit.
Change-Id: I41179c20a10354953cff3628368dfd5f910cc940
* Update plugins/replication from branch 'stable-2.16'
to 64617a846c9fa06215031b2ad34a30d58003a732
- ReplicationQueue: Remove unused method
And drop the misleading @VisibleForTesting annotation from the method
the removed method was wrapping. scheduleFullSync() is public so that
PushAll can call it.
Change-Id: I0139e653654fcaf20de68dddfb5ea85560a323d0
* Update plugins/replication from branch 'stable-2.16'
to 84d96eb953d51c97b2093d06597bc69812b812e7
- ReplicationIT: Remove unnecessary storage inspection
Integration tests shouldn't need to rely on inspecting the underlying
ReplicationTasksStorage layer(s). All of these tests already verify the
expected end result.
This leaves 4 tests that currently completely rely on inspecting the
task storage to verify the expected result. Those tests need further
improvement to decouple from the storage layer.
Change-Id: I029d63ce7d07414d9bf5d9290d556378beedcabf
* Update plugins/replication from branch 'stable-2.16'
to 5529649274286edbb7559a3af13724cdcb90f1c3
- ReplicationIT: Fix invalid replicationDelay setting
Setting config values for a remote in replication.config vs the remote's
own config file results in the replication.config values being ignored.
Fix this by setting the values in each remote's config file.
This test had delays added to avoid flakiness, but the delays
weren't working because of this issue. While the test generally passes,
the delay makes it safer against races.
Change-Id: Idcdf5f07b3fc91724068ec6216527665c4a48bb3
* Update plugins/replication from branch 'stable-2.16'
to 882c6147720227c161a2fb573c79cfc683e70379
- Split replication plugins tests in two groups
Run unit tests and integration tests in parallel by splitting
them into two separate tasks.
This also makes it possible to identify which group of tests
is flaky, because Bazel would flag one or the other in case of
instability.
Change-Id: I21f969a17e3653dfc5ab93d71cc6955024fc2d8f
One of our sites has a giant repository. When trying to migrate the
complete site using the offline migration, including this giant repo,
the migration crawled at a few changes/sec: it seems to allocate so
much memory that the migration of the other 25k repositories can't
really make progress when running the migration with 1 thread per core.
This also caused the JVM GC ratio to increase heavily (>60%). Reducing
the number of threads to 16 brought the GC ratio down to 15-20%, but
migration speed still reached only 20 changes/sec.
Hence we migrate this giant repo on a staging copy of the site, which
takes around 11 hours when migrating only the huge repository using 16
threads. The meta refs of the giant repo are then transferred from the
staging site via git fetch. This is possible because the repository is
read-only, so we can be sure there are no new changes on the production
server since we migrated it on the staging server.
With this patch series we can migrate the other 25k repos / 4.5m
changes in a bit more than 1 hour (1200 changes/sec) if we skip the
giant repository in this migration run.
This change adds an option to the offline migration that lets the
migration of the other repositories finish even though the giant
repository is skipped, which wasn't supported before. We must skip it
even when all the migration work was already done on the staging site
and the result was transferred to the site where the other repositories
are migrated, since otherwise we are back to a very slow
20 changes/sec.
An alternative approach would be to migrate this huge repository slowly
using the online migration, but that would take much longer and affect
the performance of the production server until the migration finished.
Change-Id: Ib78d257ce19bf8370ae0c259d887c600e7195dab
Before this change we used one connection to migrate one change. We
observed an excessive number of DB connections (using netstat) during
the execution of the setNoteDbPrimary method. The number of connections
reached 27K, at which point opening new connections, and with them the
migration, started to fail. I assume this is caused by exhaustion of
the local port range: we open/close connections in quick succession and
the operating system doesn't have enough time to release local ports.
By updating a chunk of changes from a single thread, we make sure to use
only one DB connection for one chunk. This should reduce the rate at
which DB connections are open/closed and the overall number of
connections open during the migration.
Change-Id: Ie4a1b4d41b92824c87a0ae39b13a13d9ccb4ca3c
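The chunking idea can be sketched in Python (hypothetical names; the real migrator is Java and manages JDBC connections):

```python
def chunks(changes, size):
    # Each worker thread takes one chunk and opens a single DB
    # connection for the whole chunk, instead of one connection per
    # change, so connections are opened and closed far less often.
    for i in range(0, len(changes), size):
        yield changes[i:i + size]
```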
In order to ensure good performance, run auto gc during the NoteDb
migration after every 10000 new refs created during the migration of a
project. Auto gc will do garbage collection by default if it finds
more than 6700 loose objects or more than 50 pack files. This can be
changed by setting the options gc.auto (default 6700) and
gc.autoPackLimit (default 50) [1].
[1] https://git-scm.com/docs/git-config#Documentation/git-config.txt-gcauto
Change-Id: If56219a1d256d6f1c84e6788f46668b481ff4718
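The periodic trigger can be pictured with a Python sketch (function names are hypothetical; the actual decision of whether to collect is left to git via gc.auto and gc.autoPackLimit):

```python
def migrate_refs(ref_updates, run_auto_gc, every=10000):
    # Invoke auto gc after every `every` refs created, so loose
    # objects never pile up unboundedly during a long migration.
    created = 0
    for _ in ref_updates:
        created += 1
        if created % every == 0:
            run_auto_gc()
    return created
```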
Change-Id: If1d5c6a68b2dc15ee45ae9b61b8719d511663565
Migrating many slices of the same repository concurrently increases
pressure on the thread trying to pack refs of the repository. Hence
shuffle all slices of all projects to decrease concurrency per
repository.
When testing migration of a large project (370k changes, >1m refs) using
80 threads I observed that there were always 20-30k loose refs despite
the fact that refs were constantly packed which took 1-3 minutes for
each repacking.
Change-Id: I39e1d99995d7e543cba8eedcd921706ca1655b5c
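The shuffling described above amounts to the following Python sketch (illustrative names only; the migrator itself is Java):

```python
import random

def schedule_slices(slices_by_project, seed=None):
    # Flatten the per-project slices into one work list and shuffle
    # it, so that slices of the same repository are spread out and
    # per-repository concurrency (and hence ref-packing pressure)
    # drops.
    work = [(project, s) for project, slices in slices_by_project.items()
            for s in slices]
    random.Random(seed).shuffle(work)
    return work
```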
The migrator used a single database connection for all threads
rebuilding changes. Instead use one database connection per rebuild
thread.
Change-Id: If785208cc571421b0a2bac65b4970c24a4c33e1f
This option was already defined but not fully implemented; it enables
more detailed migration logging.
Change-Id: I6b179df9049a31d49e421c45c6a814a76240fa50
Change-Id: I3cd0af886f7d8713ed370f8b9a58770e1d45b8e3
Each project's changes were migrated to noteDb on a single thread. This
might leave most threads of the pool idling when migrating a site with
one big and many small projects. In the beginning, all CPUs are busy
migrating projects. But once the small projects have been migrated,
one thread is still working alone on the big project, while the other
threads are idle.
To avoid this idling, we split the big projects into smaller project
slices of 1000 refs each and let the thread pool migrate these slices.
This way the migration of big projects can also take advantage of more
CPUs.
This approach is similar to the one implemented in [1] to improve
performance of indexing.
[1] https://gerrit-review.googlesource.com/c/gerrit/+/271695
Change-Id: I800d2995569416a9f27b82caff2659aa7946725e
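The slicing scheme can be sketched in Python (hypothetical names, illustrating the work-splitting only):

```python
def slice_project(ref_count, slice_size=1000):
    # A project with N refs becomes ceil(N / slice_size) independent
    # work items, so one big repository can keep many threads busy
    # instead of a single one.
    return [(start, min(start + slice_size, ref_count))
            for start in range(0, ref_count, slice_size)]
```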
* Update plugins/replication from branch 'stable-2.16'
to 6d0b12c83001805bfc740e3dabd37223acc294d8
- Make the shouldReplicateNewProject test more reliable
The ReplicationIT shouldReplicateNewProject was failing regularly on my
machine. Improve the timeout for this test so that it explicitly
includes the time needed to wait for the project to be created, not just
the scheduling and retry times.
Change-Id: Ibf3cc3506991b222ded3ee4ddfbd7e2d60341d60
Change-Id: Iaf7fe3bb9006b8cb583b57419efb34cd29a82d40
Change-Id: I82b7a1feedf5faa0edbeb235079e74b1ee4793f1
Change-Id: I3143615c35f1a69dd115b47ca4dd62c85177198c
When a group is suggested as [cc-]reviewer, its UUID gets encoded by
the server prior to sending it back to the client. That has no impact
on Gerrit internal groups; external groups, however, are affected:
* LDAP - the 'ldap/' UUID prefix becomes 'ldap%2F'
* CollabNet - the 'teamforge:' UUID prefix becomes 'teamforge%3A'
Hitting the Reply button works fine in GWT, where the group UUID gets
decoded before being sent back to the server; in PolymerUI, however, it
is sent as is and results in the following error:
Error 400: Account 'teamforge%3Aproj1466%3Ateam2079' not found
teamforge%3Aproj1466%3Ateam2079 does not identify a registered user or
group.
URI-decoding the group ID before sending it back fixes the issue.
Bug: Issue 13350
Change-Id: Icaf17bdc849f6b9b4b5041f59b3a9cce9a064e5f
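The decoding step is standard percent-decoding; in Python terms (as an illustration only, since the actual fix lives in the PolymerUI JavaScript):

```python
from urllib.parse import unquote

def decode_group_uuid(encoded):
    # Restore percent-encoded prefixes such as 'ldap/' and 'teamforge:'
    # before the UUID is sent back to the server; internal group UUIDs
    # pass through unchanged.
    return unquote(encoded)
```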
See https://gerrit-review.googlesource.com/c/gerrit/+/254438
Change-Id: I330677f292d486a94cf053b42175c17f94cf77a8
stable-2.16
Change-Id: I4d580d0229de5dcb44a3366566f88c621c6fee30
* Update plugins/replication from branch 'stable-2.16'
to fef0ec1946617ea7d5446b2136dff8a2ed4434d6
- Fix synopsis in replication start cmd documentation
--url is usable with --all, with projects, or on its own. Update the
usage string to reflect this.
Change-Id: Id3637f7bf61b7f65348b19ec0616808ef3f44ccf
In 2.16 the GWT project dashboard is available under the following
endpoint:
/projects/[project],dashboards/[dashboard]
However, the PolyGerrit UI expects it under:
/p/[project]/+/dashboard/[dashboard]
Add a route to the PolyGerrit UI router that redirects the former to
the latter, so that existing (legacy) menu links are handled without
any modifications. Otherwise switching from GWT to PolyGerrit results
in broken links to project dashboards.
Note that this also handles bookmarked links.
Bug: Issue 13328
Change-Id: Ie7f0ef1e588a80b83a46c41d6dd5406686b09990
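The redirect can be pictured with a small Python sketch (illustrative only; the actual route lives in the PolyGerrit router, which is JavaScript):

```python
def redirect_legacy_dashboard(path):
    # Map /projects/[project],dashboards/[dashboard] to
    # /p/[project]/+/dashboard/[dashboard]; leave other paths alone.
    prefix = "/projects/"
    if path.startswith(prefix) and ",dashboards/" in path:
        project, dashboard = path[len(prefix):].split(",dashboards/", 1)
        return f"/p/{project}/+/dashboard/{dashboard}"
    return path
```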
Change-Id: I573904d8bfff36b7888255d14cba9e72c9d9114a
* Update plugins/replication from branch 'stable-2.16'
to 3c764785b62efea8038650dc6d2f7cb8d740972c
- Don't wait for pending events to process on startup
Previously, on large Gerrit installations with many projects and/or
many replication destinations, the replication plugin could take a very
long time to start up. This was particularly a problem if the
pending (persisted) event count was large, as all of those events were
rescheduled before the plugin finished initializing. Change this
behavior so that startup merely begins the process of scheduling the
pending events, but does not wait for them to complete.
Bug: Issue 12769
Change-Id: I224c2ce2a35f987af2343089b9bb00a7fcb7e3be
During the cherry-pick of I5c2ef8dbabe7 wrapping of edit message
modification response (Response.none()) in Response.ok() was
erroneously added. Revert that part, as it broke handling of
PUT /changes/<change-id>/edit/<path> request.
Bug: Issue 11706
Change-Id: I486f88318ea807f86bc25127ad5141dd25cb4eb4
MacOS" into stable-2.16
Change-Id: I94d0e113205a7fea0aeb4976d3898d4d5afac408
* Update plugins/replication from branch 'stable-2.16'
to 05042b1051b2e2b0d67f5a9dbdabe62ae2cfb648
- ReplicationTasksStorage: Add unit tests
Change-Id: I164426e70937bc3c4ac426be3056a01e9229746b
stable-2.16
Change-Id: If60fd8230a818d3b7cbfab867c61aef970ef29e0
stable-2.16