
IGNITE-22233 Zone replica listener #3931

Open · wants to merge 59 commits into main
Conversation

kgusakov (Contributor):

Thank you for submitting the pull request.

To streamline the review process of the patch and ensure better code quality, we ask both the author and the reviewer to verify the following:

The Review Checklist

  • Formal criteria: TC status, codestyle, mandatory documentation. Also make sure to complete the following:
    - There is a single JIRA ticket related to the pull request.
    - The web-link to the pull request is attached to the JIRA ticket.
    - The JIRA ticket has the Patch Available state.
    - The description of the JIRA ticket explains WHAT was made, WHY and HOW.
    - The pull request title is treated as the final commit message. The following pattern must be used: IGNITE-XXXX Change summary, where XXXX is the JIRA issue number.
  • Design: new code conforms with the design principles of the components it is added to.
  • Patch quality: the patch cannot be split into smaller pieces, and its size must be reasonable.
  • Code quality: the code is clean and readable, and developer documentation is added where needed.
  • Test code quality: the test set covers positive/negative scenarios and happy/edge cases. Tests are effective in terms of execution time and resources.

Notes

kgusakov changed the title from "IGNITE-22233" to "IGNITE-22233 Zone replica listener" on Jun 18, 2024
kgusakov marked this pull request as ready for review on June 18, 2024 at 19:42
* @param replica Table replica
* @return Future, which will be completed when the operation is done
*/
public CompletableFuture<Void> addTableReplica(ZonePartitionId zonePartitionId, TablePartitionId replicationGroupId, Replica replica) {
Contributor:

As discussed,

  1. I'd consider moving the method to the Replica itself.
  2. Change the Replica param to a Listener.
  3. Rename the given method to addTableReplicaListener or addTableRequestProcessor (see the sketch below).
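Purely as an illustration of that shape (the listener map and the method body are assumptions for this sketch, not the actual code in the PR):

// Hypothetical sketch: the registration lives on the zone replica and takes a listener, not a Replica.
private final Map<TablePartitionId, ReplicaListener> tableListeners = new ConcurrentHashMap<>();

public CompletableFuture<Void> addTableReplicaListener(TablePartitionId tablePartitionId, ReplicaListener listener) {
    // Remember the table processor so that TableAware requests can later be routed to it.
    tableListeners.put(tablePartitionId, listener);

    return CompletableFuture.completedFuture(null);
}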

@@ -657,7 +656,6 @@ public CompletableFuture<Boolean> startReplica(
*/
public CompletableFuture<Replica> startReplica(
ReplicationGroupId replicaGrpId,
- ReplicaListener listener,
Contributor:

We will have some logic in the aggregated listener, e.g. the txnStateStorage-related one or the idle safeTime propagation one. Why was it removed?


startReplica(replicaGrpId, storageIndexTracker, newReplicaListenerFut);
CompletableFuture<ReplicaListener> newReplicaListenerFut = resultFuture.thenApply(createListener);
Contributor:

Missed this previously: createListener is rather confusing naming. I'd rather use the common supplier suffix.
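For illustration only, the rename might read like this (the variable name here is just a suggestion, not the actual code):

CompletableFuture<ReplicaListener> newReplicaListenerFut = resultFuture.thenApply(replicaListenerSupplier);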

@@ -50,7 +53,28 @@ public TopologyAwareRaftGroupService raftClient() {

@Override
public CompletableFuture<ReplicaResult> processRequest(ReplicaRequest request, String senderId) {
- return listener.invoke(request, senderId);
+ if (!(request instanceof TableAware)) {
Contributor:

What kinds of requests are supposed to be processed here besides TableAware ones: txn-related and SafeTime-propagation-related ones, right? Or are there more?
If it's just the two mentioned above, could you please create tickets and add TODOs here?
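As an illustration of where such TODOs could go (the ticket numbers and the delegate method are placeholders for this sketch only):

if (!(request instanceof TableAware)) {
    // TODO: IGNITE-XXXXX Route txn-state (txnStateStorage) requests through the zone-wide listener.
    // TODO: IGNITE-XXXXX Route idle safe-time propagation requests through the zone-wide listener.
    return processZoneReplicaRequest(request, senderId); // hypothetical delegate, named only for this sketch
}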

ReplicationGroupId replicationGroupId = request.groupId();

// TODO: https://issues.apache.org/jira/browse/IGNITE-22522 Refine this code when the zone-based replication is done.
if (replicationGroupId instanceof TablePartitionId) {
Contributor:

Could you please name the requests that will use TablePartitionId for now?

Contributor (author):

Not sure that I understood the question: in general, on main it's all requests besides the idle safe time request, I suppose. But in the collocation feature branch I think that is not the case.

@@ -410,6 +412,8 @@ public class TableManager implements IgniteTablesInternal, IgniteComponent {

private final IndexMetaStorage indexMetaStorage;

private PartitionReplicaLifecycleManager partitionReplicaLifecycleManager;
Contributor:

I don't think that TableManager should depend on PartitionReplicaLifecycleManager. The already existing replicaManager dependency should be enough, meaning that TableManager should retrieve the Replica from ReplicaManager and add the table processor to the replica directly. Basically, it's a continuation of our discussion on the collocation sync.
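Roughly what that wiring could look like; note that the replica lookup method and the names below are assumptions for the sketch, not necessarily the existing ReplicaManager API:

// Hypothetical sketch: TableManager talks only to ReplicaManager and attaches the table processor
// to the zone replica directly, without going through PartitionReplicaLifecycleManager.
replicaMgr.replica(zonePartitionId)
        .thenCompose(replica -> replica.addTableReplicaListener(tablePartitionId, tableReplicaListener));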

@@ -1004,7 +1011,7 @@ private CompletableFuture<Void> startPartitionAndStartClient(
MvTableStorage mvTableStorage = internalTable.storage();

try {
- var ret = replicaMgr.startReplica(
+ return replicaMgr.startReplica(
Contributor:

Please add a "code removal" TODO: it won't be necessary to call startReplica after the full migration to zone-based replicas. Currently, yes, we will both add it to the replicas directly and populate the aggregated one with the table processor.
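Just to illustrate the requested TODO (the ticket number is a placeholder, the comment wording is a sketch):

// TODO: IGNITE-XXXXX Remove this per-table startReplica call after the full migration to zone-based
// replicas; for now the table processor is also registered with the aggregated (zone) replica.
return replicaMgr.startReplica(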

Contributor (author):

Done


assertDoesNotThrow(() -> keyValueView.put(null, 1L, 200));

// Actually we are not testing the real put value, but the hardcoded one from the temporary no-op replica listener
Contributor:

That should no longer be true. We only skip the Raft replication part, but we will retrieve the previously inserted data. Because of this, I believe we may verify random value insertion/retrieval.
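A minimal sketch of such a check, assuming keyValueView is a KeyValueView<Long, Integer> as in the test above (the random value and the extra assertion are illustrative):

int expected = ThreadLocalRandom.current().nextInt();

assertDoesNotThrow(() -> keyValueView.put(null, 1L, expected));
assertEquals(expected, keyValueView.get(null, 1L));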

Contributor (author):

Removed the comment. As for the values: it already checks two different values, 100 vs 200. Do you think we really need fair randomization here?
