IGNITE-22233 Zone replica listener #3931
base: main
Conversation
modules/replicator/src/main/java/org/apache/ignite/internal/replicator/message/TableAware.java
...les/table/src/integrationTest/java/org/apache/ignite/distributed/ReplicaUnavailableTest.java
 * @param replica Table replica.
 * @return Future that will be completed when the operation is done.
 */
public CompletableFuture<Void> addTableReplica(ZonePartitionId zonePartitionId, TablePartitionId replicationGroupId, Replica replica) {
As discussed,
- I'd consider moving the method to the Replica itself.
- Change Replica param to Listener.
- Rename given method to addTableReplicaListener or addTableRequestProcessor.
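The suggested reshaping can be sketched as follows. This is a minimal stand-in, not the real Ignite internals: `ReplicaSketch`, the simplified `Replica`, the `int` partition id, and the `String`-returning listener are all hypothetical simplifications; only the proposed method name `addTableReplicaListener` and the "listener instead of Replica" parameter come from the review comment.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

final class ReplicaSketch {
    /** Simplified stand-in for the per-table request listener. */
    interface ReplicaListener {
        CompletableFuture<String> invoke(Object request, String senderId);
    }

    /** Simplified stand-in for the zone Replica that owns its table listeners. */
    static final class Replica {
        // Table partition id -> table-specific listener.
        private final ConcurrentHashMap<Integer, ReplicaListener> tableListeners = new ConcurrentHashMap<>();

        // Registration lives on the Replica itself and takes a listener,
        // per the suggested rename (addTableReplicaListener / addTableRequestProcessor).
        CompletableFuture<Void> addTableReplicaListener(int tablePartitionId, ReplicaListener listener) {
            tableListeners.put(tablePartitionId, listener);
            return CompletableFuture.completedFuture(null);
        }

        boolean hasListener(int tablePartitionId) {
            return tableListeners.containsKey(tablePartitionId);
        }
    }

    static boolean demo() {
        Replica replica = new Replica();
        replica.addTableReplicaListener(42, (request, senderId) ->
                CompletableFuture.completedFuture("ok")).join();
        return replica.hasListener(42);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```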
@@ -657,7 +656,6 @@ public CompletableFuture<Boolean> startReplica(
 */
public CompletableFuture<Replica> startReplica(
        ReplicationGroupId replicaGrpId,
        ReplicaListener listener,
We will have some logic in the aggregated listener, e.g. the txnStateStorage-related one or the idle safeTime propagation one. Why was it removed?
startReplica(replicaGrpId, storageIndexTracker, newReplicaListenerFut);
CompletableFuture<ReplicaListener> newReplicaListenerFut = resultFuture.thenApply(createListener);
Missed this previously: createListener is rather confusing naming. I'd rather use the common Supplier suffix.
@@ -50,7 +53,28 @@ public TopologyAwareRaftGroupService raftClient() {

@Override
public CompletableFuture<ReplicaResult> processRequest(ReplicaRequest request, String senderId) {
    return listener.invoke(request, senderId);
    if (!(request instanceof TableAware)) {
What kind of requests are supposed to be processed here besides TableAware ones — txn-related and safeTime-propagation-related ones, right? Or are there more?
If just the two mentioned above, could you please create tickets and add TODOs here?
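The dispatch being discussed can be sketched like this: requests implementing `TableAware` are routed to the per-table processor, while everything else (e.g. idle safe-time or txn-state requests) stays with the zone-wide logic. All class names below are simplified stand-ins for the Ignite internals, invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

final class ZoneDispatchSketch {
    /** Stand-in for the common replica request marker. */
    interface ReplicaRequest { }

    /** Stand-in for the TableAware marker carrying a table id. */
    interface TableAware {
        int tableId();
    }

    static final class TableWriteRequest implements ReplicaRequest, TableAware {
        private final int tableId;
        TableWriteRequest(int tableId) { this.tableId = tableId; }
        @Override public int tableId() { return tableId; }
    }

    static final class IdleSafeTimeRequest implements ReplicaRequest { }

    // Table id -> table-specific processor.
    private final Map<Integer, Function<ReplicaRequest, String>> tableProcessors = new HashMap<>();

    void addTableProcessor(int tableId, Function<ReplicaRequest, String> processor) {
        tableProcessors.put(tableId, processor);
    }

    String processRequest(ReplicaRequest request) {
        if (!(request instanceof TableAware)) {
            // Zone-wide handling, e.g. idle safe-time propagation or txn state.
            return "zone-handled";
        }
        int tableId = ((TableAware) request).tableId();
        return tableProcessors.getOrDefault(tableId, r -> "no-processor").apply(request);
    }

    static String[] demo() {
        ZoneDispatchSketch sketch = new ZoneDispatchSketch();
        sketch.addTableProcessor(7, r -> "table-handled");
        return new String[] {
                sketch.processRequest(new IdleSafeTimeRequest()),
                sketch.processRequest(new TableWriteRequest(7)),
        };
    }

    public static void main(String[] args) {
        for (String s : demo()) {
            System.out.println(s); // prints "zone-handled" then "table-handled"
        }
    }
}
```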
ReplicationGroupId replicationGroupId = request.groupId();

// TODO: https://issues.apache.org/jira/browse/IGNITE-22522 Refine this code when the zone-based replication is done.
if (replicationGroupId instanceof TablePartitionId) {
Could you please name the requests that will use TablePartitionId for now?
Not sure that I understood the question: in general, on main, all requests besides the idle safe time request, I suppose. But in the collocation feature branch I think that is no longer true.
@@ -410,6 +412,8 @@ public class TableManager implements IgniteTablesInternal, IgniteComponent {

private final IndexMetaStorage indexMetaStorage;

private PartitionReplicaLifecycleManager partitionReplicaLifecycleManager;
I don't think that TableManager should depend on PartitionReplicaLifecycleManager. The already existing replicaManager dependency should be enough, meaning that TableManager should retrieve the Replica from ReplicaManager and add the table processor to the replica directly. Basically it's a continuation of our discussion on the collocation sync.
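The dependency direction being suggested can be sketched as below: TableManager asks ReplicaManager for the zone replica and registers the table processor on it directly, with no PartitionReplicaLifecycleManager in between. Every type here is a simplified hypothetical stand-in (string zone-partition ids, an empty processor interface), not the real Ignite API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

final class TableManagerSketch {
    /** Stand-in for a per-table request processor. */
    interface TableRequestProcessor { }

    /** Stand-in for the zone replica that accepts table processors directly. */
    static final class Replica {
        private final Map<Integer, TableRequestProcessor> processors = new HashMap<>();

        void addTableProcessor(int tableId, TableRequestProcessor processor) {
            processors.put(tableId, processor);
        }

        boolean hasProcessor(int tableId) {
            return processors.containsKey(tableId);
        }
    }

    /** Stand-in for ReplicaManager: the single lookup point for replicas. */
    static final class ReplicaManager {
        private final Map<String, Replica> replicas = new HashMap<>();

        CompletableFuture<Replica> replica(String zonePartitionId) {
            return CompletableFuture.completedFuture(
                    replicas.computeIfAbsent(zonePartitionId, id -> new Replica()));
        }
    }

    /** TableManager depends only on ReplicaManager, not on the lifecycle manager. */
    static final class TableManager {
        private final ReplicaManager replicaMgr;

        TableManager(ReplicaManager replicaMgr) {
            this.replicaMgr = replicaMgr;
        }

        CompletableFuture<Void> registerTableProcessor(String zonePartitionId, int tableId, TableRequestProcessor processor) {
            return replicaMgr.replica(zonePartitionId)
                    .thenAccept(replica -> replica.addTableProcessor(tableId, processor));
        }
    }

    static boolean demo() {
        ReplicaManager replicaMgr = new ReplicaManager();
        TableManager tableMgr = new TableManager(replicaMgr);
        tableMgr.registerTableProcessor("zone-1_part-0", 5, new TableRequestProcessor() { }).join();
        return replicaMgr.replica("zone-1_part-0").join().hasProcessor(5);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```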
@@ -1004,7 +1011,7 @@ private CompletableFuture<Void> startPartitionAndStartClient(
MvTableStorage mvTableStorage = internalTable.storage();

try {
    var ret = replicaMgr.startReplica(
    return replicaMgr.startReplica(
Please add a "code removal" TODO: it won't be necessary to call startReplica after the full migration to the zone-based replicas. Currently, yes, we will add it both to the replicas directly and populate the aggregated one with the table processor.
Done
assertDoesNotThrow(() -> keyValueView.put(null, 1L, 200));

// Actually we are testing not the fair put value, but the hardcoded one from temporary noop replica listener
That should no longer be true. We only skip the raft replication part, but we will retrieve the previously inserted data. Because of this, I believe we can verify random value insertion/retrieval.
Removed the comment. As for the values: it already checks two different values, 100 vs 200. Do you think we really need fair randomization here?
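The randomized check the reviewer suggests could look like the sketch below: put a randomly chosen value and read it back, so the assertion cannot accidentally pass against a hardcoded noop response. The `KeyValueView` here is a trivial in-memory stand-in invented for illustration, not the Ignite `KeyValueView` API, and the key `1L` is arbitrary.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

final class PutGetSketch {
    /** Trivial in-memory stand-in for a key-value view; `tx` is ignored. */
    static final class KeyValueView {
        private final Map<Long, Integer> data = new HashMap<>();

        void put(Object tx, long key, int value) {
            data.put(key, value);
        }

        Integer get(Object tx, long key) {
            return data.get(key);
        }
    }

    static boolean demo() {
        KeyValueView view = new KeyValueView();

        // A random value cannot collide with a hardcoded listener response by luck
        // (except with negligible probability), unlike the fixed 100/200 pair.
        int value = ThreadLocalRandom.current().nextInt(1_000_000);
        view.put(null, 1L, value);

        Integer read = view.get(null, 1L);
        return read != null && read == value;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```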
Thank you for submitting the pull request.
To streamline the review process of the patch and ensure better code quality,
we ask both the author and the reviewer to verify the following:
The Review Checklist
- There is a single JIRA ticket related to the pull request.
- The web-link to the pull request is attached to the JIRA ticket.
- The JIRA ticket has the Patch Available state.
- The description of the JIRA ticket explains WHAT was made, WHY and HOW.
- The pull request title is treated as the final commit message. The following pattern must be used: IGNITE-XXXX Change summary, where XXXX is the JIRA issue number.
Notes