Setting replication_factor from 1 to 2 drops rps by 50% #4392
Comments
Maybe it is not a bug; I want to know why read_fan_out_factor can't be set so that each request is calculated by only one replica.
hey @kawhi-xl, how do you serve the distributed deployment? Is it multiple machines? Is there a load balancer?
@generall yes, it is a distributed deployment and the cluster has 2 machines. Read requests go over gRPC through an SLB (load balancer), with no writes during reads. What are the possible reasons, and what should I do?
Also, my search requests set no read_consistency. The read_consistency default value is factor(1), so they should land on one replica only.
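For reference, a minimal sketch (assuming the Python qdrant-client; collection name, vector size, and host are placeholders) of setting read consistency explicitly per search request, so the read is answered by a single replica:

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # adjust host/port as needed

# consistency=1 asks Qdrant to answer the read from a single replica;
# omitting it falls back to the default read-consistency behaviour.
hits = client.search(
    collection_name="my_collection",   # hypothetical collection name
    query_vector=[0.0] * 128,          # placeholder vector of the collection's dimensionality
    limit=10,
    consistency=1,
)
```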
@kawhi-xl could you please show how you calculate the rps?
@tellet-q requests are sent with concurrency on the client side, rps = requests_total_costs / requests_num. The collection default_segment_number is 2, because my business requires high throughput.
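Note that total request cost divided by the number of requests gives mean per-request latency rather than throughput; throughput is requests divided by wall-clock time. A minimal sketch of that calculation (plain Python, illustrative names only):

```python
import time

def measure_rps(send_request, num_requests: int) -> float:
    """Send num_requests sequentially and return requests per second."""
    start = time.perf_counter()
    for _ in range(num_requests):
        send_request()                # any callable that performs one search
    elapsed = time.perf_counter() - start
    return num_requests / elapsed     # throughput = requests / wall-clock seconds
```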
Could it be that your machines are very limited in terms of resources? What does your resource usage look like? Also, how many points are we talking about? Are you sure they are properly indexed before running the test? |
Current Behavior
I changed the collection replication_factor from 1 to 2, and rps dropped by 50%.
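For context, a minimal sketch (assuming the Python qdrant-client; collection name, vector size, and host are placeholders) of the kind of collection setup being compared, with the default_segment_number mentioned above and replication_factor raised from 1 to 2:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="my_collection_rf2",            # placeholder name
    vectors_config=models.VectorParams(size=128, distance=models.Distance.COSINE),
    replication_factor=2,                           # was 1 in the baseline collection
    optimizers_config=models.OptimizersConfigDiff(default_segment_number=2),
)
```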
Steps to Reproduce
Expected Behavior
rps should not drop; it should stay the same (100) as with the first collection.
Possible Solution
I set the collection's replication_factor to 2. I want the same rps as with replication_factor 1; otherwise, keeping the same throughput means my computing costs double!
Context (Environment)
Detailed Description
Possible Implementation
I read the source code and found the read_fan_out_factor parameter. I want every request to be calculated by only one replica. What should I do?