Why
There may be several reasons why adaptive concurrency limiting is preferred over using a fixed limit:
Dynamic System Conditions: In a distributed system, conditions such as load, resource availability, and topology can change frequently due to factors like auto-scaling, partial outages, code deployments, or fluctuations in traffic patterns. A fixed concurrency limit cannot adapt to these dynamic conditions, leading to either under-utilization of resources or overwhelmed services.
Latency Sensitivity: Different services or use cases may have varying sensitivity to latency. A fixed concurrency limit cannot account for these differences, potentially leading to either excessive queuing and high latency or under-utilization of resources. An adaptive approach can adjust the limit based on observed latencies, maintaining desired performance characteristics.
Simplicity and Autonomy: Manually determining and configuring fixed concurrency limits for every service or instance can be a complex and error-prone process, especially in large-scale distributed systems. An adaptive approach can autonomously and continuously adjust the limit without manual intervention, simplifying operations and reducing the risk of misconfiguration.
Resilience and Self-Healing: By automatically adjusting the concurrency limit based on observed conditions, an adaptive approach promotes resilience and self-healing capabilities. It allows services to shed excessive load during periods of high demand or resource constraints, preventing cascading failures and promoting graceful degradation.
While a fixed concurrency limit may be easier to reason about and configure initially, it lacks the flexibility and adaptability required in modern, dynamic distributed systems. An adaptive approach provides the ability to continuously optimize performance, resource utilization, and resilience in the face of changing conditions, ultimately leading to a more robust and efficient system.
Suggestion
Ideally, a user would be able to describe the limiting algorithm that fits their needs in the ListenerConfig instead of setting a fixed number for maxConcurrentRequests. The Limit and Limiter interfaces from Netflix's concurrency-limits library are a good starting point. In the first iteration we should provide the following implementations:
no limit,
fixed limit,
AIMD limit.
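To make the idea concrete, here is a minimal, self-contained sketch of what such interfaces and implementations could look like. This is purely illustrative and loosely modeled on Netflix's concurrency-limits library; none of the names (Limit, NoLimit, FixedLimit, AimdLimit) are final API.

```java
// Hypothetical Limit interface: the algorithm exposes its current limit and
// receives feedback about request outcomes so adaptive variants can react.
interface Limit {
    int currentLimit();            // the concurrency limit right now
    void onSuccess(long rttNanos); // feedback after a successful request
    void onDropped();              // feedback after a shed or timed-out request
}

/** No limit: effectively unbounded concurrency. */
final class NoLimit implements Limit {
    public int currentLimit() { return Integer.MAX_VALUE; }
    public void onSuccess(long rttNanos) { }
    public void onDropped() { }
}

/** Fixed limit: today's maxConcurrentRequests behavior. */
final class FixedLimit implements Limit {
    private final int limit;
    FixedLimit(int limit) { this.limit = limit; }
    public int currentLimit() { return limit; }
    public void onSuccess(long rttNanos) { }
    public void onDropped() { }
}

/** AIMD: additive increase on success, multiplicative decrease on drop. */
final class AimdLimit implements Limit {
    private int limit;
    private final int maxLimit;
    AimdLimit(int initialLimit, int maxLimit) {
        this.limit = initialLimit;
        this.maxLimit = maxLimit;
    }
    public synchronized int currentLimit() { return limit; }
    public synchronized void onSuccess(long rttNanos) { limit = Math.min(maxLimit, limit + 1); }
    public synchronized void onDropped() { limit = Math.max(1, limit / 2); }
}
```

The key design point is that the feedback methods give adaptive algorithms a signal loop, while trivial implementations (no limit, fixed limit) simply ignore it.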
Instead of passing a Semaphore for requests in the ServerListener to the ConnectionHandler, we would pass a Limiter implementation that holds the configured Limit algorithm. The Limiter would be used instead of the Semaphore to acquire a token per request. If no token can be acquired, the limit is exceeded and the request can be rejected.
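A minimal self-contained sketch of that token-per-request contract, under the assumption of a plain fixed limit (an adaptive Limit implementation would plug in where the bare int is used). The names Limiter, Token, and CountingLimiter are illustrative only:

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical Limiter standing in for the Semaphore in ConnectionHandler.
// The handler asks for a token per request; an empty Optional means "reject".
interface Limiter {
    Optional<Token> tryAcquire();

    interface Token {
        void success(); // request completed normally, release the slot
        void dropped(); // request was shed or timed out, release the slot
    }
}

// Minimal implementation against a fixed limit, tracking in-flight requests.
final class CountingLimiter implements Limiter {
    private final int limit;
    private final AtomicInteger inFlight = new AtomicInteger();

    CountingLimiter(int limit) { this.limit = limit; }

    @Override
    public Optional<Token> tryAcquire() {
        if (inFlight.incrementAndGet() > limit) {
            inFlight.decrementAndGet(); // over the limit: reject this request
            return Optional.empty();
        }
        return Optional.of(new Token() {
            public void success() { inFlight.decrementAndGet(); }
            public void dropped() { inFlight.decrementAndGet(); }
        });
    }
}
```

The Optional-returning tryAcquire keeps rejection non-blocking, which matches the current Semaphore.tryAcquire usage pattern while leaving room for adaptive limits behind the same interface.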
While implementing a Proof of Concept (PoC), I asked myself where we want to place the limiting API. I guess we need a new submodule, concurrency-limits, which holds the Limit and Limiter interfaces and a standard set of implementations. The webserver module would then depend on concurrency-limits.
Another question is how we want to make the various limiting algorithms configurable. Today we have just the single property maxConcurrentRequests, but in the future we want to choose from a set of different implementations, e.g. no limit, fixed limit, AIMD limit, Vegas limit, etc.
When testing the PoC, I noticed that when the access log feature is activated, rejected requests are not logged in the access log file. Is this behavior intentional or is this a bug?
Additionally, extending the metrics (looking at KeyPerformanceIndicatorMetricsImpls) would be helpful to be able to observe how a service is doing. I'm thinking of the following request-limiting metrics:
success counter
ignored counter
rejected counter
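A trivially small sketch of those three counters, using plain LongAdders for illustration; a real version would register the counters with Helidon's metrics API instead. The class and method names here are hypothetical:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical request-limiting metrics mirroring the three counters above.
final class LimiterMetrics {
    private final LongAdder success = new LongAdder();  // completed within the limit
    private final LongAdder ignored = new LongAdder();  // finished, but excluded from limit feedback
    private final LongAdder rejected = new LongAdder(); // shed because the limit was exceeded

    void onSuccess() { success.increment(); }
    void onIgnored() { ignored.increment(); }
    void onRejected() { rejected.increment(); }

    long successCount() { return success.sum(); }
    long ignoredCount() { return ignored.sum(); }
    long rejectedCount() { return rejected.sum(); }
}
```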
Hello,
this sounds like a great idea. I will provide a few answers to the questions you posted:
location of the module you called concurrency-limits: it seems to me that this should be agnostic of its usage, so it would fit under our common module (i.e. common/concurrency-limits). I think the implementations can be part of the same module (unless one of them would require additional libraries). Please follow our flat package structure (only an spi package is allowed, for provider interfaces).
Access log and rejected requests: you are right, these are currently not logged, as the access log is implemented as a filter, and to reach it we must accept the request. The intention of the limit is to not use resources on the server (i.e. to prevent issues from attacks). I think we may be missing the data in these cases; maybe we could use metrics for this, as incrementing a counter does not really use additional resources.
Metrics: we currently do not depend on the metrics API in webserver; we could do that (or the rate limiter implementations can do it). Just be aware that request handling is a critical performance hotspot, and metrics may have an impact when a metric implementation is in place.
Configuration: our blueprints have support for providers, which are loaded using the service loader (and will support our service registry as well). This allows for nested configuration that is not fixed, but defined by the limiter implementation. The config could be something like:
server:
  limiter:   # limiter() method in the blueprint
    fixed:   # identification of the service type
      limit: 444
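To illustrate the selection mechanism, here is a hedged sketch of how the key under limiter ("fixed" in the example) could be matched against providers; in Helidon the providers would be discovered via the service loader rather than passed in a list. All names here (LimiterProvider, configKey, create) are illustrative, not actual Helidon API:

```java
import java.util.List;

// Hypothetical SPI: each limiter implementation registers a provider
// under the config key it answers to.
interface LimiterProvider {
    String configKey();        // key in the config, e.g. "fixed" or "aimd"
    Object create(int limit);  // simplified; real code would receive a Config node
}

// Example provider for the "fixed" key, returning a placeholder object.
final class FixedLimiterProvider implements LimiterProvider {
    public String configKey() { return "fixed"; }
    public Object create(int limit) { return "fixed limiter, limit=" + limit; }
}

final class LimiterProviders {
    // Pick the provider whose configKey matches the configured service type.
    static Object create(List<LimiterProvider> providers, String key, int limit) {
        return providers.stream()
                .filter(p -> p.configKey().equals(key))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("unknown limiter type: " + key))
                .create(limit);
    }
}
```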
Some other thoughts:
this is quite a critical part of Helidon WebServer, so any change in this area would most likely require a few cycles of reviews/design decisions. We also require adherence to Helidon code style, configuration style etc.
if you have a POC, we can maybe have a look at that