Allow depends_on to refer to a resource that is mentioned in a removed block #35341

Open
apparentlymart opened this issue Jun 14, 2024 · 1 comment


@apparentlymart (Member)

This is an idea for a new capability based on the Stack Overflow question "Make Terraform update parent resource before deleting child resource".

That question is actually about how a provider could represent that relationship so that the correct behavior would emerge by default, and that's one of the use-cases discussed in https://github.com/hashicorp/terraform-proposals/pull/87, but this idea is for a configuration-driven approach that would be complementary to the provider-driven approach in situations where a provider can't describe the relevant relationship itself. It would also serve as an interim workaround for the lack of provider-declared relationships, one which could be implemented without modifying any providers and therefore with considerably less cross-team coordination.

Problem

Today Terraform cannot model the correct order of operations for some combinations of deleting and updating an object. In the proposal I linked above I'd described that as follows (bracketed parts are additions I've added here for those who cannot access the linked proposal):

In some systems, an object B that is a child of object A will block the deletion of A for as long as it exists. Therefore successfully replacing object A requires also replacing object B.

If we were to add a new [provider-specified] relationship type "blocks deletion of" then Terraform could use that as a hint to propose replacing B whenever A is already planned for replacement, without the user needing to explicitly write a replace_triggered_by argument.

  # aws_vpc.example must be replaced
-/+ resource "aws_vpc" "example" {
      ~ cidr_block = "10.0.0.0/16" -> "10.1.0.0/16" # forces replacement
      # (etc)
    }

  # aws_security_group.example must be replaced
  # (because it would block the replacement of aws_vpc.example)
-/+ resource "aws_security_group" "example" {
      vpc_id = "vpc-1234" -> (known after apply)
      # (etc)
    }

A similar situation would arise if aws_vpc.example were planned for destruction, since the EC2 API is designed to return an error if the VPC still contains certain object types, such as network interfaces. In that case, Terraform must propose to destroy those contained objects and must take that action before it tries to destroy the VPC itself.

(I also noted in the other proposal that this particular example is not realistic because the change to vpc_id in aws_security_group would already force replacement of the security group today anyway, but let's pretend that isn't true for the sake of this discussion because there are cases where there's no changing argument to cause this effect.)

Although the quoted proposal doesn't discuss it, there is another variation of this problem where resource "aws_vpc" "example" has been removed from the configuration and aws_security_group.example has been updated to refer to a different VPC object. In that case Terraform would propose to destroy the VPC object and replace the security group:

resource "aws_security_group" "example" {
  vpc_id = data.aws_vpc.new_example.id
  # (etc)
}
  # aws_vpc.example will be destroyed
  # (because it is no longer declared in the configuration)
  - resource "aws_vpc" "example" {
      - cidr_block = "10.0.0.0/16"
      # (etc)
    }

  # aws_security_group.example must be replaced
-/+ resource "aws_security_group" "example" {
      vpc_id = "vpc-1234" -> "vpc-5678" # forces replacement
      # (etc)
    }

The configuration contains no record of the former dependency edge from aws_security_group.example to aws_vpc.example, because Terraform uses the dependencies of aws_security_group.example that are currently declared in the configuration. Therefore, as far as Terraform is concerned, the action of destroying aws_vpc.example is independent of the action of replacing aws_security_group.example, and so the two can happen in either order.

However, if Terraform tries to delete the VPC first then the EC2 API will reject that request because it's invalid to delete a VPC that has a security group in it. The same would be true if the VPC contained a subnet, or a network interface. There is a hidden dependency here that Terraform has lost track of.
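For reference, the earlier configuration that created that dependency edge in the first place would have looked something like the following (a sketch, with most arguments elided):

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
  # (etc)
}

resource "aws_security_group" "example" {
  # This reference is what created the dependency edge that is
  # lost once the resource "aws_vpc" "example" block is removed.
  vpc_id = aws_vpc.example.id
  # (etc)
}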

Proposal

As noted earlier, the ideal outcome here would be for us to somehow extend the provider API to allow the provider to tell Terraform Core that aws_security_group.example blocks the deletion of aws_vpc.example, which would then allow Terraform Core to know that a special dependency ordering is required. Specifically for this case, the provider would need to have previously told Terraform Core that the current aws_security_group.example is blocking, but that the planned new aws_security_group.example is not blocking that VPC, because it no longer refers to it. Terraform Core could then in principle notice that it must complete at least the destroy leg of the security group replacement before it begins deleting the VPC.

However, for the sake of this issue let's assume that there's no way for the provider to describe that relationship. That could arise if the underlying API design doesn't allow that inference to be made, or if the reason for the blockage is something the module author has decided on separately rather than an inherent part of the remote API. In that case what we want is a way for the module author to describe this relationship.

Therefore I propose the compromise of making it valid for a depends_on argument to refer to a resource that's mentioned in the from argument of a removed block in the same module:

# This block affirms that the author is expecting that there
# might be an aws_vpc.example in the prior state, and
# so explicit references to this address are not typos.
removed {
  from = aws_vpc.example

  # (no other arguments required for this situation,
  # although any of the usual resource removed arguments
  # could be used if their effects are also separately
  # desirable.)
}

resource "aws_security_group" "example" {
  # The removed block makes it valid to refer to
  # aws_vpc.example here, even though that resource
  # is not part of the desired state.
  depends_on = [aws_vpc.example]

  vpc_id = data.aws_vpc.new_example.id
  # (etc)
}

Today's Terraform would treat depends_on = [aws_vpc.example] as invalid because there is no resource "aws_vpc" "example" block in this module, and it's helpful to have Terraform report that as an error so that authors can quickly notice if they've made a typo or other similar referencing mistake.

However, the removed block acts as an explicit record of the former existence of a resource "aws_vpc" "example" block, implying the possibility that a resource instance aws_vpc.example might exist in the prior state. My assertion is that the agreement between these two statements is enough to assume that the author is intentionally referring to something that isn't currently declared, and so this depends_on entry is unlikely to be a typo.

Terraform Core can then understand this as "if there's an aws_vpc.example tracked in the prior state then it must outlive any pre-existing aws_security_group.example object". Or, in other words, the required order of operations is:

  1. Destroy the old aws_security_group.example object.
  2. Destroy the "orphaned" aws_vpc.example object.
  3. Create the new replacement aws_security_group.example object.

(Steps 2 and 3 are technically independent and could happen in either order because the new object cannot possibly depend on something that isn't declared in configuration at the time it's created; the relative ordering of steps 1 and 2 is the important thing here and 3 could potentially happen concurrently with 2.)

If these objects were instead in a create_before_destroy chain, the ordering would be:

  1. Depose the old aws_security_group.example object.
  2. Create the new replacement aws_security_group.example object.
  3. Delete the previously-deposed old aws_security_group.example object.
  4. Destroy the "orphaned" aws_vpc.example object.

(Again, the ordering of steps 3 and 4 is the main point for this proposal; steps 1-3 are just the normal create_before_destroy behavior.)
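For completeness, the create_before_destroy variant under this proposal would presumably just combine the earlier example with the usual lifecycle argument, sketched here:

removed {
  from = aws_vpc.example
}

resource "aws_security_group" "example" {
  # Still valid thanks to the removed block above.
  depends_on = [aws_vpc.example]

  vpc_id = data.aws_vpc.new_example.id
  # (etc)

  lifecycle {
    create_before_destroy = true
  }
}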

Disadvantages

Although this model seems like a reasonable extension of some existing Terraform concepts, this treatment is admittedly quite subtle and might be hard to explain in the documentation. If we did this then I expect we'd document it primarily as part of the depends_on documentation, describing it as a way to declare a dependency on an object that's planned for deletion.

However, it's likely true that someone would only discover that they needed this when they try to apply a plan created without the hidden dependency declared, at which point it might be too late to learn about it and add it.

For a shared module, the fix can at least benefit other users of the module who haven't upgraded yet. For a single-instance module, though, I expect the typical experience would be to encounter an error during apply, somehow learn that it was caused by a missing dependency, and then resolve it by manually deleting the object that Terraform misordered and retrying, because that will often be faster than making another configuration change to fix it, and it's often necessary to respond to a failed apply with some urgency.

Since this situation seems to arise pretty rarely, it may be justified for it to be a somewhat-hidden feature. I expect its implementation is relatively straightforward -- just loosening a depends_on validation rule and then letting the graph builder add the dependency edges it would already add in the presence of such a reference anyway -- in which case the maintenance cost is relatively low as long as we make sure this situation is covered by a regression test.

@jbardin (Member) commented Jun 14, 2024

IIUC, I think we have two separate requirements here, both of which can be satisfied with configuration that is already possible today; however, it would definitely be more user-friendly if these were not up to the user to declare correctly.

The quote from the proposal combines these two requirements. The first aspect is that we would prefer not to require the user to add replace_triggered_by when the provider already knows that this type of relationship will require replacement. The only way we have to do this right now is what exists in the example, where an attribute change is used to trigger the replacement. If there is no natural key for this, then it's an awkward choice between adding a synthetic attribute to force the change (which is often not hard to derive), or documenting that replace_triggered_by is required for this type of resource combination.
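For example, the replace_triggered_by workaround mentioned above looks roughly like this (a sketch reusing the issue's illustrative resource names; in this particular case the vpc_id reference would already force replacement anyway, as noted earlier, but the lifecycle argument covers combinations where no such attribute changes):

resource "aws_security_group" "example" {
  vpc_id = aws_vpc.example.id
  # (etc)

  lifecycle {
    # Replace this security group whenever the VPC's id changes,
    # i.e. whenever the VPC itself is replaced, even if no argument
    # of the security group changes as a result.
    replace_triggered_by = [aws_vpc.example.id]
  }
}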

The other aspect of the problem is the ordering of the destroy operation with respect to the resource which will continue to exist in the configuration. It is not true that these are un-ordered, however: all destroy operations are ordered by the relationships stored in the state, so if both resources existed simultaneously in the config at some point, the destroy actions will still be strictly ordered relative to any updates or replacements of configured resources. The default ordering that is not working in this example is failing not because the operations are un-ordered, but because the order is incorrect. The order required is create_before_destroy (i.e. "create-and-or-update-or-replace-dependencies-before-destroy"), which unfortunately currently requires applying that lifecycle option in the config before removing the old resource.
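Concretely, that two-step workaround might look like the following sketch (exactly which resources need the lifecycle option can vary by situation):

# Step 1: apply with create_before_destroy while both resources are
# still declared, so that the option is recorded in the state.
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
  # (etc)

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group" "example" {
  vpc_id = aws_vpc.example.id
  # (etc)

  lifecycle {
    create_before_destroy = true
  }
}

# Step 2, in a later change: remove the resource "aws_vpc" "example"
# block and point the security group at its new VPC. The ordering
# recorded in state then destroys the orphaned VPC only after the
# security group replacement has been handled.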

With the proposed depends_on = [aws_vpc.example], the default ordering of operations should already be correct. Actually creating this example and inspecting the graph shows that the ordering is fully defined: the final aws_security_group.example waits on the destroy of aws_vpc.example, which in turn waits on the destroy of the old aws_security_group.example.

So, global relationships aside (automatic replace_triggered_by), maybe what this boils down to is defining a way to change the destroy and update order at the time of removal rather than at the time of creation (or at an intermediate update).

That specific request has two issues that I can see, both of which may not be deal-breakers in themselves, but they need careful consideration. In graph construction, nothing can directly reference a destroy node, and destroy nodes can only depend directly on other destroy nodes (obviously other implementation details connect the destroy and create/update sides of the graph). This is a key premise that ensures we are building an order of operations which is valid (and part of why destroy-time provisioners cannot reference anything). Building this feature would not necessarily need to break that rule, but it needs to be carefully accounted for since we are dealing with a destroy-only operation.

The other technical issue is that, because users don't have direct control over the destroy ordering, we ensure that the destroy can proceed by using the dependencies recorded during the create and update operations. In essence, the entire destroy order is already recorded in the state, and this would add a way for users to alter that destroy order. If this is adding order where there wasn't any (possibly for two things which did not exist simultaneously in the config), then there's little chance of conflict, but if there is already a defined order, changing that is likely to conflict with other planned operations.
