Terraform: Learn how to manage dependencies between resources on GCP
- Part 1: https://blog.4linux.com.br/introducao-ao-terraform/
- Part 2: https://blog.4linux.com.br/terraform-parte2-alterando-sua-infraestrutura-de-forma-incremental/
Hands On
resource "google_compute_network" "tf-network" {
name = "tf-network"
auto_create_subnetworks = true
}resource "google_compute_instance" "default" {
name = "linux-vm-1"
machine_type = "f1-micro"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
labels = {
environment = "development"
distro = "debian-9"
}
network_interface {
network = google_compute_network.tf-network.self_link
}
}$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
google_compute_instance.default: Refreshing state... [id=projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_compute_instance.default must be replaced
-/+ resource "google_compute_instance" "default" {
can_ip_forward = false
~ cpu_platform = "Intel Haswell" -> (known after apply)
deletion_protection = false
- enable_display = false -> null
~ guest_accelerator = [] -> (known after apply)
~ id = "projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1" -> (known after apply)
~ instance_id = "5368174570526729490" -> (known after apply)
~ label_fingerprint = "1eO_ZGp1K5M=" -> (known after apply)
labels = {
"distro" = "debian-9"
"environment" = "development"
}
machine_type = "f1-micro"
- metadata = {} -> null
~ metadata_fingerprint = "y3D14wyHqNs=" -> (known after apply)
+ min_cpu_platform = (known after apply)
name = "linux-vm-1"
~ project = "projeto-1-265222" -> (known after apply)
~ self_link = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1" -> (known after apply)
- tags = [] -> null
~ tags_fingerprint = "42WmSpB8rSM=" -> (known after apply)
zone = "us-central1-a"
~ boot_disk {
auto_delete = true
~ device_name = "persistent-disk-0" -> (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ kms_key_self_link = (known after apply)
mode = "READ_WRITE"
~ source = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/zones/us-central1-a/disks/linux-vm-1" -> (known after apply)
~ initialize_params {
~ image = "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-9-stretch-v20191210" -> "debian-cloud/debian-9"
~ labels = {} -> (known after apply)
~ size = 10 -> (known after apply)
~ type = "pd-standard" -> (known after apply)
}
}
~ network_interface {
~ name = "nic0" -> (known after apply)
~ network = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/global/networks/default" -> (known after apply) # forces replacement
~ network_ip = "10.128.0.5" -> (known after apply)
~ subnetwork = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/regions/us-central1/subnetworks/default" -> (known after apply)
~ subnetwork_project = "projeto-1-265222" -> (known after apply)
}
~ scheduling {
~ automatic_restart = true -> (known after apply)
~ on_host_maintenance = "MIGRATE" -> (known after apply)
~ preemptible = false -> (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
# google_compute_network.tf-network will be created
+ resource "google_compute_network" "tf-network" {
+ auto_create_subnetworks = true
+ delete_default_routes_on_create = false
+ gateway_ipv4 = (known after apply)
+ id = (known after apply)
+ ipv4_range = (known after apply)
+ name = "tf-network"
+ project = (known after apply)
+ routing_mode = (known after apply)
+ self_link = (known after apply)
}
Plan: 2 to add, 0 to change, 1 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.Aqui percebemos que temos 2 recursos que devem ser adicionados e 1 recurso deve ser destruído.
Confirm the execution:
$ terraform apply -auto-approve
With the following result:
google_compute_instance.default: Refreshing state... [id=projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1]
google_compute_network.tf-network: Creating...
google_compute_instance.default: Destroying... [id=projects/projeto-1-265222/zones/ ......
......
......
......
google_compute_instance.default: Creating...
google_compute_instance.default: Still creating... [10s elapsed]
google_compute_instance.default: Creation complete after 10s [id=projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1]

Apply complete! Resources: 2 added, 0 changed, 1 destroyed.
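Notice the order of operations in the output: Terraform started creating tf-network before recreating the instance, because the instance's network_interface references google_compute_network.tf-network.self_link, which gives Terraform an implicit dependency to order by. If you want to inspect the dependency graph built from these references, the terraform graph command prints it in DOT format; rendering it to an image assumes the Graphviz dot tool is installed:

$ terraform graph
$ terraform graph | dot -Tsvg > graph.svg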
We now have a network, but so far all we have really done is create a network similar to Google's "default" network. Let's therefore set auto_create_subnetworks to false, so that we have to create a subnetwork with the range we actually want.
Change the network.tf file to:
resource "google_compute_network" "tf-network" {
name = "tf-network"
auto_create_subnetworks = false
}
This will cause all of the previously auto-created subnetworks to be destroyed, and in their place we will create a subnetwork with the range 10.10.1.0/24.
Create a subnetwork.tf file with the following content:
resource "google_compute_subnetwork" "tf-subnetwork" {
name = "tf-subnetwork"
region = "us-central1"
network = google_compute_network.tf-network.self_link
ip_cidr_range = "10.10.1.0/24"
}E temos que agora adicionar esta nova subrede com nossa instância para que possa ser utilizada. Caso você não informe que está utilizando uma subrede, o Terraform irá avisá-lo pelo terminal que esta rede tem uma subrede e que deverá ser informada.
resource "google_compute_instance" "default" {
  name         = "linux-vm-1"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  labels = {
    environment = "development"
    distro      = "debian-9"
  }

  network_interface {
    network    = google_compute_network.tf-network.self_link
    subnetwork = google_compute_subnetwork.tf-subnetwork.self_link
  }
}
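Before planning, you can optionally format and check that the updated configuration is syntactically and internally consistent; this step is not part of the original walkthrough, just a habit worth adopting:

$ terraform fmt
$ terraform validate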
Run the execution plan:

$ terraform plan
Here we get another very long output, but it is important for checking what will happen to your infrastructure.
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
google_compute_network.tf-network: Refreshing state... [id=projects/projeto-1-265222/global/networks/tf-network]
google_compute_instance.default: Refreshing state... [id=projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_compute_instance.default must be replaced
-/+ resource "google_compute_instance" "default" {
can_ip_forward = false
~ cpu_platform = "Intel Haswell" -> (known after apply)
deletion_protection = false
- enable_display = false -> null
~ guest_accelerator = [] -> (known after apply)
~ id = "projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1" -> (known after apply)
~ instance_id = "7404175307202617516" -> (known after apply)
~ label_fingerprint = "1eO_ZGp1K5M=" -> (known after apply)
labels = {
"distro" = "debian-9"
"environment" = "development"
}
machine_type = "f1-micro"
- metadata = {} -> null
~ metadata_fingerprint = "y3D14wyHqNs=" -> (known after apply)
+ min_cpu_platform = (known after apply)
name = "linux-vm-1"
~ project = "projeto-1-265222" -> (known after apply)
~ self_link = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1" -> (known after apply)
- tags = [] -> null
~ tags_fingerprint = "42WmSpB8rSM=" -> (known after apply)
zone = "us-central1-a"
~ boot_disk {
auto_delete = true
~ device_name = "persistent-disk-0" -> (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ kms_key_self_link = (known after apply)
mode = "READ_WRITE"
~ source = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/zones/us-central1-a/disks/linux-vm-1" -> (known after apply)
~ initialize_params {
~ image = "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-9-stretch-v20191210" -> "debian-cloud/debian-9"
~ labels = {} -> (known after apply)
~ size = 10 -> (known after apply)
~ type = "pd-standard" -> (known after apply)
}
}
~ network_interface {
~ name = "nic0" -> (known after apply)
~ network = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/global/networks/tf-network" -> (known after apply) # forces replacement
~ network_ip = "10.128.0.2" -> (known after apply)
~ subnetwork = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/regions/us-central1/subnetworks/tf-network" -> (known after apply) # forces replacement
~ subnetwork_project = "projeto-1-265222" -> (known after apply)
}
~ scheduling {
~ automatic_restart = true -> (known after apply)
~ on_host_maintenance = "MIGRATE" -> (known after apply)
~ preemptible = false -> (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
# google_compute_network.tf-network must be replaced
-/+ resource "google_compute_network" "tf-network" {
~ auto_create_subnetworks = true -> false # forces replacement
delete_default_routes_on_create = false
+ gateway_ipv4 = (known after apply)
~ id = "projects/projeto-1-265222/global/networks/tf-network" -> (known after apply)
+ ipv4_range = (known after apply)
name = "tf-network"
~ project = "projeto-1-265222" -> (known after apply)
~ routing_mode = "REGIONAL" -> (known after apply)
~ self_link = "https://www.googleapis.com/compute/v1/projects/projeto-1-265222/global/networks/tf-network" -> (known after apply)
}
# google_compute_subnetwork.tf-subnetwork will be created
+ resource "google_compute_subnetwork" "tf-subnetwork" {
+ creation_timestamp = (known after apply)
+ enable_flow_logs = (known after apply)
+ fingerprint = (known after apply)
+ gateway_address = (known after apply)
+ id = (known after apply)
+ ip_cidr_range = "10.10.1.0/24"
+ name = "tf-subnetwork"
+ network = (known after apply)
+ project = (known after apply)
+ region = "us-central1"
+ secondary_ip_range = (known after apply)
+ self_link = (known after apply)
}
Plan: 3 to add, 0 to change, 2 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Now we have 3 resources to add and 2 to destroy; basically everything will be rebuilt, with our subnetwork added on top.
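All of this ordering comes from implicit dependencies: the subnetwork references the network, and the instance references both, so Terraform knows in which order to create them. When an ordering cannot be expressed through an attribute reference, the depends_on meta-argument makes it explicit; a hypothetical sketch, not needed here since the references above already establish the dependency:

resource "google_compute_instance" "default" {
  # ... arguments as above ...

  # Explicitly wait for the subnetwork, even if no attribute of it is referenced.
  depends_on = [google_compute_subnetwork.tf-subnetwork]
}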
Apply your infrastructure:
$ terraform apply -auto-approve
With the result:
google_compute_network.tf-network: Refreshing state... [id=projects/projeto-1-265222/global/networks/tf-network]
google_compute_instance.default: Refreshing state... [id=projects/projeto-1-265222/zones/ ......
......
......
......
......
google_compute_instance.default: Creating...
google_compute_instance.default: Still creating... [10s elapsed]
google_compute_instance.default: Creation complete after 11s [id=projects/projeto-1-265222/zones/us-central1-a/instances/linux-vm-1]

Apply complete! Resources: 3 added, 0 changed, 2 destroyed.
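If you want to confirm what Terraform is now tracking, terraform state list prints the addresses of the managed resources; after this apply it should list the network, the subnetwork and the instance:

$ terraform state list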
To wrap up our series on Terraform, in our next post we will create a simple module that will allow you to manage instances on GCP.

See you there!