developer.hashicorp.com
The flexibility of Terraform's configuration language gives you many options to choose from as you write your code, structure your directories, and test your configuration. While some design decisions depend on your organization's needs or preferences, there are some common patterns that we suggest you adopt. Adopting and adhering to a style guide keeps your Terraform code legible, scalable, and maintainable.
Note: Explicit refactoring declarations with moved blocks are available in Terraform v1.1 and later. For earlier Terraform versions, or for refactoring actions too complex to express as moved blocks, you can use the terraform state mv CLI command as a separate step. In shared modules and long-lived configurations, you may eventually outgrow your initial module structure and resource names.
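A rename like the one described above can be recorded with a moved block; the resource names here are illustrative:

```hcl
# Illustrative example: the resource was renamed from
# aws_instance.example to aws_instance.web. The moved block tells
# Terraform to update the state entry instead of destroying and
# recreating the instance.
moved {
  from = aws_instance.example
  to   = aws_instance.web
}

resource "aws_instance" "web" {
  # ... existing arguments unchanged ...
}
```

On Terraform versions before v1.1, the equivalent manual step would be `terraform state mv aws_instance.example aws_instance.web`.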
AWS AssumeRole allows you to grant temporary credentials with additional privileges to users as needed, following the principle of least privilege. To configure AssumeRole access, you must define an IAM role that specifies the privileges that it grants and which entities can assume it. AssumeRole can grant access within or across AWS accounts.
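As a sketch, the Terraform AWS provider can assume such a role via its assume_role block; the role ARN and session name below are placeholders:

```hcl
# Illustrative provider configuration that assumes an IAM role.
# Replace the role_arn with a role that your credentials are
# permitted to assume.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/terraform-deploy"
    session_name = "terraform"
  }
}
```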
The Cloud Development Kit for Terraform (CDKTF) generates JSON Terraform configuration from code in C#, Python, TypeScript, Java, or Go, and creates infrastructure using Terraform. With CDKTF, you can use hundreds of providers and thousands of module definitions provided by HashiCorp and the Terraform community. By using your programming language of choice, you can take advantage of the features and development workflows you are already familiar with.
Terraform manages infrastructure on cloud computing providers such as AWS, Azure, and GCP, but it can also manage resources in hundreds of other services, including the music service Spotify. In this tutorial, you will use a Terraform data source to search Spotify for an artist, album, or song, and use that data to build a playlist. To complete this tutorial, you will need Terraform installed locally.
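The flow might look like the sketch below, using a community Spotify provider. The data source and resource schemas here are assumptions for illustration, not authoritative; check the provider's own documentation for the real attribute names:

```hcl
# Hypothetical sketch: search Spotify for tracks by an artist and
# put the first result on a playlist. Schema names are assumptions.
data "spotify_search_track" "by_artist" {
  artist = "Dolly Parton"
}

resource "spotify_playlist" "playlist" {
  name   = "Terraform Playlist"
  tracks = [data.spotify_search_track.by_artist.tracks[0].id]
}
```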
GitHub Actions adds continuous integration to GitHub repositories to automate your software builds, tests, and deployments. Automating Terraform with CI/CD enforces configuration best practices, promotes collaboration, and automates the Terraform workflow. HashiCorp provides GitHub Actions that integrate with the HCP Terraform API. These actions let you create your own custom CI/CD workflows.
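A minimal workflow sketch using HashiCorp's setup-terraform action is shown below; the workflow, job, and secret names are placeholders:

```yaml
# Illustrative GitHub Actions workflow: run terraform plan on pull
# requests, authenticating to HCP Terraform with an API token stored
# as a repository secret (secret name is a placeholder).
name: terraform-plan
on: [pull_request]
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - run: terraform init
      - run: terraform plan
```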
Define Terraform Configuration with CDKTF: Manage infrastructure with your preferred programming language, including TypeScript, Python, Go, C#, and Java, with the Cloud Development Kit for Terraform (CDKTF).
Terraform providers manage resources by communicating between Terraform and target APIs. Whenever the target APIs change or add functionality, provider maintainers may update and version the provider. When multiple users or automation tools run the same Terraform configuration, they should all use the same versions of their required providers. There are two ways for you to manage provider versions: version constraints in your configuration's required_providers block, and the dependency lock file.
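A version constraint is declared in the required_providers block; the provider and constraint below are just an example:

```hcl
# Pin the AWS provider to the 5.x series. Running `terraform init`
# records the exact version it selects in .terraform.lock.hcl, so
# every user and CI run installs the same provider build.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

Committing the generated .terraform.lock.hcl file to version control is what keeps all collaborators on the same provider versions.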
Get Started - Google Cloud: Build, change, and destroy Google Cloud Platform (GCP) infrastructure using Terraform. Step-by-step, command-line tutorials will walk you through the Terraform basics for the first time.
Outside of development mode, Vault servers are configured using a file. The format of this file is HCL or JSON. Enabling the file permissions check via the environment variable VAULT_ENABLE_FILE_PERMISSIONS_CHECK allows Vault to check whether the config directory and files are owned by the user running Vault. It also checks that there are no write or execute permissions for group or others.
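A minimal server configuration file might look like the following sketch; the paths and addresses are placeholders:

```hcl
# Illustrative Vault server config (HCL). Integrated (Raft) storage
# plus a TCP listener. Do not disable TLS outside local testing.
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node1"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true   # local testing only; use TLS in production
}

api_addr     = "http://127.0.0.1:8200"
cluster_addr = "http://127.0.0.1:8201"
```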
Nearly all requests to Vault must be accompanied by an authentication token. This includes all API requests, as well as requests made via the Vault CLI and other libraries. If you can securely get the first secret from an originator to a consumer, all subsequent secrets transmitted between this originator and consumer can be authenticated with the trust established by the successful distribution and use of that first secret.
Terraform Plugin SDKv2: Maintain plugins built on the legacy SDK. Terraform Plugin SDKv2 is a way to maintain Terraform plugins on protocol version 5. We recommend using the framework to develop new provider functionality because it offers significant advantages compared to SDKv2. We also recommend migrating existing providers to the framework when possible. Refer to Plugin Framework Benefits for more information.
AWS's Elastic Kubernetes Service (EKS) is a managed service that lets you deploy, manage, and scale containerized applications on Kubernetes. In this tutorial, you will deploy an EKS cluster using Terraform. Then, you will configure kubectl using Terraform output and verify that your cluster is ready to use. Warning: AWS EKS clusters cost $0.10 per hour, so you may incur charges by running this tutorial.
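A cluster definition along these lines might use the community terraform-aws-modules/eks module; the input names reflect recent versions of that module and the values are placeholders:

```hcl
# Illustrative sketch of an EKS cluster defined via the community
# EKS module; assumes a VPC module named "vpc" exists elsewhere in
# the configuration.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "learn-eks"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets
}
```

After apply, `aws eks update-kubeconfig --region <region> --name learn-eks` configures kubectl against the new cluster.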
This guide applies to Vault versions 1.7 and above and Consul versions 1.8 and above. This guide describes recommended best practices for infrastructure architects and operators to follow when deploying Vault using the Consul storage backend in a production environment. This guide includes general guidance as well as specific recommendations for popular cloud infrastructure platforms.
Note: This engine can use external X.509 certificates as part of TLS or signature validation. Verifying signatures against X.509 certificates that use SHA-1 is deprecated and is no longer usable without a workaround starting in Vault 1.12. See the deprecation FAQ for more information. The PKI secrets engine generates dynamic X.509 certificates. With this secrets engine, services can get certificates on demand without going through the usual manual process of generating a private key and CSR and submitting it to a CA.
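Mounting the engine can itself be managed with Terraform's Vault provider; the path and TTL below are illustrative:

```hcl
# Sketch: mount a PKI secrets engine at path "pki" using the
# Terraform Vault provider, with a one-year maximum lease TTL.
resource "vault_mount" "pki" {
  path                  = "pki"
  type                  = "pki"
  max_lease_ttl_seconds = 31536000   # 1 year
}
```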
Tip: Before following the recommendations in this guide to build your own infrastructure pipelines from scratch, consider whether HCP Terraform's built-in version control integration, run triggers, and other features meet your needs while automatically implementing best practices. For teams that use Terraform as a key part of a change management and deployment pipeline, it can be desirable to orchestrate Terraform runs as part of that pipeline.
Recent versions of Windows 10 include Windows Subsystem for Linux (WSL) as an optional Windows feature. WSL supports running a Linux environment within Windows. Vagrant support for WSL is still in development and should be considered beta. Warning: Advanced topic! Using Vagrant within the Windows Subsystem for Linux is an advanced topic recommended only for experienced Vagrant users who are comfortable working inside WSL.
You now have enough Terraform knowledge to create useful configurations, but the examples so far have used hard-coded values. Terraform configurations can include variables to make your configuration more dynamic and flexible. After following the earlier tutorials, you will have a directory named learn-terraform-aws-instance with the following configuration in a file called main.tf.
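An input variable replacing one of those hard-coded values might look like this sketch; the variable name and default are illustrative:

```hcl
# variables.tf -- an input variable for the instance's Name tag.
variable "instance_name" {
  description = "Value of the Name tag for the EC2 instance"
  type        = string
  default     = "ExampleAppServerInstance"
}

# In main.tf, the hard-coded tag value is then replaced with a
# reference to the variable:
#   tags = {
#     Name = var.instance_name
#   }
```

The value can be overridden at plan time, e.g. `terraform apply -var "instance_name=web-server"`.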
To use Terraform you will need to install it. HashiCorp distributes Terraform as a binary package. You can also install Terraform using popular package managers.
Provisioner name: ansible_local. The Vagrant Ansible Local provisioner allows you to provision the guest using Ansible playbooks by executing ansible-playbook directly on the guest machine. Warning: If you are not familiar with Ansible and Vagrant already, we recommend starting with the shell provisioner. However, if you are already comfortable with Vagrant, it is a great way to learn Ansible.
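A minimal Vagrantfile sketch using this provisioner follows; the box name and playbook path are placeholders:

```ruby
# Illustrative Vagrantfile: run an Ansible playbook on the guest
# with the ansible_local provisioner. Ansible is installed on the
# guest automatically if it is not already present.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```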
Command: vagrant snapshot. This is the command used to manage snapshots of the guest machine. Snapshots record a point-in-time state of a guest machine. You can then quickly restore to this environment. This lets you experiment and try things, then quickly restore to a previous state. Snapshotting is not supported by every provider. If it is not supported, Vagrant will give you an error message.
Name: ssh. The Vault SSH secrets engine provides secure authentication and authorization for access to machines via the SSH protocol. It helps manage access to machine infrastructure, providing several ways to issue SSH credentials. The Vault SSH secrets engine supports the following modes, each documented on its own page:
- Signed SSH certificates
- One-time SSH passwords
The following characteristics generally differentiate Nomad from related products:
- Simplicity: Nomad runs as a single process with zero external dependencies. Operators can easily provision, manage, and scale Nomad. Developers can easily define and run applications.
- Flexibility: Nomad can run a diverse workload of containerized, legacy, microservice, and batch applications.
Vault is an intricate system with many distinct components. This page details the system architecture and aims to help Vault users and developers build a mental model of its theory of operation. Note: This page covers the technical details of Vault. The descriptions and elements contained within are for users who wish to learn about Vault without having to reference the source code.
All Chef Provisioners: The following options are available to all Vagrant Chef provisioners. Many of these options are for advanced users only and should not be used unless you understand their purpose.
- binary_path (string) - The path to Chef's bin/ directory on the guest machine.
- binary_env (string) - Arbitrary environment variables to set before running the Chef provisioner command.
The /agent endpoints are used to interact with the local Consul agent. Usually, services and checks are registered with an agent, which then takes on the burden of keeping that data synchronized with the cluster. For example, the agent registers services and checks with the Catalog and performs anti-entropy to recover from outages.
audit {
  enabled = true
  sink "My sink" {
    type               = "file"
    format             = "json"
    path               = "data/audit/audit.json"
    delivery_guarantee = "best-effort"
    rotate_duration    = "24h"
    rotate_max_files   = 15
    rotate_bytes       = 25165824
  }
}

The following sub-keys are available:
- enabled - Controls whether Consul logs out each time a user performs an operation. ACLs must be enabled to use this feature. Defaults to false.