Terraform
Generate production-ready Terraform providers from your OpenAPI specification
The Stainless Terraform provider generator creates idiomatic Terraform providers from your OpenAPI specification. Generated providers include HashiCorp-style documentation, handle complex data types, and support custom acceptance tests and state migrations.
Considerations
A Terraform provider enables users to manage your API resources declaratively through infrastructure-as-code. Users can define resource configurations in Terraform files, check them into source control, and apply changes with a consistent workflow. This is particularly common for infrastructure provisioning, monitoring configuration, billing rules, and security workflows.
Terraform providers require more maintenance than our other SDKs and impose some additional constraints on your API.
A CRUD-like API
The Terraform provider works best for APIs where the endpoints on a given resource are uniform — the create, update, and read requests and responses all have the same shape and field names, and conceptually “set” all the properties in a straightforward way.
For example, a CRUD-like API for a resource called Product in an e-commerce application might look like this:
- `POST /products` - Create a product, and return the product (with ID) in the response
- `GET /products` - List the products (potentially filtering with query params)
- `PUT /products/{product_id}` - Update the product fields and return the product
- `GET /products/{product_id}` - Return the product given the ID
- `DELETE /products/{product_id}` - Delete the product given the ID
In OpenAPI terms, it might look something like this:
```yaml
paths:
  /products:
    get:
      responses:
        '200':
          $ref: '#/components/responses/ProductArrayResponse'
    post:
      requestBody:
        $ref: '#/components/requestBodies/ProductRequest'
      responses:
        '200':
          $ref: '#/components/responses/ProductResponse'
  /products/{product_id}:
    get:
      parameters:
        - $ref: '#/components/parameters/ProductID'
      responses:
        '200':
          $ref: '#/components/responses/ProductResponse'
    put:
      parameters:
        - $ref: '#/components/parameters/ProductID'
      requestBody:
        $ref: '#/components/requestBodies/ProductRequest'
      responses:
        '200':
          $ref: '#/components/responses/ProductResponse'
    delete:
      parameters:
        - $ref: '#/components/parameters/ProductID'
components:
  parameters:
    ProductID:
      name: product_id
      in: path
      required: true
      schema:
        type: string
  requestBodies:
    ProductRequest:
      required: true
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Product'
  responses:
    ProductResponse:
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Product'
    ProductArrayResponse:
      content:
        application/json:
          schema:
            type: array
            items:
              $ref: '#/components/schemas/Product'
  schemas:
    Product:
      properties:
        product_id:
          type: string
          readOnly: true
        name:
          type: string
        description:
          type: string
        image_url:
          type: string
        price:
          type: integer
```

An example resource for the above schema:
resource "ecommerce_product" "the_cube" { name = "The Cube" description = "A large stainless steel cube" image_url = "https://images.squarespace-cdn.com/content/v1/5488f21fe4b055c9cb360909/1427135193771-IM8B5KC0OUVQCRG7YCQD/2013-10+Steel+Cubes+-2.jpg?format=750w" price = 1000}Your API doesn’t have to be perfectly regular, but the more CRUD-like it is, the less configuration it needs.
If needed, you can conform your API to have more CRUD-like behavior using the Stainless config, OpenAPI spec, or custom code.
Acceptance tests against real infrastructure
Because not all server behavior is captured in the OpenAPI spec, we recommend testing your Terraform provider against real infrastructure to make sure that it behaves correctly.
You can write acceptance tests using HashiCorp’s built-in testing framework. They execute against your provider and can run against your real infrastructure. You’ll want a suitable demo or dev environment where you can create and delete resources safely.
To ensure that data correctly round-trips to your server and back, we recommend writing at least one acceptance test per resource and data source.
State migrations for breaking changes
Unlike our other SDKs, Terraform providers have a concept of “state”. Every time a user creates or edits resources, the state of those resources is saved into a file with the extension `*.tfstate` (or saved to the cloud). This state is used in future runs to preserve mappings of IDs and version information and to determine which resources already exist.
This means that if you make a breaking change to your API, you need to write a state migration to automatically upgrade the user’s state to the new version.
A state migration is a Go function within the provider codebase that describes how to go from one schema version to the next for a given resource, much like a database migration but for Terraform state.
Breaking changes in Terraform are similar to breaking changes in your API or other SDKs. Some examples of breaking changes include:
- Renaming a Terraform resource
- Renaming an attribute of the resource
- Removing an attribute
See HashiCorp’s documentation for the full list and guidance on how to version properly.
To write state migrations, implement deprecations, and do other customization, edit your Terraform repository using Stainless’s custom code feature — we’ll preserve the changes.
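For orientation, a state migration in a terraform-plugin-framework provider is a `StateUpgrader` keyed by the prior schema version. The following is a minimal sketch only, assuming a hypothetical `ProductResource` whose `price` attribute changed from a string to an integer; the actual resource types and attribute names in your generated provider will differ.

```go
package product

import (
  "context"
  "strconv"

  "github.com/hashicorp/terraform-plugin-framework/resource"
  "github.com/hashicorp/terraform-plugin-framework/resource/schema"
  "github.com/hashicorp/terraform-plugin-framework/types"
)

// UpgradeState tells the framework how to move state written at schema
// version 0 up to the current schema version. It runs automatically when
// Terraform reads state produced by an older provider release.
func (r *ProductResource) UpgradeState(ctx context.Context) map[int64]resource.StateUpgrader {
  return map[int64]resource.StateUpgrader{
    0: {
      // PriorSchema describes the old (version 0) state so the framework
      // can decode it into req.State for us.
      PriorSchema: &schema.Schema{
        Attributes: map[string]schema.Attribute{
          "id":    schema.StringAttribute{Computed: true},
          "price": schema.StringAttribute{Optional: true},
        },
      },
      StateUpgrader: func(ctx context.Context, req resource.UpgradeStateRequest, resp *resource.UpgradeStateResponse) {
        // Prior state shape (version 0): price was a string.
        var prior struct {
          ID    types.String `tfsdk:"id"`
          Price types.String `tfsdk:"price"`
        }
        resp.Diagnostics.Append(req.State.Get(ctx, &prior)...)
        if resp.Diagnostics.HasError() {
          return
        }

        // New state shape (version 1): price is an integer.
        price, err := strconv.ParseInt(prior.Price.ValueString(), 10, 64)
        if err != nil {
          resp.Diagnostics.AddError("invalid prior price", err.Error())
          return
        }
        upgraded := struct {
          ID    types.String `tfsdk:"id"`
          Price types.Int64  `tfsdk:"price"`
        }{ID: prior.ID, Price: types.Int64Value(price)}
        resp.Diagnostics.Append(resp.State.Set(ctx, upgraded)...)
      },
    },
  }
}
```

The upgrader map is keyed by the schema version being upgraded from, so add a new entry each time you bump the resource’s schema version.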
Configuration
To generate a Terraform provider, add the `terraform` target to your Stainless configuration file.
The Terraform provider is built on top of the Go SDK, so you’ll need to enable both targets:
```yaml
targets:
  go:
    package_name: github.com/my-company/my-sdk-go
    production_repo: my-org/my-sdk-go
  terraform:
    package_name: my-provider
    production_repo: my-org/terraform-provider-my-provider
    edition: terraform.2025-10-08
```

For a complete list of configuration options, see the Terraform target reference.
Choosing resources
You can control which Terraform resources and data sources are generated by editing the Stainless config. The `infer_all_services` option in the `terraform` target enables generation of all Terraform resources in your project.
```yaml
terraform:
  package_name: my-provider
  options:
    infer_all_services: true
```

This option is set to `true` for all new projects. If it’s `false` or absent, we don’t generate any Terraform resources or data sources by default.
To enable or disable Terraform for a specific resource, you can annotate the resource with `terraform: true` or `terraform: false`.
```yaml
resources:
  accounts:
    terraform: true
    methods:
      create: post /accounts
      edit: put /accounts/{account_id}
      get: get /accounts/{account_id}
      delete: delete /accounts/{account_id}
```

For more fine-grained control, you can specify Terraform configuration options for the resource:
```yaml
resources:
  accounts:
    terraform:
      name: my_cool_resource
      resource: true # do generate Terraform resource
      data_source: false # don't generate Terraform data sources for this resource
    methods:
      create: post /accounts
      edit: put /accounts/{account_id}
      get: get /accounts/{account_id}
      delete: delete /accounts/{account_id}
```

See the Stainless config reference for the full list of supported options.
Resource-specific configuration
In addition to setting `terraform: true`, you can provide an object to further customize a resource.
```yaml
resources:
  accounts:
    terraform:
      resource: true       # whether to enable the resource
      data_source: true    # whether to enable the data sources
      name: my_custom_name # a custom name for the resource/data source
    methods: ...
```

See the resource reference for more information.
Changing endpoint mapping and ID properties
Sometimes your API endpoints don’t fully match CRUD semantics. You can customize which endpoints have which Terraform behavior (create, read, update, delete), and how to find the ID parameter.
- `method`: specifies which CRUD operation the endpoint should be used for
- `id_property`: for create, specifies the request or response property that represents the ID
- `id_path_param`: for read, update, and delete, specifies the path param that represents the ID
See the following config where each of the default options is explicitly set:
```yaml
products:
  terraform: true
  methods:
    list:
      endpoint: get /products
      terraform:
        method: list
    create:
      endpoint: post /products
      terraform:
        id_property: product_id
        method: create
    retrieve:
      endpoint: get /products/{product_id}
      terraform:
        id_path_param: product_id
        method: read
    update:
      endpoint: put /products/{product_id}
      terraform:
        id_path_param: product_id
        method: update
    delete:
      endpoint: delete /products/{product_id}
      terraform:
        id_path_param: product_id
        method: delete
```

Custom configurability
Every “attribute” (aka property) in a Terraform resource’s schema has a “configurability” setting attached to it.
The “configurability” of an attribute determines how it can change over time:
- required: must be provided by the user at all times, cannot be null
- optional: may be provided by the user, can be null
- computed: cannot be provided by the user, is provided by the API or provider
- computed and optional: can be provided by the user, or computed by the API or provider if omitted
There isn’t quite enough information in the OpenAPI spec to specify each of these precisely, so you can add an annotation to your OpenAPI spec if we don’t infer it correctly.
By default, we infer configurability like so:
- required: marked as required in the create endpoint’s request
- optional: not marked as required
- computed: a property only defined in the response of the create or update endpoint, or marked as `readOnly` in the OpenAPI spec
To customize the configurability of a given property, specify the `x-stainless-terraform-configurability` extension property on your schema. For example:
```yaml
cardholder_name:
  type: string
  description: The cardholder's name
  x-stainless-terraform-configurability: computed_optional # required, optional, computed, computed_optional are all valid values
```

Editions
Editions allow Stainless to make improvements to SDKs that aren’t backwards-compatible. You can explicitly opt in to new editions when you’re ready. See the SDK and config editions reference for more information.
terraform.2025-10-08
- Initial edition for Terraform (used by default if no edition is specified)
Testing and validation
Section titled “Testing and validation”Address diagnostics
When you switch to the Terraform tab in Stainless Studio, you may see some diagnostics related to the resources you enabled. View our diagnostics reference for instructions on how to resolve them. You don’t have to address them all right away, but consider addressing any Error diagnostics to maximize the chance that the provider works while you test it.
View your generated provider
Once the provider has been generated, take a look at the repo to see the generated output.
You can find generated documentation and examples in the `./docs/resources` and `./docs/data-sources` folders.
You can use Terraform’s documentation preview tool to see a formatted version of each resource and confirm that they make sense.
Installation
To test your generated provider locally:
- Install Terraform on your machine.
- Clone the generated Terraform repository from GitHub, and `cd` into it.
- Run `./scripts/bootstrap` to install `go` and dependencies.
- Run `go build -o terraform-provider-<providername>` to build the binary.
- Edit (or create) your `~/.terraformrc` file and configure it to point to the directory where you put the Terraform provider. This ensures that when you try out the provider yourself, it looks locally instead of at the registry:

  ```hcl
  provider_installation {
    dev_overrides {
      "<your-org>/<providername>" = "/path/to/local/terraform/directory"
    }
    direct {}
  }
  ```

- Copy an example resource from the `./examples` directory of your generated Terraform repo to a file called `main.tf`, which should look similar to:

  ```hcl
  terraform {
    required_providers {
      <providername> = {
        source  = "<your-org>/<providername>"
        version = "~> 1.0.0"
      }
    }
  }

  provider "<providername>" {
    api_key = "<dev api key>"
  }

  # copied from examples/resources
  resource "<providername>_<resourcename>" "example" {
    name        = "The Cube"
    description = "A large stainless steel cube"
    image_url   = "https://images.squarespace-cdn.com/content/v1/5488f21fe4b055c9cb360909/1427135193771-IM8B5KC0OUVQCRG7YCQD/2013-10+Steel+Cubes+-2.jpg?format=750w"
    price       = 1000
  }
  ```

- Run `terraform apply` to see the resource get created.
Writing acceptance tests
Now that you have tested some resources manually, write automated tests that cover all relevant cases and ensure that the provider is ready to ship.
Follow HashiCorp’s acceptance test guide to run a test against the resource. A test typically looks like:
```go
package example

// example.Widget represents a concrete Go type that represents an API resource
func TestAccExampleWidget_basic(t *testing.T) {
  var widgetBefore, widgetAfter example.Widget
  rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleResourceDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccExampleResource(rName),
        ConfigStateChecks: []statecheck.StateCheck{
          stateCheckExampleResourceExists("example_widget.foo", &widgetBefore),
        },
      },
      {
        Config: testAccExampleResource_removedPolicy(rName),
        ConfigStateChecks: []statecheck.StateCheck{
          stateCheckExampleResourceExists("example_widget.foo", &widgetAfter),
        },
      },
    },
  })
}
```

Our recommended approach is to have at least:
- A basic test with only required parameters
- A more complete or complex test that covers optional parameters
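As a rough sketch of what those two configurations can look like, here is what the config helpers referenced by the test above might contain, assuming a hypothetical `example_widget` resource with a required `name` attribute and an optional `description` attribute (the real attributes come from your resource’s schema):

```go
import "fmt"

// testAccExampleResource returns a configuration that sets only the
// required attributes of the hypothetical example_widget resource.
func testAccExampleResource(rName string) string {
  return fmt.Sprintf(`
resource "example_widget" "foo" {
  name = %[1]q
}
`, rName)
}

// testAccExampleResource_full also sets the optional attributes, so the
// second test exercises the rest of the schema.
func testAccExampleResource_full(rName string) string {
  return fmt.Sprintf(`
resource "example_widget" "foo" {
  name        = %[1]q
  description = "created by an acceptance test"
}
`, rName)
}
```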
We also recommend implementing some test sweepers to clear out straggling entities on the server that don’t get deleted when a test fails.
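A sweeper is a small cleanup function registered with the test framework. The sketch below shows one way to wire it up using the sweeper helpers in terraform-plugin-testing’s `helper/resource` package; it assumes a hypothetical `sharedClientForRegion` helper with `ListWidgets`/`DeleteWidget` methods, and that test resources are named with a recognizable prefix.

```go
package example

import (
  "context"
  "strings"
  "testing"

  "github.com/hashicorp/terraform-plugin-testing/helper/resource"
)

func TestMain(m *testing.M) {
  // resource.TestMain wires up the -sweep flag, e.g.:
  //   go test ./... -sweep=<region>
  resource.TestMain(m)
}

func init() {
  resource.AddTestSweepers("example_widget", &resource.Sweeper{
    Name: "example_widget",
    F: func(region string) error {
      ctx := context.Background()
      client := sharedClientForRegion(region) // hypothetical helper that builds an API client

      widgets, err := client.ListWidgets(ctx) // hypothetical API call
      if err != nil {
        return err
      }
      for _, w := range widgets {
        // Only delete entities created by acceptance tests.
        if strings.HasPrefix(w.Name, "tf-acc-test-") {
          if err := client.DeleteWidget(ctx, w.ID); err != nil {
            return err
          }
        }
      }
      return nil
    },
  })
}
```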
Run the test
Because acceptance tests run against real infrastructure and can be slow, `go test` will not run them by default (it just runs unit tests). To run the acceptance tests, set the `TF_ACC` environment variable before running `go test`:
```sh
TF_ACC=1 go test ./internal/services/my_service -count 1
```

Set up acceptance tests in CI
You can run your acceptance tests in CI to gain ongoing confidence in the correctness of the provider.
Sometimes the tests are too slow to run on every commit, but you could run them on a daily schedule, or trigger them manually via GitHub’s Actions tab. See the GitHub Actions workflow on HashiCorp’s site.
It’s highly recommended that you ensure the tests all pass before merging a release PR.
Release a beta
Once your resources are all tested and you’ve resolved all Error diagnostics, you can release a beta Terraform provider.
To issue the beta release, set up a production repo and follow the instructions in the Publishing section below.
Publishing to Terraform Registry
Publish your Terraform provider to the Terraform Registry for distribution. If you have more custom requirements, you can also refer to the official HashiCorp publishing documentation.
Connect Terraform Registry to your production repo
- Log in or sign up at the Terraform Registry.
- Select Publish Provider.
- Follow the instructions to install the HashiCorp GitHub App.
- On the provider publishing page, select the connected user / organization.
- Select your production repo and follow the instructions to install the webhooks required to publish the provider.
Generate a PGP key
- Install GnuPG.
- In a terminal, run `gpg --full-generate-key`. Follow the prompts to select an RSA/DSA key, and input your organization’s information.
- Run `gpg --list-keys --keyid-format short`:

  ```sh
  % gpg --list-keys --keyid-format short
  [keyboxd]
  ---------
  pub   rsa3072/69568313 2025-03-22 [SC] [expires: 2025-03-23]
        1577714D2E1545CC87291B3A1703FF1469568313
  uid         [ultimate] Your Organization <you@yourorg.com>
  sub   rsa3072/26D93FE8 2025-03-22 [E] [expires: 2025-03-23]
  ```

- Export the key in ASCII-armored format. In the above example, the short key ID is `69568313`.

  ```sh
  $ gpg --export-secret-keys --armor 69568313
  -----BEGIN PGP PRIVATE KEY BLOCK-----
  lQWGBGffFm0BDADEUBBMBLQ2qRg8YPKBln9VKolAcFiq6CwehhnHw1B8xr9BiVnY
  0ekDmOkAWIhU1zG+AFULhsvWowld55pMglPW1Mptpfb2AqJdbglr2iStUjSn+Jn1
  riRKaP3YQErtV0/dBRFcM8F4LdGK8NhSYnrEyGb1eFCBawHxU/P09+Otcxs+9mvy
  jSFiWBrrK45H62sDQq2HYrjLXcytD0WpqLAxgxLN6cGwtwmfd6TAMQeJ248Qa97M
  ...
  CriOVP9mAUyhez1k6tuVi5U0mDp9KeMj3zJcsTjTEvqe+8wQQoRgtqyY60yXkJJ6
  HvapRVLjxiU8/x+VltFn7sxm+LDoSEc8hcsIrT19uIyA4HOEo9etJI/VT31Tt39S
  XW4n8ELhMIdxDq0jZp/93ONsPBmjZ0L/qbDH6ytT0FSEuSt05CY7
  =VDjv
  -----END PGP PRIVATE KEY BLOCK-----
  ```

- Publish your PGP public key:

  ```sh
  $ gpg --keyserver keyserver.ubuntu.com --send-keys 69568313
  ```

  If you get the following error:

  ```
  gpg: sending key XXXXXX to hkp://keyserver.ubuntu.com
  gpg: keyserver send failed: No route to host
  ```

  then your computer might be trying to use IPv6 and failing. Run the following to get keyserver.ubuntu.com’s IPv4 addresses:

  ```sh
  $ host keyserver.ubuntu.com
  keyserver.ubuntu.com has address XXX.XXX.XXX.XX
  keyserver.ubuntu.com has address XXX.XXX.XXX.XX
  keyserver.ubuntu.com has IPv6 address XXXX:XX:XXXX:XXXX::XXX
  keyserver.ubuntu.com has IPv6 address XXXX:XX:XXXX:XXXX::XXX
  ```

  Then rerun the `gpg` command with one of the IPv4 addresses:

  ```sh
  $ gpg --keyserver XXX.XXX.XXX.XX --send-keys 69568313
  ```
Add secrets to your production repo
- In the production repo, navigate to Secrets and variables > Actions > New repository secret. The URL should look like `https://github.com/<org>/<repo>/settings/secrets/actions/new`.
- Add the following secrets:
  - `GPG_SIGNING_KEY`: your PGP private key. It should be the entire output of `gpg --export-secret-keys --armor <keyid>`.
  - `GPG_SIGNING_PASSWORD`: the passphrase you entered when creating the keypair.
Update your Stainless config
- Update the Stainless config and save.

  ```yaml
  targets:
    terraform:
      publish:
        hashicorp_registry: true
  ```
Merge your release PR and see it appear in the Registry
Merge your release PR when you’re ready. This creates a tag, uses goreleaser to build artifacts, and attaches them to the release. HashiCorp will then import the release into their registry. You might need to wait a few minutes for it to show up on your provider page.