Prompting Best Practices

The agent responds to natural language. The more clearly you express what you want, the better the infrastructure it generates. This guide covers how to get the most out of your conversations.

Focus on intent, not syntax

Describe what you want deployed and how it should behave. You don't need to spell out Terraform syntax or resource structure; the agent handles that. Your job is to express the requirements.

Instead of "Create an aws_s3_bucket resource with versioning and server_side_encryption_configuration blocks", try "Create an S3 bucket for storing application logs with versioning enabled, AES-256 encryption, and a lifecycle rule that moves objects to Glacier after 90 days."

The first tells the agent how to write the code. The second tells it what the infrastructure should do. The agent is better at translating intent into correct infrastructure than following partial syntax instructions.
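
To make the difference concrete, here is a rough sketch of the kind of configuration the intent-driven prompt tends to produce. The bucket name is illustrative, and the exact resource layout depends on the AWS provider version the agent targets (this uses the v4+ style with separate versioning, encryption, and lifecycle resources).

```hcl
resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs" # illustrative name
}

resource "aws_s3_bucket_versioning" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id
  rule {
    id     = "archive-to-glacier"
    status = "Enabled"
    filter {} # applies to all objects
    transition {
      days          = 90
      storage_class = "GLACIER"
    }
  }
}
```

Notice that nothing in the prompt dictated this structure; the intent ("versioning, AES-256, Glacier after 90 days") was enough for the agent to pick the right resources.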

Provide cloud and project context

The agent generates better infrastructure when it knows the full picture. At the start of a conversation, explain the project, the cloud provider, the regions, and any constraints that matter.

For example, "This workspace manages the backend infrastructure for a payment processing service. We run on AWS in us-east-1 and eu-west-1. All resources need to be tagged with a cost center and environment."

Context you provide early carries forward through the entire conversation. You don't need to repeat it in every message.
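
A constraint stated once, like the tagging requirement above, typically shapes everything the agent generates afterward. One way that kind of constraint might surface in code is through provider-level default tags; the values here are illustrative, not required names:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applies the workspace-wide tagging constraint to every resource,
  # so individual resources don't need to repeat it.
  default_tags {
    tags = {
      cost_center = "payments"    # illustrative value
      environment = "production"  # illustrative value
    }
  }
}
```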

Build incrementally

Complex infrastructure is easier to get right when you build it layer by layer. Start with the foundation, review the output, then ask for the next piece.

  1. "Create the VPC and subnets"
  2. "Add the security groups for the application tier"
  3. "Now add the ALB with HTTPS listeners"
  4. "Add the ECS service and task definitions"

Each step gives you a chance to review and course-correct before the agent moves on. This is more reliable than asking for an entire environment in a single message.
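
For example, step 1 might yield something as small as the sketch below (CIDR ranges and availability zone are illustrative). Reviewing a foundation this size is quick, and any correction you make here propagates into every later layer:

```hcl
# Step 1: the foundation only. Review this before asking for
# security groups, the ALB, or the ECS service.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # illustrative range
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a" # illustrative AZ
}
```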

Reference existing files

When you want the agent to work with something that already exists in the workspace, mention it by name. "Review security-groups.tf for any rules that allow 0.0.0.0/0" is more effective than "Check my security groups." You can also use @ to reference docs you've uploaded, like requirements documents or architecture plans.
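
As a hedged illustration, this is the kind of rule a review of security-groups.tf would surface. The fragment assumes a security group named aws_security_group.app exists elsewhere in the file:

```hcl
# An over-permissive ingress rule the review prompt above should flag:
# SSH open to the entire internet.
resource "aws_security_group_rule" "ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # open to the internet
  security_group_id = aws_security_group.app.id # assumed to exist
}
```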

Ask for explanations

If the agent generates something you don't understand, ask it to explain. "Why did you choose a NAT gateway instead of a NAT instance?" or "Explain the IAM policy you created." Understanding the reasoning helps you catch semantic issues that look correct on the surface but don't match your actual requirements.

This matters because AI-generated infrastructure can be syntactically valid but semantically wrong. The code compiles and deploys, but it might not do what you intended. Asking "why" is one of the best ways to catch that.

Review like you would a teammate's pull request

Treat the agent's output the way you'd treat a pull request from a colleague. Read the code, verify the logic, and check that it matches your requirements. The agent is good at generating infrastructure quickly, but you're the one who knows what "correct" means for your project.

Pay attention to security boundaries, network exposure, IAM permissions, and resource sizing. These are areas where the agent might choose reasonable defaults that don't match your specific requirements.
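
IAM permissions are a common example of a reasonable-but-wrong default. The sketch below contrasts a broad grant the agent might plausibly produce with a scoped alternative worth pushing for in review; the bucket ARN and policy name are illustrative:

```hcl
# Broad default an agent might generate when asked for "S3 access":
#   Action = "s3:*", Resource = "*"
#
# A scoped alternative to request instead -- only what the
# application actually needs:
data "aws_iam_policy_document" "app_logs_write" {
  statement {
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::example-app-logs/*"] # illustrative bucket
  }
}
```

Both versions deploy cleanly, which is exactly why this only gets caught in review.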

Know what belongs in the chat vs. elsewhere

The chat is for intent-driven, project-specific work. It's where you express what you need for this particular implementation.

Other types of guidance have better homes in the product.

  • Naming conventions, tagging standards, and security baselines belong in rulesets. Rulesets are always active in the background, so you don't need to repeat them.
  • Repeatable processes like "gather requirements, create architecture plan, write code, validate" belong in workflows. Workflows ensure the agent follows your process consistently.
  • Requirements documents, architecture plans, and design specs belong in docs. They persist across conversations and can be referenced with @.
  • Connections to external systems belong in MCP. MCP servers give the agent live context from platforms like Terraform Cloud and AWS without you needing to paste information into the chat.

This separation keeps your conversations focused on the work at hand while the product handles the guardrails, process, and context around it.

Manage long conversations

The agent has a 200k-token context window, which can fill up over a long-running conversation. If you notice the agent losing track of earlier instructions, start a new conversation and provide the key context upfront. You can have multiple conversations in a workspace, so you don't lose your previous threads.