Aurora PostgreSQL Express: Using AWS CDK and Drizzle to provision ephemeral serverless development databases
Deploy Aurora PostgreSQL Express with AWS CDK using an AWS SDK custom resource and integrate Drizzle Kit for schema and data workflows in ephemeral stacks.
Introduction — what this setup delivers and why it matters
Aurora PostgreSQL Express is a lightweight, serverless flavor of Amazon Aurora designed for rapid, on-demand database creation; pairing it with AWS CDK and Drizzle provides a pragmatic path to spin up short-lived, developer-friendly databases that integrate with schema tooling. This article walks through a working approach that uses an AWS SDK-driven CDK custom resource to create and tear down Aurora clusters with express configuration, and shows how Drizzle Kit and Drizzle Studio can be used to push schema changes and seed data using IAM-authenticated connections. The pattern is intended for development and experimentation, offering a template for ephemeral environments such as per-branch feature stacks or CI jobs.
Why choose Aurora PostgreSQL Express for ephemeral environments
Aurora PostgreSQL Express targets scenarios where provisioning time and operational overhead matter: clusters come up quickly, scale with serverless v2 settings, and are billed in capacity units rather than fixed instances. For teams building feature branches, demos, or test environments, the appeal is reduced friction and faster feedback loops compared to long-lived shared databases. Using Infrastructure-as-Code (IaC) with CDK keeps provisioning reproducible while Drizzle handles schema migrations and developer-facing Studio inspection.
Using an AWS SDK-based CDK custom resource to provision Express clusters
Because first-party CloudFormation/CDK constructs for the express configuration are not always available, this approach relies on CDK’s AwsCustomResource to call the RDS API directly (CreateDBCluster, CreateDBInstance, DescribeDBClusters, DeleteDBInstance, DeleteDBCluster). The custom resource pattern lets you trigger the exact AWS SDK calls needed to opt into express-mode creation by passing WithExpressConfiguration: true and a ServerlessV2ScalingConfiguration (MinCapacity / MaxCapacity). The custom resource also models lifecycle actions so the stack can perform cleanup on delete, deleting the instance and then the cluster explicitly, which is essential for ephemeral stacks.
Key implementation notes:
- The custom resource creates the cluster with Engine set to aurora-postgresql and WithExpressConfiguration set to true. It also sends ServerlessV2ScalingConfiguration to set MinCapacity and MaxCapacity (ACU limits).
- A second AwsCustomResource performs describe calls to read the cluster metadata (for example to capture DBClusterArn).
- A third resource issues DeleteDBInstance and DeleteDBCluster calls on stack deletion so the full teardown sequence is visible and controlled.
- To make identifiers safe for AWS naming constraints, the implementation normalizes and trims strings before passing them as DBClusterIdentifier and DBInstanceIdentifier.
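The normalization step above can be sketched as a small pure helper. The exact rules and the function name are illustrative, but they reflect the documented RDS naming constraints (identifiers must start with a letter, contain only letters, digits, and hyphens, not end in a hyphen, and stay within the 63-character limit for cluster identifiers):

```typescript
// Hypothetical helper: normalize an arbitrary string (e.g. a branch name)
// into a value safe to pass as DBClusterIdentifier / DBInstanceIdentifier.
function toDbIdentifier(raw: string, maxLength = 63): string {
  let id = raw
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, "-") // replace disallowed characters with hyphens
    .replace(/-{2,}/g, "-");     // collapse repeated hyphens
  if (!/^[a-z]/.test(id)) id = `db-${id}`; // must begin with a letter
  return id.slice(0, maxLength).replace(/-+$/, ""); // trim length and trailing hyphens
}
```

A branch name like `feature/My_Branch#42` becomes `feature-my-branch-42`, which is stable enough to derive both the cluster and instance identifiers from it.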
This pattern leverages AwsCustomResourcePolicy to grant only the necessary permissions (rds:CreateDBCluster, rds:CreateDBInstance, rds:DeleteDBCluster, rds:DeleteDBInstance, rds:DescribeDBClusters, etc.), plus a small set of support actions like ec2:DescribeAvailabilityZones and iam:CreateServiceLinkedRole when required by RDS.
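The create/delete side of the pattern can be sketched with CDK's AwsCustomResource. This is a minimal illustration, not the full implementation: the WithExpressConfiguration parameter is taken from the flow described above, the scaling values and error-code patterns are placeholders, and a production version would scope the policy more tightly than ANY_RESOURCE:

```typescript
import { custom_resources as cr } from "aws-cdk-lib";
import { Construct } from "constructs";

// Sketch: create an express-mode cluster on deploy, delete it on stack removal.
function expressCluster(scope: Construct, clusterId: string) {
  return new cr.AwsCustomResource(scope, "ExpressCluster", {
    onCreate: {
      service: "RDS",
      action: "CreateDBCluster",
      parameters: {
        DBClusterIdentifier: clusterId,
        Engine: "aurora-postgresql",
        WithExpressConfiguration: true, // opt into express-mode creation
        ServerlessV2ScalingConfiguration: { MinCapacity: 0, MaxCapacity: 4 }, // ACU limits (illustrative)
      },
      physicalResourceId: cr.PhysicalResourceId.of(clusterId),
    },
    onDelete: {
      service: "RDS",
      action: "DeleteDBCluster",
      parameters: {
        DBClusterIdentifier: clusterId,
        SkipFinalSnapshot: true, // acceptable for short-lived environments
      },
      // Tolerate not-found / invalid-state races during teardown (illustrative codes).
      ignoreErrorCodesMatching: "DBClusterNotFoundFault|InvalidDBClusterStateFault",
    },
    policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
      resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
    }),
  });
}
```

The describe and instance-level calls described above would be additional AwsCustomResource instances wired with dependencies so they run in the right order.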
Practical deployment observations and service constraints
When creating Aurora PostgreSQL Express in this configuration you should expect a few notable behaviors:
- The cluster is created without being associated with a customer VPC by this flow. Instead, express clusters may use a managed networking configuration that enables a connectivity gateway.
- An internet access gateway is enabled to allow connectivity through the RDS gateway; because of this gateway model, standard VPC security groups are not used in the same way as for VPC-attached clusters.
- Gateway-based access requires IAM authentication for client connections. Rather than using a fixed password, clients request a short-lived auth token from the RDS signer API and use it as the connection password.
- Encryption at rest is enabled using an AWS/RDS-managed KMS key by default; the express create flow does not present a customer-managed KMS key selection at create time in this pattern.
- The Data API is disabled by default at cluster creation; it can be enabled or configured afterward, but there are additional authentication considerations documented by AWS. In practice, enabling and using the Data API with this express setup may require extra configuration and testing.
These aspects influence how you architect access, observability, and compliance for ephemeral databases. For example, if your organization mandates CMKs for sensitive data, the inability to pick a customer-managed key during initial creation is a constraint to plan around.
How IAM-based authentication and the RDS signer integrate with client tooling
Because the express cluster leverages gateway access that requires IAM authentication, client tooling must generate RDS auth tokens. The @aws-sdk/rds-signer package is a lightweight way to generate the AWS-signed authentication token for PostgreSQL connections. The general flow is:
- Client code (or CLI tooling) constructs an RDS Signer instance with region, hostname, port, and username.
- The signer returns a short-lived token which is then used as the password for the PostgreSQL client connection.
- Connections must be made with SSL enabled to satisfy RDS gateway requirements.
This token-based approach works well with ephemeral credentials in CI/CD, but it changes how you handle connection pooling and long-lived connections: tokens expire and must be refreshed for long-running processes.
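Because tokens expire (they are valid for roughly 15 minutes), long-running processes need a refresh strategy rather than a single cached credential. One way to sketch this is a small cache that takes the minting function as a parameter; in real use the minting function would wrap the signer (e.g. `const signer = new Signer({ region, hostname, port, username }); const mint = () => signer.getAuthToken();` from @aws-sdk/rds-signer). The refresh margin below is an assumption, not an AWS-mandated value:

```typescript
// Generic token cache: re-mints the token once it is older than ttlMs.
// The clock is injectable so the behavior is easy to test.
function makeTokenCache(
  mint: () => Promise<string>,
  ttlMs = 14 * 60 * 1000,          // refresh a minute before the ~15-minute expiry
  now: () => number = Date.now,
) {
  let token: string | undefined;
  let mintedAt = 0;
  return async (): Promise<string> => {
    if (token === undefined || now() - mintedAt >= ttlMs) {
      token = await mint();
      mintedAt = now();
    }
    return token;
  };
}
```

The returned function can then be supplied wherever the Postgres driver accepts a dynamic password, so each new connection gets a fresh token without the application tracking expiry itself.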
Drizzle integration: schema management, Studio inspection, and seeding
Drizzle Kit and Drizzle Studio can be used with an IAM-authenticated Aurora PostgreSQL Express cluster when the RDS signer token is supplied as the password. In the example setup:
- Environment variables (hostname, AWS_REGION) are provided via Varlock, a secrets/environment helper used in the repository.
- The drizzle-kit configuration uses defineConfig to provide a postgresql dialect, schema file location, and dbCredentials object that includes host, port (5432), user (postgres), password (auth token), database name (postgres), and ssl: true.
- Drizzle schema code organizes objects using pgSchema and defines a simple schema (for example, schema dummy with table dummy_table having id and name).
- Drizzle Kit push applies schema changes; Drizzle Studio can connect to the live database to show schema and rows.
- A short Node.js seeding script uses drizzle-orm/node-postgres combined with @aws-sdk/rds-signer to generate a password at runtime, then connects via Drizzle to delete any existing rows and insert seed data (three rows in the example). The script logs inserts so you can confirm the operation.
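The seeding flow described above can be sketched as follows. This is a hedged reconstruction, not the repository's exact script: the schema mirrors the `dummy` / `dummy_table` example, the environment variable names are assumptions (injected via Varlock in the original setup), and the seed values are placeholders:

```typescript
import { Signer } from "@aws-sdk/rds-signer";
import { drizzle } from "drizzle-orm/node-postgres";
import { pgSchema, serial, text } from "drizzle-orm/pg-core";
import { Pool } from "pg";

// Schema mirroring the example: schema "dummy" with table "dummy_table".
const dummy = pgSchema("dummy");
const dummyTable = dummy.table("dummy_table", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
});

async function seed() {
  const hostname = process.env.DB_HOST!; // assumed variable name, injected by tooling
  const signer = new Signer({
    region: process.env.AWS_REGION!,
    hostname,
    port: 5432,
    username: "postgres",
  });
  const pool = new Pool({
    host: hostname,
    port: 5432,
    user: "postgres",
    database: "postgres",
    password: () => signer.getAuthToken(), // fresh IAM token at connect time
    ssl: true,                             // TLS is required by the gateway
  });
  const db = drizzle(pool);
  await db.delete(dummyTable); // clear existing rows before seeding
  const rows = await db
    .insert(dummyTable)
    .values([{ name: "alpha" }, { name: "beta" }, { name: "gamma" }])
    .returning();
  console.log("inserted", rows); // log so the operation can be confirmed
  await pool.end();
}

seed().catch((err) => { console.error(err); process.exit(1); });
```

Note that node-postgres accepts a function for `password`, which is what makes the runtime-generated token work without pre-computing it.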
This combination demonstrates that modern ORMs and migration tools can work with IAM-authenticated serverless clusters, provided the tooling can accept a runtime-generated token for authentication.
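The drizzle-kit configuration described above might look like the following sketch. The schema path and environment variable names are assumptions; the credential fields match drizzle-kit's `dbCredentials` shape for the postgresql dialect:

```typescript
// drizzle.config.ts (sketch): the RDS signer token is minted before
// drizzle-kit runs and passed through the environment.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/db/schema.ts", // illustrative path
  dbCredentials: {
    host: process.env.DB_HOST!,
    port: 5432,
    user: "postgres",
    password: process.env.DB_AUTH_TOKEN!, // RDS signer token, not a static password
    database: "postgres",
    ssl: true, // TLS required by the gateway
  },
});
```

With this in place, `drizzle-kit push` and `drizzle-kit studio` both authenticate with the short-lived token, as long as the wrapper that launches them mints the token first.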
Required packages and commands used during testing
The setup uses a handful of npm packages and a small command surface. The core runtime and tooling packages include:
- @aws-sdk/rds-signer — to generate IAM auth tokens for PostgreSQL connections.
- drizzle-orm and drizzle-kit — schema/ORM and migration/studio tooling.
- pg and @types/pg — Postgres driver (and types for TypeScript).
- varlock — lightweight environment/secrets manager used here in development.
The repository demonstrated usage with a vp wrapper (Vite+) so commands were expressed like:
- vp add @aws-sdk/rds-signer drizzle-orm pg varlock
- vp add -D drizzle-kit @types/pg
To apply schema and run Studio or seed scripts, the example runs varlock to inject environment variables and then executes drizzle-kit commands. For example, to push the schema and open Studio, the author used wrapper commands that ultimately run drizzle-kit push and drizzle-kit studio with the configuration that consumes the RDS signer token.
If you adopt the pattern, translate the wrapper commands to your package manager (npm, pnpm, yarn) and integrate token generation into your CI steps or local developer scripts.
Teardown behavior and ephemeral environments
One of the main advantages of the custom resource approach is deterministic teardown. The CDK custom resource sequence includes explicit delete operations:
- The stack deletion triggers a DeleteDBInstance call for the underlying instance identifier (e.g., cluster-id-instance-1), ignoring instance-not-found or invalid-state errors.
- After instance deletion, the stack issues a DeleteDBCluster call, optionally skipping the final snapshot and deleting automated backups for short-lived environments.
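The instance-deletion step can be sketched as its own AwsCustomResource with only an onDelete call. The `-instance-1` suffix follows the naming mentioned above; the tolerated error codes are illustrative names for the "not found / invalid state" cases:

```typescript
import { custom_resources as cr } from "aws-cdk-lib";
import { Construct } from "constructs";

// Sketch: explicitly delete the underlying instance when the stack is removed.
function deleteInstanceOnTeardown(scope: Construct, clusterId: string) {
  return new cr.AwsCustomResource(scope, "DeleteExpressInstance", {
    onDelete: {
      service: "RDS",
      action: "DeleteDBInstance",
      parameters: { DBInstanceIdentifier: `${clusterId}-instance-1` },
      // Ignore "already gone" and "mid-transition" failures during teardown.
      ignoreErrorCodesMatching:
        "DBInstanceNotFoundFault|InvalidDBInstanceStateFault",
    },
    policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
      resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
    }),
  });
}
// In a full implementation the cluster-delete resource would depend on this
// one so CloudFormation removes the instance before issuing DeleteDBCluster.
```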
This explicit sequence shows progress in the AWS console and avoids leftover resources that can lead to surprise costs. That makes the pattern attractive for per-branch feature stacks, ephemeral QA environments, or short-lived demo systems where you want clean creation and removal from CI.
Security and operational considerations
Using Aurora PostgreSQL Express in this gateway/IAM mode brings security trade-offs and operational considerations:
- IAM-authenticated tokens are ephemeral and improve credential hygiene, but require tooling and app logic to fetch tokens and refresh them (for long-lived processes).
- Default managed KMS encryption may be acceptable for many dev/test use cases, but organizations with strict key management policies might need additional controls or await support for selecting a customer-managed KMS key in the create flow.
- Because the cluster can be deployed without a VPC, standard VPC network controls and security groups are not the primary access guard; instead, IAM policies and gateway controls become first-class protections.
- Auditability and logging remain important: ensure RDS and CloudTrail capture the relevant API calls (CreateDBCluster, DeleteDBCluster, etc.) and integrate with your monitoring/alerting for cost oversight and unexpected creations.
- If you intend to enable the Data API later, test its compatibility with gateway-authenticated clusters and review the Data API authentication model documented by AWS to avoid surprises.
Developer workflow and troubleshooting tips
To work smoothly with this pattern:
- Script token generation into developer tooling: wrap the RDS signer call so local dev processes and drizzle-kit/drizzle-orm scripts automatically fetch an auth token before connecting.
- Use a small helper that refreshes tokens and reconnects when running long Studio sessions or long-running services.
- Make sure SSL is enabled in client connections; the gateway requires TLS.
- When debugging connection failures, verify region, hostname, and that the token was requested for the correct hostname+port+username combination.
- If Data API is required for your use case (for serverless backends that prefer HTTP), plan to enable it after cluster creation and test its auth flow separately.
- For CI, generate tokens as part of the job using role-based credentials or another IAM principal permitted to connect; note that the signer itself makes no AWS API call, it produces a SigV4-signed token locally that RDS validates at connect time.
How this pattern fits into the broader cloud-native database tooling landscape
Aurora PostgreSQL Express, when combined with IaC and modern ORMs, points toward a more ephemeral-first development model: databases that are cheap and fast to create, managed entirely by automation, and integrated into developer tooling for schema and data. This intersects with several trends:
- Developer experience: faster, branch-specific databases reduce the friction of integration testing and feature development.
- GitOps and IaC: CDK custom resources enable automation even when first-party constructs lag new AWS features.
- Shift-left security: IAM auth and short-lived tokens reduce reliance on static credentials.
- Observable costs: ephemeral creation with explicit deletion enables clearer cost allocation per feature branch or team.
That said, enterprise production usage of express clusters may be limited until features like VPC attachment options, customer-managed KMS selection, and more mature CloudFormation/CDK constructs are available. For now, the pattern is best suited to dev/test and experimental scenarios.
When and who should adopt this approach
This approach is a good fit for:
- Platform engineers building ephemeral environments for developers or CI/CD.
- Teams using serverless application patterns who want short-lived databases close to the code lifecycle.
- Developers adopting Drizzle or similar migration tooling who need quick verification of schema changes against a live PostgreSQL-compatible endpoint.
Avoid using this exact pattern for regulated production workloads until you confirm that the KMS and network controls meet your compliance requirements, and until AWS provides first-class CDK constructs for express clusters that fit your governance model.
Operationalizing for teams: recommended next steps
If you want to adopt this pattern at scale:
- Wrap the AwsCustomResource logic into a reusable CDK construct in your platform library to standardize identifiers, permission scopes, and cleanup behavior.
- Add guardrails: pre-deploy checks, tagging conventions, and automated cost alerts to prevent runaway ephemeral resources.
- Extend your developer CLI to abstract token generation and Drizzle operations, making the experience single-command: create, migrate, seed, inspect, and teardown.
- Add tests in CI that exercise schema migrations and seeds against ephemeral clusters to catch migration regressions early.
Broader implications for database lifecycle and developer productivity
The combination of Aurora PostgreSQL Express, IaC-driven custom resources, and developer-focused tooling such as Drizzle signals a change in how teams can treat databases in the development lifecycle. When databases can be created and destroyed quickly and securely, teams can:
- Shift more testing earlier in the pipeline with realistic database-backed tests.
- Reduce shared dev database contention by giving engineers isolated environments.
- Improve confidence in migrations by running them against real, live clusters rather than emulators.
However, this shift also transfers responsibility for lifecycle management, backups, and security controls into automation and platform engineering. Teams must invest in reliable teardown, cost monitoring, and token management to fully realize the benefits.
Looking ahead, expect AWS to iteratively expand CloudFormation support and CDK constructs for Aurora express, and for ORMs and developer tools to add native patterns for IAM-signer flows. That will make the pattern easier to adopt and easier to govern.
The next phase for this pattern will likely include better first-class CDK constructs for express-mode clusters, clearer Data API behavior for gateway-attached clusters, and richer key management options; tooling like Drizzle moving from beta to stable releases will also smooth schema and seeding workflows for IAM-authenticated PostgreSQL endpoints.