Deployment Pipeline
A hybrid CI/CD and event-driven model provides fully automated, zero-touch deployments. This section explains how code moves from a Git repository to a globally distributed site through a pipeline that spans both GitHub Actions and AWS.
End-to-End Flow
In short: a `git push` kicks off GitHub Actions, which builds the site and syncs it to the S3 origin bucket; the resulting S3 event invokes a Lambda function that invalidates the CloudFront cache, so the updated site is served globally.
Pipeline Stages
The deployment process is deterministic and broken into three distinct stages that span the CI/CD service (GitHub Actions) and the cloud provider (AWS).
CI: Build & Preparation
A `git push` triggers the GitHub Actions workflow. The job checks out the code, builds and minifies a production stylesheet, and injects it into the HTML files, preparing them for deployment.
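As a minimal sketch of the build-and-inject step, assuming a `public/` layout, a `styles.css` source file, and a `<!-- inline-css -->` placeholder in the HTML (all illustrative, not the project's actual conventions):

```python
# build_css.py -- illustrative sketch of the CI "build & prepare" step.
import re
from pathlib import Path

PUBLIC = Path("public")

def minify(css: str) -> str:
    """Naive minification: strip comments and collapse whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop /* ... */ comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace runs
    return css.strip()

stylesheet = minify((PUBLIC / "styles.css").read_text())

# Replace the placeholder in every HTML file with the minified stylesheet.
for page in PUBLIC.glob("**/*.html"):
    html = page.read_text()
    page.write_text(html.replace("<!-- inline-css -->", f"<style>{stylesheet}</style>"))
```

After this step, the `public/` directory holds exactly the files to be served, with no further build work needed on the AWS side.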
CD: Secure Delivery
Using short-lived credentials obtained through GitHub's OIDC federation with AWS (no long-lived access keys are stored as repository secrets), the workflow syncs the prepared `public/` directory to the S3 origin bucket. The `--delete` flag removes any remote objects that no longer exist locally, so the bucket's contents exactly match the commit. This step is the handoff to AWS.
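The sync itself typically reduces to one CLI call, `aws s3 sync public/ s3://<bucket> --delete`. The boto3 sketch below unpacks the semantics of that flag; the bucket name is a placeholder, and a real sync also skips unchanged objects instead of re-uploading everything:

```python
# sync_delete.py -- illustrates `aws s3 sync --delete` semantics: after the
# sync, the bucket holds exactly the files under public/, nothing more.
from pathlib import Path
import boto3

BUCKET = "example-origin-bucket"  # placeholder, not the real bucket name
PUBLIC = Path("public")
s3 = boto3.client("s3")

# Upload every local file, remembering its key.
local_keys = set()
for path in PUBLIC.glob("**/*"):
    if path.is_file():
        key = path.relative_to(PUBLIC).as_posix()
        local_keys.add(key)
        s3.upload_file(str(path), BUCKET, key)

# The `--delete` part: remove remote objects with no local counterpart.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["Key"] not in local_keys:
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```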
Post-Delivery: Cache Invalidation
The S3 `PUT` event for `index.html` triggers an AWS Lambda function. This function then creates a CloudFront invalidation for `/*`, purging the global cache and ensuring users see the updated site.
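A minimal sketch of such a handler, assuming the distribution ID arrives via a `DISTRIBUTION_ID` environment variable (the real function's configuration may differ):

```python
# invalidate.py -- sketch of the post-delivery Lambda handler.
import os
import time
import boto3

cloudfront = boto3.client("cloudfront")

def handler(event, context):
    """Fired by the S3 PUT notification for index.html; purge the CDN cache."""
    cloudfront.create_invalidation(
        DistributionId=os.environ["DISTRIBUTION_ID"],  # assumed configuration
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            # CloudFront requires a unique CallerReference per request;
            # a timestamp is sufficient here.
            "CallerReference": str(time.time()),
        },
    )
```

Keying the S3 notification to `index.html` means the invalidation fires once per deployment rather than once per uploaded file.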
Zero-Downtime Deployments
The pipeline is designed to ensure the site remains available and consistent throughout the entire deployment process.
Atomic Object Updates
S3 writes are atomic at the object level: a user is never served a partially uploaded or corrupted file, only the old version or the new one, never an inconsistent state in between. Note that this guarantee is per object; while a sync is in flight, some objects may already be new while others are still old, which is exactly the window the post-delivery cache invalidation closes.
Reliable Cache Strategy
While a blanket `/*` invalidation is coarse, and projects that deploy very frequently can exceed CloudFront's free monthly invalidation allotment, it is the most reliable strategy for a project of this scale. Triggered by the Lambda function after a successful sync, this approach guarantees that all changes are reflected immediately, preventing broken user experiences caused by mismatched HTML, CSS, or JS asset versions.