Building Shift Advisory’s website: from edge services to a deterministic serving node
A small DNS limitation unexpectedly turned into a broader architectural exercise around ownership, operational complexity, routing boundaries, and deterministic infrastructure design.
As part of launching Shift Advisory, I initially chose a lightweight static hosting setup built on European infrastructure services.
At first glance, this looked like the simplest and most operationally efficient approach:
- no virtual machines
- no web server maintenance
- edge distribution handled by the platform
- static assets served directly from object storage
However, a seemingly small requirement changed the architecture completely:
I wanted the website to live on https://shiftadvisory.nl, not https://www.shiftadvisory.nl
That requirement exposed a deeper infrastructure reality.
The apex domain problem
Standard DNS does not allow a CNAME record at the zone apex (shiftadvisory.nl), because a CNAME cannot coexist with the SOA and NS records that must exist at the root of the zone.
The edge service exposed a hostname-based endpoint, which works naturally for:
www.shiftadvisory.nl → CNAME → edge-endpoint
but not for the apex domain.
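Concretely, the constraint looks like this in a DNS zone: a subdomain can alias the edge endpoint, but the apex must carry an address record. The endpoint hostname and IP below are placeholders, not the actual provider values.

```zone
; a subdomain can alias the edge service...
www.shiftadvisory.nl.   3600  IN  CNAME  edge-endpoint.example-provider.net.

; ...but the apex cannot: a CNAME may not coexist with the SOA/NS
; records required at the zone root, so an address record is needed
shiftadvisory.nl.       3600  IN  A      203.0.113.10
```

This is exactly why an edge service that only hands you a hostname works naturally for www but leaves the apex unsolved.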
One common solution would have been introducing another external layer such as Cloudflare, which supports apex flattening and proxying root domains through its global edge network.
Technically, this would have solved the problem immediately.
But it also would have changed the architecture in a more fundamental way.
The website would no longer be served directly from infrastructure I explicitly control. Instead, the root domain would become dependent on an additional intermediary layer responsible for DNS resolution, edge routing, TLS termination, and request handling behavior.
I deliberately chose not to go in that direction.
Part of the positioning of Shift Advisory is thoughtful infrastructure ownership and stronger alignment with European sovereignty principles where they make sense operationally and commercially.
In this case, introducing another globally distributed intermediary layer would have solved the DNS limitation, but it would also have added an operational dependency and shifted important routing and TLS behavior outside the infrastructure boundary I wanted to keep explicit.
The decision was less about rejecting a particular vendor, and more about choosing the architecture that best matched the operational characteristics I wanted from the system.
The architecture gradually simplified
At that point, several options were possible:
- additional DNS/CDN layers
- redirect infrastructure
- load balancers
- proxy nodes
One possible evolution of the architecture would have stacked several of these layers in front of the origin.
Each layer solved a specific technical concern:
- apex-domain routing
- TLS handling
- caching
- proxying
- edge distribution
But each additional layer also introduced:
- another operational boundary
- another trust dependency
- another control plane
- another source of routing and caching behavior
- another place where debugging and ownership become less explicit
The infrastructure was becoming more capable, but also harder to reason about.
After working through the constraints carefully, the solution became unexpectedly simpler.
Instead of adding more infrastructure layers, I removed them.
The final architecture became a single serving node: the website assets are synchronized from object storage onto the instance and served locally by nginx.
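Serving locally also dissolves the apex problem: the root domain simply points an A record at the node, and the web server handles both hostnames directly. A minimal sketch of an nginx configuration for this, with document root and certificate paths as illustrative assumptions rather than the actual setup:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name shiftadvisory.nl;

    # Static assets synced from object storage onto local disk
    root /var/www/shiftadvisory;
    index index.html;

    # TLS terminated on the node itself; paths are placeholders
    ssl_certificate     /etc/letsencrypt/live/shiftadvisory.nl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shiftadvisory.nl/privkey.pem;
}

# Redirect www to the apex so the canonical origin stays explicit
server {
    listen 443 ssl;
    server_name www.shiftadvisory.nl;
    ssl_certificate     /etc/letsencrypt/live/shiftadvisory.nl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shiftadvisory.nl/privkey.pem;
    return 301 https://shiftadvisory.nl$request_uri;
}
```

Every routing and TLS decision now lives in one file on one node, rather than being spread across provider control planes.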
This reduced:
- external runtime dependencies
- edge-layer complexity
- cross-provider routing behavior
- operational indirection
- hidden infrastructure behavior
while increasing:
- determinism
- explicit ownership
- debuggability
- infrastructure clarity
- operational predictability
Operationally, the node behaves more like a deterministic appliance than a traditional mutable server:
- explicit ownership boundaries
- minimal runtime dependencies
- predictable behavior
- low operational overhead
- replaceable infrastructure nodes
- no unnecessary control-plane coupling
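One way this appliance-like behavior can be expressed is a periodic one-shot sync driven by a timer, so the node continuously converges to the contents of the bucket. This is a sketch assuming an S3-compatible object store and the aws CLI; the bucket name, endpoint, and paths are hypothetical placeholders.

```ini
# /etc/systemd/system/site-sync.service
[Unit]
Description=Sync static site from object storage (illustrative)

[Service]
Type=oneshot
# --delete keeps the local tree identical to the bucket contents
ExecStart=/usr/bin/aws s3 sync s3://shiftadvisory-site /var/www/shiftadvisory \
    --endpoint-url https://objectstore.example.eu --delete

# /etc/systemd/system/site-sync.timer
[Unit]
Description=Run site sync on a fixed schedule

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

Because the node holds no unique state, it can be rebuilt from scratch at any time: provision an instance, install nginx and the sync unit, and the next timer run restores the site.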
The interesting part is not nginx itself. The interesting part is how a DNS limitation forced a broader architectural discussion around:
- ownership
- operational complexity
- abstraction layers
- infrastructure determinism
- sovereignty-aware infrastructure decisions
Reducing complexity is also architectural work
Modern infrastructure discussions often assume that more layers automatically imply more maturity.
In practice, every additional abstraction:
- introduces operational behavior
- changes ownership boundaries
- affects failure modes
- shifts trust assumptions
Sometimes the right architecture is not the most “cloud-native” looking one. Sometimes the right architecture is the one that makes system behavior easier to reason about.
Why the website itself is statically generated
One more deliberate choice in this setup is that the website itself is statically generated rather than CMS-driven. That decision is also connected to:
- determinism
- operational surface area
- ownership boundaries
- AI-era content architectures
I will cover that in the next post.