From Datacentres to Cloud: How Computing Changed Through My Career

Feb 15, 2026

[Image: A modern data centre]

When I started my career, infrastructure was something you could touch. Servers sat in racks. Cables ran between switches. If something went wrong, you walked to the datacentre, reseated a card, checked a link light. The work was concrete in a way that’s hard to explain to engineers who started their careers writing Terraform.

That world is largely gone now — or at least it’s hidden behind layers of abstraction. But I think starting there gave me something that’s quietly useful: an understanding of how computing actually works, from the bottom up.

The On-Premise World

The company I started at ran its own infrastructure across multiple datacentres. This wasn’t unusual at the time — most organisations of any scale did. You owned the hardware, you owned the network, you owned the problem when either of them broke.

The datacentre floor was its own environment. Power distribution, cooling, structured cabling, raised floors. Before you ever thought about an operating system, you were thinking about rack units, power draw, and airflow. A server was a physical commitment — lead times, capital expenditure, procurement sign-off. You didn’t spin one up in thirty seconds.

Networking was physical and deliberate. VLANs were configured on managed switches. Routing happened on dedicated hardware. Firewall rules were applied to physical appliances. If you wanted to understand why traffic wasn’t flowing, you traced it through the stack — literally, sometimes with a cable tester.

VMware and the First Abstraction

Virtualisation changed things significantly. VMware, and specifically vSphere/ESXi, meant you could run multiple virtual machines on a single physical host. Hardware that had been sitting at 10% utilisation suddenly got used properly. Provisioning a new server went from a weeks-long hardware procurement process to spinning up a VM from a template in minutes.

This was infrastructure’s first encounter with the kind of on-demand provisioning that cloud would later perfect. The underlying concepts — abstracting compute away from physical hardware, snapshots, live migration (vMotion), high availability — are the same ideas that underpin everything AWS and Azure do. The cloud didn’t invent these. It industrialised them.

Managing VMware at scale taught you things that still matter. You understood CPU and memory overcommit — the fact that you could allocate more virtual resources than the host physically had, because not everything runs at full load simultaneously. You understood storage I/O contention. You understood what happened when a host failed and VMs needed to restart elsewhere. These aren’t abstract concepts when you’ve debugged them at 2am.
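The overcommit arithmetic is worth making concrete. A minimal sketch, with an entirely hypothetical host and VM fleet (the specs below are invented for illustration, not taken from any real environment):

```python
# Sketch of CPU/memory overcommit arithmetic on a single virtualisation host.
# Host specs and VM sizes are hypothetical, purely for illustration.

host_pcpus = 32        # physical cores on the host
host_ram_gb = 256      # physical RAM in GB

# Twenty VMs, each allocated 4 vCPUs and 16 GB of RAM.
vms = [{"vcpus": 4, "ram_gb": 16} for _ in range(20)]

allocated_vcpus = sum(vm["vcpus"] for vm in vms)   # 80 vCPUs promised
allocated_ram = sum(vm["ram_gb"] for vm in vms)    # 320 GB promised

cpu_overcommit = allocated_vcpus / host_pcpus      # 2.5:1
ram_overcommit = allocated_ram / host_ram_gb       # 1.25:1

print(f"CPU overcommit:    {cpu_overcommit:.2f}:1")
print(f"Memory overcommit: {ram_overcommit:.2f}:1")
```

The ratios only work because idle VMs don’t consume what they’re allocated; the 2am debugging happened when too many of them stopped being idle at once.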

The OSI Model and Why It Still Matters

Starting in an environment with physical networking gave me something I didn’t fully appreciate until later: genuine familiarity with the lower layers of the OSI model.

For those less familiar, the OSI model is a conceptual framework that describes how network communication works across seven layers — from the physical transmission of bits (Layer 1) up through data link, network, transport, session, presentation, and finally application (Layer 7). Most developers only ever think about Layer 7 — HTTP, APIs, DNS. The layers beneath are someone else’s problem.

When you’ve worked with physical infrastructure, they’re not someone else’s problem. You’ve configured Layer 2 switching, dealt with MAC address tables, debugged spanning tree issues. You’ve configured Layer 3 routing, written ACLs, understood how packets actually traverse a network. You’ve watched TCP sessions at Layer 4 with a packet capture and understood why a connection was timing out.

Does this matter in a cloud world? More than you’d think.

When you’re setting up a VPC in AWS, you’re making Layer 3 decisions — CIDR blocks, routing tables, subnet design. When you’re debugging why two services can’t talk to each other, understanding whether the problem is a security group (Layer 4), a DNS resolution failure (Layer 7), or a routing table misconfiguration (Layer 3) is the difference between a fast diagnosis and hours of guessing. When you’re configuring a service mesh, mTLS, traffic routing, and retries map directly onto transport- and session-layer ideas.
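That layer-by-layer triage translates directly into code. A rough sketch using Python's standard socket module; the host and port are placeholders, and a real diagnosis would also involve route tables, security groups, and packet captures:

```python
import socket

def diagnose(host: str, port: int, timeout: float = 3.0) -> str:
    """Rough layer-by-layer triage for 'service A can't reach service B'.

    Checks Layer 7 name resolution first, then a Layer 3/4 TCP connect.
    Illustrative only, not a real diagnostic tool.
    """
    # Layer 7 concern first: does the name resolve at all?
    try:
        addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    except socket.gaierror:
        return "DNS failure (Layer 7): check resolvers and records"

    # Layers 3/4: can we actually open a TCP connection to that address?
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return f"TCP connect to {addr}:{port} succeeded"
    except ConnectionRefusedError:
        return "Reachable but refused (Layer 4): service down or wrong port"
    except OSError:
        return "No response (Layer 3/4): suspect routing, firewall, or security group"

print(diagnose("127.0.0.1", 9))  # port 9 is almost always closed locally
```

The value isn’t the script; it’s the habit of asking “which layer failed?” before touching anything.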

Cloud abstracts the physical layers away. It doesn’t make them irrelevant. It just means you’re configuring them through an API rather than a CLI on a switch.
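To make that concrete: the same Layer 3 subnet design can be sketched with Python's standard ipaddress module. The address plan below is invented for illustration:

```python
import ipaddress

# Hypothetical VPC address plan -- the CIDR blocks here are made up.
vpc = ipaddress.ip_network("10.20.0.0/16")

# Carve the /16 into /20 subnets (2**4 = 16 of them).
subnets = list(vpc.subnets(new_prefix=20))
print(f"{vpc} yields {len(subnets)} /20 subnets, "
      f"e.g. {subnets[0]} and {subnets[1]}")

# The same containment and overlap checks a routing table
# or a VPC peering design depends on.
app_subnet = ipaddress.ip_network("10.20.16.0/20")
print(app_subnet.subnet_of(vpc))    # True

peer_vpc = ipaddress.ip_network("10.20.0.0/24")
print(peer_vpc.overlaps(vpc))       # True -- this peering would clash
```

Whether those checks happen in your head, on a switch CLI, or in a console form, it’s the same subnet maths underneath.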

The Move to Cloud

The shift wasn’t sudden. For most organisations it came in waves.

First came the low-hanging fruit — dev and test environments that didn’t need to be on expensive on-premise hardware. Why maintain a VMware cluster for ephemeral workloads when you could spin up EC2 instances and pay only for what you used? The economics were obvious.

Then came the workloads that benefited from managed services. Why run your own database server, patch it, back it up, and worry about its storage, when RDS would do all of that for you? Why manage your own message queue infrastructure when SQS existed? The cloud providers weren’t just offering compute — they were offering entire operational functions as a service.

The harder conversations were around the workloads that genuinely couldn’t move easily. Legacy applications whose architecture assumed fixed IP addresses and persistent local storage. Regulatory requirements that mandated data residency. Applications tightly coupled to on-premise systems. These often remain on-premise today, which is why hybrid is the honest answer for most large organisations rather than “fully cloud.”

Containers: Another Layer of Abstraction

If virtualisation abstracted compute away from physical hardware, containers abstracted applications away from the operating system. Docker made it practical to package an application and everything it needed to run into a single portable unit — no more “works on my machine” because the machine was now part of the package.

The progression is a clean one in hindsight: physical servers gave way to VMs, VMs gave way to containers. Each layer added density and portability. A physical host ran a handful of VMs; a VM could run dozens of containers.

But containers also changed how people thought about infrastructure. VMs were still largely treated like servers — long-lived, named, SSH’d into, patched in place. Containers pushed the industry toward immutability. You don’t modify a running container; you build a new image and redeploy. That shift in thinking — away from pets and toward cattle, as the analogy goes — is one of the more significant cultural changes in how infrastructure gets managed.

Kubernetes then arrived to orchestrate containers at scale, and brought its own networking model with it. Pod networking, services, ingress controllers, network policies — all of it sits on top of the same Layer 3 and Layer 4 concepts that physical networking was built on, just with more layers of abstraction between you and the wire. Understanding what was underneath still helped.

Where We Are Now: Hybrid Reality

The narrative for a while was that everything would move to the cloud. Full cloud, all the time, on-premise is dead.

The reality is more nuanced. Most established organisations are hybrid — some workloads in the cloud, some still on-premise, some spanning both. The on-premise estate is usually smaller than it was, more intentional, and often running on private cloud platforms like VMware Cloud Foundation or OpenStack rather than raw physical hardware.

What’s changed is the default. A decade ago, on-premise was the default and you needed a reason to use the cloud. Now cloud is the default and you need a reason to stay on-premise. That’s a meaningful shift, even if the end state isn’t all-or-nothing.

What Starting On-Premise Gave Me

I’m glad I started where I did. Not out of nostalgia — physical servers are not something I want to go back to managing — but because the foundation it gave me is genuinely useful.

Understanding that a VPC subnet is a Layer 3 concept, that a security group is essentially a stateful ACL, that a load balancer is terminating TCP connections and establishing new ones — these aren’t things I had to learn abstractly. I’d worked with their physical equivalents. The cloud versions made immediate sense because I already understood what problem they were solving.
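The “stateful ACL” point can be sketched in a few lines. This is a toy model, with made-up addresses from the documentation ranges: a stateless ACL needs an explicit rule for each direction, while a stateful filter remembers outbound flows and allows the matching return traffic automatically.

```python
# Toy model of stateful filtering, the behaviour a security group gives you.
# All addresses and ports are invented for illustration.

allowed_inbound = {("10.0.1.5", 443)}   # rule: inbound to web server on 443
connection_table = set()                # state: flows we've seen initiated

def outbound(src, sport, dst, dport):
    """Record an outbound flow so its reply is recognised later."""
    connection_table.add((dst, dport, src, sport))

def inbound_allowed(src, sport, dst, dport):
    """Allow if a rule matches, or if this is return traffic for a known flow."""
    if (dst, dport) in allowed_inbound:
        return True
    return (src, sport, dst, dport) in connection_table

# New inbound connection to the web server: matches the explicit rule.
print(inbound_allowed("203.0.113.9", 50000, "10.0.1.5", 443))    # True

# Reply to a connection we initiated: no inbound rule, but state allows it.
outbound("10.0.1.5", 51515, "198.51.100.7", 5432)
print(inbound_allowed("198.51.100.7", 5432, "10.0.1.5", 51515))  # True

# Unsolicited traffic with no rule and no state: dropped.
print(inbound_allowed("198.51.100.7", 5432, "10.0.1.5", 2222))   # False
```

A stateless network ACL is the same model minus the connection table, which is exactly why it needs return-traffic rules and a security group doesn’t.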

There’s a generation of engineers entering the industry now who’ve only ever worked with cloud infrastructure. They’re often excellent — the tooling is better, the feedback loops are faster, and you can learn a huge amount without ever racking a server. But I sometimes see gaps around the fundamentals: confusion about why routing isn’t working when the security group looks fine, or why two services in the same VPC can’t reach each other across subnets.

The abstractions are good. Knowing what’s underneath them is better.

The compute stack has changed beyond recognition since I started. The fundamentals haven’t.

#cloud #infrastructure #vmware #virtualisation #networking