A few times in my career, leadership looked at a well-known technology and decided that adopting it should be possible in months. CTOs, VPs of Engineering and Product, CPOs, and Heads of Engineering and Platform set an aggressive target. Then we, together, had to make it real.
Two short stories.
At one of the companies, we were facing a migration of a large SaaS application, with hundreds of thousands of users, from Azure managed services to AWS on EKS. It would be our first production deployment of Kubernetes with that product. At the same time, the platform it needed to run on was still in the design stage.
At a different company, we built a new platform on GCP's managed Kubernetes, while the previous one ran on-premises on VMware Tanzu. We needed entirely new ways to provision clusters, do disaster recovery, and deploy applications. The calendar said "months." The work said "years of habits to unlearn."
On slides, these shifts look obvious. In execution, complexity lives in the details. And the business does not stop while you figure them out.
Leadership sets the pace. Staff engineering teams shape the path. These calls often come from the top for good reasons: cost, consolidation, strategy, and customers. The timeline is declared first. Then the work starts. Our job isn't to complain. Our job is to make risk visible, reduce blast radius, propose a path that protects the business, and be blunt about dependencies and sequencing.
There is a trap in "mature on the market." Mature doesn't mean simple in your organisation. Abstractions leak. Parity of features is not parity of day‑to‑day operations. Change introduces new failure modes you have not met yet. Dips in throughput and stability during adoption are normal. Plan for the dip instead of being caught off guard by it. And yes, Kubernetes is powerful - and also complex. "It's just Kubernetes" is never just Kubernetes.
Let's keep this less technical and more real. A move from a managed database to a self-hosted one appears as a checkbox on a roadmap. Before, the vendor shouldered patches, backups, and part of the on‑call pain. After, you own upgrades, high availability, backup and restore drills, performance tuning, compliance checks, and the pager. The trade‑off is control and flexibility versus a permanent operational tax. Name the tax upfront. Be honest about who pays it and when.
Habits from the VM world don't map one-to-one into the Kubernetes world. In the VM world, you add machines and feel limits slowly. In the Kubernetes world, you meet different ceilings, quotas, and policies. You change how you deploy, carve capacity, and govern it. This isn't about YAML. It's about how your organisation works under load and under pressure.
What stretches timelines isn't usually the one hard technical problem. It's everything around it. Identity and access become a matter of "who can do what, where" across multiple teams and clouds. Networking turns into routes, DNS cutovers, certificate renewals, and the simple question of who owns the front door. Observability is not a tool choice; it involves new dashboards, new alerts, new runbooks, and rehearsing incidents before they happen. Disaster recovery only counts if you have restored something, not if you have configured a backup. Pipelines and "paved roads" matter because, without a standard way to deploy, every team will invent a unique way, and you will pay for that uniqueness in on‑call. People need time to learn. If you do not create the time, people will learn in production.
The transition is where you either blow up the organisation… or not. A dual‑cloud period is often the safe path. Sometimes you run both clouds for a while. Sometimes you place some services in one cloud and others in another to reduce scope and complexity. Start with edges and stateless pieces. Keep heavy data where it is until last. The price is a temporary period of extra cost and on‑call complexity. The benefit is safety and learning. Parallel operations then require clear ownership lines for incidents, canary rollouts that are real, rollback paths that you have actually tested, and freeze windows that everyone respects. Expect a dip, measure it, and communicate it.
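Here is what "canary rollouts that are real" can look like in practice: a slice of traffic goes to the new platform, the size of that slice is one number in configuration, and rollback means setting it to zero. Below is a minimal sketch in Go, assuming a hypothetical splitter sitting in front of the service; the environment variables, header, and port are illustrative, not taken from either migration above.

```go
package main

// Minimal traffic splitter for a dual-cloud canary. Everything here is a
// sketch: the env var names, the header, and the port are illustrative.

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"strconv"
)

func mustProxy(rawURL string) *httputil.ReverseProxy {
	u, err := url.Parse(rawURL)
	if err != nil || u.Host == "" {
		log.Fatalf("upstream URL %q is not usable: %v", rawURL, err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	oldPlatform := mustProxy(os.Getenv("OLD_PLATFORM_URL")) // today's deployment
	newPlatform := mustProxy(os.Getenv("NEW_PLATFORM_URL")) // the canary target

	// CANARY_PERCENT is the whole rollout lever: raise it gradually,
	// set it to 0 to roll back instantly.
	percent, err := strconv.Atoi(os.Getenv("CANARY_PERCENT"))
	if err != nil {
		percent = 0
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < percent {
			w.Header().Set("X-Served-By", "new-platform") // keep the canary observable
			newPlatform.ServeHTTP(w, r)
			return
		}
		w.Header().Set("X-Served-By", "old-platform")
		oldPlatform.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The specific mechanics matter less than the properties: the split is one number, rollback is immediate, and both paths stay observable while the canary runs.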
Staff engineering is a team sport. You might lead the migration of one large service or one product within a multi-product organisation while the platform is still being developed. Own a slice, not the world. Define success jointly with the platform team so that your service outcomes align with platform readiness milestones. Create tight collaboration loops: weekly platform and product syncs, a shared backlog of platform gaps discovered by your service, and a visible dependency map and risk burndown that anyone can reference. Try to be an early adopter. Build thin vertical slices together. Choose the smallest viable path to production, then iterate. Prove the path with your service, then template it so others can follow without reinventing it. Use feature flags, canaries, and agreed freeze windows so no one is surprised by risk.
The only way to make big changes safe is to make them boring on purpose. Try to work on things that you know you will need to do, even though you will not be able to test them now. You were using blob storage; now you will use S3... or even both. When possible, make your code cloud/product-agnostic through shims, abstractions, or custom code. You can work on that now and validate it in the current environment. Try to work proactively on things that will reduce the complexity of running products on both platforms, making them as similar to each other as possible. Leverage the platform you already have to test your ideas: remove the container, reduce the size of the container, reduce the size of the machine but increase the number of machines.
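To make the shim idea concrete, here is a minimal sketch in Go, assuming a hypothetical ObjectStore interface and a dual-write wrapper; the names are illustrative, not from a specific codebase. One implementation wraps the blob storage you use today, another wraps S3, and the application only ever sees the interface.

```go
// Package storage sketches a cloud-agnostic shim. The interface and the
// mirror wrapper are hypothetical, for illustration only.
package storage

import (
	"bytes"
	"context"
	"io"
	"log"
)

// ObjectStore is the only surface the application codes against.
// One implementation wraps the current blob storage SDK, another wraps S3.
type ObjectStore interface {
	Put(ctx context.Context, key string, body io.Reader) error
	Get(ctx context.Context, key string) (io.ReadCloser, error)
	Delete(ctx context.Context, key string) error
}

// mirrorStore writes to both backends during the transition and reads from
// the primary, so the new platform can be validated against real traffic
// before the cutover.
type mirrorStore struct {
	primary, secondary ObjectStore
}

func NewMirrorStore(primary, secondary ObjectStore) ObjectStore {
	return &mirrorStore{primary: primary, secondary: secondary}
}

func (m *mirrorStore) Put(ctx context.Context, key string, body io.Reader) error {
	// Buffer once so both backends receive the full payload; large objects
	// would need a streaming strategy, which this sketch leaves out.
	data, err := io.ReadAll(body)
	if err != nil {
		return err
	}
	if err := m.primary.Put(ctx, key, bytes.NewReader(data)); err != nil {
		return err
	}
	// A secondary failure is logged, not fatal, while it is still the shadow copy.
	if err := m.secondary.Put(ctx, key, bytes.NewReader(data)); err != nil {
		log.Printf("shadow write failed for %q: %v", key, err)
	}
	return nil
}

func (m *mirrorStore) Get(ctx context.Context, key string) (io.ReadCloser, error) {
	return m.primary.Get(ctx, key)
}

func (m *mirrorStore) Delete(ctx context.Context, key string) error {
	if err := m.primary.Delete(ctx, key); err != nil {
		return err
	}
	return m.secondary.Delete(ctx, key)
}
```

The useful property is that the abstraction, and even the dual-write period, can be exercised in the current environment long before the new platform is ready for real traffic.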
You will not be able to control everything or know about everything, so prepare for chaos. Control what you can, but still expect chaos. Documents will be outdated the day they are written, referencing code names that no longer exist or products that were supposed to be built but never were, and never will be. And prepare to fail; that is not a bad thing. Just make sure that when you fail, you've done everything you could to deliver the product according to the timelines.
Sometimes, the decision and the deadline arrive from above. That's fine. There are big, hairy goals, and there are really hairy ones. Ambition is good. Denial is expensive. Just because something has been on the market for years doesn't make it simple in your context. Complexity resides in the details we don't initially see. Our job, together, is to surface those details early, be blunt about reality, and make change boring enough that the business stays safe while we evolve it.