Container Lake

Running containers at scale is hard: it requires an orchestrator that you need to manage, and often it requires (virtual) machines that also need to be maintained. I believe this is an intermediate step that will soon become obsolete. In the future you’ll run your software in a “Container Lake”, simply by pushing your code into the cloud. All the messy steps currently required (compiling your code, creating and publishing a container image, and deploying it to some orchestrator) will vanish. This will allow you to focus on adding business value, using the development technology of your choice.



You can already create Functions that offer similar behavior. These platforms allow you to take a bit of code and run it in the cloud, while paying for consumption (among other models). The problem with Functions is the lack of options to have them collaborate. It’s also hard and expensive to protect your consumption-based back-end functions from the evil outside world while keeping them accessible from front-end functions.


For example, in Azure you can already use Azure Container Instances to create a virtual kubelet in a Kubernetes cluster. This kubelet requires no maintenance at all; there’s no need to patch or reboot it. The downside of this approach, at this time, is pricing. Under this pricing model you pay for the number of vCPUs used and the amount of memory available to the container, for every second that your container runs. This means that you cannot share resources on a granular scale. This low-density model makes it hard to tune resources to container requirements, resulting in overhead: you’re paying for resources that you’re not using.
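The overhead problem follows directly from the per-second pricing model described above. A minimal sketch, using made-up placeholder rates (not real Azure prices), shows why a container sized for peak load keeps costing money even when idle:

```python
# Sketch of a per-second container pricing model as described above.
# The rates below are illustrative placeholders, NOT real Azure prices.
VCPU_RATE_PER_SECOND = 0.0000135    # assumed $ per vCPU per second
MEMORY_RATE_PER_SECOND = 0.0000015  # assumed $ per GB per second

def container_cost(vcpus: float, memory_gb: float, seconds: int) -> float:
    """You pay for the vCPU and memory *allocated* to the container
    for every second it runs, whether or not it actually uses them."""
    return seconds * (vcpus * VCPU_RATE_PER_SECOND
                      + memory_gb * MEMORY_RATE_PER_SECOND)

# A container sized for peak load (1 vCPU, 1.5 GB) running for one day
# costs the same whether it is busy or sitting idle:
print(round(container_cost(1, 1.5, 86_400), 2))  # → 1.36
```

The point of the sketch: because allocation, not usage, drives the bill, any headroom you reserve for peaks is pure overhead during quiet hours.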

Service Fabric

Another upcoming platform is called Service Fabric Mesh. This platform offers a completely serverless environment to host your containerized applications. You simply define how many instances of each container you want to run and how they interact with each other, and that’s it. The downside is that Service Fabric Mesh still requires you to build and publish container images. Also, the platform is currently in preview, so you can’t run it in a production environment with a proper SLA yet. So we’re very close, but not quite there yet.

Container Lake

In the future, not only will the orchestrator disappear from view, but so will the management of container images. Under the hood, this stuff will still exist, but you won’t be bothered by it. By offloading these complexities, the cloud vendors can run your software in the way they prefer, leading to a highly optimized environment for your workloads. By combining multiple isolated workloads on the same resources, the cloud vendor can create high-density container clusters. High density in the cloud means low cost for you as a consumer.
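The density argument above can be sketched with a toy bin-packing example (a hypothetical illustration, not how any vendor actually schedules): many small workloads packed onto shared nodes need far fewer machines than one dedicated node per workload.

```python
# Hypothetical sketch: first-fit packing of fractional vCPU requests
# onto shared nodes, versus one dedicated node per workload.
def first_fit(workloads: list[float], node_capacity: float) -> list[float]:
    """Place each workload on the first node with room; open a new
    node only when none fits. Returns the load per node."""
    nodes: list[float] = []
    for demand in workloads:
        for i, load in enumerate(nodes):
            if load + demand <= node_capacity:
                nodes[i] += demand
                break
        else:
            nodes.append(demand)  # no existing node had room
    return nodes

workloads = [0.25, 0.5, 0.1, 0.75, 0.3, 0.2]  # fractional vCPU requests
dedicated = len(workloads)                     # one node per workload
shared = len(first_fit(workloads, node_capacity=1.0))
print(dedicated, shared)  # → 6 3
```

Halving the node count in this toy case is exactly the high-density effect the vendor can pass on as lower prices.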

I believe, future applications will run inside a Container Lake.


Author: loekd

Loek is a Technical Trainer, Cloud solution architect at Xpirit, a public speaker and Microsoft Azure MVP. He focuses on creating secure, scalable and maintainable systems. To help companies make the most efficient transition into the Cloud, he is always looking for even better ways to leverage the Microsoft Azure Platform. As an active member of open source projects, Loek likes to exchange knowledge with other community members. Let’s engage!
