
Running Windows Containers on Azure Service Fabric

Background


With the release of Service Fabric runtime version 5.4.145, Microsoft added a (preview) feature to run Windows Containers on Windows Server 2016. The Linux version has supported containers for a while already. This post explains why containers are useful and how to get them working.

What is Service Fabric?

Most companies run many applications, usually spread across multiple overprovisioned servers. Service Fabric is a platform that hosts distributed, packaged applications: it connects multiple servers into a cluster and uses some clever mechanisms to optimize the use of the underlying resources. You just tell it to run your application, and Service Fabric takes care of placement, health monitoring, rebalancing applications based on their resource consumption, and application upgrades.

SDK

Service Fabric also comes with an SDK that can be used to build Microservices applications.
 

Containers

Running many applications together on a set of machines sounds great, but it also introduces some problems. For example, what if one of the applications uses up all available memory? Or what if you want to run applications that target different versions of the .NET Framework? What if your application runs on IIS, but IIS isn’t installed on your servers? Wouldn’t it be nice if you could put each application inside a box, together with its dependencies, without the boxes being aware of each other?
Well, you can, using containers. Containers encapsulate and isolate applications and their prerequisites. Each container has its own isolated view of the underlying operating system, and changes made inside a container are not visible outside of it. This isolation is enhanced with resource governance options: similar to Virtual Machines, the amount of memory or CPU power assigned to a container can be restricted. The main difference is that containers share the host operating system, which makes them very fast to start up.
Another great advantage is that – because of the virtualization – they are portable. You can move a Windows Container to any Windows Server 2016 host that has the Containers feature installed and run it there.
An additional feature of Windows Server 2016 is Hyper-V Containers. These run on special Virtual Machines, with their own operating system, to provide an even higher level of isolation.


Containers on Service Fabric

At this time you can only run containers as guest executables. That means you can’t yet create Stateful or Stateless Reliable Services and Actors that run inside Windows Containers. The feature is in preview.

Hands On

Let’s create a cluster and run IIS inside a Windows Container now!

Create a cluster

In order to create a Service Fabric cluster that can run containers, you need to use an ARM template. The portal doesn’t allow you to choose the Virtual Machine SKU ‘2016-Datacenter-with-Containers’ yet. It’s easy to configure your setup on the Azure Portal, and then download the template and make some modifications. 
 
Make sure you configure it to open up port 80 for the test application we’ll create later!
 

Parameters

 
In the parameters.json add:
 
        "vmImageOffer": {
            "value": "WindowsServer"
        },
        "vmImageSku": {
            "value": "2016-Datacenter-with-Containers"
        },
        "vmImageVersion": {
            "value": "latest"
        },
This will make your Virtual Machine Scale Set machines use Windows Server 2016 with the Windows Containers feature configured.

Template

In the template.json, under the Virtual Machine Profile, add:
 
"NicPrefixOverride": "10.0.0"
So it looks similar to this:
"settings": {
    "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
    "nodeTypeRef": "[parameters('vmNodeType0Name')]",
    "dataPath": "D:\\\\SvcFab",
    "durabilityLevel": "Bronze",
    "NicPrefixOverride": "10.0.0"
},
When you select the ‘2016-Datacenter-with-Containers’ SKU, your Virtual Machines get multiple Network Interfaces; this setting makes sure that the correct NIC is used for cluster communication.
Once the deployment completes you should have a functioning cluster. While you wait for that, start creating the Application.

Create a Service Fabric Application

Now it’s time to create a Service Fabric Application that will run a Windows Container. The latest bits of the Service Fabric SDK come with a project template that makes this very simple. Create a new project using the Guest Container (Preview) project template.
Enter the name of the image (from Docker Hub) you want to run. I selected the image ‘loekd/iis’, which runs Microsoft IIS and exposes port 80. Let’s call the project ‘MyContainerService’.
To enable interaction with your container, it’s important to know that the image must explicitly expose a port, and have a process listening on that port inside the container.
When the Service Fabric application is created, open the ‘ServiceManifest.xml’ file to provide any additional commands to your image.
For example, to start PowerShell inside the container, in the ContainerHost node add:
 <Commands>powershell</Commands>
so it looks like this:
 

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <ImageName>loekd/iis</ImageName>
      <Commands>powershell</Commands>
    </ContainerHost>
  </EntryPoint>
</CodePackage>

   
Now let’s publish the exposed port 80 for HTTP traffic from the internet. In the Resources node, add:
<Endpoint Name="IISContainerServiceTypeEndpoint" Port="80" UriScheme="http" />
so it looks like this:

<Resources>
  <Endpoints>
    <Endpoint Name="IISContainerServiceTypeEndpoint" Port="80" UriScheme="http" />
  </Endpoints>
</Resources>

Open the file called ‘ApplicationManifest.xml’ and inside the node ‘ServiceManifestImport‘ add:

<Policies>
  <ContainerHostPolicies CodePackageRef="Code">
    <PortBinding ContainerPort="80" EndpointRef="IISContainerServiceTypeEndpoint"/>
  </ContainerHostPolicies>
</Policies>

 
So it looks like this:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerServicePkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <PortBinding ContainerPort="80" EndpointRef="IISContainerServiceTypeEndpoint"/>
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>

  • Notice that the Name of the Endpoint from the service manifest matches the name of the EndpointRef attribute in the PortBinding node.
  • Also, note that the value of the CodePackageRef attribute matches the Name of the CodePackage node in the service manifest.
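These cross-references are easy to get wrong and only fail at deployment time. As a rough sanity check (a sketch, not part of the SDK; real manifests also declare an XML namespace that this deliberately ignores), you could verify the references programmatically:

```python
import xml.etree.ElementTree as ET

# Inline samples mirroring the manifest snippets above. Real manifests
# declare xmlns="http://schemas.microsoft.com/2011/01/fabric", which this
# sketch omits; a real checker would have to handle the namespace.
service_manifest = """
<ServiceManifest>
  <CodePackage Name="Code" Version="1.0.0" />
  <Resources>
    <Endpoints>
      <Endpoint Name="IISContainerServiceTypeEndpoint" Port="80" UriScheme="http" />
    </Endpoints>
  </Resources>
</ServiceManifest>
"""

application_manifest = """
<ApplicationManifest>
  <ServiceManifestImport>
    <Policies>
      <ContainerHostPolicies CodePackageRef="Code">
        <PortBinding ContainerPort="80" EndpointRef="IISContainerServiceTypeEndpoint" />
      </ContainerHostPolicies>
    </Policies>
  </ServiceManifestImport>
</ApplicationManifest>
"""

def check_references(service_xml, app_xml):
    """Return dangling references from the application manifest, if any."""
    svc = ET.fromstring(service_xml.strip())
    app = ET.fromstring(app_xml.strip())
    code_packages = {cp.get("Name") for cp in svc.iter("CodePackage")}
    endpoints = {ep.get("Name") for ep in svc.iter("Endpoint")}
    problems = []
    for policy in app.iter("ContainerHostPolicies"):
        if policy.get("CodePackageRef") not in code_packages:
            problems.append("CodePackageRef: " + policy.get("CodePackageRef"))
        for binding in policy.iter("PortBinding"):
            if binding.get("EndpointRef") not in endpoints:
                problems.append("EndpointRef: " + binding.get("EndpointRef"))
    return problems

print(check_references(service_manifest, application_manifest))  # []
```

An empty list means every PortBinding resolves; any mismatch shows up by name before you spend time on a failed deployment.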

Publish your application

If your cluster has been created, you can now deploy your application to it.
Right click on the project and select ‘Publish’. Enter the client connection endpoint address of your cluster and click ‘Publish’.

Now navigate to the Service Fabric explorer (same url, but port 19080) to see the deployment in progress.

You’ll notice that it takes quite a long time to start a Windows Container the first time. This is because the image is based on Windows Server Core, and about 8 GB needs to be downloaded. Depending on the size of your VM, this can take a couple of minutes. Starting a container the second time is a matter of seconds.
When the service instances report ‘Ready’ instead of ‘In build’ they are good to go.
Navigate to your cluster’s DNS name, on port 80, using a browser. You should see the familiar IIS start page.
Congratulations! You’re now running IIS inside a Windows Container on Azure Service Fabric!

Using this strategy you can lift and shift your legacy applications into your Azure Service Fabric cluster, and run them side by side with your brand new Microservices. You no longer need to maintain two separate clusters.

Links

ARM template for a test cluster with Windows Containers support

MSDN article about Containers in Service Fabric

IIS image on Docker Hub

TechDays 2017 Talk – slides

Here is the slide-deck of my talk ‘Building high quality services using Service Fabric’, at TechDays 2017.

The talk featured data partitioning strategies (for Stateful services) and writing your service code in such a way that it can be unit tested.

If you were there, thank you for attending. If not, I’ll put a link to the video on Channel9 here, once it’s available.

Building high quality services using Azure Service Fabric

And here’s the video too:

Talking to Kubernetes from VSTS

After you have created a Kubernetes cluster, for instance, by using Azure Container Service, you probably want to start running some containers on it. In this post, I will describe how to do this, by using VSTS. I’ll explain how to execute commands and queries on Kubernetes, by using the CLI and by using Tasks.

If you use VSTS, you can configure a Build and Release pipeline. The Build pipeline will take your code, compile it, and package it into container images. The Release pipeline will then deploy those containers to your cluster. First, you’ll need to specify a Service Endpoint to be able to talk to the cluster.

Service Endpoint

Open the VSTS portal and navigate to ‘Services’.

Click the button ‘New Service Endpoint’, and select ‘Kubernetes’ as the type.


This will show the endpoint configuration screen.

The connection name should describe the use of the Endpoint. It will appear in a dropdown while creating the Release later.

For the Server URL, use the DNS name of the Load Balancer in front of the Master node(s). (a.k.a. Master FQDN)

Finally, you’ll need to provide the Kubeconfig. To get this value, open an SSH connection to the Master FQDN, for example by using PuTTY or ssh.

Once connected, run this command and copy the output to the clipboard:

kubectl config view --flatten


This will output an un-redacted definition of the cluster connection configuration, read from the file ‘$HOME/.kube/config’. Please note that this is sensitive information: it can be used to access and configure the cluster, not just from VSTS but from anywhere.
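Because the flattened output embeds credentials, consider redacting it before it ends up in logs or screenshots. A minimal sketch; the key names below follow the usual kubeconfig layout, but treat them and the sample data as assumptions:

```python
# Keys that typically hold credentials in a kubeconfig; treat this list
# as an assumption and extend it for your own file.
SENSITIVE_KEYS = ("client-key-data", "client-certificate-data", "token", "password")

def redact(node):
    """Recursively replace sensitive values with a placeholder."""
    if isinstance(node, dict):
        return {key: ("REDACTED" if key in SENSITIVE_KEYS else redact(value))
                for key, value in node.items()}
    if isinstance(node, list):
        return [redact(item) for item in node]
    return node

# Hand-made sample that only resembles a real kubeconfig.
config = {
    "clusters": [{"name": "acs", "cluster": {"server": "https://mymaster.example.com"}}],
    "users": [{"name": "admin", "user": {"client-key-data": "LS0tLS1CRUdJTi4uLg=="}}],
}

print(redact(config)["users"][0]["user"]["client-key-data"])  # REDACTED
```

The non-sensitive parts (server URL, cluster names) survive, so the redacted copy is still useful for troubleshooting.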

During a release, VSTS will configure the tool kubectl on the Build/Release agent using this information.

Paste the entire file content into the field Kubeconfig. Click ‘OK’ to save the new endpoint.

Release Definition

In your Release definition, you can start using this Endpoint. Use the Kubernetes template, to get a quick start.


In the field Kubernetes Service Connection, select the Endpoint you have just created. The Task will be able to use the tool kubectl, combined with the configuration, to manage your Kubernetes cluster.

Commands

One example is to run the command ‘apply’. This command can be used for many things, one of which is to deploy a container to the cluster.

kubectl apply -f k8s.yml

In this example I’m applying the desired configuration described in a YAML file I created, called ‘k8s.yml’. To do this, configure the Task to run the ‘apply’ command with the path to that file as its argument.

Queries

You can also query information from the cluster. For example, you can query the current state of a deployed service called ‘svc-api-gateway’ by using this command:

kubectl get service svc-api-gateway -o json

The ‘-o json’ flag formats the output of this command as JSON.

To run this query in VSTS, configure the Task with the ‘get’ command and the arguments shown above.

The output returned from the command is copied to the specified variable, which enables you to use it in later Tasks. To use the output in PowerShell, I first needed to copy the content of the variable to a file. This can be done by using a tokenize Task and an empty file that holds a token with the same variable name as you used in ‘Output variable name’.
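Once the JSON is available (in the variable or a file), pulling a single value out is straightforward. A sketch in Python; the document below is hand-made and only resembles real `kubectl get service -o json` output:

```python
import json

# Hand-made sample resembling `kubectl get service svc-api-gateway -o json`.
sample = """
{
  "kind": "Service",
  "metadata": { "name": "svc-api-gateway" },
  "spec": { "type": "LoadBalancer", "ports": [ { "port": 80, "targetPort": 8080 } ] },
  "status": { "loadBalancer": { "ingress": [ { "ip": "52.0.0.10" } ] } }
}
"""

service = json.loads(sample)

# Grab the externally visible IP the load balancer assigned.
external_ip = service["status"]["loadBalancer"]["ingress"][0]["ip"]
print(external_ip)  # 52.0.0.10
```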

 

Creating and restoring backups in ASF Reliable Stateful Services.

Creating and restoring backups of Azure Service Fabric Stateful Service replicas can be challenging. In this article I’ll describe how you can use my Nuget package “ServiceFabric.BackupRestore” (or its source code) that will help make this much simpler.

ServiceFabric.BackupRestore

ServiceFabric.BackupRestore simplifies creating and restoring backups for Reliable Stateful Service replicas. It supports both Full and Incremental backups.

Continue reading “Creating and restoring backups in ASF Reliable Stateful Services.”

Running Windows Containers on Azure Service Fabric – Part II

The previous post showed how you can create an unsecured Service Fabric test cluster in Azure, and how to run a Windows Container on it. In this follow-up post, I’ll show you what’s going on inside the cluster, using the Docker command line. Knowledge about this can be very useful when troubleshooting.

Verify your container is running

First, start a new RDP session into one of your Service Fabric cluster nodes. (How to do that is described below.)
Open a new console window and type:
set DOCKER_HOST=localhost:2375

This will set an Environment Variable that configures where the Docker Service can be reached.

Now check that Docker is up and running, by typing:

docker version

This command displays the current version of Docker, which is 1.12 at this time.

Next type:

docker ps --no-trunc

This command will list all running containers, with information not truncated for display.

The result should be similar to this:

It shows information about your running containers. Some important elements here are:

  • Container ID – which you can use to perform operations on a specific container
  • Ports – which indicates that port 80 inside the container is exposed and published (note that this value will be missing if your image doesn’t explicitly expose a port)
  • Command – which shows what was executed when the container started: in this case, the w3svc service and PowerShell (PowerShell was configured in the ServiceManifest.xml file in part 1)
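If you need these values in a script rather than by eye, the listing can be parsed. The sample below is hand-made and tab-separated for simplicity (real `docker ps` output is space-padded and shows a full ID with --no-trunc), so treat this as a sketch:

```python
# Hand-made, tab-separated sample of a container listing; real
# `docker ps --no-trunc` output is space-padded instead.
sample = (
    "CONTAINER ID\tIMAGE\tCOMMAND\tPORTS\n"
    "c8e5ab12cd34\tloekd/iis\tpowershell\t0.0.0.0:80->80/tcp\n"
)

def parse_listing(output):
    """Turn a tab-separated listing into a list of dicts keyed by header."""
    lines = output.strip().splitlines()
    headers = lines[0].split("\t")
    return [dict(zip(headers, line.split("\t"))) for line in lines[1:]]

containers = parse_listing(sample)
print(containers[0]["PORTS"])  # 0.0.0.0:80->80/tcp
```

The container ID field is what you would feed into commands like `docker inspect` below.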

Verify IIS is running

If you want to validate whether IIS is running, you can’t just open up http://localhost in your browser (at this time) due to an issue in WinNAT. You can use the IP Address of the Windows Container. To find that, type:

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" c8e5

Note that ‘c8e5’ is the start of my running container’s ID, so it will be different in your situation.

The result should be similar to this:

It shows the IP Address of your running container. In my environment, it’s 172.20.116.222. In your environment it will likely be different.

Open up Internet Explorer and navigate to that IP Address, and you should see the familiar IIS start page. This works because port 80 was defined as exposed in my image loekd/iis.

Enable Remote Desktop to cluster nodes

If you want to enable RDP access to every node in the cluster (even when it scales up and down) you can do so by specifying it in your ARM template. (My sample template has this already configured.)

Note that this will expose RDP access over the internet, which has security implications. Use strong passwords for the login account. Consider using a Network Security Group or a non public IP Address to restrict access.

The relevant part of the ARM template is this:

   "inboundNatPools": [
   {
      "name": "LoadBalancerBEAddressNatPool",
      "properties": {
         "backendPort": "3389",
         "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
         },
         "frontendPortRangeEnd": "4500",
         "frontendPortRangeStart": "3389",
         "protocol": "tcp"
      }
   }]

Using this template definition, the Azure Load Balancer will be configured to forward internet ports 3389 and up to specific VMSS nodes. The first node gets port 3389, the second one gets port 3390, and so on.

Load Balancer rules that map internet ports to different backend node ports are called ‘Inbound NAT Rules‘.
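The port assignment is plain arithmetic: node n is reachable on frontend port 3389 + n, up to the pool’s end port. A small sketch using the bounds from the template above:

```python
# NAT pool bounds taken from the template above.
FRONTEND_PORT_START = 3389
FRONTEND_PORT_END = 4500

def rdp_port(node_index):
    """Public port that forwards to port 3389 on the given VMSS node."""
    port = FRONTEND_PORT_START + node_index
    if port > FRONTEND_PORT_END:
        raise ValueError("node index falls outside the configured NAT pool")
    return port

print([rdp_port(i) for i in range(3)])  # [3389, 3390, 3391]
```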

On the Azure Portal the result from the template deployment should look like this:

In my environment I have a three node cluster, and every node is now accessible through the public IP Address, using its own port.

Read more info about Service Fabric and Scale Sets here.

In this post I’ve shown some ways to verify that your Windows Containers are running correctly on Azure Service Fabric.