Saturday, February 11, 2017

Serialize data with Powershell

Currently I am working on a big new module. In this module, I need to persist data to disk and reprocess it at some point, even if the module/PowerShell session has been closed. I needed to serialize objects and save them to disk, and it had to be efficient enough to support a high volume of objects. Hence I decided to turn this serializer into a module called HashData.

Other Serializing methods

In Powershell we have several possibilities to serialize objects. There are two cmdlets you can use which are built in:
  • Export-CliXml
  • ConvertTo-JSON

Both are excellent options if you do not care about the size of the file. In my case I needed something lean and mean in terms of the size on disk for the serialized object. Let's do some tests to compare the different types:
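As a sketch of such a test (the test object is my own invention; the HashData cmdlets would be compared the same way):

```powershell
# Build a simple test object and serialize it with the two built-in options
$object = [pscustomobject]@{
    Name    = 'Server01'
    Id      = 42
    Created = Get-Date
}

$xml  = [System.Management.Automation.PSSerializer]::Serialize($object)
$json = $object | ConvertTo-Json

# Compare the size of the serialized strings
$xml.Length
$json.Length
```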


You might be curious why I do not use the Export-CliXml cmdlet and instead use the [System.Management.Automation.PSSerializer]::Serialize static method. The static method generates the same XML, however with it we do not need to read back the content of the file the cmdlet creates.

If we compare the length of the strings, we get this:


As you can see, the XML serialization is very bloated with metadata, while the JSON serialization is much better. The winner is the HashData module, with a 30% smaller size compared to the JSON string.

HashData module

Currently the module implements these cmdlets:

  • Assert-ScriptString
  • ConvertTo-HashString
  • ConvertTo-Hashtable
  • Export-HashData    
  • Import-HashData    
  • New-Date 

Like Import-CliXml and Export-CliXml, the logic for serialization and deserialization is implemented in Import-HashData and Export-HashData. I chose to also include and export the helper functions ConvertTo-Hashtable and ConvertTo-HashString from the module, since they could be useful in other scenarios as well. The New-Date function is probably the smallest function I have ever published. Its purpose is to convert datetime values when deserializing objects.

Let's inspect the object we created above and look at its string representation:


As you can see, the datetime objects are converted to a [long] ticks value, which the function New-Date converts back to a datetime object on deserialization.
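The post does not show the implementation here, but a minimal New-Date could look like this (a sketch, not necessarily the module's actual code):

```powershell
# Hypothetical sketch: rebuild a [datetime] from a serialized ticks value
function New-Date
{
    param([long]$Ticks)
    [datetime]::new($Ticks)
}

# Round-trip a datetime through its ticks value
$now = Get-Date
$roundTrip = New-Date -Ticks $now.Ticks
$roundTrip -eq $now   # True, DateTime equality compares ticks
```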

Currently implemented property-types

In this version, your object may have properties of the following type:

  • String
  • Integer
  • Boolean
  • Double
  • DateTime
  • Array of String
  • Array of Integers

Currently supported and tested object depth is 1. That might change in the future. You may pipe or supply an array of PSCustomObject to the Export-HashData function.

I have deliberately chosen not to convert the objects from Import-Hashdata to PSCustomObject in this release. Depending on feedback and the need, I will consider adding this at a later stage.


The Assert-ScriptString function is a security boundary and is used in the Import-HashData function. The reason is that when you serialize an object as a hashtable string, you are in essence generating a script file, which in this instance behaves like a scriptblock. Since importing means invoking a scriptblock, Assert-ScriptString makes sure nothing evil will ever execute. The only function currently allowed in the serialized object is the New-Date function.
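Conceptually, such an assertion can be done with the PowerShell parser. This is a sketch of the idea (function name aside, not the module's actual implementation):

```powershell
# Parse the serialized string and allow only the New-Date command to appear
function Assert-SafeString
{
    param([string]$Script)

    $ast = [System.Management.Automation.Language.Parser]::ParseInput(
        $Script, [ref]$null, [ref]$null)

    $commands = $ast.FindAll(
        { $args[0] -is [System.Management.Automation.Language.CommandAst] }, $true)

    foreach ($command in $commands)
    {
        if ($command.GetCommandName() -ne 'New-Date')
        {
            throw "Command '$($command.GetCommandName())' is not allowed"
        }
    }
}
```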

The Import-HashData function has a switch parameter (UnsafeMode) that lets you override this security feature. Use it with care.

PowershellGallery and GitHub

The module is published to the PowershellGallery and here is the link to the GitHub repo.

Please reach out to me on twitter or leave a comment. I love feedback both good and bad.



Tuesday, January 24, 2017

Create notification in Windows 10

From time to time I find myself needing a notification tool. Normally a simple message box will suffice, however the trouble with that is that it blocks until someone clicks the OK button. I have seen workarounds that use wscript and other things, however that is just meh.

There is a function in my general purpose repo on BitBucket called Show-Message. Here is the function:
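A rough sketch of the toast portion, using the Windows.UI.Notifications WinRT API (the real Show-Message also has the MessageBox fallback and the DisplayDuration parameter described below):

```powershell
# Minimal Windows 10 toast sketch; the app id string is a placeholder
function Show-Notification
{
    param([string]$Message, [string]$Title = 'Notification')

    # Load the WinRT type into the session
    $null = [Windows.UI.Notifications.ToastNotificationManager, Windows.UI.Notifications, ContentType = WindowsRuntime]

    # Use a built-in two-line text template
    $template = [Windows.UI.Notifications.ToastNotificationManager]::GetTemplateContent(
        [Windows.UI.Notifications.ToastTemplateType]::ToastText02)

    $textNodes = $template.GetElementsByTagName('text')
    $null = $textNodes.Item(0).AppendChild($template.CreateTextNode($Title))
    $null = $textNodes.Item(1).AppendChild($template.CreateTextNode($Message))

    $toast    = [Windows.UI.Notifications.ToastNotification]::new($template)
    $notifier = [Windows.UI.Notifications.ToastNotificationManager]::CreateToastNotifier('PowerShell')
    $notifier.Show($toast)
}
```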

The “good” thing about this is that it also works on Windows Server 2016 with the GUI experience, however who is using a GUI on a server nowadays? Everybody should be using Nano or Server Core, right?

In addition, the function will fall back to a regular good old MessageBox if the Windows.UI.Notifications namespace is not available. Please note that in those scenarios, the function will block execution until the OK button is clicked.

Here is what a notification looks like:


You can also control for how long the notification should be shown to the user with the DisplayDuration parameter.

If you simply want to display a regular MessageBox, just run Show-Message -Message "Test message" and it will show you a message box:

That is it.


Monday, January 16, 2017

Something completely different – PoshARM

I needed a project for my Xmas holiday and I needed something remotely work related. Thus the dubious PoshARM Powershell module was born and brought to life during my Xmas holiday. Simply put it is a module that lets you build – for now – simple Azure Resource Manager (ARM) templates with Powershell. 

The module can also import templates from a file or from the clipboard/string. Your partial or ready-made template can be exported as a PowerShell script. This blog post will walk you through how to use it and the features that are currently implemented.

Update 08.02.2017:

The module is now published to the PowershellGallery. It is still in beta, however test coverage has increased and some bugs have been squashed during the testing. Help is also present, however somewhat lacking here and there.

Update 18.01.2017:

The module is now on GitHub. Here is the link to the repo (PoshARM on GitHub)

What is an ARM template?

It is a text file, or more correctly a JSON text file. Here is a sample template which is empty:
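For reference, the canonical empty ARM template looks like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```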


The ARM template is an input to the Azure Resource Manager, which is responsible for deploying your resource definition (your ARM template) onto an Azure subscription. There are multiple ways you can make or build your template:

  • Any pure text editor (Notepad, Notepad++)
  • Visual Studio
  • Visual Studio Code
  • PoshARM (this module)

To summarize, an ARM template consists of these main building blocks:

  • Parameters
  • Variables
  • Resources
  • Outputs

In addition you should also have a metadata.json file associated with your template. You can find the complete Microsoft documentation of an ARM template on this link: Authoring ARM-templates

Why PoshARM?

Good question. In my experience this will probably not be the primary way of creating an ARM template for the professionals. For them it will probably be quicker to manually copy/paste and edit the template in a text editor or in Visual Studio. Trouble is, when your template expands, it can get quite big. In addition I have yet to say hello to any IT-pro (with very few exceptions) that embraces and understands big JSON files, much less IT-pros that build their own ARM templates. If only a single person finds this module, or any part of it, useful, I will be happy.

Module status

This is a public alpha preview. There are bugs in the module and it is not feature complete in any way. Currently I have Pester coverage for most of the cmdlets, however the current ARM-template test file just creates a simple VM in Azure and contains 6 resources, some parameters and variables. As always, help is missing everywhere, and this is the reason I have not published it to the Powershell Gallery yet.

There are currently no cmdlets for working with the template outputs property. It is handled and imported if you use the Import-ARMtemplate cmdlet, however it will be missing if you export the template.

ARM Variables

To interact with variables we have these cmdlets:

  • Get-ARMvariable
  • New-ARMvariable
  • Add-ARMvariable
  • Get-ARMvariableScript
  • Set-ARMvariable

Creating a new variable is straightforward, and we can pipe the output to Add-ARMvariable to add it to the template:
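Something like this (the parameter names are my assumption, since the screenshot carries the actual commands):

```powershell
# Create a variable and add it to the template in the current session
New-ARMvariable -Name 'storageAccountName' -Value 'mystorage001' | Add-ARMvariable

# Read it back
Get-ARMvariable -Name 'storageAccountName'
```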

The Set-ARMvariable and Get-ARMvariable cmdlets implement a dynamic parameter for the Name of the variable. This makes it impossible to set or get the value of a variable that does not exist:


ARM Parameters

A parameter has many more properties than a variable, however you need to specify at least a Name and the Type of the parameter. These are the cmdlets we have:

  • Get-ARMparameter
  • Get-ARMparameterScript
  • New-ARMparameter
  • Add-ARMparameter
  • Set-ARMparameter

Creating a parameter for adminUserName can be as simple as this:
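Roughly like this (again, parameter names assumed from the cmdlet descriptions):

```powershell
# Create a string parameter and add it to the template
New-ARMparameter -Name 'adminUserName' -Type 'string' | Add-ARMparameter
```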

As with the variable cmdlets, we have a dynamic parameter for the name both for Get-ARMparameter and Set-ARMparameter.

ARM Resources

This is where it gets rather complicated. The resources property of the ARM template expects an array of resources, which in turn can have nested resources, which again can have nested resources. As you would expect, we have a few cmdlets for working with resources as well:

  • Get-ARMresourceList
  • Update-ARMresourceList
  • Get-ARMresourceScript
  • New-ARMresource
  • Add-ARMresource

Get-ARMresourceList provides the dynamic resource type parameter for New-ARMresource. The Update-ARMresourceList cmdlet is used to update the cached version of the resource providers that are available in Azure. Currently the cached resource list is saved in the module path (.\Data\AllResources.json), however it should probably be moved to AppData.

Creating a new resource is straightforward. Currently it does not support lookup of variables and parameters, however that feature could be added later. Here is an example that creates a new Storage Account on Azure:
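A sketch of how that could look, splatted for readability (the parameter names and values are my assumptions):

```powershell
# Describe the storage account resource; keys match assumed parameter names
$resource = @{
    Type       = 'Microsoft.Storage/storageAccounts'
    Name       = 'mystorage001'
    Location   = 'westeurope'
    APIversion = '2016-01-01'
}

New-ARMresource @resource | Add-ARMresource
```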

The New-ARMresource cmdlet implements a Dynamic parameter named Type. The value for this parameter is generated by the Get-ARMresourceList command. 

ARM template metadata

Each template should have some metadata that helps to identify the template. There is a Set-ARMmetadata cmdlet that will create the metadata.json file for you. Here is an example metadata.json file:

Importing existing ARM templates

On GitHub you can find loads of quickstart templates that you can modify and update. It would be pretty useless if this module did not let you import these templates and work with them. Import-ARMtemplate will import a template from the clipboard/string or from a file on your computer. Here is how you can use it:
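Usage could look like this (the parameter names are my assumption; the path is a placeholder):

```powershell
# Import a template from a file on disk
Import-ARMtemplate -Path 'C:\temp\azuredeploy.json'

# Or paste a quickstart template into a string and import that
$templateText = Get-Clipboard | Out-String
Import-ARMtemplate -Template $templateText
```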

ARM template

For working with ARM templates, we have the following cmdlets:

  • Get-ARMtemplate
  • Get-ARMtemplateScript
  • New-ARMtemplate

The New-ARMtemplate cmdlet will create a new empty ARM template in the current Powershell session. Currently it will overwrite the current template if you have started creating one. This will change, and it will require you to specify the Force parameter if a template exists.

Get-ARMtemplate executed without any parameters will return the template, which is stored in a module variable called $Script:Template. It also has 2 switch parameters:

  • Get-ARMtemplate –AsJSON
  • Get-ARMtemplate –AsHashTableString

The hashtable string version is easier on the eye compared to the JSON version, however that depends on your JSON experience level and your hashtable fondness level.

Helper functions

There are two helper functions available in the module. Both of them are used heavily in the Script cmdlets which we will talk about next.


If you have worked with Powershell, it should be pretty simple to understand what this cmdlet does. It converts an InputObject to a hashtable, or more precisely, it outputs an ordered hashtable. It will chew through the InputObject and create an ordered hashtable even for nested objects and arrays. Let's take it for a spin:
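For example (the helper name ConvertTo-Hashtable and the test object are my assumptions based on the description above):

```powershell
# A nested object to feed the converter
$fileObject = [pscustomobject]@{
    Name = 'azuredeploy.json'
    Size = 1024
    Tags = @('arm', 'template')
    Sku  = [pscustomobject]@{ Name = 'Standard_LRS' }
}

# Convert it, nested objects and arrays included
$hash = $fileObject | ConvertTo-Hashtable
```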



Give this cmdlet a hashtable or an ordered hashtable and it will output a text version of it that you can paste into a Powershell host and execute. Let's use the $fileObject hashtable and see if we can get back the text representation of the object:


Yes, there it is, with proper indentation and everything.


You may have noticed that I have added a cmdlet for each property following the Get-ARM*Script naming syntax. The purpose of those cmdlets is to generate the Powershell script for each property in the template. Here is how you use it:

In the example we have created 2 variables, a parameter and a resource. These have been added to our template as we have moved along. Now we introduce the Get-ARMtemplateScript cmdlet which will give you the template as a script. Here are the commands we have executed:

Now we are going to run Get-ARMtemplateScript and see what we get back:


There we have it. We just created an ARM template with Powershell and converted the template back to a Powershell script. This also works with imported templates, which enables you to copy snippets of code to create templates. The observant reader may spot the bug in the screenshot above: the SKU key is “System.Collections.Hashtable”, which is not correct. Did I mention that it is not ready yet? Well, it is not, but it is almost working.

Planned features

Depending on the reception of the module, I have planned some enhancements for the module:

  • Add help
  • Improve Pester coverage
  • Add cmdlets for creating outputs
  • Add support for template functions and keywords ([variables()], [parameters()], [concat()], [resourceId()] etc)
  • Template linking

Please contact me if you have other suggestions or ideas. I cannot think of everything.

Final thoughts

There is a very small amount of work left to make this module work at the current functional level. Please leave feedback here on my blog or reach out to me on Twitter (@ToreGroneng). The module will be published, and the link to the repo is here (link to PoshARM).



Friday, November 4, 2016

Giving a helping hand - Community power

PowerShell is getting increasing attention and gaining followers every day. That is a good thing in my book. I saw a tweet about Citrix OctoBlu automation where Dave Brett (@dbretty) was using a PowerShell script to save money by powering VMs on and off (full post here). I reached out to him and asked if he would like a little help with his PowerShell script. To my delight, he happily accepted, and this post is about how I transformed his scripts to take advantage of the full power of The Shell. Fair warning is in order, since I have never used or touched an OctoBlu solution.

Starting scripts



What we would like to change

First off, a PowerShell function should do one thing and do it well. My first goal was to split the logic into two parts: one function that handles both the startup and the shutdown of the VM guests, and the mail notification, which I would like to move out of the function and either put in a separate function or replace with the built-in cmdlet Send-MailMessage, available since PowerShell version 3.0. Nothing wrong with using the .NET class, however I like to use cmdlets if they provide similar functionality.

Secondly, I changed the function to an advanced function to leverage WhatIf and all the streams (debug, verbose, information etc.). I also added some Write-Verbose statements. The difference between a regular function and an advanced function can be as simple as adding [cmdletbinding()] to your function. If you do, you have to use a Param() section to define your parameters.
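As a minimal illustration of that difference, here is a skeleton of an advanced function with ShouldProcess support (the function name and body are just placeholders):

```powershell
# [cmdletbinding()] turns this into an advanced function;
# SupportsShouldProcess enables -WhatIf and -Confirm
function Set-Something
{
    [cmdletbinding(SupportsShouldProcess)]
    Param(
        [string]$Name
    )

    if ($PSCmdlet.ShouldProcess($Name, 'Update'))
    {
        Write-Verbose -Message "Updating $Name"
        # ... the actual work goes here
    }
}
```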

Third I added parameters to the function. From the scripts I decided to create the following parameters:

  • Credential as [PScredential]
  • XenServerUrl as [string]
  • VMname as [string[]]
  • Shutdown as [switch]

Fourth, I added Begin, Process and End blocks to enable pipeline input for the VMname parameter, and to set up requirements like Import-Module and Connect-XenServer in the Begin block.

Fifth, I added an output object to the function, in which I output the VMname and the action taken with the VM (startup or shutdown). The reason for that becomes clear when we start to set up notification.

Those are the 5 big changes I have made to the initial scripts. Other than that I added some personal features related to the use of Write-Verbose and other minor stuff.

How to handle credentials

Every time you add a parameter called username or password to your function, you should stop and think. You should most likely use a PSCredential object instead. So how do you access those credentials at runtime? This script needs credentials, and you cannot prompt the OctoBlu automation engine to provide them. Perhaps OctoBlu has a credential store, however I do not know.

A secure and easy solution to this problem is to use the built-in DPAPI encryption API. The same logic can be applied to any service or automation solution that needs specific credentials to execute your scripts, including scheduled tasks. We will leverage three cmdlets to accomplish this:

  • Get-Credential
  • Export-CliXml
  • Import-CliXml

First you need to start a PowerShell host as the user that needs to use your credentials. Then we need to run these commands:
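They amount to something like this (the file path is a placeholder):

```powershell
# Run in a PowerShell host started as the account that will use the credentials
$credential = Get-Credential
$credential | Export-Clixml -Path C:\scripts\XenCred.xml
```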

This will create a PSCredential object, and Export-CliXml will protect the password with DPAPI when it creates the XenCred.xml file. That file can only be decrypted with Import-CliXml running under the account it was created with. So when you need to access those credentials you run:
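Reading them back is a one-liner (same placeholder path):

```powershell
# Only works when run under the same account that exported the file
$credential = Import-Clixml -Path C:\scripts\XenCred.xml
```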


The updated script
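The original post embeds the full updated script. As a hedged sketch of its shape, reflecting the five changes above (the XenServer cmdlet and parameter names, like Invoke-XenVM and -Creds, are my assumptions):

```powershell
function Set-LabPowerState
{
    [cmdletbinding(SupportsShouldProcess)]
    Param(
        [Parameter(Mandatory)]
        [pscredential]$Credential,

        [Parameter(Mandatory)]
        [string]$XenServerUrl,

        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]]$VMname,

        [switch]$Shutdown
    )

    Begin
    {
        # Fail fast if the prerequisites are not met
        Import-Module -Name XenServerPSModule -ErrorAction Stop
        Connect-XenServer -Url $XenServerUrl -Creds $Credential -ErrorAction Stop
    }

    Process
    {
        foreach ($name in $VMname)
        {
            $action = if ($Shutdown) { 'Shutdown' } else { 'Startup' }

            if ($PSCmdlet.ShouldProcess($name, $action))
            {
                Write-Verbose -Message "$action of VM $name"
                # Assumed XenServerPSModule cmdlet for power actions
                if ($Shutdown)
                {
                    Invoke-XenVM -Name $name -XenAction Shutdown
                }
                else
                {
                    Invoke-XenVM -Name $name -XenAction Start
                }

                # Output object used later for the email notification
                [pscustomobject]@{
                    VMname = $name
                    Action = $action
                }
            }
        }
    }

    End { }
}
```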


The Shell Thing


(Screenshot of OctoBlu, image by Dave Brett)

Dave Brett uses the profiles.ps1 script to make functions available in OctoBlu. That is fine, however it makes it hard for people who don't know PowerShell to figure out where the function (Lab-Shutdown) comes from. I would suggest adding something like this in the script box:
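A sketch of what that script box could contain (server names, addresses and the VM list file are placeholders):

```powershell
# Make the function available in this session by dot-sourcing the script file
. 'C:\scripts\Set-LabPowerState.ps1'

# Splat the parameters that are the same for every VM
$setLabPower = @{
    Credential   = Import-Clixml -Path 'C:\scripts\XenCred.xml'
    XenServerUrl = 'https://xenserver.lab.local'
    Shutdown     = $true
}

# One VM name per line in the file
$vmNames = Get-Content -Path 'C:\scripts\labvms.txt'

$results = foreach ($vm in $vmNames)
{
    Set-LabPowerState @setLabPower -VMname $vm
}

# Use the output objects as the body of the notification email
Send-MailMessage -To 'admin@lab.local' -From 'octoblu@lab.local' `
    -Subject 'Lab power state' -SmtpServer 'smtp.lab.local' `
    -Body ($results | Out-String)
```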


This is just a suggestion, which in my opinion makes it easier to follow what is happening. Since the VMName parameter of Set-LabPowerState takes an array of strings, we could take the content of the file holding the names of the VMs and use that directly. I decided to use a foreach loop for readability reasons.

I probably need to say something about a technique called splatting in PowerShell. Have a look at this line:

Set-LabPowerState @setLabPower -VMname $vm

A few lines up, you can see I create a variable $SetLabPower which is a hashtable. The keys in the hashtable match the names of the parameters of the function Set-LabPowerState. This makes it easier to read when you call functions or cmdlets that have many parameters. We can then provide those key/value pairs to the function using a @ in front of the variable name.

The other thing to note is that I am using dot-sourcing to make the Set-LabPowerState function available in the Script Thing session. I am assuming that the content of my new function is saved in the c:\scripts\Set-LabPowerState.ps1 file.

Since my function outputs an object for each VM it processes, we can leverage that in the email notification and provide feedback on the VMs we have messed with. The output of the foreach loop is saved in the $results variable. We convert this object to a string representation with the Out-String cmdlet and use that string as the body of the email.

A note about ErrorAction

Since this script needs access to the XenServerPSModule module and you need to connect to a XenServer, I am using ErrorAction Stop on the Import-Module and Connect-XenServer statements. This will prevent the script from continuing if the prerequisites are not met. In addition, the user is presented with a nice message explaining what the issue is.

Benefits of the new script

  1. We have a function that does a single task, even if it can both start and shut down VMs.
  2. The function accepts parameters, so we can reuse it later.
  3. The function is discoverable by the PowerShell help engine, since we have added help to the function.
  4. The automation task in OctoBlu is easier to understand. Think of the next guy.
  5. We can execute the function without actually making changes, since it is an advanced function and we have implemented ShouldProcess (WhatIf).
  6. The function outputs an object which we can reuse in the email notification scenario.

So the only thing that is needed is someone to test my improved solution on an OctoBlu server. I have no idea if it works or if you think this is a better solution. I think it is.



Friday, October 14, 2016

Ignite 2016 summary – Innovate, optimize, manage and empower your business with IT


This year's Microsoft Ignite conference was all about transforming your business with technology. Here is a techy summary for business minds.

Going forward, IT-pros must prepare to answer tricky business questions and leverage new tools to meet business demands. I imagine questions like these:
  • What are the needs of our business?
  • How can we empower our users to apply the cloud to gain competitive advantages?
  • How can we innovate with greater agility and optimize our IT resources?
  • How can we migrate from the traditional model where IT is just a cost-center, to a lean/mean machine where IT is the engine that powers our business strategy with increased earnings?

A model of the traditional business case

We live in a traditional world with traditional problems. Simplified, a business consists of a few silos:
  • Internal users
  • Your customers
  • Your suppliers and partners
  • The remainder of the universe

All of these are connected directly and indirectly through processes, some of them manual and some maybe through automation. The job of the IT department is to deliver services, preferably in the most cost-effective way possible. Generally, if you change a process through a tool or automation (Powershell) and you save time/cost, you become the hero. Cost and time savings are always welcome, however the possible impact is far greater when IT is driving your revenue, like in the new model.

The new model for IT

In the new world, everything is about processes, data and applications; in other words, algorithms. Everything is moving and changing at a higher speed than we have ever experienced before. Silos probably still exist, however they are interconnected and data-aware. Your CRM application will have access to and understand other applications and their data structures. It will empower your employees and provide you with just-in-time insights. With the new Azure PowerApps and Flow applications, which implement the CDM (Common Data Model), you have this available today as a preview service. Throw Azure Functions into the picture, and you have a pretty robust and extendable model which is highly customizable and scalable.

In addition, Azure has implemented predictive analytics and machine learning (ML) in the different APIs, like Storage, Azure SQL, Hadoop etc. They are enabling ML for the masses by implementing it across their datacenters and in the Azure model. Your developer is not responsible for implementing intelligence in your application; he consumes predictive data from the Azure machine learning API, possibly through the integration with the Storage API. You do not consider IT a cost-center, but a business enabler that helps you increase revenue by applying analysis of big data through algorithms that are constantly updated to provide perfect information just in time. Theoretically possible, however immensely difficult to implement in practice if you are not in Azure.

What do you need?

:Speed and agility: If you have a clear understanding of your needs, your market and your competitors, why not move as fast and agilely as you can? If you can change faster than your competitors, you have an advantage and a head start. Let me illustrate with an example: you have probably heard about robot trading in the stock market? Those robots move very fast, because the first person/robot that receives and understands specific market information is the winning party and walks away with some profits. In our business case, it is the same thing. Rapid changes to your algorithms and IT systems, so they understand the business and receive correct information just in time, are essential to becoming the leader and increasing profits.

:Scale: Your IT system needs to be able to scale, up and down. You should not have to worry about it, as the cloud does this for you within the limitations you have defined. The cloud empowers businesses of all sizes to use scaling technology that was previously the privilege of large enterprises with expensive dedicated appliances. Committing to services and applications that handle scaling is key in the new world. Relying on old legacy applications and services will prevent you from becoming a new force in your market. Startups in your market will become your new IT-system performance benchmark, and they probably do not consider legacy systems a match for their agile needs.

:Knowledge – Close the gap: The adoption of cloud resources and the hybrid cloud is just the beginning of the disruptive change that is here. The hybrid cloud is just a steppingstone towards the connected cloud, with unlimited resources at your fingertips. That does not imply that private clouds will not exist. They just need to be connected to the public cloud and empower it by bringing some added value. Otherwise, if a private cloud is not connected, it will be a relic and an edge case for very special circumstances. In this scenario, knowledge will be important. New features and services are launched on an almost weekly basis. Products are migrating from private preview, to public preview and finally to general availability in a matter of months. If you do not take advantage, someone else will, perhaps your competitors.

:New People and Organization 2.0: Best case scenario, you need a huge amount of training and designing. If ordering a new web server or virtual machine takes longer than the time usually needed to create/deploy it automatically, trust me, you have to do something. Your organization is already changing; perhaps you just have not noticed it yet? Ever heard about Shadow IT, the evil from within? If it is not knocking on your door, it is because it is already inside. Shadow IT is a real problem that you need to take seriously. In the emerging world, people want things yesterday, like always. The problem is that if you do not deliver, someone else can, and asking for forgiveness beats asking for permission 9 out of 10 times, especially if it yielded a positive result. Rules, policies and guidelines are nice, however immediate results are king.

:DevOps is a “must”: The new world relies on DevOps. DevOps is a merge between a developer and an IT-pro, where you bring the knowledge of both parties together and apply that knowledge to your business and culture in a series of new processes. DevOps is not automation; however, automation is a key part of DevOps.

:Security: You do know that hackers target IT-pros, due to the fact that they normally have access to everything? The tools to handle this are available and have been for quite some time now. Microsoft Identity Manager comes with PAM (Privileged Access Management), which audits privileged access with time constraints. When your privileged access token expires, your access is revoked. The Powershell team has created a toolkit called Just Enough Administration (JEA), which is very similar to the Identity Manager solution. Both solutions should be designed with a “break the glass” option for the time when you really don't care about the security, but need to fix the issue. If you break the glass, all kinds of things happen, and you would probably expect to face some sort of hearing where you have to justify the action, which is a good thing.

With Windows Server 2016, a new Hyper-V feature was launched giving us Shielded VMs. With shielded VMs, the tenant of a shared resource owns the VM completely. The entity responsible for the platform it is running on has the ability to manage it to a certain degree (like start, stop and make a backup). The backup of a shielded VM is encrypted, if you were wondering.

Last but not least, security starts at the operating system level. In general, reducing the attack surface is regarded as a first line of defense. Windows Server 2016 Nano is the new operating system for the cloud and will change the way you work and handle datacenter workloads. Nano Server has a tiny footprint and a small attack surface, and it is blazingly fast, which makes it a perfect match for a fast-moving and agile business.

:Help – Private cloud or hybrid cloud: Even with a new organization and knowledge, it is highly likely that you will need some consultancy. According to Gartner, 95% of all attempts to create a private cloud fail or fail to yield the expected outcome. Building and implementing a private cloud is very hard, and you should be very confident in your organization's abilities before you embark on such a journey. Microsoft is the only public cloud provider that will provide you with a key-ready solution to run your hybrid cloud. If you have not heard about Microsoft AzureStack, you should probably read up on it. Basically it is Azure wrapped up in a hyper-converged ready solution for you to deploy in your datacenter, delivered by OEM vendors like Dell, Lenovo, HP et al. New features initiated in Azure will most likely migrate to AzureStack, ready for usage in your hybrid cloud.

AzureStack is targeted for release sometime mid 2017 or later that year. That is almost a year away. The good thing is that AzureStack is based upon Azure. It has the same underlying technology that powers Azure, like the portal and the Azure Resource Manager (ARM). Microsoft is delivering a consistent experience across the public and hybrid cloud with the ARM technology. To prepare yourself for AzureStack, you should invest time and effort into learning Azure, and that knowledge will empower you if you decide to implement AzureStack next year.

All in - or not

Do you need to go all in on the private cloud, or should you just integrate with the public cloud? It depends on your organization and your business needs. One thing is for certain: you probably have to do something. Implementing your own version of ready-to-consume public cloud features in your own private datacenter is not an option you should consider. It would require a tremendous effort, tie down your resources and, in effect, make you static. You need to apply DevOps and business strategy to your business and culture. There are some really smart people out there that can help you with that, and like everything else, it is an ongoing process that requires your constant attention.

The change is here. How will you empower your organization and become the new star? I am happy to discuss opportunities if you reach out by sending me an email.



Friday, September 30, 2016

Windows Server 2016 – DevOps tools and features

I needed to dedicate a full blog post to Windows Server 2016 and the way it will impact you going forward. At some point some of these features will apply to you too, as your infrastructure starts to run the new server bits. Here are the highlights from MSignite.

> Highlights

  • Installation
  • Development
  • Packaging and deployment
  • Configuration
  • Containers
  • Operation Validation and Pester Testing
  • Operating security

> Installation

Server 2016 comes in three flavors. You have the “Desktop Experience” server, intended for management of the other flavors of 2016 or as a terminal server. Next is Server Core, which is the same full server without the desktop; it is headless and intended to be managed from Powershell or from a server running the Desktop Experience. Then there is the new kid on the block, Nano Server. It is the new cloud OS, born in the cloud and the workhorse for everyone serious about creating modern, lean, super-fast and easy-to-manage applications.

Installation of the Desktop Experience and Server Core is just like the installations you are familiar with. For Nano Server you need to use a new GUI tool that guides you through the process of creating an image, or you can just use Powershell. The GUI tool is currently not in the evaluation version of Server 2016, however it will be available when the product reaches general availability in mid-October.


It is really small compared to the Core Server and not to mention the full Desktop Experience server. Here are some key metrics for you to think about:

How do you manage Nano Server and/or Server Core?

There are quite a few options for you. The Nano Server is headless and only has a very simplistic, text-based local GUI. To manage your server, you need to use one of the following:

  1. Install a remote management Gateway and use the Web-GUI in the Azure Portal
  2. Install a Desktop Experience 2016 server and use all your regular tools like:
  • MMC in general
  • Event Viewer MMC
  • Registry
  • Services MMC
  • Server Manager MMC
  • Powershell ISE (remote file editing)
  3. Powershell and Powershell Remoting
  4. Local text-based GUI (very rough, with few settings available)
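Option 3, Powershell Remoting, is typically just a couple of lines once you know the server's name or IP (the computer name below is hypothetical, and the TrustedHosts step is only needed in workgroup scenarios):

```powershell
# Trust the headless server for WSMan connections (workgroup scenario)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'nano01' -Force

# Open an interactive remote session on the Nano/Core server
Enter-PSSession -ComputerName 'nano01' -Credential (Get-Credential)
```

From inside the session you have the (reduced) Powershell Core cmdlet set of the target server at your disposal.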

You can still have System Center VMM and Operations Manager agents on your Nano Server. Those are packages you will have to install during image creation or add later with Powershell and PackageManagement.

The intended workloads for Nano Server are:

  • Clustering
  • Hyper-V
  • Storage – Scale-Out File Server (SoFS)
  • DNS server
  • IIS (.net Core and ASP.Net Core)
  • Containers, both Windows Containers and Hyper-V containers

Nano Server is a first-class Powershell citizen with support for Desired State Configuration and classes in Windows Management Framework 5. Nano Server runs Powershell Core, which is a subset of the full Powershell version you have on Server Core and Desktop Experience servers.

> Development

Nano Server has a full developer experience; Server Core does not. You have support for the Windows SDK, and Visual Studio 2015 can target Nano Server. You even have full remote debugging capabilities from Visual Studio.

> Packaging and Deployment

Nano Server does not support MSI installers. The reason is the custom actions that MSI allows. Instead Microsoft has created a new app installer built upon AppX, called the WSA (Windows Server App) installer. WSA extends the AppX schema and becomes a declarative server installer. You still have support for server-specific extensions in WSA, like NT services, perf counters, WMI providers and ETW events. Of course, WSA does not support custom actions!

Package management architecture:

This might look a little complex, however it is quite simple. You have some core PackageManagement cmdlets which rely upon package management providers, which in turn are responsible for sourcing packages from package sources. This is really great, because as an end user you use the same cmdlets against all package sources (NuGet, Powershell Gallery, Chocolatey etc.). The middleware is handled by the package management providers. So the end user just needs these cmdlets to work with packages:

So to install a new package provider, you just use the PackageManagement module:

Here are some of the Providers you can install. Notice that you have a separate Provider for Nano server which you can use to install the VMM/SCOM agent:
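A typical workflow with the PackageManagement cmdlets might look like this (the provider and package names are just examples):

```powershell
# List the package providers currently available on the system
Get-PackageProvider

# Install an additional provider, e.g. the NuGet provider
Install-PackageProvider -Name NuGet -Force

# Find a package in a registered source and install it
Find-Package -Name Pester -Source PSGallery
Install-Package -Name Pester -Source PSGallery -Force
```

The point is that the same four or five cmdlets work regardless of which source the package lives in; only the provider behind the scenes changes.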

> Configuration

Since Nano Server is small and has a cloud-friendly footprint, you will most likely have a lot of them running. To configure them, make sure the configuration does not drift, and make it easy to update their configuration, you use something called Desired State Configuration (DSC). This was introduced in WMF 4 and is a declarative way of specifying the configuration of a server or a collection of servers.

There are tools out there you can use to manage your configuration at scale. Look up Chef or Puppet, or even Azure Automation, for how to do that. This is a big concept and would require a separate blog post to get into the details. Please also contact me if you have any further questions about DSC.
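To give you a feel for the declarative style, here is a minimal sketch of a DSC configuration (the node name and output path are made up):

```powershell
Configuration WebServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'Server01' {
        # Declare that the IIS role must be installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        # Declare that the web service must be running, after IIS exists
        Service W3svc {
            Name      = 'W3SVC'
            State     = 'Running'
            DependsOn = '[WindowsFeature]IIS'
        }
    }
}

# Compile the configuration to a MOF file and apply it
WebServer -OutputPath 'C:\DSC'
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose
```

Notice that you state *what* the server should look like, not *how* to get there; the Local Configuration Manager on each node figures that part out and keeps enforcing it.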

> Containers

This is also a big topic and something that has been around for ages in the Linux part of the world. Basically it is just another form of virtualization of the operating system into a single package that is small and runs super-fast. If you have ever heard about Docker, you have heard about containers. Docker is containers. Docker is supported in the new Windows Server 2016, hence you can run Docker containers on it.

One of the core concepts of containers is that you may have many of them running on the same host at the same time. This is possible because the containers share the same kernel/operating system.

In Windows we will have two flavors of containers:
  • Windows Containers
  • Hyper-V Containers

With Hyper-V containers we get isolation along with performance, since the containers do not share the host kernel but have their own copy of it. This is important for multi-tenant scenarios and for regulatory requirements. Auditors usually do not like systems that have “shared kernel” in the description, someone told me.
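With the Docker engine on Server 2016, switching between the two flavors is just a flag on the command line (the image names below are the 2016-era base images and may have changed since):

```powershell
# Run a regular Windows container sharing the host kernel
docker run -it microsoft/windowsservercore cmd

# Run the same kind of workload as a Hyper-V container,
# isolated with its own kernel copy
docker run -it --isolation=hyperv microsoft/nanoserver cmd
```

Everything else about building and shipping the image stays the same; isolation is a deployment-time decision, not a build-time one.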

> Operation Validation Testing

This is one of my babies and the coolest thing about how we embrace the future. Microsoft has created a small framework on top of the Pester unit testing framework/module shipped with Windows 10 and Windows Server 2016. The concept is very simple and very powerful: create unit tests that verify your infrastructure. These tests can be very simple or extremely detailed. You will have to figure out what you are comfortable with and what suits your environment.

The Pester module enables us to write declarative statements and execute them as tests to verify that the infrastructure is operating according to our needs.
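A minimal operation validation test, using the Pester v3 syntax shipped with Server 2016, could look like this (the service and computer names are just examples):

```powershell
Describe 'Domain controller operation' {

    It 'runs the Active Directory Web Services service' {
        (Get-Service -Name ADWS).Status | Should Be 'Running'
    }

    It 'responds to ping on the domain controller' {
        Test-Connection -ComputerName 'dc01' -Count 1 -Quiet | Should Be $true
    }
}
```

Save it as e.g. `Domain.Tests.ps1` and run it with `Invoke-Pester`; a failing assertion immediately tells you which part of the infrastructure is not behaving.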

When you invoke the test, you will see something like this:

This is something I have been working with for the last 2 years, and I can tell you that it has saved my bacon quite a few times. It has also enabled me to notify my clients about issues with their infrastructure which they were not aware of until I told them. This could be something as simple as a SQL service account that has an expired password or that has been locked out somehow.

I have created a GitHub repo which contains Pester/Operation Validation tests for Identity Manager, VMM and Active Directory, among other things. This is a community repo which accepts pull requests from people who have created tests for other applications and services. Please contact me if you need further information or want to discuss this in detail.

> Operating Security

Just after Snowden shared his knowledge with the world, Microsoft launched a new concept called JEA – Just Enough Administration. It is a new Powershell framework that secures administrative privileges by issuing them in a constrained way and for a limited amount of time.
You can find more information about JEA here:
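As a hedged sketch of what a JEA setup involves (the role, group and file names below are hypothetical):

```powershell
# Role capability file: expose only Restart-Service, and only its -Name parameter
New-PSRoleCapabilityFile -Path '.\ServiceOperator.psrc' `
    -VisibleCmdlets @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name' } }

# Session configuration: a restricted endpoint running under a
# temporary virtual account, mapping an AD group to the role above
New-PSSessionConfigurationFile -Path '.\JEA.pssc' `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -RoleDefinitions @{
        'CONTOSO\ServiceOperators' = @{ RoleCapabilities = 'ServiceOperator' }
    }

# Register the endpoint so operators can connect to it with Enter-PSSession
Register-PSSessionConfiguration -Name 'JEA.ServiceOperator' -Path '.\JEA.pssc'
```

The operator connecting to that endpoint sees only the whitelisted cmdlet, and the elevated virtual account exists only for the lifetime of the session.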

> Other things

There are a couple of things you should be aware of. First off, if you plan to use containers in your infrastructure, you must run them on Server Core or Nano Server; containers are not supported on servers installed with the Desktop Experience. This implies that you should probably consider installing your Hyper-V servers with the Nano Server or Server Core option. Also, all the new cool features like SoFS and Storage Replica with Storage Spaces Direct require the Datacenter licensing option.



Wednesday, September 28, 2016

Microsoft Ignite 2016 – Announcements and features


I have now spent 3 days at Ignite and, according to my iPhone, walked a total distance of about 22 km hustling between sessions and the Expo area. These are some of my thoughts about what might affect you over the next year.


  • Windows server 2016
  • Azure Monitoring
  • Azure Functions
  • Azure Networking

Oh yeah, and System Center 2016 was launched. Why is it not on my list? Well, to be perfectly honest, the feature list is almost identical to the latest update rollup for 2012 R2. More on that later.

Windows Server 2016 GA

This release of Windows Server is the chosen one that is going to power the Azure infrastructure and tenant workloads on Azure, and on AzureStack when it is released next year. From the Hyper-V perspective, things have changed quite a lot. You will have to forget all the old best practices for how you set up Hyper-V and storage. 2016 is all about the Scale-Out File Server (ReFS/NTFS) and Storage Spaces Direct.

You also want to check the OEM hardware list to make sure your servers are listed there. Pay extra attention to your physical NICs; they have to support RDMA. Your switches, or at least your storage switches, need to support DCB (Data Center Bridging).

Windows Server 2016 is all about software-defined everything. Given the switch and NIC hardware requirements listed above, they should add that you also need special hardware to enable the software-defined everything. That is just a personal opinion, and I do not expect anyone to change it.

For us mere mortals, not running at hyper-scale, you will be happy to know that the smallest cluster supported on 2016 is 2 nodes when you use Storage Spaces Direct! Yay!

Azure Monitoring

In my head, I have been waiting for this. Basically it is a System Center Operations Manager light for your Azure resources running in the cloud, with a super-responsive and beautiful console. In addition you can consume logs (log files/event logs) and search them from the console in Azure. Great for troubleshooting when you need to correlate different logs and servers.

These are some of the items you can “monitor”:
  • Activity logs
  • Metrics
  • Diagnostics logs
Azure Monitor is even connected to Azure Log Analytics. I am not being fair when I call this SCOM light. It has far more reach, and the correlation is out of this world compared to SCOM.


Example of a dashboard:


You pin items/graphs/tables to your dashboard. That dashboard can be cloned and shared with other users in your Azure subscription. When you add a new pinned item, the users you have shared the dashboard with will get a notification and may choose to add the item to their own dashboards.

Alerts – Trigger on events

Azure Monitoring can be configured to produce alerts on certain events. The following channels are supported out of the “box”:
  • SMS
  • Email
  • Webhooks (http callback)
That might not look very impressive, however the key element here is webhooks. They will enable you to integrate Azure Monitor with almost any solution that has webhook integration. If your Azure Monitor alert target does not support webhooks, Azure has you covered there also, using the Azure Functions service. With Functions you can create that webhook/callback target and integrate your custom application as a consumer of Azure Monitor alerts.

Operation Management Suite (OMS)

If you currently use OMS, you have access to all the information in OMS from your Azure Monitor dashboard. This enables you to do queries against the data collected from your agents whether they are running in your datacenter or in Azure.

Azure Monitoring is currently in Private Preview. If you are interested in trying it out, contact me and I will help you. I expect this will reach public preview within 2 months.  

Azure Functions

Have you ever heard about serverless compute? That is Azure Functions. Before you get too confused: your code is still executed on a server, it is just managed by Microsoft rather than by you. The serverless expression comes from the fact that you decouple your business logic/code from the concept of the virtual machine that hosts it.

Your code is executed in the cloud. You design the function/code to be very specialized and generic at the same time. That sounds a bit confusing, however it makes perfect sense when you look into it.

Azure Functions is a great feature that enables you to process data, integrate with systems outside the cloud and with Internet of Things (IoT) devices, and build your own APIs/microservices.

The Azure Functions console is loaded with ready-to-use templates, and more are added each day by Microsoft and by the community, provided they pass the unit tests created by the Azure Functions team.

Azure Networking

Quite a few new features were added at Ignite:
  • IPv6 support
  • Azure DNS
  • Accelerated networking
  • Web Application Firewall
  • Virtual network peering
Microsoft has silently upgraded their Azure datacenters with FPGAs, which enable up to 25 Gbps networking performance. The keynote showed that a 1000-node cluster on Azure could translate all the articles on Wikipedia in less than 0.1 seconds using this technology. Expensive, yes, however if you only need the performance for a short period of time it is not that expensive, even if I have not done the math.