Thursday, August 18, 2016

Creating Menus in Powershell

I have created another Powershell module. This time it is about console menus you can use to ease things for members of your organization. It is available on GitHub and published to the PowershellGallery. It is called cliMenu.


This is a Controller module. It uses Write-Host to create a menu in the console. Some of you may recall that using Write-Host is bad practice. Controller scripts and modules are the exception to this rule. In addition, with WMF5, Write-Host writes to the Information stream in Powershell, so it really does not matter anymore.

Design goal

I have seen too many crappy menus that are a mixture of controller script and business logic. It is in essence a wild west out there, hence my ultimate goal is to create something that makes it as easy as possible to create a menu and change the way it looks.
  1. Make it easy to build Menus and change them
  2. Make it as "declarative" as possible


The module supports multiple Menus, however only one Main-Menu with as many Sub-Menus as you like. Each menu has a collection of Menu-Items that the user can choose from.

Example menu:


Menu options

Currently you can control the following aspects of the menu (they are shared across all menus unless you change them before showing a sub-menu):

  • Choose the char that creates the outer frame of the menu
  • Change the color of the frame
  • Change the color and DisplayName for the Menus
  • Change the color and Heading for the Menus
  • Change the color and Sub-Heading for the Menus
  • Change the color and DisplayName for the Menu-Items
  • Change the color and footer text for the menus
  • Change the Width of the Menu
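As a sketch, setting some of the options above could look like this. The cmdlet and parameter names below are assumptions on my part; check Get-Command -Module cliMenu and the cmdlet help for the exact names:

```powershell
# Hedged sketch - parameter names are assumptions, verify with Get-Help Set-MenuOption
Set-MenuOption -MenuFillChar "#" -MenuFillColor DarkYellow
Set-MenuOption -Heading "Contoso Helpdesk" -HeadingColor Cyan
Set-MenuOption -SubHeading "Common tasks" -SubHeadingColor Gray
Set-MenuOption -FooterText "Press Q to quit" -MaxWidth 80
```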


Menu-Items are the elements your users can invoke in your Menu. In addition to a Name and a DisplayName, they have a ScriptBlock and a DisableConfirm switch parameter. Unless you specify DisableConfirm, the user is asked to confirm the action before it is invoked, so you can selectively require confirmation per item.
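Defining a Menu-Item might look along these lines. This is a sketch; the exact cmdlet and parameter names (New-MenuItem, -Action) are assumptions and may differ in the module:

```powershell
# Sketch - New-MenuItem and its parameter names are assumed, check the module help
$restartSpooler = New-MenuItem -Name "RestartSpooler" `
                               -DisplayName "Restart the print spooler" `
                               -Action { Restart-Service -Name Spooler -Verbose }

# Skip the confirmation prompt for a harmless read-only item
$showUptime = New-MenuItem -Name "Uptime" `
                           -DisplayName "Show last boot time" `
                           -Action { Get-CimInstance Win32_OperatingSystem | Select-Object LastBootUpTime } `
                           -DisableConfirm
```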

Validation and Return value

Neither of those is a goal of this module. As a tool-builder, you are responsible for validating user input when the ScriptBlock associated with a Menu-Item is invoked.

Any output from the ScriptBlock will be written to the console. As you may know, a ScriptBlock may be a small script or a call to a cmdlet with parameters. I would suggest that you stick to calling custom or built-in cmdlets, and design them using the best practice guides from Microsoft in regards to mandatory parameters etc.


This is the core cmdlet responsible for building the Menu and displaying it to the user. Executed without parameters, it will display the Main-Menu (remember, you can only have one Main-Menu). You may also use it to display Sub-Menus by specifying the MenuId parameter, which is the index of the menu.

Further, you may invoke a specific Menu-Item in a specific Menu by supplying the InvokeItem and MenuId parameters. If the Menu-Item is defined to confirm with the user before invocation, it will prompt for confirmation before execution. You can override this with the Force parameter to execute it directly.
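In practice that could look like this (the parameter names come straight from the description above; the cmdlet name is the one the module exposes for showing menus):

```powershell
# Display the Main-Menu
Show-Menu

# Display a Sub-Menu by its index
Show-Menu -MenuId 1

# Invoke Menu-Item 2 in menu 1 directly, skipping any confirmation prompt
Show-Menu -MenuId 1 -InvokeItem 2 -Force
```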


A menu which uses the Show-Command cmdlet (complete script in example.ps1):


An example with a Main-Menu and Sub-Menu:



Big thank you to Fausto Nascimento for invaluable input and suggestions!

That is it. If you have any questions or issues, look me up on twitter (@toreGroneng) or file an issue on GitHub.



Sunday, May 8, 2016

Identity Manager and Powershell

It is the year 2016, and Identity Manager looks like it did in 2010 when Forefront Identity Manager (FIM) was released. Who came up with the name and added it to the Forefront portfolio anyway? Crazy stuff. As you probably know, looks can be deceiving. Even if MIM looks the same and runs basically the same features, it is still a powerful state machine. I have been busy collecting scripts and tools I have used over the last couple of years and have started on two Powershell modules for MIM. This post is just a brief introduction to the modules and the plans for the future.

Why create a module

Wait a minute. Does Identity Manager not come with a Powershell module? No, it comes with a Powershell snap-in from 2010. Back in those days, snap-ins were the cool kids on the block and everybody created snap-ins for things that should have been created as modules. I blame Powershell version 1.0; they fixed that in Powershell version 2.0, I think. I use the snap-in as a nested module and have created a Powershell manifest for it. That way you can choose to load the snap-in as a module if you like (look in the FIMmodule folder for the manifest).

The snap-in that comes with Identity Manager is very generic/crude and allows you to do almost anything you can do in the Identity Manager portal. You just need to remember the syntax and the XPath queries you need to run. Doable, nevertheless quite hard to remember and prone to producing errors. Hence the effort on my side to create a module that is easy to use and a lovely experience.

I also have a side project where I focus on Operation Validation in FIM/MIM using Pester, the unit test framework for Powershell, together with the Operation Validation framework from Microsoft. You can have a look at the unit test in this link “Operation Validation”. The point of this is a test you can run to validate your Identity Manager infrastructure and make sure that all the bells and whistles are working as they should. A nice way to detect if your infrastructure peers have installed a new domain controller you should install PCNS on!

Introducing the Identity Manager Powershell module

It is still a work in progress, and I am focusing on the Get cmdlets for all the different object types in FIM/MIM. Currently I have added the following cmdlets:

  • Get-IMObject: A generic cmdlet used by all of the Get cmdlets; responsible for running the core cmdlets in the Identity Manager snap-in
  • Get-IMObjectMember: Lists the members of a group/set; can list ComputedMembers or ExplicitMembers
  • Get-IMPerson: Gets person information
  • Get-IMPersonMembership: Shows a person's membership in groups/sets
  • Get-IMSecurityGroup: Shows information about security groups in Identity Manager
  • Get-IMSet: Shows information about Sets in Identity Manager
  • Get-IMSetUsage: Shows all related usage of a Set in Identity Manager
  • Get-IMXPathQuery: Creates simple XPath queries from a hashtable
  • Out-IMAttribute: Casts a ResourceManagementObject to a PSCustomObject; used by the Get-IMObject cmdlet
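As an illustration of Get-IMXPathQuery, building a query from a hashtable could look roughly like this. The parameter names are my guesses (see the module help for the real ones); the expected result follows the general FIM/MIM XPath form:

```powershell
# Sketch - parameter names are assumptions, check Get-Help Get-IMXPathQuery
$filter = Get-IMXPathQuery -ObjectType Person -Attributes @{
    AccountName = 'tore'
    Domain      = 'CONTOSO'
}

# Expected shape of the result, something like:
# /Person[AccountName = 'tore' and Domain = 'CONTOSO']
```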

It is currently not on the PowershellGallery; however, it will be in May 2016. The module requires Powershell version 4.0 (Windows Management Framework 4) or later. It may work with Powershell version 3.0, however I have not tested it with that version. It works with either Forefront Identity Manager 2010 R2 or Microsoft Identity Manager 2016.

If you want to browse the code and have a look, you can visit the GitHub repo on this link.

Introducing the Identity Manager Synchronization Powershell module

But wait, there is more :-) This month I will also publish a new Powershell module for the Synchronization engine in Identity Manager. Normally this would be scripted with VBScript, per Microsoft's examples. Nothing wrong with that, and it works. I, on the other hand, would like to use Powershell. Thankfully Microsoft has included a WMI/CIM namespace and classes for Identity Manager that we can leverage. My Identity Manager Synchronization module (IdentityManagerSynch) will support the following cmdlets:
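Under the hood this builds on the MicrosoftIdentityIntegrationServer WMI namespace. A raw CIM call to run an agent profile looks roughly like this (the Execute argument name is from memory; verify it against the class documentation before relying on it):

```powershell
# List Management Agents straight from the WMI/CIM provider
$agents = Get-CimInstance -Namespace root/MicrosoftIdentityIntegrationServer `
                          -ClassName MIIS_ManagementAgent

# Run a RunProfile on a specific agent (the profile must exist on the agent)
$agents |
    Where-Object Name -eq 'AD-MA' |
    Invoke-CimMethod -MethodName Execute -Arguments @{ RunProfileName = 'Full Import' }
```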

  • Get-IMManagementAgent: Lists Management Agents or agent details
  • Get-IMAgentRunProfile: Lists the RunProfiles associated with an agent
  • Get-IMAgentStatus: Lists the last known status of an agent
  • Invoke-IMAgentRunProfile: Executes a RunProfile for an agent
  • Invoke-IMManagementAgentMethod: Invokes a CIM method on the agent

The cmdlets implement dynamic parameters for the agent and run profile, thus preventing you from trying to start a run profile that is not implemented on the agent.

I may or may not include a cmdlet that enables you to search for Metaverse objects. The synchronization client has a nice GUI that solves most issues and lets you poke around, but from time to time I find myself wishing for a way to extract information from the Metaverse that is not possible in the GUI.



Sunday, April 24, 2016

Pester Operational Tests


I have created yet another GitHub repo, "PesterOperationTest". Feels like I do this every week; however, that is not the case.

But why?

The purpose of this repository is to collect Pester unit tests for your infrastructure. Anything that is mission-critical for your business deserves a test that validates the correct configuration. It will make you sleep like a baby and stop worrying when you implement a change in your environment, provided your tests pass.

This will only become as good as the contributions you make to the repository. I would love to do all the work myself, however there is not enough time in the world. Think of it as helping yourself while a lot of other people benefit from the tests you create.


My original thought was to organize this in folders, one for each vendor, with subfolders for each product and/or feature. The scripts will be published to the Powershell Gallery for easy and convenient access for everyone. The tests should not be specific to your environment, but as generic as possible.

My first contribution

I have been working for some time now with Forefront Identity Manager. The last 6 months have changed a lot in terms of how I work and what tools I use. Pester – a unit test framework included in Windows 10 (and in Windows Server 2016) – has become one of my key assets. Heck, it has made me better at my work and increased the value I add for my customers. Thank you, Pester!

I have published a first version of the Microsoft Identity Manager tests to the repo. They validate important stuff like:

  • files/folders that need to be present
  • services that should be running and configured the proper way
  • scheduled tasks that should/could be enabled and configured
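A minimal sketch of such a test could look like this (Pester v3 syntax, as used at the time; the service name and path are illustrative examples that a generic test would parameterize):

```powershell
# Sketch of an operational Pester test - names and paths are illustrative
Describe 'Identity Manager operational state' {

    It 'The FIMService service is running' {
        (Get-Service -Name FIMService).Status | Should Be 'Running'
    }

    It 'The service install folder is present' {
        Test-Path 'C:\Program Files\Microsoft Forefront Identity Manager' | Should Be $true
    }
}
```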

I will be adding more stuff later. Pull requests are welcome!



Sunday, January 10, 2016

Desired State Configuration - Consistency

Happy new year! 

It has been ages since I blogged about DSC, and I found a little topic that others might also be wondering about.

I reached out to Chris Hunt (@LogicalDiagram) on twitter. If I understand him correctly, he was wondering about capturing the verbose stream during a system-invoked (Consistency scheduled task) check. DSC has a scheduled task called Consistency (the full Task Scheduler path is \Microsoft\Windows\Desired State Configuration) which launches every 30 minutes. This task does the equivalent of running the Start-DscConfiguration cmdlet and makes sure that the configuration does not drift from the desired state. I did this a long time ago; however, I had forgotten how.

The scheduled task


As you probably already guessed, this is just a task that executes a powershell command using the Invoke-CimMethod cmdlet with some parameters. The task starts a hidden powershell window and executes the following command:


I have copied the command and applied it to a splatting variable. That makes it much easier to read:

(GIST - Consistency.ps1)
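Since the gist is linked rather than inlined here, a reconstruction of the splatted command looks like this (this matches the default Consistency task definition I have seen on WMF5 machines):

```powershell
# The Consistency task command, rewritten with a splatting variable
$params = @{
    Namespace  = 'root/Microsoft/Windows/DesiredStateConfiguration'
    ClassName  = 'MSFT_DSCLocalConfigurationManager'
    MethodName = 'PerformRequiredConfigurationChecks'
    Arguments  = @{ Flags = [System.UInt32]1 }
    Verbose    = $true
}
Invoke-CimMethod @params
```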

My original thought was to use a Write-Verbose “override” by defining a function called Write-Verbose and capturing the verbose output from that. That is possible because Powershell has an internal command resolver that tries to find the command using this priority list (see help about_Command_Precedence):
  1. Alias
  2. Function
  3. Cmdlet
  4. Native Windows Command

If you create a function with a name identical to a cmdlet's, your function will be executed instead of the real cmdlet. This is also how proxy functions work.
Sadly, I must say (no, I am kidding), the developers of Invoke-CimMethod used fully qualified calls to Write-Verbose, so that was a no-go.

Redirect streams

June Blender (@juneb_get_help) has written a nice article about redirecting streams on the ScriptingGuys blog (Understanding Streams, Redirection, and Write-Host in PowerShell). Read up on it; it may come in useful one day, like this moment, because we are going to redirect the verbose stream and send it to a file.

Changing the Consistency Scheduled task

We are going to change the action of the task. I prefer to have a powershell file that is launched by the task scheduler instead of a command parameter. Change the action to something like this (you may of course change the path and filename):


The powershell file should have something like this:

(GIST - ConsistencyFULL.ps1)

I have added an $outputFile variable that points to where the verbose stream will be written. In the foreach loop I write to the file each time a new item arrives on the verbose stream/output. This way you can follow along with the DSC engine as it progresses. As an alternative, you could drop the pipe to the foreach loop, assign the output from Invoke-CimMethod to a variable, and write that to the output file.
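A sketch of what the ConsistencyFULL.ps1 script described above could contain (reconstructed from the description, since the gist itself is linked rather than inlined):

```powershell
# Sketch of ConsistencyFULL.ps1 - capture the verbose stream to a file
$outputFile = 'c:\temp\ConsistencyverboseStream.txt'

$params = @{
    Namespace  = 'root/Microsoft/Windows/DesiredStateConfiguration'
    ClassName  = 'MSFT_DSCLocalConfigurationManager'
    MethodName = 'PerformRequiredConfigurationChecks'
    Arguments  = @{ Flags = [System.UInt32]1 }
    Verbose    = $true
}

# 4>&1 merges the verbose stream into the output stream so it can be piped
Invoke-CimMethod @params 4>&1 | ForEach-Object {
    $_ | Out-File -FilePath $outputFile -Append
}
```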

So how do you follow along with the verbose stream? You use the Get-Content cmdlet with the Wait parameter, like so:

Get-Content -Path "c:\temp\ConsistencyverboseStream.txt" -Wait

Of course, the file has to exist before you run the Get-Content command. The Out-File cmdlet in the Consistency.ps1 script will create the ConsistencyverboseStream.txt file if it does not exist, however you may create it first and run Get-Content with the Wait flag to prepare yourself before you launch the Consistency scheduled task.

That is all folks, cheers

Monday, October 26, 2015

Powershell Gallery – For your pleasure

Sad to say, I received an account for the Powershell Gallery, to be able to submit modules to the new PowershellGet (formerly known as OneGet) ecosystem, a looong time ago. Today was the first time I published a module to the repository. The work involved was, as I expected, very light. The process went smoothly and my module was uploaded within 10 seconds.

If you have not looked into it yet, I highly recommend you start digging in. The statistics show that there has been a dramatic increase in downloads over the last couple of months. I expect this to increase as time goes by, helped by the fact that there is a package management preview available for people with Powershell version 3.0 or 4.0 running on Windows Server 2008 R2, Windows 7, Windows 8, Windows Server 2012 and 2012 R2 (download link). Please note that the preview depends on .Net version 4.5.

First module

So what did I upload as my first module? Well, after being on Windows 10 for a couple of months (no, I did not install it as soon as it was available), I got tired of using the GUI to change the power plan my laptop was on. I have a rather large Lenovo laptop with 2 batteries, which people make a point of noticing.

Hence I created a small module that has two simple advanced functions:

  • Get-PowerPlan
  • Set-PowerPlan

You might have guessed the name of the module: PowerPlan. The code is up on GitHub and, as I have mentioned, the module is published to the PowershellGallery. Just search for PowerPlan, or go to a powershell window (launched as administrator) and type one of the following:
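Presumably the commands in question are the standard PowershellGet ones:

```powershell
# Search the gallery for the module
Find-Module -Name PowerPlan

# Install it (hence the elevated window, for the default machine-wide scope)
Install-Module -Name PowerPlan
```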


The functions use Get-CimInstance with the Win32_PowerPlan class and invoke its Activate method with the Invoke-CimMethod cmdlet.
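A rough sketch of what that amounts to under the hood (the Win32_PowerPlan class lives in the root\cimv2\power namespace; the plan name shown is just an example):

```powershell
# Find the plan by its display name and activate it
$plan = Get-CimInstance -Namespace root\cimv2\power -ClassName Win32_PowerPlan |
    Where-Object ElementName -eq 'High performance'

$plan | Invoke-CimMethod -MethodName Activate
```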

Please report any issues on GitHub, or let me know if you have any enhancements you would like to see included. Pull requests are welcome!



Wednesday, October 14, 2015

Use powershell to validate Email address

A time comes along when you need to validate an email address using Powershell. I created a function just for this purpose, as an introduction to how you may create advanced functions in powershell.

This function will “validate” an email address using the System.Net.Mail namespace and output a Boolean value indicating whether a string is a valid address. It uses a structure and style that is my personal preference. Feel free to comment if you like.

Basic learning points

  • It is an advanced function
  • It supports the pipeline
  • It displays how to create an instance of a .Net class
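Since the code itself is not inlined in this post, here is a sketch of such a function built from the learning points above. The function name and layout are my own; the original may differ:

```powershell
function Test-EmailAddress {
    # Sketch - an advanced function that "validates" an address via System.Net.Mail
    [CmdletBinding()]
    [OutputType([bool])]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]
        $EmailAddress
    )

    process {
        try {
            # Constructing a MailAddress throws if the string is not a valid address
            $null = New-Object -TypeName System.Net.Mail.MailAddress -ArgumentList $EmailAddress
            $true
        }
        catch {
            $false
        }
    }
}

# Pipeline support in action
'user@example.com', 'not-an-address' | Test-EmailAddress
```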



Wednesday, October 7, 2015

What is around the corner for IT-Pros – Part 2

This is part 2 of a multi post around topics discussed in Part1. Next up is Hybrid Cloud and how it will impact us and why it is the way of the future.

Hybrid cloud

Microsoft changed their focus a couple of years ago, from being the primary solution provider for enterprise software running in datacenters to a mobile-first/cloud-first strategy. Their bet was that the cloud would become more important and that economic growth would be created in the emerging cloud market. Needless to say, it looks like their bet is paying off tenfold.


Over the last year there have been a significant number of announcements that connect Azure to your datacenter, thus enabling enterprises to utilize cloud resources with On-Prem datasources or local hybrid runbook workers. It should not come as any surprise that this trend is accelerating in preparation for the next big shift in hybrid technology, which is expected to be released with Windows Server 2016: Microsoft Azure Stack. Before we go down that rabbit hole, a short history of Microsoft's hybrid cloud efforts.

Azure Pack – The ugly

In my honest opinion, this was a disaster. It was Microsoft's first attempt at building a hybrid cloud, using workarounds (Service Provider Foundation, anyone?) to enable a multi-tenant-aware solution. It relied heavily on System Center technology and was a beast to configure, set up and troubleshoot. Although there was/is integration with Azure, it relies on the old API, Azure Service Manager, and it is not a consistent experience. Sometimes the early adopters pay the ultimate price, and this was one of those times. Currently no upgrade path from Azure Pack to the new Azure Stack solution has been announced, and I doubt there ever will be one.

That being said, it works and provides added value for the enterprise. On the downside, it does not scale like Azure and requires expert knowledge to manage. My advice, if you are considering a private or hybrid cloud: wait until Windows Server 2016 is released and have a look at Azure Stack instead.

CPS (Cloud Platform System) – The bad

This is Microsoft's first all-in-a-box cloud solution, powered by Dell hardware. The entire system runs and scales very nicely. When you want more capacity, you buy a new all-in-one box and hook it up to the first one. It was built upon the first attempt at creating a private cloud in a “box”, running Windows Azure Pack. The CPS initial configuration is done by a massive single powershell script, and it was planned and released before the new Azure Resource Manager (ARM) technology hit the ground running.

Why is it bad? Well, because in its current release it is powered by Azure Pack, and it fits in nicely with the Clint Eastwood analogy I lined up. I would be very surprised if it is not bundled with Azure Stack when that is released later this year or early next year. Time will tell.

Just in case you were wondering: the price tag for this solution, with hardware, software and software assurance, would run you something in the region of $2.5 million. That is for the first box. You may get a discount if you buy several boxes at the same time.

MAS (Microsoft Azure Stack) – The good

Fast forward to Microsoft Ignite 2015, where MAS was announced. It is currently in limited preview (the same as for the Windows Server 2016 preview program) and is expected to be released to the market when Windows Server 2016 reaches RTM.

MAS is the software-defined datacenter you can install in your own datacenter to create your own private cloud. It is identical to Azure, behaves like Azure in every respect, and it runs in your datacenter, giving you a consistent experience across boundaries. Think about that for a minute and reflect on how this will change your world going forward.

A true Hybrid Cloud will manage and scale your resources using technology built and enabled by the cloud. Resource templates (JSON ARM templates) you create to build services in MAS, can with the flip of a switch be deployed to Azure instead and the other way around.

MAS – Overview

This is an image I borrowed from a presentation held by Jeffrey Snover during the Powershell Summit held in Stockholm this year (full video here). MAS does not rely on any System Center components and is built to be a true multi-tenant solution. There will be providers that support the different System Center products, which is probably a good idea.

The MAS structure is strikingly similar to something we all know very well. It contains the conceptual building blocks of an operating system or a server if you like.

MAS - Hardware and Abstraction layer

The hardware layer explains itself. It is the common components a server is built from, like CPU, storage and network. Above this we have the abstraction layer, which consists of Plug-and-Play and a driver stack. This layer is there to assist you when you “plug in” new hardware in your datacenter, add more storage, and so on. This is also the layer the MAS kernel communicates with.

Big progress has been made on creating a Datacenter Abstraction Layer (DAL, the datacenter analogue of the Hardware Abstraction Layer (HAL) on Windows) that conforms to standards that hardware vendors implement. These are:

  • System Management Architecture for Server Hardware (SMASH)
  • Common Information Model (CIM, or WMI on earlier versions of Windows)
  • Storage Management Initiative (SMI-S)


The main goal of DAL is to create unified management of hardware resources. Microsoft has created an open source implementation of this standard called Open Management Infrastructure (OMI). OMI has been adopted and implemented by Cisco, Arista, HP, Huawei, IBM and various Linux distros. This is why you can run Linux in Azure and why MAS can talk to and configure hardware resources like network, storage and other devices for you.

For server and rack management there will be something called Redfish, which implements an OData endpoint that supports paging and server-side filtering and has request header support. There will be Powershell cmdlets you can use to interact with Redfish; however, at this time it is uncertain whether they will be ready by Windows Server 2016 RTM.

MAS - Initial System Load

The initial setup of MAS is entirely done and enforced by Desired State Configuration (DSC), not plain Powershell like you might expect. This has a number of implied consequences you might want to reflect on:

  1. If DSC is used in MAS, is Azure also under the hood using “DSC”?
  2. If DSC is used in MAS, would it be fair to say that Microsoft has made a deep commitment into DSC?

The answer to no 1 is; "I do not know, yet". For no 2, it is a big fat YES.

The Azure Resource Manager (ARM) in Azure and MAS bears a striking resemblance to Desired State Configuration:

  • They are both idempotent
  • Both use resource or resource providers
  • They both run in parallel 
  • They are both declarative
  • ARM uses JSON and DSC uses MOF/textfiles
  • A DSC configuration or a JSON template file can be re-applied several times and only missing elements or new configuration is applied to the underlying system.
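To make the comparison concrete, a minimal DSC configuration states only the desired end state, which is what makes re-applying it safe:

```powershell
# Declarative and idempotent: describes the end state, not the steps to get there
Configuration WebServer {
    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'      # what should be true
            Name   = 'Web-Server'   # which feature
        }
    }
}
```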

MAS Kernel

You only want secure and trustworthy software running here. It is the heart and soul of MAS, and it is protected and run by Microsoft's new cloud OS: Nano Server. Nano Server is the new scaled-down Windows Server 2016 that is built for the cloud. Its footprint is more than 20 times smaller than Server Core, and it boots in less than 6 seconds.

There have been a number of security enhancements that directly apply to the MAS kernel:

  • Enhanced security logging – Every Powershell command is logged, no exceptions
  • Protected event logging – You can now encrypt your event log with a public key and forward them to a server holding the matching private key that can decrypt the log.
  • Assume breached – This implies that there has been a mindset change in Microsoft. They now assume that the server will be breached and the security measures/plan is implemented accordingly.
  • Just Enough Admin (xJea) – JEA is about locking down your infrastructure with the JEA toolkit and thus limiting the exposure of privileged access to the core infrastructure/systems. It now also supports a big red panic button for those cases that require emergency access to the core to solve a critical problem that otherwise would have to be approved through appropriate channels.

To show developers that Microsoft is serious about Powershell, they have made some changes to Visual Studio to increase its support for Powershell and included some nice tools for you:

  • Static Powershell code analysis with script analyzer
  • Unit testing for Powershell with Pester (see Part1)
  • Support for Classes in Powershell like in C-sharp

MAS - User space

This is where the tenant portal, gallery and resource providers live. Yes, MAS will have a gallery for services that your tenants can consume. This is where the DevOps lifestyle comes into play, like we talked about in Part1.

In addition, Microsoft has proved it cherishes Linux with the announcement that it will implement OpenSSH on Windows. Furthermore, they have started to port DSC to Linux, spinning off their OMI commitment in the open source community.

Shadow IT

Everybody has a shadow IT problem. People that say they do not just do not realize it yet. It has become so easy to consume cloud resources that solve line-of-business problems that IT can't, or is unable to, solve in a timely manner. There could be any number of reasons for this; commonly it is related to legacy requirements, budget constraints or pure resistance towards any change not initiated by IT themselves.

One of the goals in implementing a hybrid/private cloud should be to use the technology to re-establish IT as a strategic tool for management, one that creates competitive advantages that drive economic growth. In my opinion, executive management has for too long regarded IT as a cost center and not as an instrument they can use to achieve business goals, strategic advancement and financial progress.

Missing automation link

A year and a half ago I wrote a 2-part blog (Part1 and Part2) about the missing automation link. Basically it was a rant where I could not understand why DSC was not used more to enable the hybrid cloud. Windows Azure Pack just did not feel right, and it turns out I was right. Well, now we have the answer, and it is Microsoft Azure Stack. It runs Microsoft Azure, and perhaps one day it will run in your datacenter too.

Will the pure datacenter survive?

For the time being, I think they will; however, they will be greatly outnumbered by hybrid clouds running in conjunction with the cloud and not in spite of it. Currently we are closing in on a Kodak moment. It does not matter if your datacenter is perfect in the eyes of whoever is in charge: if it does not solve the LOB problems in your organization, the cloud will win if it provides the right solution at the right time.

Why should you implement a Hybrid Cloud?

The question is more like: why not? I know it is a bit arrogant, however Microsoft has made a serious commitment to a consistent experience whether you are spinning up resources in the cloud or in your private hybrid cloud. Why would you not be prepared to utilize the elasticity and scalability of the cloud? With the hybrid cloud you get the best from both worlds, in addition to most of the innovation Microsoft does in the cloud.

As Azure merges closer and closer with On-Prem datacenters, it should become obvious that not implementing a hybrid cloud would be the wrong way to go. And even if Azure merges nicely with On-Prem, it will not compare to the integration between Azure and MAS.

Two more important things will accelerate the shift in IT: Containers/Azure Container Service and the new cloud operating system Nano Server will change the world due to their portability and light weight. For the first time I see opportunities for a cloud broker that trades computing power in an open market. Computing power or capacity will become a commodity, like pork bellies on the stock exchange.

How do you manage Nano server and Containers? Glad you asked, with powershell of course. Do you still think that powershell is an optional skill going forward?

In part 3 we will talk in more depth about the game changers; Nano server and Containers.