Friday, October 14, 2016

Ignite 2016 summary – Innovate, optimize, manage and empower your business with IT


This year's Microsoft Ignite conference was all about transforming your business with technology. Here is a techy summary for business minds.

Going forward, IT-Pros must prepare both to answer tricky business questions and to leverage new tools to meet business demands. I imagine questions like these:
  • What are the needs of our business?
  • How can we empower our users to apply the cloud to gain competitive advantages?
  • How can we innovate with greater agility and optimize our IT resources?
  • How can we migrate from the traditional model, where IT is just a cost center, to a lean/mean machine where IT is the engine that powers our business strategy and increases earnings?

A model of the traditional business case

We live in a traditional world with traditional problems. Simplified, a business consists of a few silos:
  • Internal users
  • Your customers
  • Your suppliers and partners
  • The remainder of the universe

All of these are connected directly and indirectly through processes, some of them manual and some perhaps automated. The job of the IT department is to deliver services, preferably in the most cost-effective way possible. Generally, if you change a process through a tool or automation (Powershell) and save time or cost, you become the hero. Cost and time savings are always welcome; however, the potential impact is far greater when IT is driving your revenue, as in the new model.

The new model for IT

In the new world, everything is about processes, data and applications. In other words, algorithms. Everything is moving and changing faster than we have ever experienced before. Silos probably still exist, but they are interconnected and data-aware. Your CRM application will have access to and understand other applications and their data structures. It will empower your employees and provide you with just-in-time insights. With the new Azure PowerApps and Flow applications, which implement the Common Data Model (CDM), you have this available today as a preview service. Throw Azure Functions into the picture, and you have a pretty robust and extendable model which is highly customizable and scalable.

In addition, Azure has implemented predictive analytics and machine learning (ML) in its different APIs, like Storage, Azure SQL, Hadoop and so on. Microsoft is enabling ML for the masses by implementing it across its datacenters and in the Azure model. Your developers are not responsible for implementing intelligence in your application; they consume predictive data from the Azure machine learning API, made possible through the integration with the Storage API. You no longer consider IT a cost center but a business enabler that helps you increase revenue by analyzing big data through algorithms that are constantly updated to provide perfect information just in time. Theoretically possible elsewhere, but immensely difficult to implement in practice if you are not in Azure.

What do you need?

:Speed and agility: If you have a clear understanding of your needs, your market and your competitors, why not move as fast and as nimbly as you can? If you can change faster than your competitors, you have an advantage and a head start. Let me illustrate with an example: you have probably heard about robot trading in the stock market. The robots move fast because the first person/robot that receives and understands specific market information is the winning party and walks away with the profits. In our business case, it is the same thing. Rapid changes to your algorithms and IT systems, so that you understand the business and receive correct information just in time, are essential to becoming the leader and increasing profits.

:Scale: Your IT system needs to be able to scale, up and down. You should not have to worry about it, as the cloud does this for you within the limitations you have defined. The cloud empowers businesses of all sizes to use scaling technology that was previously the privilege of large enterprises with expensive dedicated appliances. Committing to services and applications that handle scaling is key in the new world. Relying on old legacy applications and services will prevent you from becoming a new force in your market. Startups in your market will become your new IT system performance benchmark, and they probably do not consider legacy systems a match for their agile needs.

:Knowledge – Close the gap: The adoption of cloud resources and the hybrid cloud is just the beginning of the disruptive change that is here. Hybrid cloud is just a stepping stone towards the connected cloud with unlimited resources at your fingertips. That does not imply that private clouds will not exist. They just need to be connected to the public cloud and empower it by bringing some added value. Otherwise, if a private cloud is not connected, it will be a relic and an edge case for very special circumstances. In this scenario, knowledge will be important. New features and services are launched on an almost weekly basis. Products migrate from private preview, to public preview and finally to general availability in a matter of months. If you do not take advantage, someone else will, perhaps your competitors.

:New People and Organization 2.0: Best case scenario, you need a huge amount of training and redesigning. If ordering a new web server or virtual machine takes longer than the time needed to create/deploy it automatically, trust me, you have to do something. Your organization is already changing; perhaps you just have not noticed it yet. Ever heard about Shadow IT, the evil from within? If it is not knocking on your door, it is because it is already inside. Shadow IT is a real problem that you need to take seriously. In the emerging world, people want things yesterday, like always. The problem is that if you do not deliver, someone else can, and asking for forgiveness beats asking for permission 9 out of 10 times, especially if it yielded a positive result. Rules, policies and guidelines are nice; however, immediate results are king.

:DevOps is a “must”: The new world relies on DevOps. DevOps is a merger between developers and IT-Pros, where you bring the knowledge of both parties together and apply it to your business and culture in a series of new processes. DevOps is not automation; however, automation is a key part of DevOps.

:Security: You do know that hackers target IT-Pros because they normally have access to everything? The tools to handle this are available and have been for quite some time now. Microsoft Identity Manager comes with PAM (Privileged Access Management), which audits privileged access and applies time constraints. When your privileged access token expires, your access is revoked. The Powershell team has created a toolkit called Just Enough Administration (JEA) which is very similar to the Identity Manager solution. Both solutions should be designed with a “break the glass” option for the time when you really don’t care about security but need to fix the issue. If you break the glass, all kinds of things happen, and you should probably expect to face some sort of hearing where you have to justify the action, which is a good thing.
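
A hedged sketch of what requesting just-in-time elevation through MIM PAM can look like. The MIMPAM client module and its cmdlets are documented for MIM 2016, but the role name and exact parameters here are assumptions; check Get-Command -Module MIMPAM on your installation.

```powershell
# Request time-constrained privileged access via MIM PAM (sketch).
# Role name 'AD Admins' is an example from an assumed PAM configuration.
Import-Module MIMPAM

# Find a PAM role the current user is entitled to request
$role = Get-PAMRoleForRequest | Where-Object DisplayName -eq 'AD Admins'

# Request elevation; access is revoked when the role's TTL expires
New-PAMRequest -Role $role
```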

With Windows Server 2016, a new Hyper-V feature was launched giving us Shielded VMs. With shielded VMs, the tenant of a shared resource owns the VM completely. The entity responsible for the platform it runs on has the ability to manage it to a certain degree (start, stop and make a backup). The backup of a shielded VM is encrypted, if you were wondering.

Last but not least, security starts at the operating system level. In general, reducing the attack surface is regarded as the first line of defense. Windows Server 2016 Nano is the new operating system for the cloud and will change the way you work with and handle datacenter workloads. Nano Server has a tiny footprint and a small attack surface, and it is blazingly fast, which makes it a perfect match for a fast-moving and agile business.

:Help – Private cloud or hybrid cloud: Even with a new organization and new knowledge, it is highly likely that you will need some consultancy. According to Gartner, 95% of all attempts to create a private cloud fail or fail to yield the expected outcome. Building and implementing a private cloud is very hard, and you should be very confident in your organization’s abilities before you embark on such a journey. Microsoft is the only public cloud provider that will provide you with a turnkey solution to run your hybrid cloud. If you have not heard about Microsoft AzureStack, you should probably read up on it. Basically, it is Azure wrapped up in a hyper-converged, ready-to-deploy solution for your datacenter, delivered by OEM vendors like Dell, Lenovo, HP et al. New features introduced in Azure will most likely migrate to AzureStack, ready for use in your hybrid cloud.

AzureStack is targeted for release sometime in mid-2017 or later that year. That is almost a year away. The good thing is that AzureStack is based upon Azure. It has the same underlying technology that powers Azure, like the portal and the Azure Resource Manager (ARM). Microsoft is delivering a consistent experience across the public and hybrid cloud with the ARM technology. To prepare yourself for AzureStack, you should invest time and effort into learning Azure; that knowledge will empower you if you decide to implement AzureStack next year.

All in - or not

Do you need to go all in on the private cloud, or should you just integrate with the public cloud? It depends on your organization and your business needs. One thing is certain: you probably have to do something. Implementing your own version of the ready-to-consume features of the public cloud in your own private datacenter is not an option you should consider. It would require a tremendous effort, tie down your resources and, in effect, make you static. You need to rub DevOps and business strategy into your business and culture. There are some really smart people out there that can help you with that, and like everything else, it is an ongoing process that requires your constant attention.

The change is here. How will you empower your organization and become the new star? I am happy to discuss opportunities if you reach out by sending me an email.



Friday, September 30, 2016

Windows Server 2016 – DevOps tools and features

I needed to dedicate a full blog post to Windows Server 2016 and the way it will impact you going forward. At some point some of these features will apply to you too, as your infrastructure starts to run the new server bits. Here are the highlights from Microsoft Ignite.

> Highlights

  • Installation
  • Development
  • Packaging and deployment
  • Configuration
  • Containers
  • Operation Validation and Pester Testing
  • Operating security

> Installation

Server 2016 comes in three flavors. You have the “Desktop Experience” server, intended for management of the other flavors of 2016 or as a terminal server. Next is Server Core, which is the same full server without the desktop: headless, and intended to be managed with Powershell or from a server running the Desktop Experience. Then there is the new kid on the block, Nano Server. It is the new cloud OS, born in the cloud and the workhorse for everyone serious about creating modern, lean, super-fast and easy-to-manage applications.

Installation of the Desktop Experience and Server Core editions is just like the installations you are familiar with. For Nano Server you need to use a new GUI tool that guides you through the process of creating an image, or you can just use Powershell. The GUI tool is currently not in the evaluation version of Server 2016; however, it will be available when the product reaches general availability in mid-October.
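
A minimal sketch of the Powershell route, using the NanoServerImageGenerator module found on the Server 2016 media; the paths, edition and computer name are examples.

```powershell
# Build a Nano Server guest VHDX with the Hyper-V (Compute) package
Import-Module D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1

New-NanoServerImage -Edition Standard -DeploymentType Guest `
    -MediaPath D:\ -BasePath C:\NanoBase -TargetPath C:\Nano\Nano01.vhdx `
    -ComputerName Nano01 -Compute
```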


It is really small compared to Server Core, not to mention the full Desktop Experience server. Here are some key metrics for you to think about:

How do you manage Nano Server and/or Server Core?

There are quite a few options for you. Nano Server is headless and only has a very simplistic, text-based local GUI. To manage your server, you need to use one of the following:

  1. Install a remote management Gateway and use the Web-GUI in the Azure Portal
  2. Install a Desktop Experience 2016 server and use all your regular tools like:
  • MMC in general
  • Event Viewer MMC
  • Registry
  • Services MMC
  • Server Manager MMC
  • Powershell ISE (remote file editing)
  3. Powershell and Powershell Remoting
  4. Local text-based GUI (very rough, with few settings available)

You can still have the System Center VMM agent and the System Center Operations Manager agent on your Nano Server. Those are packages you will have to install during image creation or add with Powershell and PackageManagement.

The intended workloads for Nano Server are:

  • Clustering
  • Hyper-V
  • Storage – Scale-Out File Server (SoFS)
  • DNS server
  • IIS (.net Core and ASP.Net Core)
  • Containers, both Windows Containers and Hyper-V containers

Nano Server is a first-class Powershell citizen with support for Desired State Configuration and classes in Management Framework 5. Nano Server runs Powershell Core, which is a subset of the full Powershell version you have on Server Core and Desktop Experience servers.

> Development

Nano Server has a full developer experience; Server Core does not. You have support for the Windows SDK, and Visual Studio 2015 can target Nano Server. You even have full remote debugging capabilities from Visual Studio.

> Packaging and Deployment

Nano Server does not support MSI installers. The reason is the custom actions that are allowed in MSI. Instead, Microsoft has created a new app installer built upon AppX, called the WSA (Windows Server App) installer. WSA extends the AppX schema and becomes a declarative server installer. You still have support for server-specific extensions in WSA, like NT services, perf counters, WMI providers and ETW events. Of course, WSA does not support custom actions!

Package management architecture:

This might look a little complex; however, it is quite simple. You have some core package management cmdlets, which rely upon Package Management Providers, which in turn are responsible for sourcing packages from Package Sources. This is really great, because as the end user you use the same cmdlets against all package sources (NuGet, Powershell Gallery, Chocolatey etc.). The middleware is handled by the Package Management Providers. So the end user just needs these cmdlets to work with packages:
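
A sketch of the core cmdlets in question (Pester is just an example package):

```powershell
# The end-user surface of PackageManagement - the same cmdlets work
# against every registered package source:
Find-Package      -Name Pester        # search the registered sources
Install-Package   -Name Pester -Force # install from a source
Get-Package                           # list installed packages
Uninstall-Package -Name Pester        # remove a package again
```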

So to install a new package provider, you just use the PackageManagement module:
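
For example, assuming you want the NuGet provider:

```powershell
# Install an additional package provider and list what is available
Install-PackageProvider -Name NuGet -Force
Get-PackageProvider    # list the providers now on the system
```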

Here are some of the providers you can install. Notice that there is a separate provider for Nano Server, which you can use to install the VMM/SCOM agents:
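
A sketch of how to discover them; that the Nano Server provider shows up under the name NanoServerPackage is an assumption here:

```powershell
# List providers available for installation from the provider sources
Find-PackageProvider | Select-Object Name, Summary
```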

> Configuration

Since Nano Server is small and has a cloud-friendly footprint, you will most likely have a lot of them running. To configure them, make sure the configuration does not drift, and make it easy to update their configuration, you use something called Desired State Configuration (DSC). This was introduced in WMF 4 and is a declarative way of specifying the configuration of a server or a collection of servers.
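
A minimal sketch of what a DSC configuration looks like; the node name and path are examples:

```powershell
Configuration BaseConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'Nano01' {
        # Declare the state we want; the LCM makes it so and keeps it so
        File AppFolder {
            DestinationPath = 'C:\App'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

BaseConfig -OutputPath C:\DSC                       # compile to a MOF file
Start-DscConfiguration -Path C:\DSC -Wait -Verbose  # push it to the node
```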

There are tools out there you can use to manage your configurations at scale. Look up Chef, Puppet or even Azure Automation to see how. This is a big concept and would require a separate blog post to cover in detail. Please contact me if you have any further questions about DSC.

> Containers

This is also a big topic, and something that has been around for ages in the Linux part of the world. Basically, it is just another form of virtualization: the operating system is packaged into a single unit that is small and runs super-fast. If you have ever heard about Docker, you have heard about containers. Docker is containers. Docker is supported in the new Windows Server 2016; hence, you can run Docker containers on it.

One of the core concepts of containers is that you may have many of them running on the same host at the same time. This is possible because the containers share the host's kernel/operating system.

In Windows we will have two flavors of containers:
  • Windows Containers
  • Hyper-V Containers

So with Hyper-V containers we get isolation with performance, since the containers do not share the kernel but have their own copy of it. This is important for multi-tenant scenarios and for regulatory requirements. Auditors usually do not like systems that have “shared kernel” in the sentence, someone told me.
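
From the Docker CLI the difference is a single switch; the image name below is an example:

```powershell
# The same image as a Windows container (shared kernel)...
docker run -it microsoft/nanoserver cmd

# ...and as a Hyper-V container (its own kernel copy)
docker run -it --isolation=hyperv microsoft/nanoserver cmd
```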

> Operation Validation Testing

This is one of my babies and the coolest thing about how we embrace the future. Microsoft has created a small framework on top of the Pester unit testing framework/module shipped with Windows 10 and Windows Server 2016. The concept is very simple and very powerful: create unit tests that verify your infrastructure. These tests can be very simple or extremely detailed. You will have to figure out what you are comfortable with and what suits your environment.

The Pester module enables us to write declarative statements and execute those tests to verify that the infrastructure is operating according to our needs.
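
A sketch of a simple operational test; the server and service names are examples:

```powershell
Describe 'Domain controller dc01' {
    It 'responds to ping' {
        Test-Connection -ComputerName dc01 -Count 1 -Quiet | Should Be $true
    }

    It 'is running the AD DS service' {
        (Get-Service -Name NTDS -ComputerName dc01).Status | Should Be 'Running'
    }
}
```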

When you invoke the tests with Invoke-Pester, each statement comes back green (passed) or red (failed).

This is something I have been working with for the last 2 years, and I can tell you that it has saved my bacon quite a few times. It has also enabled me to notify my clients about issues with their infrastructure which they were not aware of until I told them. This could be something as simple as a SQL service account that has an expired password or that has been locked out somehow.

I have created a GitHub repo which contains Pester or Operation Validation tests for Identity Manager, VMM and Active Directory, among other things. This is a community repo which accepts pull requests from people who have created tests for other applications and services. Please contact me if you need further information or want to discuss this in detail.

> Operating Security

Just after Snowden shared his knowledge with the world, Microsoft launched a new concept called JEA – Just Enough Administration. It is a new Powershell framework that secures administrative privileges by only issuing them in a constrained way and for a limited amount of time.
You can find more information about JEA in Microsoft's JEA documentation.
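
A hedged sketch of what setting up a JEA endpoint looks like: a role capability exposing a single cmdlet, wired into a constrained session configuration. The names, paths and group are examples, and role capability files normally live in a module's RoleCapabilities folder so they can be resolved by name.

```powershell
# Role capability: only Restart-Service, and only for the spooler
New-PSRoleCapabilityFile -Path .\Maintenance.psrc `
    -VisibleCmdlets @{ Name = 'Restart-Service'
                       Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler' } }

# Constrained endpoint that maps an AD group to that role
New-PSSessionConfigurationFile -Path .\JEA.pssc `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\Operators' = @{ RoleCapabilities = 'Maintenance' } }

Register-PSSessionConfiguration -Name 'JEA.Maintenance' -Path .\JEA.pssc
```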

> Other things

There are a couple of things you should be aware of. First off, if you plan to use containers in your infrastructure, you must run them on Server Core or Nano Server. Containers are not supported on servers installed with the Desktop Experience. This implies that you should probably consider installing your Hyper-V servers with the Nano Server OS or the Server Core option. Also, all the cool new features like SoFS and Storage Replica with Storage Spaces Direct require the Datacenter licensing option.



Wednesday, September 28, 2016

Microsoft Ignite 2016 – Announcements and features


I have now spent 3 days at Ignite and walked a total of about 22 km hustling between sessions and the Expo area, according to my iPhone. These are some of my thoughts about what might affect you over the next year.


  • Windows Server 2016
  • Azure Monitoring
  • Azure Functions
  • Azure Networking

Oh yeah, and System Center 2016 was launched. Why is it not on my list? Well, to be perfectly honest, the feature list is almost identical to the latest update rollup for 2012 R2. More on that later.

Windows Server 2016 GA

This release of Windows Server is the chosen one that is going to power the Azure infrastructure and tenant workloads on Azure and AzureStack when the latter is released next year. From the Hyper-V perspective, things have changed quite a lot. You will have to forget all the best practices for how you set up Hyper-V and storage. 2016 is all about the Scale-Out File Server (ReFS/NTFS) and Storage Spaces Direct.

You also want to check the OEM hardware list to make sure your servers are listed there. Pay extra attention to your physical NICs; they have to support RDMA. Your switches, or at least your storage switches, need to support DCB (Data Center Bridging).

Windows Server 2016 is all about software-defined everything. Given the switch and NIC hardware requirements listed above, they should add that you also need special hardware to enable the software-defined everything. That is just a personal opinion, and I do not expect anyone to change that.

For us mere mortals, not running at super scale, you will be happy to know that the smallest cluster supported on 2016 is 2 nodes when you use Storage Spaces Direct! Yay!

Azure Monitoring

In my head, I have been waiting for this. Basically, it is a System Center Operations Manager light for your Azure resources running in the cloud, with a super-responsive and beautiful console. In addition, you can consume logs (log files/event logs) and search them from the console in Azure. Great for troubleshooting when you need to correlate different logs and servers.

These are some of the items you can “monitor”:
  • Activity logs
  • Metrics
  • Diagnostics logs
Azure Monitor is even connected to Azure Analytics. I am not being fair when I call this SCOM light. It has far more reach, and the correlation is out of this world compared to SCOM.


Example of a dashboard:


You pin items/graphs/tables to your dashboard. That dashboard can be cloned and shared with other users in your Azure subscription. When you pin a new item, the users you have shared the dashboard with get a notification and may choose to add the item to their dashboard.

Alerts – Trigger on events

Azure Monitoring can be configured to produce alerts on certain events. The following channels are supported out of the box:
  • SMS
  • Email
  • Webhooks (http callback)
That might not look very impressive; however, the key element here is webhooks. They will enable you to integrate Azure Monitor with almost any solution that has webhook integration. If your Azure Monitor alert target does not support webhooks, Azure has you covered there also, with the Azure Functions service. With Functions you can create the webhook/callback target and integrate your custom application as a consumer of Azure Monitor alerts.
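
A hedged sketch of such a webhook target, written as an HTTP-triggered Azure Function using the experimental Powershell support. $req and $res are the request/response file paths the Functions runtime provides; the payload properties used here are assumptions based on the common alert webhook format.

```powershell
# Parse the alert payload posted by Azure Monitor
$alert = Get-Content $req -Raw | ConvertFrom-Json

# Do something useful with it, e.g. forward the essentials
$message = "Alert '$($alert.context.name)' is $($alert.status)"
# ...call your custom application with $message here...

# Write the HTTP response back to the caller
Out-File -Encoding ascii -FilePath $res -InputObject $message
```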

Operation Management Suite (OMS)

If you currently use OMS, you have access to all the information in OMS from your Azure Monitor dashboard. This enables you to run queries against the data collected from your agents, whether they are running in your datacenter or in Azure.

Azure Monitoring is currently in private preview. If you are interested in trying it out, contact me and I will help you. I expect it will reach public preview within 2 months.

Azure Functions

Have you ever heard about serverless compute? That is Azure Functions. Before you get too confused: the code is executed on a server; however, that server may be managed by you or by Microsoft. The serverless expression comes from the fact that you decouple your business logic/code from the concept of a virtual machine that hosts it.

Your code is executed in the cloud. You design the function/code to be very specialized and generic at the same time. Sounds a bit confusing; however, it makes perfect sense when you look into it.

Azure Functions is a great feature that enables you to process data, integrate with other systems not in the cloud and with Internet of Things (IoT) devices, and build your own APIs/microservices.

The Azure Functions console is loaded with ready-to-use templates, and more are added each day by Microsoft and the community, provided they pass the unit tests created by the Azure Functions team.

Azure Networking

Quite a few new features were added at Ignite:
  • IPv6 support
  • Azure DNS
  • Accelerated networking
  • Web Application Firewall
  • Virtual network peering
Microsoft has silently upgraded its Azure datacenters with FPGAs, which enable up to 25 Gbps networking performance. The keynote showed that a 1,000-node cluster on Azure could translate all the articles on Wikipedia in less than 0.1 seconds using this technology. Expensive, yes; however, if you need the performance for a short period of time, it is not that expensive, even if I have not done the math.



Monday, September 19, 2016

Making array lookups faster

This post is about making lookups in arrays as fast as possible. The array can have many properties or few; it really does not matter. The only thing required is something unique that identifies each row of data. From time to time I find the need to make lookups fast, usually as a result of importing a huge CSV file or something similar.

Sample data

First we have to create some dummy sample data which we can run tests against. We will create an array of 10001 objects with a few properties. The unique property that identifies each row is called ID:

(sample data script)
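
The original script was embedded as an image and is not preserved; a minimal sketch of what such sample data could look like:

```powershell
# 10001 objects, each with a unique ID property
$csvObjects = foreach ($i in 0..10000) {
    [pscustomobject]@{
        ID   = $i
        Name = "User$i"
        City = 'Oslo'
    }
}
```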

How to test performance?

There are a couple of things that impact performance measurement in Powershell. For instance, running the same Measure-Command expression repeatedly will yield quite different results. Normally the first run is slower than the second, and the standard deviation is quite large across consecutive runs. To decrease the standard deviation, I use a static call to the .net garbage collector with [gc]::Collect(). I feel the results are more comparable with this approach.
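
The pattern looks like this (the ID queried is an example):

```powershell
# Collect garbage first, then measure the query in ticks
[gc]::Collect()
(Measure-Command { $null = $csvObjects.Where({ $_.ID -eq 5000 }) }).Ticks
```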

First contender Where-Object

There are two ways you can query an array with the Where keyword. You can pipe the array to the Where-Object cmdlet, or you can use the Where method on the array. The Where method will always be faster than the cmdlet/pipeline approach, since you save moving the objects through the pipeline. For our test, we will therefore use the Where method as the baseline against which we measure performance.
We are going to run 11 different queries, each finding 2 unique elements in the array. The time is measured in ticks. I have created a collection of IDs ($CollectionOfIDs) which we will use when we query the data:

(Measure the Where method)
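
The original measurement script is not preserved; a sketch of what it did:

```powershell
# Query the array with the Where method for each ID and record ticks
foreach ($id in $CollectionOfIDs) {
    [gc]::Collect()
    (Measure-Command {
        $null = $csvObjects.Where({ $_.ID -eq $id })
    }).Ticks
}
```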


That is about 85 ms on average to query the collection for two unique IDs. Baseline ready.

There is a fast knock at the door

We have a new contender, and he calls himself Hashtable. He claims he can do even better than 85 ms on average. Challenge accepted.
First we need to create a hashtable representation of the $csvObjects collection/array. That should be pretty straightforward. We let the unique identifier (ID) become the key and the object itself the value:

(hashtable of csv)
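
A sketch of the conversion (the original snippet is not preserved):

```powershell
# ID becomes the key, the object itself the value
$hashTable = @{}
foreach ($row in $csvObjects) {
    $hashTable[$row.ID] = $row
}
```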

Now I know you have a question: what is the performance penalty of converting that array to a hashtable? Good question, and I am happy you asked. It converts the 10000 objects into a hashtable in approximately 53 milliseconds:


I would say that is a small price to pay.
Using the same collection ($CollectionOfIDs) as we did for the Where method, let’s run the same test against the hashtable:

(Measure the hashtable)
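
A sketch of the lookup test (the original snippet is not preserved):

```powershell
# Same IDs, but now an indexed lookup instead of a scan
foreach ($id in $CollectionOfIDs) {
    [gc]::Collect()
    (Measure-Command { $null = $hashTable[$id] }).Ticks
}
```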


Okay, so the first lookup is quite slow at about 11 ms; however, it improves quite dramatically to 0.038 ms. If we use the average numbers (in ticks) to be fair, we have increased performance by a factor of 649 (837265 / 1289).


I have only tested this on WMF 5.1 (5.1.14393.103). To use the Where query method on arrays, you need version 4 or later. Converting the collection to a hashtable will give you the ability to perform super-fast queries. If you are querying a collection frequently, it makes sense to use a hashtable.

Code for speed if you need it, otherwise write beautiful code!



Thursday, August 18, 2016

Creating Menus in Powershell

I have created another Powershell module. This time it is about console menus you can use to ease usage for members of your organization. It is available on GitHub and published to the PowershellGallery. It is called cliMenu.


This is a controller module. It uses Write-Host to create a menu in the console. Some of you may recall that using Write-Host is bad practice. Controller scripts and modules are the exception to this rule. In addition, with WMF 5, Write-Host writes to the Information stream in Powershell, so it really does not matter anymore.

Design goal

I have seen too many crappy menus that are a mixture of controller script and business logic. It is, in essence, a wild west out there; hence my ultimate goal is to create something that makes it as easy as possible to create a menu and change the way it looks.
  1. Make it easy to build Menus and change them
  2. Make it as "declarative" as possible


The module supports multiple menus; however, there is only one Main-Menu, with as many Sub-Menus as you like. Each menu has a collection of Menu-Items that the user can choose from.

Example menu:


Menu options

Currently you can control the following aspects of the menu (they are shared across all menus unless you change them before showing a sub-menu):

  • Choose the char that creates the outer frame of the menu
  • Change the color of the frame
  • Change the color and DisplayName for the Menus
  • Change the color and Heading for the Menus
  • Change the color and Sub-Heading for the Menus
  • Change the color and DisplayName for the Menu-Items
  • Change the color and footer text for the menus
  • Change the Width of the Menu


Menu-Items are the elements your users can invoke in your menu. They have a ScriptBlock and a DisableConfirm switch parameter, in addition to a Name and a DisplayName. With the DisableConfirm parameter, you can selectively skip the confirmation prompt before an action is invoked.

Validation and Return value

Neither is a goal of this module. As a tool-builder, you are responsible for validating user input when users invoke the ScriptBlock associated with a Menu-Item.

Any output from the ScriptBlock will be written to the console. As you may know, a ScriptBlock may be a small script or a call to a cmdlet with parameters. I would suggest that you stick to calling custom or built-in cmdlets, designed according to Microsoft's best practice guides with regard to mandatory parameters etc.


This is the core cmdlet, responsible for building the menu and displaying it to the user. Executed without parameters, it will display the Main-Menu (remember, you can only have one Main-Menu). You may also use it to display Sub-Menus by specifying the MenuId parameter, which is the index of the menu.

Further, you may also invoke a specific Menu-Item in a specific menu by supplying the InvokeItem and MenuId parameters. If the Menu-Item is defined to confirm with the user before invocation, it will prompt the user with a confirmation request before execution. You can override this with the -Force parameter to execute it directly.
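
A hedged sketch of how wiring up a menu could look. The cmdlet nouns here are assumptions based on the description above; check Get-Command -Module cliMenu for the actual names and parameters.

```powershell
Import-Module cliMenu

# Create the Main-Menu (assumed cmdlet name)
$main = New-Menu -Name Main -DisplayName 'Main menu'

# Add an item with a ScriptBlock; skip the confirmation prompt
New-MenuItem -Name Uptime -DisplayName 'Show last boot time' `
    -ScriptBlock { Get-CimInstance Win32_OperatingSystem |
                   Select-Object LastBootUpTime } `
    -DisableConfirm

Show-Menu   # without parameters, displays the Main-Menu
```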


A menu which uses the Show-Command cmdlet (complete script in example.ps1):


An example with a Main-Menu and Sub-Menu:



A big thank you to Fausto Nascimento for invaluable input and suggestions!

That is it. If you have any questions or issues, look me up on twitter (@toreGroneng) or file an issue on GitHub.



Sunday, May 8, 2016

Identity Manager and Powershell

It is the year 2016, and Identity Manager looks like it did in 2010 when Forefront Identity Manager (FIM) was released. Who came up with the name and added it to the Forefront portfolio anyway? Crazy stuff. As you probably know, looks can be deceiving. Even if MIM looks the same and runs basically the same features, it is still a powerful state machine. I have been busy collecting scripts and tools I have used over the last couple of years and have started on 2 Powershell modules for MIM. This post is just a brief introduction to the modules and the plans for the future.

Why create a module

Wait a minute. Does not Identity Manager come with a Powershell module? No, it comes with a Powershell snap-in from 2010. Back in those days, snap-ins were the cool kids on the block, and everybody created snap-ins for things that should have been created as modules. I blame Powershell version 1.0; however, they fixed that in Powershell version 2.0, I think. I use the snap-in as a nested module and have created a Powershell manifest for the snap-in. That way you can choose to load the snap-in as a module if you like (look in the FIMmodule folder for the manifest).

The snap-in that comes with Identity Manager is very generic/crude and allows you to do almost anything you can do in the Identity Manager portal. You just need to remember the syntax and the XPath queries that you need to run. Doable, nevertheless quite hard to remember and prone to producing errors. Hence the effort on my side to create a module that is easy to use and a lovely experience.
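
To illustrate, raw snap-in usage means remembering both the cmdlet syntax and the XPath; the query below is an example:

```powershell
# Load the snap-in and export a single Person resource by account name
Add-PSSnapin FIMAutomation
Export-FIMConfig -OnlyBaseResources `
    -CustomConfig "/Person[AccountName='tore']"
```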

I also have a side project where I focus on operation validation in FIM/MIM using Pester, the unit test framework for Powershell, together with the Operation Validation framework from Microsoft. You can have a look at the unit test in this link: “Operation Validation”. The point of this is a test you can run to validate your Identity Manager infrastructure and make sure that all the bells and whistles are working as they should. A nice way to detect that your infrastructure peers have installed a new domain controller you should install PCNS on!

Introducing the Identity Manager Powershell module

It is still a work in progress, and I am focusing on the Get cmdlets for all the different object types in FIM/MIM. Currently I have added the following cmdlets:

  • Get-IMObject – A generic cmdlet used by all of the Get cmdlets; it is responsible for running the core cmdlets in the Identity Manager snap-in
  • Get-IMObjectMember – Lists members of a group/set; it can list ComputedMembers or ExplicitMembers
  • Get-IMPerson – Gets person information
  • Get-IMPersonMembership – Shows a person's membership in groups/sets
  • Get-IMSecurityGroup – Shows information about security groups in Identity Manager
  • Get-IMSet – Shows information about Sets in Identity Manager
  • Get-IMSetUsage – Shows all related usage of a Set in Identity Manager
  • Get-IMXPathQuery – Creates simple XPath queries from a hashtable
  • Out-IMAttribute – Casts a ResourceManagementObject to a PSCustomObject; used by the Get-IMObject cmdlet

It is currently not in the PowershellGallery; however, it will be in May 2016. The module will require Powershell version 4.0 (Windows Management Framework 4) or later. It may work with Powershell version 3.0; however, I have not tested it with that version. It will work with either Forefront Identity Manager 2010 R2 or Microsoft Identity Manager 2016.

If you want to browse the code and have a look, you can visit the GitHub repo on this link.

Introducing the Identity Manager Synchronization Powershell module

But wait, there is more :-) This month I will also publish a new Powershell module for the synchronization engine in Identity Manager. Normally this would be scripted with VBScript, per Microsoft's examples. Nothing wrong with that, and it works. I, on the other hand, would like to use Powershell. Thankfully, Microsoft has included a WMI/CIM namespace and classes for Identity Manager that we can leverage. My Identity Manager synchronization module (IdentityManagerSynch) will support the following cmdlets:

  • Get-IMManagementAgent – Lists Management Agents or agent details
  • Get-IMAgentRunProfile – Lists the RunProfiles associated with an agent
  • Get-IMAgentStatus – Lists the last known status of an agent
  • Invoke-IMAgentRunProfile – Executes a RunProfile for an agent
  • Invoke-IMManagementAgentMethod – Invokes a CIM method on the agent

The cmdlets implement dynamic parameters for the agent and run profile, thus preventing you from trying to start a run profile that is not implemented in the agent.
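
A sketch of the kind of underlying WMI call such a module wraps, using the classic synchronization engine namespace; the agent and profile names are examples:

```powershell
# Start a run profile on a management agent directly via WMI
$ma = Get-WmiObject -Namespace root\MicrosoftIdentityIntegrationServer `
    -Class MIIS_ManagementAgent -Filter "Name='AD MA'"
$ma.Execute('Full Import')
```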

I may or may not include a cmdlet that enables you to search for Metaverse objects. The synchronization client has a nice GUI that solves most issues and lets you poke around. Still, from time to time I find myself wishing for a way to extract information from the Metaverse that is not possible in the GUI.



Sunday, April 24, 2016

Pester Operational Tests


I have created yet another GitHub repo, "PesterOperationTest". It feels like I do this every week; however, that is not the case.

But why?

The purpose of this repository is to collect Pester unit tests for your infrastructure. Anything that is mission-critical for your business deserves a test that validates its configuration. It will make you sleep like a baby and stop worrying when you implement a change in your environment, provided your tests pass.

This will only become as good as the contributions you make to the repository. I would love to do all the work myself; however, there is not enough time in the world. Think of it as helping yourself, while at the same time a lot of other people benefit from the tests you create.


My original thought was to organize this in folders, one for each vendor, with subfolders for each product and/or feature. The scripts will be published to the Powershell Gallery for easy and convenient access for everyone. The tests should not be specific to your environment, but as generic as possible.

My first contribution

I have been working for some time now with Forefront Identity Manager. The last 6 months have changed a lot in terms of how I work and what tools I use. Pester – a unit test framework included in Windows 10 (and in Windows Server 2016) – has become one of my key assets. Heck, it has made me better at my work and increased the value I add for my customers. Thank you, Pester!

I have published a first version of the Microsoft Identity Manager test to the repo. It validates important stuff like the following (a sketch of such a test is shown after the list):

  • files/folders that need to be present
  • services that should be running and configured the proper way
  • scheduled tasks that should/could be enabled and configured
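
A flavor of what such a test can look like; FIMService is the default service name for the FIM/MIM Service and may differ in your environment:

```powershell
Describe 'Identity Manager' {
    It 'runs the FIMService service' {
        (Get-Service -Name FIMService).Status | Should Be 'Running'
    }
}
```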

I will be adding more stuff later. Pull requests are welcome!