SCSM PowerShell: Creating PowerShell Workflows
November 30, 2015
Matthew Dowst

This post is part of a series on Service Manager PowerShell. Be sure to check out the Overview Page for all posts in this series.

Now that we have covered creating and editing items using the Service Manager PowerShell cmdlets, I would like to touch on one of the best uses of this, which is creating PowerShell workflows. As you will see, utilizing PowerShell can greatly increase the flexibility you have with regards to Service Manager workflows.

First things first, you need to create a management pack in the Authoring Tool. Once you've done this, right-click Workflows and choose Create.

[Screenshot: creating a new workflow in the Authoring Tool]

Now you just need to give your workflow a name and select your trigger conditions. In this example, I am going to create a workflow that automatically sets the support group to Tier 1 if no support group is selected when the incident is created.

[Screenshot: workflow name and trigger conditions]

Next, you just need to drag the Windows PowerShell Script activity into your workflow and open the Script Body properties. Here you can enter your script and properties, but before we do that, let's take a look at a few best practices.

Workflow Script Best Practices

When you create a script to be used with a workflow, there are a few things you need to take into consideration. The first and foremost is performance. If you write a PowerShell script to run as a workflow, you are going to want it to run as quickly as possible. This is not only for speed and ease of use, but also to prevent taxing the resources of your system. Second, you want to make your script as portable as possible. As you should know, all workflows run on the primary management server, so it may be tempting to hardcode things like the module path. However, what happens if your primary server crashes and you need to promote one of your secondary servers? If Service Manager was not installed to the same path, then your workflows will start failing as well. And the last thing you want in a situation like this is more work. On top of these, you will want to add error checking and handling to your script where necessary.

Script Performance   

Trying to make your scripts run as efficiently as possible can take some trial and error, as well as experience. However, Microsoft has provided some tools to help you out. One of these is the Measure-Command cmdlet, which enables you to measure the running time of a command or script down to the millisecond.

In the post Modifying Work/Configuration Items, we saw how to use the Update-SCSMClassInstance cmdlet to update a work item. So let’s take a look at our example of setting the support group for an Incident.

In the example below, I am getting the Incident, then I am setting the support group to Tier 1.

$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem) | ?{$_.Id -eq "IR27"}
$IR | %{$_.TierQueue="IncidentTierQueuesEnum.Tier1";$_} | Update-SCClassInstance

There are a few things to note in the first line. First, I am using the class System.WorkItem instead of System.WorkItem.Incident. Technically this will work, because the Incident class is derived from the Work Item class, but it could return a much larger amount of data to filter through. To test, I will use the Measure-Command cmdlet to see how long that first line takes to run.

Measure-Command {
$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem) | ?{$_.Id -eq "IR27"}
}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 140
Ticks             : 1400774
TotalDays         : 1.6212662037037E-06
TotalHours        : 3.89103888888889E-05
TotalMinutes      : 0.00233462333333333
TotalSeconds      : 0.1400774
TotalMilliseconds : 140.0774

As you can see from the output above, the command took 140 milliseconds. This is not too bad, but I need to take into consideration that this is my test environment, which I just rebuilt, and it has fewer than 30 work items in it. So I ran it again in an older environment with a lot of ticket data.

Measure-Command {
$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem) | ?{$_.Id -eq "IR31806"}
} | FL TotalSeconds, TotalMilliseconds

TotalSeconds      : 13.6384204
TotalMilliseconds : 13638.4204

This time it took 13 seconds to run, compared to 140 milliseconds before. I ran the Measure-Command several more times to ensure that this was not a one-off result. On average the command took 12 seconds to run. Having a script run for 12+ seconds every time this workflow is initiated can start causing a backlog on your system, depending on how often it runs. So let's see if we can improve the execution time. The first thing we can try is to narrow our command down to just the Incident class.

Measure-Command {
$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem.Incident) | ?{$_.Id -eq "IR31806"}
} | FL TotalSeconds, TotalMilliseconds

TotalSeconds      : 10.270151
TotalMilliseconds : 10270.151

Now the command is completing in an average of around 10 seconds. You can see that we saved some time, but not a lot. So let's take a look at another way we can improve the execution time.

You may have noticed that the command is using a Where clause to get the specific Incident. However, since the Where clause is a piped command, PowerShell will run the first half of the command completely before applying the Where clause after the pipe. This means that the first part of the command will return all items in the Incident class, and only then filter down to the one we want. So instead of using the Where clause, I'm going to test using the Filter parameter that is part of the Get-SCSMClassInstance cmdlet. This causes the filtering to happen during the execution of the Get-SCSMClassInstance cmdlet.

Measure-Command {
$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem.Incident) -Filter 'Id -eq "IR31806"'
} | FL TotalSeconds, TotalMilliseconds

TotalSeconds      : 0.1083781
TotalMilliseconds : 108.3781

So now our command is running at an average of around 100 milliseconds. This means the command is now running faster than the original one did in the brand new, clean environment, but I still think we can make it faster. One way to do that is by using the Incident's Guid instead of the Work Item Id. The Guid can be passed directly to Get-SCSMClassInstance without the need for any filters or Where clauses.

Measure-Command {
$IR = Get-SCSMClassInstance -Id 4dcab5b1-8b26-e0bd-74ae-005699366a48
} | FL TotalSeconds, TotalMilliseconds

TotalSeconds      : 0.0352629
TotalMilliseconds : 35.2629

As you can see, by using the Guid the command now runs in 35 milliseconds. We have now taken the execution time from over 10 seconds to under 40 milliseconds. Now you may be asking where I got the Guid from. In this case I got it by saving the Incident to the variable $IR and then outputting the value of $IR.EnterpriseManagementObject.Id.Guid. Of course, I had to get the Incident before I could get the Guid, so this approach may not be the best for scripts you run manually. However, as you will see, when you are creating a workflow you can have the system pass this value straight to the script.
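
For reference, here is a minimal sketch of pulling the Guid out of an Incident you have already retrieved, using the same example Incident from above:

$IR = Get-SCSMClassInstance -Class (Get-SCClass -Name System.WorkItem.Incident) -Filter 'Id -eq "IR31806"'
# The Guid lives on the underlying EnterpriseManagementObject
$IR.EnterpriseManagementObject.Id.Guid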

Now I have one last thing to check when it comes to the script. In the example I am saving the Incident to the variable $IR, then on the next line I am updating the support group. I ran both lines together using Measure-Command. Then I tested it again with everything on one line. I found that in either case the run time was about the same, so for ease of readability I'm going to leave it at two lines.
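
For comparison, this is roughly what the single-line version I timed looks like (a sketch using the same Guid and update from above; your timings will vary):

Measure-Command {
Get-SCSMClassInstance -Id 4dcab5b1-8b26-e0bd-74ae-005699366a48 | %{$_.TierQueue="IncidentTierQueuesEnum.Tier1";$_} | Update-SCClassInstance
} | FL TotalSeconds, TotalMilliseconds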

Script Portability

Now that our script is running as efficiently as possible, we need to make it as portable as possible. The best way to do this is to never hardcode any names or paths. In the case of a Service Manager workflow, we are going to need to import the Service Manager PowerShell module. As you know from previous posts, this module is not always properly registered in PowerShell, so it is best to point directly to the Service Manager install directory. Luckily, when Service Manager is installed it writes the install directory to a registry key, so we can query that key to get the path and then import the module. Also, to make things even faster, we can check whether the module is already loaded and, if so, skip importing it. The example below shows how to do this, and it is the start of every Service Manager workflow I create.

if(@(Get-Module | Where-Object {$_.Name -eq 'System.Center.Service.Manager'}).Count -eq 0)
{
    $InstallationConfigKey = 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup'
    $InstallPath = (Get-ItemProperty -Path $InstallationConfigKey -Name InstallDirectory).InstallDirectory + "Powershell\System.Center.Service.Manager.psd1"
    Import-Module -Name $InstallPath -Global
}

Error Checking

I'm not going to go into too much detail on error checking, because the techniques for it are not unique to Service Manager. But I would recommend thoroughly testing your scripts and learning about try/catch/finally blocks in PowerShell.
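
As a simple illustration, the update in our workflow script could be wrapped like this; the log file path is purely hypothetical and just shows where you might record a failure:

try
{
    # -ErrorAction Stop turns cmdlet errors into terminating errors so the catch block sees them
    $IR = Get-SCSMClassInstance -Id $Id -ErrorAction Stop
    $IR | %{$_.TierQueue="IncidentTierQueuesEnum.Tier1";$_} | Update-SCClassInstance -ErrorAction Stop
}
catch
{
    # Hypothetical example: record the failure somewhere you can review later
    Add-Content -Path 'C:\Temp\WorkflowErrors.log' -Value "$(Get-Date) - $($_.Exception.Message)"
}
finally
{
    # Runs whether the update succeeded or failed; useful for any cleanup
}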

Putting it all Together

Now that we have our full script, as shown below, it is time to go back to the Authoring Tool.

if(@(Get-Module | Where-Object {$_.Name -eq 'System.Center.Service.Manager'}).Count -eq 0)
{
    $InstallationConfigKey = 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup'
    $InstallPath = (Get-ItemProperty -Path $InstallationConfigKey -Name InstallDirectory).InstallDirectory + "Powershell\System.Center.Service.Manager.psd1"
    Import-Module -Name $InstallPath -Global
}
$IR = Get-SCSMClassInstance -Id $Id
$IR | %{$_.TierQueue="IncidentTierQueuesEnum.Tier1";$_} | Update-SCClassInstance

In the Authoring Tool, open the Script Body properties. There are two sections in this window: the Script Body and the Script Properties. In the Script Body, click the down arrow in the View or Edit Script section and paste in your script.

[Screenshot: Script Body properties with the script pasted in]

You'll notice that in the script we are using the variable $Id in the Get-SCSMClassInstance cmdlet to return the Incident. You'll also notice that we are not setting it anywhere in the script. This is because we are going to have Service Manager pass the value to the script at runtime. To do this, go to the Script Properties section and click New to create a parameter for your script. Set the name field to the same name as your PowerShell variable, minus the dollar sign. In this case our name will be Id.

[Screenshot: Script Properties with the new Id parameter]

Next, click the grey box next to the Value field. Here is where we tell Service Manager what value to pass to the script. In our case we want the Guid, so on the left select Use a class property. Then scroll down and look for the property ID (Internal). This is the Guid of the item; the field that is just ID is the work item Id, for example IR1234. After you select the ID (Internal) property, click OK.

[Screenshot: selecting the ID (Internal) class property]

Click OK again to close the Script Properties. Then save your management pack and import it into Service Manager. Also, remember that when you create a workflow, a DLL with the same name as the workflow is created in the management pack's folder. You will need to copy this DLL into the Service Manager program folder on the primary management server.
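
If you prefer to script that copy, a sketch like the one below works; it reuses the registry lookup from earlier, and the management pack folder and DLL name shown here are hypothetical examples only:

# Read the Service Manager install directory from the same registry key used in the import snippet
$InstallDir = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup' -Name InstallDirectory).InstallDirectory

# Copy the workflow DLL from the management pack's folder; replace the source path with your own
Copy-Item -Path 'C:\ManagementPacks\MyWorkflowMP\SetSupportGroupWorkflow.dll' -Destination $InstallDir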

To Remove or Not to Remove

There is often debate as to whether or not you need to add Remove-Module to the end of your PowerShell scripts. Personally, it is not something I bother with, for a couple of reasons. First, once your workflow stops running the session is closed, so any loaded modules are unloaded along with it. Second, if for some reason the module is not unloaded, the check at the beginning of our script will simply skip the import. Finally, this seems to be a holdover from SMLets. I have seen SMLets throw error messages if the module is not removed, but I have never seen this problem with the Service Manager cmdlets.
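
For reference, "removing the module" would simply mean ending the script with a line like this:

Remove-Module -Name System.Center.Service.Manager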

If you need one more reason not to remove, here is a quote from Ed Wilson on the Remove-Module cmdlet, from the post How to Remove a Loaded Module.

Typically, the only people who remove modules are those who are developing the module in question or those who are working in an application environment that’s encapsulating various stages in the process as modules. A typical user rarely needs to remove a module. The Windows PowerShell team almost cut this feature because it turns out to be quite hard to do in a sensible way.

In this post we covered creating a PowerShell workflow and some best practices around them. Stay tuned for the next blog in this series where I’ll go into reporting on and troubleshooting workflows with PowerShell. Be sure to check the Overview Post for more content in this series.