Tips and Tricks: OpsMgr 2012 Agent Push Failed (ErrorCode 80070643 and 2147944003)

While pushing an Agent out from the Console, I noticed a failure with ErrorCodes 80070643 and 2147944003, as shown in the Task Output below.

The fix was easy:

  1. Ensure that the remote computer has the following permissions set on the location below, then re-push the Agent (an example of setting them from the command line follows this list):
    • C:\WINDOWS\winsxs\InstallTemp
      • Full Control – Administrators
      • Full Control – System
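If you would rather set those permissions from a command line than from the Security tab, here is a minimal sketch run from an elevated prompt on the remote computer (assuming the default Windows directory):

icacls "C:\WINDOWS\winsxs\InstallTemp" /grant "Administrators:(OI)(CI)F"
icacls "C:\WINDOWS\winsxs\InstallTemp" /grant "SYSTEM:(OI)(CI)F"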
Task Output:
 
<DataItem type="MOM.MOMAgentManagementData" time="2012-08-26T13:51:49.3564866-05:00" sourceHealthServiceId="AA7DE142-13D5-99B4-06DC-E554C1578A8F"> <ErrorCode>2147944003</ErrorCode> <Operation>1</Operation> <Description>The Agent Management Operation Agent Install failed for remote computer SVR01.domain.local. Install account: DOMAIN\SCOM_ACTION Error Code: 80070643 Error Description: Fatal error during installation. Microsoft Installer Error Description: For more information, see Windows Installer log file "C:\Program Files\System Center 2012\Operations Manager\Server\AgentManagement\AgentLogs\SVR01AgentInstall.LOG C:\Program Files\System Center 2012\Operations Manager\Server\AgentManagement\AgentLogs\SVR01MOMAgentMgmt.log" on the Management Server. </Description> <AgentName>SVR01.domain.local</AgentName> <PrincipalName>SVR01.domain.local</PrincipalName> <SoftwareUpdateResult>0</SoftwareUpdateResult> <SoftwareUpdateInstalled/> <SoftwareUpdateNotInstalled/> <SoftwareUpdateFailureDesc/> </DataItem>

Data deduplication is one of Windows Server 2012’s many new features.  It’s a role that you can enable on volumes to save space.  It looks for duplicate data in files on a volume and, rather than keeping the same data in multiple files, it consolidates it.  It does not use compression.  When it finds duplicate data it keeps only one copy, and the other files get pointers to that data.  This saves space.  From an end user standpoint it’s seamless; users don’t even know what is going on in the background.  For detailed information please refer to the TechNet article found at http://bit.ly/MBTvdo .

Data deduplication is not recommended on volumes that contain files that are locked open.  That means running virtual machines, SQL 2012 data files, Exchange data files, and the like.  Data deduplication is designed for file shares, software deployment shares, VHD libraries, offline VMs, and other files that are not locked open all the time.

One thing to note is that data deduplication will not work on ReFS-formatted volumes.  ReFS is designed for locked-open files like running virtual machines, SQL 2012 data files, Exchange data files, etc.  Data deduplication will only work on NTFS volumes.

The first thing to do is install the Data Deduplication role service.  It’s under File and Storage Services, under File and iSCSI Services.
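If you prefer PowerShell, a minimal equivalent sketch (the feature name on Server 2012 is FS-Data-Deduplication) is:

Import-Module ServerManager
Install-WindowsFeature -Name FS-Data-Deduplication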

[screenshot]

When you select it you may get a prompt to install some other features.

[screenshots]

A reboot is not required after you install the role.

The next step is to configure it.  It’s as simple as right-clicking the volume in Server Manager, clicking Configure Data Deduplication, and supplying a few bits of information.  To demonstrate that data deduplication will not work on ReFS volumes, you’ll see in the screenshot below that the option is not available.  Just because ReFS is newer than NTFS doesn’t mean it’s better and should be chosen for everything.  NTFS should still be used for general file storage and files that get opened and closed; ReFS is great for files that are open and locked most of the time.

After I formatted the drive as NTFS, the option became available.

[screenshot]

Before we actually enable data deduplication, let’s use the ddpeval.exe tool.  It’s installed in C:\Windows\System32 when you install the Data Deduplication role.  This tool will give you an estimate of how much space you’ll save on a volume if you decide to use data deduplication.

The tool is command-line only and has a few switches.

[screenshot]

I’m just going to run it against my E: drive.  I have 3 VHDs and some XML files there totaling 24.9 GB.
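The run itself is just the executable pointed at the volume; a rough sketch (see the tool’s built-in help for the full switch list):

C:\Windows\System32\ddpeval.exe E:\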

[screenshots]

In the screenshots above you’ll see that running ddpeval.exe uses the processor a lot; it used between 50 and 75% of the CPU.  I wouldn’t recommend running it during the day.  Also, depending on how many files you have and how large they are, it could take some time to run.  With 3 large offline VHDs and a few XML files, the ddpeval.exe process took 19 minutes.

According to ddpeval.exe, if I deduplicate this volume it should go from 24.9 GB to 5.13 GB!  WOW, let’s do it!

The next few screenshots show setting up data deduplication.  I’m going to enable it on volume E, deduplicate files older than 0 days (the default is 5, but since this server was built today anything other than 0 will not work), and start it at 10:45 AM.  I’m using the scheduled method; it uses more CPU time, but you can control when it runs.  The other option is to enable background optimization, which runs at low CPU priority when the server is idle, but for demonstration purposes I wanted to kick this off myself.  I’m also not going to exclude anything, since this is just a demonstration; depending on your servers you might exclude certain folders or file types.
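For reference, the Deduplication cmdlets can do the same thing; a minimal sketch of what I configured in the GUI (enable the volume, allow files of any age, and start an optimization job right away) would be:

Enable-DedupVolume -Volume E:
Set-DedupVolume -Volume E: -MinimumFileAgeDays 0
Start-DedupJob -Volume E: -Type Optimization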

[screenshots]

That’s about it.  When you set this up it’s basically a scheduled task. 
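You can confirm the schedules it creates with the Deduplication PowerShell module (just a quick check, not a required step):

Get-DedupSchedule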

[screenshots]

In the next few screenshots you’ll see it running with high CPU utilization.  Remember, we kicked this off manually, and when you do that it runs at normal priority.
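If you would rather watch it from PowerShell than from Task Manager, the running job also shows up here (a quick check):

Get-DedupJob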

[screenshot]

It took 36 minutes but in the end enabling data deduplication saved a lot of room.

[screenshot]

You’ll see the used space is now 6.46 GB in the Properties window!

[screenshot]
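The savings can also be checked from PowerShell; a quick sketch using the Deduplication module:

Get-DedupStatus -Volume E: | Format-List SavedSpace, FreeSpace, OptimizedFilesCount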

Data deduplication can’t be used for everything but if you have a file server it’s worth looking into. 


I’ve seen a lot of environments and the one typical commonality is that there simply was no planning or forethought put into where to store everything like source files, packages, downloads, etc. Chaos reigns and it ain’t pretty.

For all of my lab builds and production rollouts, I use a simple script to build up a nice, consistent folder structure that is easy to follow and enforce, eliminates ambiguity, and is simple. Note that in really large environments the exact details I outline here may not apply, but the principles do: namely, keeping it simple (stupid) and consistent.

One of the first things I notice in most of these environments is that they have multiple shared folders on the same system for the different items. Why? Why use three or five or whatever number of shared folders when one will suffice? I always create a single, top level folder called ConfigMgr (not SCCM) and share it out. Then, under this folder I create multiple sub-folders for all of the content required. This way, everything that is needed is easily found and backed up. This share could even be hosted by DFS and physically located on a non-ConfigMgr system (or multiple if you are using DFS).

Another common source of issues is simple organization and naming, particularly of package/application source files. This is key so that you actually know what’s in each folder, which package or application it corresponds to, and whether it’s being used at all. I always use three levels of sub-folders: vendor, application, version. For the version sub-folder, where applicable, I include the version number, architecture (x86 or x64), edition (like Pro Plus or Enterprise), and an indicator for App-V if that’s in use in the environment. You could use more sub-folders for these, but that’s usually overkill and unnecessary.
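As a purely hypothetical example of that vendor, application, version convention, the source files for a 32-bit Office Professional Plus package might land somewhere like:

Content\Software\Microsoft\Office2010\14.0-x86-ProPlus

The exact names are made up; the point is that vendor, application, and version are always separated and always in the same order.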

Permissions can get a little tricky — but just like King Burger, don’t get crazy with it. Best practice for shared folder permissions is to use Everyone Full or Read and let NTFS control access on a granular level. In this case, I use Everyone Full because ConfigMgr needs to write files to some of these directories and you as an administrator surely will want to also.

Here’s a snippet of the batch file I use to set up my typical hierarchy:

@echo off
pushd %~dp0

echo Creating top-level Directory
md ConfigMgr
cd ConfigMgr
net share ConfigMgr=%cd% /GRANT:Everyone,FULL

echo Creating Sub-directories
md Content
md Content\Software
md Content\Updates
md Content\Drivers
md Content\App-V
md Content\OSD
md Content\OSD\BootImages
md Content\OSD\OSImages
md Content\OSD\OSInstall
md Content\OSD\MDTToolkit
md Content\OSD\MDTSettings
md Content\OSD\Drivers
md Content\OSD\MDTSettings\Deploy
md InstallationUpdates
md BootImageFiles
md Captures
md Hotfixes
md Scripts
md StateCapture
md Tools
md Tools\PSTools
md Stuff
md MDTLogs
md Import
md Import\Drivers
md Import\MOFs
md Import\Baselines

echo Setting permissions
icacls Captures /grant %USERDOMAIN%\_cmnaa:(OI)(CI)F
icacls MDTLogs /grant %USERDOMAIN%\_cmnaa:(OI)(CI)F
icacls StateCapture /grant LocalService:(OI)(CI)F

popd

And, here’s a PDF that shows my “typical” folder structure with a brief explanation of each and permissions I often assign (this doesn’t match completely with the batch file above as these are both just starting points).

ConfigMgrFolderStructure.pdf

I recently assisted a client of ours with implementing Identity Federation (Single Sign-On) with ADFS. This client was an original BPOS customer that was transitioned to Office 365. The client originally migrated to BPOS from on-premises Exchange 2003. Once we implemented Identity Federation, all of the accounts became Federated users, and any changes to email aliases now have to be made against the on-premises Active Directory accounts. This has not been an issue in any of my previous Office 365 deployments, as I have always set up the migration as a Hybrid Deployment with an Exchange 2010 SP2 server on-premises. With the Hybrid server, and with DirSync configured with the Hybrid Deployment checkmark, the email addresses for the users are replicated back to on-prem and can be modified with the EMC or directly via the proxyAddresses attribute using ADSIEDIT or the Attribute Editor in AD Users and Computers (assuming 2008 R2 DCs).

So for this client, when we installed and configured DirSync, the Hybrid Deployment checkmark was greyed out and unavailable. At the time I figured this was just because the forest/domain was not prepped for Exchange 2010 SP2, and I really didn’t think much of it. Well, it didn’t take long for that greyed-out option to bite us. We wanted to switch around some email addresses and first attempted it via the Exchange Online Management Portal, where we got an error that Federated user properties cannot be modified and that the changes must be made on the on-premises Active Directory user account. That is pretty hard, seeing how none of the email addresses can be synced down to the AD user account.

PowerShell to the Rescue! I needed to get the EmailAddresses from the online account replicated to the AD account’s proxyAddresses attribute, so I created a PowerShell script that does this for me.

I designed the script to run from a Server 2008 R2 domain controller. The script starts out by importing the Active Directory cmdlets. It then creates a remote PowerShell session to Exchange Online. Next it gets all of the mailboxes in Exchange Online into the variable $Mailboxes. It then loops through each mailbox and connects to the online mailbox using the user’s UPN. The script then gets the EmailAddresses from the online mailbox and puts them into the $OnlineAdd variable. Finally, it cycles through all the email addresses and adds them to the on-prem AD account’s proxyAddresses attribute.

The script is below:

# Import the Active Directory cmdlets (run from a Server 2008 R2 DC or a machine with RSAT)
Import-Module ActiveDirectory

# Prompt for Office 365 admin credentials and connect to Exchange Online
$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session

# Get all Exchange Online mailboxes
$Mailboxes = Get-Mailbox -ResultSize Unlimited

foreach ($MB in $Mailboxes)
{
    $UPN = $MB.UserPrincipalName

    # Get the online mailbox and its email addresses
    $OnlineUser = Get-Mailbox $MB.UserPrincipalName
    $OnlineAdd = $OnlineUser.EmailAddresses

    # Add each online address to the on-prem AD account's proxyAddresses attribute
    foreach ($add in $OnlineAdd)
    {
        Get-ADUser -Filter {UserPrincipalName -eq $UPN} | Set-ADUser -Add @{proxyAddresses = $add}
    }
}

So now my client can administer the email addresses for the Federated users by changing the proxyAddresses attribute on the AD account. I warned them to only change the addresses that start with “SMTP” (the primary, or reply-to, address) or “smtp” (the alias addresses), and to make sure they do not change any other address type. I think I smell an app I could write to make changing only the smtp addresses possible through a GUI PowerShell interface.
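For example, a typical proxyAddresses set might look like the following (hypothetical addresses); only the casing of the prefix distinguishes the primary address from the aliases:

SMTP:jsmith@contoso.com
smtp:john.smith@contoso.com
smtp:jsmith@contoso.onmicrosoft.com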