I recently ran into an issue where I could not deploy a VM from a template in SCVMM 2012. Every time I tried, it would fail at the Install VM components step. The job would report a failure with the following error message:
Error (2927)
A Hardware Management error has occurred trying to contact server <servername> :w:InternalError :HRESULT 0x8033811e:The WS-Management service cannot process the request. The WMI provider returned an 'invalid parameter' error. .
Unknown error (0x8033811e)
Recommended Action
Check that WinRM is installed and running on server <servername>. For more information use the command "winrm helpmsg hresult".
I would also receive this error message when trying to mount an ISO from a share, or when trying to create a VM template from a deployed machine.
After ruling out any firewall or network issues, I began digging into what caused the problem. I found that there can be multiple reasons for this error message: WinRM not running or not configured properly, constrained delegation not being set up, or a port conflict with BITS. I have taken all of these into consideration and created the troubleshooting guide below.
Troubleshooting Error (2927) in SCVMM 2012
Step 1
Confirm that WinRM is running and setup for remote management.
1. Open command prompt and enter: winrm qc
2. If WinRM is configured you should receive a prompt that states:
WinRM service is already running on this machine.
WinRM is already set up for remote management on this computer.
3. If WinRM is running and configured, then move on to the next step.
4. If WinRM is not configured you can use the winrm quickconfig command to set it up. For more on this refer to http://msdn.microsoft.com/en-us/library/windows/desktop/aa384372(v=vs.85).aspx
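To quickly confirm that WinRM on the host is also reachable from the VMM server, you can run an identify request remotely. This is just a sketch; substitute your own host name:
winrm id -r:<servername>
If this returns the protocol and OS version information, basic WinRM connectivity between the two machines is working.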
Step 2
Configure Constrained Delegation for the Hyper-V host
1. Verify that the VMM Server service (vmmservice.exe) is running under a domain account and not the LocalSystem account.
2. In Administrative Tools, open Active Directory Users and Computers, and then navigate to the machine account for the computer running Hyper-V.
3. Right-click the computer account for the Hyper-V host, and then click Properties.
4. On the Delegation tab, click Select this computer for delegation to specified services only, and then click Use any authentication protocol.
5. To allow the Hyper-V computer account to present delegated credentials for the library servers:
a. Click Add.
b. In the Add Services dialog box, click Users or Computers, select each VMM library server that stores ISO image files, and then click OK.
c. In the Available services list, select the cifs protocol (also known as the Server Message Block (SMB) protocol) for each of the VMM library servers, and then click Add.
6. Check to see if this fixes the problem. If not, continue on to Step 3.
Step 3
Check for a port conflict between SCVMM and the Hyper-V host.
1. On the Hyper-V host open the Event Viewer and navigate to System events.
2. Look for EventID: 15005 with Source: HttpEvent
3. Look in the details of the event for the port number.
a. In my case the event details stated, “Unable to bind to the underlying transport for [::]:443”
b. This told me that the conflict was being caused on port 443.
4. On the SCVMM server open regedit
5. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings
6. Change the value of BITSTcpPort from 443 to an unused port number. I used 8500.
7. Reboot the SCVMM server.
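If you prefer the command line for these steps, the sketch below shows the same check and change. The netstat check is my own addition for identifying which process owns port 443 on the host, and I believe BITSTcpPort is a DWORD value, but confirm the type and path in regedit before running reg add:
rem On the Hyper-V host: find the PID of the process listening on port 443
netstat -ano | findstr :443
rem On the SCVMM server: move the BITS transfer port to an unused port (8500 here)
reg add "HKLM\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings" /v BITSTcpPort /t REG_DWORD /d 8500 /f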
In my case this finally resolved the issue, and I have been able to deploy VMs, create templates and attach ISOs without any issues. By default SCVMM uses port 443 to initiate a BITS file transfer from the Library server to the destination host. If your host server is using 443 for another process it will cause a conflict and the job will fail. By changing the port that SCVMM uses for the transfer you can prevent this conflict.
Clearing your Windows clipboard is important because the clipboard keeps copied content available to anyone who has access to the computer. The clipboard temporarily stores whatever you copy, whether that is sentences, usernames, or even passwords, and the information remains there even after it has been pasted, which can become a security concern. It is worth developing the habit of clearing your clipboard to avoid security issues.
Follow these steps to clear your Windows 7 clipboard:
1. Right-click on your desktop, and select New –> Shortcut
2. Copy and paste the following command into the shortcut:
cmd /c "echo off | clip"
3. Choose Next.
4. Enter a name for this shortcut, such as Clear My Clipboard.
5. Double-click the shortcut anytime you want to clear your clipboard.
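If you'd rather not create a shortcut, the same command can be run on demand from a command prompt (clip.exe ships with Windows 7):
echo off | clip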
And here you have it – a quick and painless way to clear the Windows 7 clipboard!
Hope you enjoyed this quick and easy how-to. If you want to know how to clear the clipboard for Excel for Microsoft 365, Excel 2021, Excel 2019, Excel 2016, Excel 2013, read here.
Even though most browsers are configured to suppress JavaScript errors, you want to ensure that you handle these errors effectively.
The best way to do this is to wrap all of your JavaScript code in try/catch statements:
function HandleBillingCodeFieldChange(idx)
{
try
{
$("#bc_changed_" + idx.toString()).val("1");
}
catch (ex) { /* swallow the error so the rest of the page keeps working */ }
}
In addition to that, it is always a good idea to have an overall JavaScript error suppressor. You probably only want to enable this in production, however, because it makes it more difficult to detect and fix JavaScript errors in development:
function silentErrorHandler(){return true;}
window.onerror=silentErrorHandler;
I recently ran into an issue in which I was loading a SharePoint view into an iFrame in my application. It was loading just fine, but it was throwing out a random JS conflict error, even though I had taken the above precautions. I even tried adding an onerror event to my iFrame in an attempt to suppress the error, but it didn't work.
I was, however, able to find a workaround for this in IE by adding a security="restricted" attribute to my iframe:
<iframe name="MyIFrame" id="MyIFrame" src="MySource.aspx" security="restricted"></iframe>
We had a reported application issue in which the user was receiving a “Request entity is too large” error over SSL only. When accessing the same application with the same data over regular HTTP, everything worked fine.
Upon further research, we determined that over SSL, the entire request entity body must be preloaded during negotiation. In addition, SSL will use the value of the UploadReadAheadSize metabase property to validate the request size. http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/7e0d74d3-ca01-4d36-8ac7-6b2ca03fd383.mspx?mfr=true
The UploadReadAheadSize metabase property specifies the number of bytes that a Web server will read into a buffer and pass to an ISAPI extension or module. This occurs once per client request. The ISAPI extension or module receives any additional data directly from the client.
To fix this issue, the UploadReadAheadSize metabase property value needs to be increased. Please note that the default value for UploadReadAheadSize is 49152 bytes, and the maximum is 2147483647.
In this example, we will increase the value to “204800”.
- On the web server, open the command prompt: Run –> CMD.EXE.
- Change directories to C:\Windows\SysWOW64\inetsrv (assumes 64-bit): CD C:\Windows\SysWOW64\inetsrv
- Determine the current metabase property value: appcmd.exe list config -section:system.webServer/serverRuntime
- Increase the metabase property value: appcmd.exe set config -section:system.webServer/serverRuntime /uploadReadAheadSize:"204800" /commit:apphost
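You can then re-run the list command to confirm that the new value took effect:
appcmd.exe list config -section:system.webServer/serverRuntime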
There are many questions to ask yourself or your customer when trying to outline the overall System Center 2012 Configuration Manager infrastructure. Whenever possible, you should use a single Primary Site.
In the beginning, the first question that must be asked as it relates to infrastructure design is whether you have, or expect to have, more than 50,000 clients if using SQL Server Standard, or 100,000 if using SQL Server Enterprise.
Technical Rules to follow
- A Primary Site can only support 50,000 clients when you co-locate SQL on the same server, and it can only support 100,000 clients* if SQL is on a separate server. If you will go over the count for your configuration, you will want to use a CAS. Simply put, if you will have more than 100,000 clients you will need a CAS and multiple Primary Sites. *The version of SQL (Standard or Enterprise) does not increase or decrease the client load for a primary the way it does for the CAS.
- A Primary Site, or all Primary Sites combined, cannot have more clients than the CAS can support. A CAS can support 50,000 clients total if it is using SQL Standard, and up to 400,000 total if using SQL Enterprise.
- You will need a CAS and multiple Primary Sites if you will have more than 250 Secondary Sites (a single Primary Site can only have a maximum of 250 Secondary Sites).
- You will also need more than one Primary Site if you want more than 250 distribution points. A single Primary Site can only support 250 distribution points, though you can add Secondary Sites below a primary to add additional distribution points, up to a maximum of 5,000 per primary and its Secondary Sites.
Note: Upgrading SQL Standard to SQL Enterprise later on the CAS or a primary will not change the limit. For example, upgrading a site from SQL Server Standard to SQL Server Enterprise does not reset its 50,000-client support to 100,000 clients just because the site now runs SQL Server Enterprise.
Arguments for having more than one Primary Site
There are other reasons to have more than one Primary Site, but the only technical reasons are those above. The reasons below should be evaluated as to whether they apply to your design. It is also important to note that the Configuration Manager site is not a security boundary: permissions can apply across all sites and not just one, as was the case in older versions of Configuration Manager.
Splitting the Load across two Primary Sites
This idea suggests that you will have a Central Administration Site (CAS) and two primaries, splitting the clients across multiple Primary Sites, with the idea that if you lost one primary, you could still support half of your environment until the other primary is recovered. The pros and cons of this are as follows:
Pros
- If you lose the CAS or one primary, then at least one primary is still functional, as are its Secondary Sites, until the CAS or other primary is brought back online. (The determining factor in whether this is truly a pro is "How long will it take me to get the CAS or other Primary Site back online?", as well as what the SLA calls for as it applies to Configuration Manager within the organization.)
- Removes the single-point-of-failure scenario from the design, as clients assigned to other primaries would still be able to report in and be managed. (Please note this is also reflected in the second con below.)
Cons
- You now need at least 2 more servers: a CAS and another Primary Site.
- You now have three servers at a minimum that could have outages, which multiplies your points of failure.
- Increased Licensing costs
- Increased hardware costs.
- Increased SQL Replication
- Increased change latency across the infrastructure, as well as locking due to replication latency.
Redundancy and High Availability
The data from Primary Sites and the CAS replicates amongst the appropriate sites in the hierarchy. The CAS also provides centralized administration and reporting. It is also important to note that automatic client reassignment does not occur when a Primary Site fails.
When a Primary Site fails, communication between that Primary Site and its Secondary Sites is broken, and the Secondary Sites cannot be re-parented. Coupled with the fact that clients cannot easily be re-assigned in the time it would take to recover the failed Primary Site, there is really not a valid reason to do this unless the time it will take you to recover the Primary Site is greater than the time it would take to reassign and reinstall all of the Secondary Sites the failed primary had, and that is usually not the case. So when given this argument, the end outcome is usually that this is not a good reason at all.
The rare case that I have seen where this was valid was when natural disaster or war-type precautions for redundancy were being considered, where the other location won't be coming back online for quite some time; in that scenario, it could be a valid design. I myself have had two such customers that did it for just this reason.
Geographic
In some cases, companies operating across different countries require that each continent or country can share data, while the clients in each country or continent must still be manageable locally. In this case, which is a business case for continuity, it would be feasible to have more than one Primary Site. The choice to use another Primary Site here should be based on connectivity and client count, because just using a Secondary Site or remote Distribution Point is often good enough for geographic separation.
Political
There are many political reasons used as arguments for adding more Primary Sites; I will write about a few of them, though I am sure I have not seen them all. Political reasons are what they are, and sometimes corporations and their IT departments must abide by specific governance or other security factors when building out an SCCM 2012 infrastructure.
Political Argument 1: Separate IT Departments
In some companies, the IT departments may be separated out and have different roles throughout the company. Despite what they might think, Configuration Manager 2012 is fully capable of assigning and isolating privilege within a single Primary Site. Great strides were made in Configuration Manager 2012 to ensure this argument is invalid, so this should not be a reason for adding another Primary Site.
Political Argument 2: Governance and Compliance
With the advent of IT governance and compliance measures such as HIPAA and Sarbanes-Oxley, it is important to understand that there is no true way to "not share" data amongst the Primary Sites and the Central Administration Site. Isolation of data cannot be accomplished by adding Primary Sites, because that data will be replicated to the CAS; for this reason, multiple Primary Sites are not a valid solution. If governance and compliance are truly set up this way, then there will likely be separate forests, separate IT teams, and so forth, which would mean that a completely separate Configuration Manager 2012 hierarchy would need to exist in order to remain in compliance.
Political Argument 3: Separation of Content
In some cases, the content (packages and applications) must remain within one country only. In order to do this, you would need a Primary Site that can "own" the content. The alternative would be a remote DP in that country, but the problem there is that the content would be coming from a Primary Site in a different country, so that is not a viable option. The solution is to have a Primary Site in that country that owns the content, with a Distribution Point pointing to that Primary Site; this allows role-based administration to be used to secure the content and prevent it from leaving that Primary Site and Distribution Point. In this case, a separate Primary Site is valid.
Political Argument 4: That’s just the way we do things!
This one is sometimes true, and in that case you really have no choice other than to point out the reasons why and why not, and rest your case.
If you’ve ever had the privilege of troubleshooting a stubborn Operations Manager Reporting role like I have (more times than I’d like to admit), you may need to determine which OM Management Server is configured as the Collection Server for your Reporting role, assuming you’ve split the roles in your OpsMgr environment! This is the server that you specify during the OM Reporting setup [Configuration > Specify a Management server]. In terms of troubleshooting OM Reporting, you may find that you need to not only find the server, but also ensure that the configuration matches.
You can check in two key places, both of which exist on the OM Reporting Server role system.
- Registry
Check HKLM\Software\Microsoft\Microsoft Operations Manager\3.0\Reporting
- The rsreportserver.config file located in the Reporting Services\ReportServer folder.
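For a quick check from a command prompt, you can query the registry value directly. The value name below (DefaultSDKServiceMachine) is the one I have seen hold the management server name; treat it as an assumption and confirm it against your own Reporting key:
reg query "HKLM\Software\Microsoft\Microsoft Operations Manager\3.0\Reporting" /v DefaultSDKServiceMachine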
As long as these match, this portion of the configuration checks out!
In this new cyberworld saturated with abundant mouse clicks, it has become more necessary to try to get your user’s attention before allowing them to perform an action that may result in a critical error. One of these actions is leaving the application without first saving changes. In some cases, a request to leave a page without first saving changes may be on purpose, but in many cases it may have been the result of an accidental mouse click.
Browsers have provided a built-in prompt that warns users before leaving certain pages. The user then has the option of leaving the page or staying on the page. This built-in prompt is not at all flexible, by design, to prevent rogue sites from actually preventing users from leaving their web sites.
The following is an example of how you might implement this prompt using the window.onbeforeunload event.
STEP 1: Mechanism to determine if there are unsaved changes
Because there is no reason to warn users who have not made any changes, the first thing you want to do is create a hidden field or client-side variable that tracks whether your page contains unsaved changes.
In this example, I have created a hidden input to store my change status:
@Html.Hidden("UnsavedChanges", "0")
I then created a function to update the change status:
function HandleFieldChange()
{
$("#UnsavedChanges").val("1");
}
Finally, I used a little jQuery to easily add the field change event:
$("input").change(function(){
HandleFieldChange();
});
$("select").change(function(){
HandleFieldChange();
});
$("textarea").change(function(){
HandleFieldChange();
});
$("input:checkbox").click(function(){
HandleFieldChange();
});
$("input:radio").click(function(){
HandleFieldChange();
});
STEP 2: Use the browser’s built-in warning prompt before leaving a page.
Again, this cannot be customized. If it could, that would open the door for this functionality to be used maliciously. You don’t have any control over how the warning looks, but you can insert a custom message into the prompt.
This function checks to see if there are unsaved changes. If there are, it calls the browser’s built-in warning and inserts a custom message.
window.onbeforeunload = function() {
if ($("#UnsavedChanges").val() == "1")
{
var _message = "You currently have unsaved changes!!!\n\nAre you sure you want to exit without saving?\n\nChoose 'Leave this page' to exit without saving changes.\nChoose 'Stay on this page' to return to the billing profile.";
return _message;
}
}
RESULT
It’s not as pretty as I would like, but it works!
Upgrading SQL Reporting Services (SSRS) to SSRS 2012 should be a fairly easy task, but there are a few “bumps in the road” to watch out for. For this discussion let’s pretend that we have a SharePoint 2010 farm with 2 WFE, 2 APP, and 1 DB servers. SSRS 2008 R2 has been installed on both of the APP servers with a load balanced URL of spreports.contoso.local, each service uses a unique service account identity, all SPNs have been created, and Kerberos constrained delegation is configured and working properly.
Getting started I’m sure you have a few questions, hopefully this will answer a few of them.
- My web application is currently configured for classic mode authentication. Do I need to configure my web app for claims authentication?
The answer is no. Communication between the web app and SharePoint service applications is almost always converted to claims by the SharePoint STS for intra-farm communication, even if the web app is using classic mode for user authentication. However, you will need to use the Claims to Windows Token Service to convert the claims token back to a Windows identity when connecting to data sources.
- Do I need to uninstall SSRS 2008 R2 first?
No. The SSRS instance will automatically be upgraded when running the upgrade from the SQL installation media.
- Do I have to upgrade the SQL database server where the databases reside?
No, you do not, but it would probably be a good idea.
- Is there any additional Kerberos configuration needed?
Maybe. Using the same identity for the new SharePoint SSRS service application will keep all of the previous configuration intact, but you may need to perform some additional Kerberos-related tasks for the Claims to Windows Token Service – more on this later.
- Do I need to run the SQL upgrade on all servers in the farm?
You can, but I typically run it only on the servers that were running an SSRS instance and install the SharePoint Add-In individually on all remaining farm servers.
- Do I need to do anything to my existing reports?
No. The upgrade process will take care of this for you the first time each report is opened.
- I have PowerPivot installed in my SharePoint farm; will that continue to work?
If it is installed on the same server as SSRS, then no. You will have to completely uninstall PowerPivot before installing SSRS. Afterwards you can install PowerPivot again from the SQL 2012 installation media.
- Is there a difference between SSRS Enterprise, BI, and Standard?
Yes. The Standard edition does not include PowerView or alert subscriptions.
Starting the Upgrade
Ok, now on to the upgrade. Typically what you would do is pop the SQL Server installation media into the “CD” drive of each SharePoint server running an instance of SSRS – in this case, the two APP servers. Just make sure to choose the option to Upgrade from SQL Server 2005, SQL Server 2008 or SQL Server 2008 R2.
Once the upgrade wizard starts, it will install the setup files, run the System Configuration Checker, and detect the existing SSRS instance for you to upgrade. You pretty much just need to move the wizard along until you get to the wizard page called Reporting Services SharePoint mode Authentication. This is where I hit my biggest speed bump.
Error: The credentials you provided for the ” service is invalid.
Typically it is a best practice to use a different domain account for each service instead of using built-in accounts like Network Service. I think, though, there may be an issue with the SQL installer when upgrading SSRS in SharePoint integrated mode to 2012. Every time I entered the password for the SSRS service account I would get an error that states:
The credentials you provided for the ” service is invalid.
Every time I entered the credentials and clicked Next I would see this error, and I later realized that the service account was getting locked out because of too many bad password attempts – even though I clicked Next only once.
The solution: change the service account for the SharePoint SSRS instance to run as Network Service. After the upgrade is complete, you can change the credentials of the new SharePoint service application pool back to the previously used account. To change the service account to Network Service, open the Reporting Services Configuration Manager and change the account. You will probably be prompted to back up your encryption keys during the process.
Now you should be able to sail through the installation and perform the upgrade on each server running an instance of SSRS. Once the upgrade is complete, you should see the new service application listed in SharePoint Central Administration, and a new service application pool will have been created as well. That new application pool for the SSRS service application should be set to execute using the original credentials that SSRS was previously running under, not Network Service. To change this, open Central Administration, click the Security link in the left menu, and then Configure service accounts.
In the list of Service Accounts you should see a new service application pool that was created by the SSRS upgrade, with a name similar to Service Application Pool – MSRSSHP11.0_MSSQLSERVER. Select this application pool and, in the bottom drop-down box, register the account that was previously used by SSRS and select it. This changes the service identity of the new service application from Network Service back to the domain account that already has all of the required SPNs created.
You should then be able to use the newly created SQL Server Reporting Services service application.
But wait, there are some additional steps that you may want to accomplish before calling the upgrade complete.
Rename the Service Application
Personally, I don’t really care for the new default name of the service application (MSRSSHP11.0_MSSQLSERVER_Service_Application), and I want to give it a friendlier name, something like SQL Server Reporting Services. There is no way to rename the proxy using the GUI, but I don’t care so much about the proxy, so I just left it as is.
To do this, select the service application and then click the Properties button in the ribbon above to change the name.
This is where I ran into the next issue.
Grant the SharePoint Farm Account db_owner on the 3 Reporting Services databases.
When I tried to rename the service application, I was presented with an error stating that it was unable to open the ReportServerTempDB and that the login had failed for the SharePoint farm account.
Now that reporting services is hosted as a SharePoint service application, the SharePoint farm account should be granted db_owner permissions on all 3 of the Reporting Services databases – yes, there are now 3 of them.
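As a sketch of the grant (the server, database, and account names below are placeholders; your Reporting Services database names will include a GUID suffix), you can run something like this from a command prompt on a machine with the SQL client tools installed:
sqlcmd -S DBSERVER -E -d ReportingService_GUID -Q "CREATE USER [CONTOSO\sp_farm] FOR LOGIN [CONTOSO\sp_farm]; ALTER ROLE db_owner ADD MEMBER [CONTOSO\sp_farm];"
Repeat for the TempDB and Alerting databases, and drop the CREATE USER statement if the farm account already exists in a given database.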
Error: Login Failed for “NT Authority\Anonymous Logon”
When I finally went to test one of the existing reports I was greeted with yet another error. The error basically stated that it could not connect to one of the data sources in the report as an anonymous user when I would have expected it to connect as my user account. This led me to believe that there was a Kerberos problem.
In this environment, the Claims to Windows Token Service (C2WTS) had already been configured, and SPNs and delegation for C2WTS had already been set up for all of the same data sources in order to support Excel Services. If you have not yet configured C2WTS, you should create an SPN (it doesn’t matter what it is; I just called mine SP/C2WTS) so that the Delegation tab is exposed in Active Directory Users and Computers. You will then need to configure constrained delegation for the service accounts and SPNs for all of the data sources that you plan to connect to. Really, though, all of the delegation settings for the account used for the SSRS service should also be replicated for the account running C2WTS.
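For reference, registering that placeholder SPN against the C2WTS service account from a command prompt might look like the following (the account name is hypothetical, and as noted above the SPN string itself is arbitrary):
setspn -S SP/C2WTS CONTOSO\svcC2WTS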
Every time that I have to setup Kerberos in a SharePoint environment I always refer back to a great Microsoft document Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products. I decided to review the section for setting up Excel services and started double checking my environment to make sure that everything was setup similarly for SSRS. There was one statement in the guide that caught my attention:
“Select Use any authentication protocol. This enables protocol transition and is required for the service account to use the C2WTS.”
I double-checked the service account that SSRS was running under and, sure enough, it was set to Use Kerberos only. Once I changed this to “Use any authentication protocol” and restarted all of the appropriate services, my reports started working again!
Reclaim the old DNS Record
Now that SSRS is running in SharePoint as a service application, the friendly URL that had been used to serve reports and load-balance requests was no longer being used, and I was able to reclaim the old URL of spreports.contoso.local.
I was recently working with a dynamic group membership situation where we needed to include all of the sub-OUs within the group. Our approach was to create a dynamic membership that matched based on a wildcard value that would exist because the sub-OU naming includes the top-level OU naming. The result was just what we needed, so we didn’t need to specify each sub-OU’s membership in the group. I was going to write up the details on this, but I found that someone else had already done so (it’s great being a member of a community that shares information like this – way to go!). The following is a subset of his article, available at http://00shep.blogspot.com/2012/03/scom-groups-dyanmic-members-ou.html
“Note the highest level OU for which you want to capture all sub-systems
- Go to one of the systems in SCOM and view the properties in “Monitoring”. One of the values will be “Organizational Unit” > Copy it
- Create your Dynamic Members inclusion rule
- Select “Windows Computer” > Add
- Property = “Organizational Unit”
- Operator = “Matches Wildcard”
- Value = * + <OU that you copied in step 2>
e.g.
*OU=XenApp-65,OU=Servers,DC=MYDOMAIN,DC=com”
Operations Manager (SCOM, OpsMgr) has the ability to monitor untrusted domains as well as highly segmented/firewalled networks.
Gateways can be within the trusted domain as well, but highly segmented by firewalled VLANs. Gateways installed in the trusted domain do not actually utilize certificates the way untrusted-domain computers do; in this case, the trusted computers utilize Kerberos (SPNs must be registered), and they may also require a trusted internal CA root certificate.
The untrusted Gateway cannot properly communicate with the MS's (Event IDs 20071, 21016):
OpsManager Unable to set up a communication channel with MS
I validate the following:
- Verify Manual Agent Installs show in Pending Actions for approval
- In the Operations Console, Administration > Settings > Security
- Ensure 'Review new manual agent installations in pending management view' is checked
- Recycle the HealthService (System Center Management Service)
- SPNs registered for the DB\DW and MS's
- Restart of servers in this order may be necessary
- DB\DW instances
- RMSe Management Server
- Other MS's
- Install GW as local Administrator
- GW Approval tool run using an account with SysAdmin privileges on the SQL DB
- Certificates are OK
- Trusted Root Certificates on all GW and MS
- Ensure the full name of the computer is used as the friendly name and name of the certificate
- OpsMgr cert unique to each computer and imported
- Using 1024 or 2048 key size (2048 adds slight CPU overhead)
- MOMCertImport changes this to 1024
- Expiration OK
- MOMCertImport changes this to 1 year in the Registry
- If you cannot request the certificate from the GW or Agent, use the web site on the MS to request a server cert
- Gateway approval tool is OK
- Ensure the following files exist on the Management Server:
• Microsoft.EnterpriseManagement.GatewayApprovalTool.exe
• Microsoft.EnterpriseManagement.GatewayApprovalTool.config
- Command run successfully:
Microsoft.EnterpriseManagement.gatewayApprovalTool.exe /ManagementServerName=<managementserverFQDN> /GatewayName=<GatewayFQDN> /Action=Create
- Port 5723 has been validated as open
- I haven't seen the need for 5724 in the past, although it is mentioned in the MS documentation
- Telnet to 5723 on the MS succeeds
- Check DNS
- Mgmt Server name resolves to IP
- Potential Hosts file edit or DNS entry
- Try re-copying the certificate out of the OperationsManager folder into the Trusted Root store and restarting the HealthService
- Flushing the Health Service cache
- Reinstalling the GW and pointing it to a different MS
- Reissuing certificates and reimporting, then running MOMCertImport
- Recycle the HealthService (System Center Management Service)
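As a quick illustration of the port and DNS checks above, run from the gateway (the Telnet client must be installed, and the FQDN is a placeholder):
telnet managementserver.contoso.local 5723
nslookup managementserver.contoso.local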