Quisitive Achieves a Microsoft Gold Datacenter Competency

Toronto, ON / TheNewswire / January 24, 2019 – Quisitive Technology Solutions Inc. (Quisitive or the “Company“) (TSXV: QUIS), a premier Microsoft solutions provider that helps customers navigate the ever-changing climate that their business relies upon, today announced it has attained a Gold Datacenter competency, demonstrating a “best-in-class” ability and commitment to meet Microsoft Corp. customers’ evolving needs in today’s mobile-first, cloud-first world and distinguishing itself within Microsoft’s partner ecosystem.

To earn a Microsoft gold competency, partners must successfully complete exams (resulting in Microsoft Certified Professionals) to prove their level of technology expertise, and then designate these certified professionals uniquely to one Microsoft competency, ensuring a certain level of staffing capacity. They also must submit customer references that demonstrate successful projects, meet a performance (revenue and/or consumption/usage) commitment, and pass technology and sales assessments.

The Datacenter competency recognizes partners who are transforming data centers into more flexible, scalable, and cost-effective solutions. Partners can deepen customer relationships by becoming a provider of Private Cloud, Management, and Virtualization Deployment Planning Services. Through this competency, partners receive access to internal-use software licenses, technical and presales support, IT professional training, incentives, and access to the Microsoft Partner server and cloud site with exclusive content and resources to help them win new deals and deliver projects successfully.

“This Microsoft gold Datacenter competency showcases our expertise in and commitment to today’s technology market and demonstrates our deep knowledge of hybrid solutions that bridge on-premises and the Microsoft Azure cloud,” said Mike Reinhart, Quisitive CEO. “We plan to accelerate our customers’ datacenter transformation by serving as technology advisors for their business demands.”

The Gold Datacenter competency adds to Quisitive’s extensive competency portfolio, which includes Gold achievements in Application Development, Cloud Platform, and Cloud Productivity, and a Silver achievement in Collaboration and Content. By adding this strategic capability to its portfolio, Quisitive has demonstrated the highest, most consistent capability within private and hybrid cloud management and virtualization deployment. Customers that select Quisitive as their solutions provider can have confidence they are partnering with an expert in hybrid architecture, cloud migration, and management.

The Microsoft Partner Network helps partners strengthen their capabilities to showcase leadership in the marketplace on the latest technology, to better serve customers and to easily connect with one of the most active, diverse networks in the world.

About Quisitive:
To learn more about Quisitive, visit www.Quisitive.com

For additional information
Tami Anders
VP Marketing
[email protected]

TORONTO, Jan. 23, 2019 /CNW/ – Mike Reinhart, CEO & Director, Quisitive Technology Solutions Inc. (QUIS), joined Tim Babcock, Managing Director, Capital Formation, TSX Venture Exchange, to open the market. Quisitive is a Microsoft solutions provider that helps customers navigate the cloud and emerging technologies such as blockchain, artificial intelligence (AI), machine learning, and the Internet of Things (IoT) through customized solutions and first-party cloud-based products. Quisitive serves clients globally with offices in Dallas, Texas; Denver, Colorado; and Toronto, Ontario. Quisitive Technology Solutions Inc. commenced trading on TSX Venture Exchange on August 13, 2018.

Source: TMX Group Limited

Why Automate Your Infrastructure?

Ten years ago, working in a data center, if you had told me that I would be using Visual Studio on my machine daily, or would even need to open Visual Studio (or VS Code) at all, I would have thought otherwise. Businesses want to reduce the massive costs of operations and support while still continuously evolving.

Defining and deploying your environments in code has been the answer to that business need on many levels, which means the traditional infrastructure engineer's skill set has changed.

The days of infrastructure teams clicking buttons and checking and un-checking boxes according to the engineering team's "Server Build Checklist" are becoming a thing of the past. Enterprises want fast, repeatable, controlled deployments of infrastructure across multiple environments, and automation has become the mandate as operational costs soar. Azure and Amazon both have fantastic ways to orchestrate your infrastructure as code: deploying your DEV, QA, Pre-Production, and Production subscriptions or resource groups at the touch of a button, in multiple regions, all customized end to end.

Catapult's consultants provide this guidance to our customers, advising not only on what to deploy but on how to deploy it in an automated fashion. This allows you to take that code, version it, and redeploy it to any environment you see fit. Providing this automation guidance is a staple of working with the cloud, and organizations benefit enormously when their environments are all defined in code that they can deploy incrementally or in full via their tool-sets. Providing not just documentation and diagrams of cloud architecture but the actual code to recreate it is invaluable. Having a defined reference architecture in code gives you a gold standard for your environments, and redeploying becomes easier than triaging and troubleshooting. That means less downtime, fewer unknowns, and no more of the "Oh, that was built by Bob; he's the only one who knows how to configure that server."

When working with our customers, I always establish a delivery model that includes automated deployments of the resources, which creates a framework for your entire environment. I also suggest that infrastructure as code should, in general, follow the same source-control guidelines that your developer product teams follow. By utilizing this technique, you can deploy infrastructure in a uniform and controlled manner, from your Dev/Test environments all the way to Production. This creates a deployment model that allows your QA teams to truly test production in their closed QA environments, creates a predictable, uniform architecture, and allows operations teams to quickly redeploy entire environments on the fly without having to follow pages of notes. Versioned, committed infrastructure as code, checked into source control and adopted by infrastructure teams, allows the reference architecture for your environments to be mapped in templates.

When it comes to Azure, knowing how and what to deploy in Azure Resource Manager Templates has revolutionized how technical architects and engineers perform their day-to-day work. In a post by one of Catapult's Azure solutions architects here, Mick covered an introduction to Azure Resource Manager Templates. I can't overstate how valuable a tool ARM Templates can be as you move or create workloads in Azure. That blog post was a fantastic introduction to a feature set that allows you to deploy resources with the touch of a button, from a single resource to entire subscriptions of chained templates, including their configuration and scale, all in a light-touch operation.

I encourage you to read that blog post first.


In the following series of posts, let's talk about how to leverage these templates to create customized multi-tier applications in Infrastructure as a Service and Platform as a Service. I will also include a section on defining your environments in templates to accompany your documentation; having a table of contents for your infrastructure, expressed in templates, is a hugely valuable way to collaborate across teams.

So, how can we manage all of this in Azure?

In the first part of this series focused on Azure Resource Manager (ARM) Templates, let’s discuss extensions.


Part 1 – Extensions

After reading Mick's blog post, let's talk about automating even further. To start, let's highlight some steps that you may have to complete each time you deploy a new VM. There are always additional tasks: joining the machine to the domain, configuring monitoring, meeting infosec requirements to encrypt your managed disks, or meeting business requirements to perform backups.

All these types of customization can be accomplished with Extensions.

One of the most important concepts in Azure templates is that they follow a defined order of operations. At the most basic level, a resource should have a defined "dependsOn" property stating that the referenced resource must exist before this one is deployed; in short, you are listing its dependencies. A resource can depend on one resource or several, and you can also make deployment conditional via the resource's condition property (for example, deploy only if the resource group's location is East US; more on this in future posts). This ordering of what to deploy, and when, is the most important part of automated Azure deployments, even for something as simple as a standalone virtual machine. https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-define-dependencies
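As a minimal, self-contained sketch of this ordering (the resource names here are illustrative), a network interface that must wait for its virtual network:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Network/virtualNetworks",
            "apiVersion": "2018-08-01",
            "name": "demo-vnet",
            "location": "[resourceGroup().location]",
            "properties": {
                "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
                "subnets": [ { "name": "default", "properties": { "addressPrefix": "10.0.0.0/24" } } ]
            }
        },
        {
            "type": "Microsoft.Network/networkInterfaces",
            "apiVersion": "2018-08-01",
            "name": "demo-nic",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Network/virtualNetworks', 'demo-vnet')]" // the VNet must exist before the NIC deploys
            ],
            "properties": {
                "ipConfigurations": [
                    {
                        "name": "ipconfig1",
                        "properties": {
                            "privateIPAllocationMethod": "Dynamic",
                            "subnet": { "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'demo-vnet', 'default')]" }
                        }
                    }
                ]
            }
        }
    ]
}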

Extensions, if you have ever installed them manually on an Azure virtual machine, take this a little deeper. Installing an extension manually on a deployed virtual machine is as easy as navigating to the Azure portal, selecting your virtual machine, and choosing Extensions; you get a list of the various extensions you can install. You'll see third-party solutions like Chef or Datadog, and you can configure backups, run scripts, add anti-malware, and more. This allows full customization of a machine.

Why use extensions? Sure, you can grab the executable or MSI you want, drop it on the machine, and install it, and we all know the benefits of automating deployments. But another important function of extensions is that they can be auto-upgraded by Azure: with the autoUpgradeMinorVersion flag set, Microsoft will update minor versions of the extension for you as they are released.

You can always view the installed extensions on any Azure VM by looking at the Automation Script section of your resource group:

[Screenshot: the Automation Script view of a resource group with extensions, which lets us view the code as represented in the Azure REST API]

We can deploy extensions with the creation of the VM resource or incrementally later.

Let's look at using extensions to customize our deployments after the resource is created. At the end of the deployment the machines are ready for their intended purpose, rather than requiring a day or more of configuration changes. I have selected a few of the more commonly used extensions to demonstrate how we can accomplish this, using working snippets of code.


One of the most common extensions I'm asked for guidance on is the automated domain join of a virtual machine during a resource group deployment. Suffice it to say, if you are deploying your machine to a virtual network that can communicate with your domain controllers or Azure AD Domain Services, you can automate the process of joining the domain, moving the computer object to the correct OU, rebooting, and applying the correct group policy. This is a valuable addition to your templates if you are deploying many machines that need to be domain-joined. Let's look at the makeup of this code within a template:

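Below is a minimal sketch of that resource as it typically appears in a template, following the common JsonADDomainExtension pattern; the parameter and variable names are illustrative:

{
    "apiVersion": "2015-06-15",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/joindomain')]",
    "location": "[parameters('location')]",
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "JsonADDomainExtension",
        "typeHandlerVersion": "1.3",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "Name": "[parameters('domainToJoin')]",
            "OUPath": "[parameters('ouPath')]",
            "User": "[parameters('domainJoinUser')]",
            "Restart": "true",
            "Options": "3"
        },
        "protectedSettings": {
            "Password": "[parameters('domainJoinPassword')]"
        }
    },
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
    ]
}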

This is the domain join extension. As you can see on the bottom line, it depends on a VM; the dependsOn clause is paramount, as the machine needs to exist prior to the deployment. The name of the extension matters as well, because it is in fact a resource in Azure: it concatenates (concat) the virtual machine's name with '/joindomain', which instructs the extension to bind to that VM.

Under the Settings, you can see the makeup of the configuration:

"settings": {
    "Name": "<the fully qualified domain name>",
    "OUPath": "<the distinguished name of the organizational unit in Active Directory>",
    "User": "<the user principal name of a user with permission to join computers to your domain>", // there is also a NETBIOS\username method, but I have found UPN to work well
    "Restart": "true", // automated restart after a successful join
    "Options": "3" // bit flags used when performing the AD join, e.g. NETSETUP_JOIN_DOMAIN (0x00000001) + NETSETUP_ACCT_CREATE (0x00000002) = 3
},
"protectedSettings": {
    "Password": "<the protected password for the 'User' listed above>"
}

As you may notice in the snippet above, we have stored these values in parameters, which reference a Key Vault's secrets for added security. I will cover this in a later post.

During deployment of your virtual machine(s) you will notice that the extension deploys as an extra step.

[Screenshot: the extension running as an extra step during deployment]

If it fails, you can check the status code that the failed deployment outputs in the screen above; remote into the machine you were deploying extensions to and inspect the logs (Windows: C:\WindowsAzure, Linux: /var/log/azure/), where you can see the steps taken and any errors; or fire up your trusty Az/AzureRM PowerShell and run Get-AzureRmVMExtension to query its status. This is true of all the extensions I'll list.


Another time saver is to automate deploying your VM diagnostic settings for monitoring.

This extension configures all the various settings found under Diagnostic Settings in the portal. Especially if you use another log-ingestion tool with specific requirements, you'll want to incorporate this into your templates to make sure every VM deployment is properly monitored; logging can be configured for external and internal events and counters. I have minimized much of this extension, as it contains definitions for all the performance counters. Some important bits:

Let's break this extension's code down. You'll find that the same "flow" is followed by all the extensions, so let's use this as our example:

"resources": [
    {
        "name": "Microsoft.Insights.VMDiagnosticsSettings", // as with the other examples below, this is how the extension is categorized by the REST API
        "type": "extensions", // all resources need a type defined
        "location": "[parameters('location')]", // must match the VM's location
        "apiVersion": "2015-06-15", // as Mick Monk wrote briefly, each resource has a particular API version available; there are several ways to find the newest
        "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
        ], // here's the most important field: list the resources that NEED to be deployed prior to installing the extension
        "tags": {
            "displayName": "AzureDiagnostics" // cosmetic, so you can keep your JSON code labelled
        },
        "properties": { // the properties section is the who/what/where of the extension
            "publisher": "Microsoft.Azure.Diagnostics",
            "type": "IaaSDiagnostics",
            "typeHandlerVersion": "1.5",
            "autoUpgradeMinorVersion": true, // one of the more valuable features of extensions: set to true, it lets Azure update minor versions as necessary without manual intervention
            "settings": {
                "xmlCfg": "[base64(concat(variables('wadcfgxstart'), variables('wadmetricsresourceid'), variables('vmName'), variables('wadcfgxend')))]", // a concatenated resource ID for the configuration; view the support article for how this needs to be formatted
                "storageAccount": "[parameters('existingdiagnosticsStorageAccountName')]" // the storage account used to store your diagnostic logs
            },
            "protectedSettings": {
                "storageAccountName": "[parameters('existingdiagnosticsStorageAccountName')]",
                "storageAccountKey": "[listkeys(variables('accountid'), '2015-06-15').key1]", // obtains the storage key using the RBAC permissions of the deploying credential: if you have access to this storage account, it will fetch key1
                "storageAccountEndPoint": "https://core.windows.net" // blob, file, and table storage endpoint
            }
        }
    }
]

Under the settings section you can see we have defined WAD logging, infrastructure logs, performance counters, and Windows event logs. This is a good way to create a policy for all your machines, especially if you use Azure Log Analytics to ingest these logs.

Also, at the bottom of this section, you can see where we have defined our storage account where these logs are dropped. For more info: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/extensions-diagnostics



The Log Analytics (formerly OMS) agent extension is pretty self-explanatory: it checks the machine into Log Analytics for your monitoring. It is also needed for Azure Update Management and Azure Change Tracking/Inventory. You just need to add your workspace ID and workspace key, as in the sketch below.
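A minimal sketch of that extension resource; the workspace ID and key are assumed to come from template parameters:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/MicrosoftMonitoringAgent')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.EnterpriseCloud.Monitoring",
        "type": "MicrosoftMonitoringAgent",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "workspaceId": "[parameters('workspaceId')]"
        },
        "protectedSettings": {
            "workspaceKey": "[parameters('workspaceKey')]" // kept in protectedSettings so it is not exposed in deployment history
        }
    }
}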

Full documentation: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/oms-windows


This is one I use for just about every sort of customization: the Custom Script Extension. Imagine the options when you can launch a script post-deployment: you can perform just about any action under the sun, in chained format. Either using the magic of PowerShell or just by adding multiple script extensions, you can completely configure your machines in an automated fashion. I've used this to configure entire clusters, run T-SQL operations, install software, configure Remote Desktop Services, and deploy new AD forests; the possibilities are vast. The example I included is what I normally use, where most of the functions/jobs are embedded in the PowerShell script. There is another option within the template, where the ARM template supplies many of the PowerShell variables and parameters. I find this most useful, as you can store the script in blob storage, give it an access key (via Key Vault), and it will run. This extension could be an entire blog post on its own, as could the next item. For Windows, here are the various options when automating with this extension: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
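A sketch of the blob-storage variant I mention above; the script URI and file name are illustrative parameters, with the command kept in protectedSettings so it is not exposed in deployment logs:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/CustomScript')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.9",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [ "[parameters('scriptUri')]" ] // e.g. a blob URL secured with a SAS token stored in Key Vault
        },
        "protectedSettings": {
            "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', parameters('scriptFileName'))]"
        }
    }
}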



In my experience, I have used the PowerShell DSC (Desired State Configuration) extension primarily for server functions and roles. I have built many templates where DSC has come in quite handy: domain controllers, web servers, and RDS are just some examples where it was used to configure not just the role but all the settings in between. Recently, I've used the DSC extension to deploy a new AD forest, import its GPOs, create and modify the DNS server in an IaaS environment, modify VNet properties, and install SQL Server and configure it in a cluster. Looking at the example below, the format is very similar to the Custom Script Extension, except it uses DSC. For a full rundown of the possibilities with DSC, see this post.
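A sketch of the DSC extension resource; the configuration archive, script, and function names are illustrative:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/Microsoft.Powershell.DSC')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.76",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "configuration": {
                "url": "[parameters('dscArchiveUrl')]", // a .zip containing the DSC configuration
                "script": "DeployForest.ps1",           // illustrative script inside the archive
                "function": "DeployForest"              // illustrative configuration function to run
            }
        },
        "protectedSettings": {
            "configurationArguments": {
                "AdminCreds": {
                    "UserName": "[parameters('adminUsername')]",
                    "Password": "[parameters('adminPassword')]"
                }
            }
        }
    }
}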



With the Azure Disk Encryption extension, you can enforce disk encryption on your managed-disk VMs during deployment. This extension utilizes an Azure Active Directory application to perform the operation and stores the key in Key Vault. There is a full example of how to create everything end to end for these operations here.
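A sketch of the AAD-application-based encryption extension; the client ID, secret, and Key Vault URL are illustrative parameters, ideally themselves resolved from Key Vault:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/AzureDiskEncryption')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.Azure.Security",
        "type": "AzureDiskEncryption",
        "typeHandlerVersion": "1.1",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "AADClientID": "[parameters('aadClientId')]", // the Azure AD application that performs the operation
            "KeyVaultURL": "[parameters('keyVaultUrl')]", // where the encryption key is stored
            "VolumeType": "All",
            "EncryptionOperation": "EnableEncryption"
        },
        "protectedSettings": {
            "AADClientSecret": "[parameters('aadClientSecret')]"
        }
    }
}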


The Team Services agent extension will register the machine in an Azure DevOps account and project as part of a Deployment Group, using a Personal Access Token. As of this writing, the extension still uses the old name, Visual Studio Team Services (VSTS); I'm assuming there will be a replacement API soon. Click here for more info.
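A sketch of that registration; the organization, project, and deployment group values are illustrative parameters, and the PAT goes in protectedSettings:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/TeamServicesAgent')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.VisualStudio.Services",
        "type": "TeamServicesAgent",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "VSTSAccountName": "[parameters('vstsAccountName')]", // the Azure DevOps organization name
            "TeamProject": "[parameters('teamProject')]",
            "DeploymentGroup": "[parameters('deploymentGroup')]"
        },
        "protectedSettings": {
            "PATToken": "[parameters('patToken')]"
        }
    }
}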


As with the Azure DevOps agent, as of this writing this anti-malware API is in preview. Like the others, it is self-explanatory: it adds anti-malware to the machine. I have advised customers to make this a staple of their VM deployments; this way you ensure that you are running protected machines. More on this extension and OS version support, here.
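A sketch with real-time protection and a weekly quick scan; the schedule values are illustrative:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/IaaSAntimalware')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.Azure.Security",
        "type": "IaaSAntimalware",
        "typeHandlerVersion": "1.3",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "AntimalwareEnabled": true,
            "RealtimeProtectionEnabled": "true",
            "ScheduledScanSettings": {
                "isEnabled": "true",
                "day": "7",    // day of the week to scan
                "time": "120", // minutes after midnight
                "scanType": "Quick"
            },
            "Exclusions": {
                "Extensions": "",
                "Paths": "",
                "Processes": ""
            }
        }
    }
}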



This one assumes you have a Recovery Services vault configured; it binds your backup policy to your VM.
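One common way to express this in a template is as a Recovery Services protected item rather than a VM extension; a sketch, with the vault and policy names as illustrative parameters:

{
    "type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems",
    "apiVersion": "2016-06-01",
    "name": "[concat(parameters('vaultName'), '/Azure/iaasvmcontainer;iaasvmcontainerv2;', resourceGroup().name, ';', variables('vmName'), '/vm;iaasvmcontainerv2;', resourceGroup().name, ';', variables('vmName'))]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "protectedItemType": "Microsoft.Compute/virtualMachines",
        "policyId": "[resourceId('Microsoft.RecoveryServices/vaults/backupPolicies', parameters('vaultName'), parameters('policyName'))]", // the backup policy defined in your vault
        "sourceResourceId": "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    }
}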


The SQL IaaS Extension is meant for VMs running one of the SQL Server images obtained from the Azure Marketplace. This extension allows you to configure patching, backup, and Key Vault integration on more modern versions of SQL Server.
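A sketch of the agent with automated patching enabled; the maintenance window values are illustrative:

{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmName'), '/SqlIaasExtension')]",
    "apiVersion": "2017-03-30",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Microsoft.SqlServer.Management",
        "type": "SqlIaaSAgent",
        "typeHandlerVersion": "2.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "AutoPatchingSettings": {
                "Enable": true,
                "DayOfWeek": "Sunday",
                "MaintenanceWindowStartingHour": "2",
                "MaintenanceWindowDuration": "60" // minutes
            }
        }
    }
}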


Quick Closing

There are other extensions that you can view here: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/features-windows

I hope these examples help you understand some of the customizations you can make to your infrastructure deployments. In the next blog post I'll cover customizing your PaaS deployments; Part 3 will cover Key Vault and its place in ARM templates, and Part 4 will cover conditional, nested, and linked deployments.

I've been working on the development of what (for me at least) is a pretty complicated Microsoft Flow. This Flow provides "intelligent" notification on whether to open or close the windows at my house. I'm doing this as a practical example to better learn Microsoft Flow, to increase energy efficiency, and to get some fresh air through the house. One major challenge I ran into when developing this Flow was effectively testing all the conditions that factor into the open-or-close decision, since those decisions are based on weather conditions that change on an hourly basis. This blog post goes through a couple of approaches I recommend when developing complicated Flows.

If you are interested in previous blog posts related to the technology I'm using to decide whether to open or close the windows, check out these blog posts:

The first step I took in debugging this Flow was to add an email notification at the end of every path the Flow could go down, including the conditions where it made sense to open or close the windows and those where it did not. This means that whichever path the Flow takes, an email is sent whenever the Flow runs. Below is a sample step which logs every variable I'm using in this Flow so that I can manually verify that the Flow performed as I expected.

[Screenshot: the "Debug Email 4" action, logging each variable used in the Flow]

This was the example for "Debug Email 4", meaning it was the fourth path in the Flow which resulted in a decision not to open or close the windows. The same step was performed for each of the other paths with the same content (so it's just a cut and paste with a slightly different name, "Debug Email 1" vs. "Debug Email 4", for example). This approach works really well when developing a Flow, but it does generate a lot of email if you are running the Flow regularly.

As a result of the challenge above (lots of email), I added a debug flag to the Flow by initializing a Boolean variable called "Debug" which can have a value of "true" or "false".

[Screenshot: the "Initialize variable" action creating the Boolean "Debug" flag]

Once this value has been initialized, we can use it to determine whether to send the debug emails.

[Screenshot: the condition that checks the "Debug" flag before sending a debug email]
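Under the hood a Flow is a workflow definition much like a Logic App, so conceptually the flag and the gate look roughly like the JSON below; the action names and the Compose stand-in for the email step are illustrative:

"Initialize_Debug": {
    "type": "InitializeVariable",
    "inputs": {
        "variables": [
            { "name": "Debug", "type": "boolean", "value": true }
        ]
    },
    "runAfter": {}
},
"Condition_Debug_Email_4": {
    "type": "If",
    "expression": "@equals(variables('Debug'), true)",
    "actions": {
        "Compose_Debug_Summary": {
            "type": "Compose",
            "inputs": "@concat('Debug 4, Debug flag: ', string(variables('Debug')))", // stands in for the Send an email action
            "runAfter": {}
        }
    },
    "runAfter": {
        "Initialize_Debug": [ "Succeeded" ]
    }
}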

When I need to make changes to the Flow, I change the initialization step to set Debug to "true", and after saving that change the debug emails flow again.

Summary: if you are developing complicated Flows, I highly recommend using a Boolean Debug flag combined with debug emails on the various Flow paths, as this blog post shows.

The Microsoft Cloud Solution Provider Program allows Quisitive to provide direct billing, sell combined offers and services, and directly provision, manage, and support Microsoft cloud offerings.

Toronto, ON / TheNewswire / January 7, 2019 – Quisitive Technology Solutions Inc. (Quisitive or the “Company“) (TSXV: QUIS), a premier Microsoft solutions provider that helps customers navigate the ever-changing climate that their business relies upon, today announced it is now selling through the Microsoft Cloud Solution Provider (CSP) Program as a Direct Provider. This licensing model enables Quisitive to be the center of the customer relationship by providing direct billing, selling combined offers and services, and directly provisioning, managing, and supporting Microsoft Office 365 and Microsoft Azure.

“The success of an enterprise hinges on its ability to stay relevant, and we believe cloud technology makes relevancy possible. Selling through the Microsoft Cloud Solution Provider program allows us to deepen the value we are bringing to our customers, ensuring that the solutions we develop, manage and support propel their business,” said Mike Reinhart, CEO at Quisitive.

The CSP program now allows Quisitive to support the complete customer lifecycle, simplifying the equation for its customers and allowing them to benefit from a single point of contact for Microsoft cloud services. With Quisitive’s deep expertise in the appropriate application of Microsoft cloud technology, the ability to sell through the CSP program ensures that its customers have a skilled technology partner by their side that understands their business and advises them on the best options from implementation to support. The winners, ultimately, are Quisitive customers, says Mike Reinhart. “It allows clients to easily source all their IT services from one provider, regardless of their size or where they are in their lifecycle.”

“The Cloud Solution Provider program puts our partners at the center of the customer relationship,” said Gavriella Schuster, corporate vice president, Commercial Partner Channels and Programs at Microsoft. “Partners who sell through CSP have demonstrated dedication to helping our mutual customers drive their digital transformation.”

The Cloud Solution Provider Program ensures that Quisitive’s clients can take full advantage of Microsoft cloud technology and gain expert assistance to evaluate, deploy, and directly support their solutions. This program also provides the platform for Quisitive to realize recurring revenues from Microsoft cloud licensing in addition to managed services and SaaS subscription solutions.

About Quisitive:
To learn more about Quisitive, visit www.Quisitive.com

For additional information
Tami Anders
Chief of Staff
[email protected]