Cinemark Partners with Quisitive as a Key 2019 IT Services Provider

Toronto, ON / TheNewswire / March 28, 2019 Quisitive Technology Solutions Inc. (“Quisitive” or the “Company”) (TSXV: QUIS), a premier Microsoft solutions provider, today announced that Cinemark (NYSE: CNK) has chosen it as a 2019 strategic IT services provider. Under the terms of the agreement, Quisitive will serve as the multi-national motion picture theatre company’s IT services provider for Microsoft-centric initiatives, including application development, data and analytics, Azure cloud, Microsoft Office 365 and mobile applications.

“We’re pleased to formalize our relationship with Quisitive,” said Doug Fay, CTO, Cinemark. “Over the years, they have proved invaluable in helping us fulfill our vision to deliver an excellent guest experience and build brand loyalty. They are a partner in every sense of the word. They take the time to understand our business, processes and goals and always provide expert guidance and use their technological expertise to create digital experiences that keep our customers engaged and coming back.”

Quisitive's seven-year relationship with Cinemark led to this key 2019 agreement. During that long-term engagement, Quisitive has leveraged its deep technical knowledge of the Microsoft platform to support Cinemark's mission to improve guest experience, satisfaction and service by modernizing the brand's mobile app and web experience.

“Our strategic relationship with Cinemark is a true example of the depth of partnership Quisitive forms with customers,” said Mike Reinhart, CEO, Quisitive. “We work together with Doug and his team to understand their business priorities and challenges and we apply our expertise in the Microsoft technology stack and emerging workloads to quickly scale his teams and put optimal solutions in place.”

Quisitive supports an array of Cinemark's IT needs, from helping Cinemark understand how the cloud can enable its business to ensuring the theatre company's digital guest experience continues to evolve with customer preferences.

About Quisitive:
To learn more about Quisitive, visit www.Quisitive.com

About Cinemark Holdings, Inc.:

Cinemark is a leading domestic and international motion picture exhibitor, operating 546 theatres with 6,048 screens in 41 U.S. states, Brazil, Argentina and 13 other Latin American countries as of December 31, 2018. For more information, go to investors.cinemark.com.

Contacts:

Quisitive

Tami Anders, VP Marketing
[email protected]  

Cinemark Theatres

Lanay Fournier-Stokes, 972.665.1680

[email protected]

Recently we had a requirement to provide more than basic CPU threshold queries for Log Analytics.

We have been watching the upcoming dynamic threshold functionality to see if this will cover what we need. However, this appears to only be available for systems running in Azure.

For our on-prem systems, we have developed the following queries to provide an alert when any server is over or under a specific threshold for a specific percentage of the data points over a specific timeframe.

This blog post shows sample memory and CPU threshold queries for virtual machines; however, the queries can be used for any performance counter in Log Analytics.

Monitoring Processor Health

If we want to look at the CPU usage for a system, we can use a query like this one, which shows how a specific system's % CPU looks over the last hour for each instance of the counter on that system (0, 1, 2, 3, _Total):

let AssessTime = 1h;
Perf
| where CounterName == "% Processor Time" and TimeGenerated > ago(AssessTime) and Computer contains "XYZ"

If we render this data as a Stacked Column chart by InstanceName, we see the following results:
[Chart: % Processor Time by InstanceName, stacked column]
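
A similar stacked-column view can also be produced directly from the query rather than through the chart picker in the portal. The sketch below assumes 5-minute bins and averages the counter per instance; the bin size is an arbitrary choice:

let AssessTime = 1h;
Perf
| where CounterName == "% Processor Time" and TimeGenerated > ago(AssessTime) and Computer contains "XYZ"
| summarize AvgCPU = avg(CounterValue) by bin(TimeGenerated, 5m), InstanceName
| render columnchart with (kind=stacked)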

Below is the query for the % Processor Time counter. This query looks at the "% Processor Time" counter (under the "Processor" or "System" object) and finds which computers had a value of more than 90% for more than 70% of the data points over the last 30 minutes.

An example of the Next Generation CPU query

let AssessTime = 30m;
let CounterThreshold = 90;
let CounterThresholdPct = 70;
// Alert when a computer's % Processor Time exceeded CounterThreshold for more than CounterThresholdPct percent of the samples within AssessTime
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time" and TimeGenerated > ago(AssessTime)
| summarize MaxCPU = max(CounterValue), CpuOverLimit = countif(CounterValue > CounterThreshold), PerfInstanceCount = count(), PctOver = round(100.0 * countif(CounterValue > CounterThreshold) / count()) by Computer
| where PctOver > CounterThresholdPct

This CPU query can be adapted to whatever configuration you are looking for by adjusting the three variables at the top: AssessTime (how far back to evaluate), CounterThreshold (the counter value that must be exceeded), and CounterThresholdPct (the percentage of data points that must exceed the threshold).

A sample result set is shown below (with CounterThreshold and CounterThresholdPct updated so there is sample data):

[Table: sample CPU threshold query results]

This query approach only alerts when a counter is above a threshold for a percentage of the data points over a specified timeframe. This should result in a much more targeted alert, i.e. it answers the question: when is my CPU really a bottleneck?

Monitoring Memory Health

If we want to look at the memory usage for a system, we can use a query like this one, which shows how a specific system's available memory looks over the last hour. We can see that the available memory is consistently less than the threshold of 700 MB.

let AssessTime = 1h;
Perf
| where CounterName == "Available MBytes" and TimeGenerated > ago(AssessTime) and Computer contains "XYZ"

[Chart: Available MBytes over the last hour]

Below is a variation of the query above, rewritten for the available memory counter. This query looks at the "Available MBytes" counter and finds which computers had a value of less than 700 MB for more than 90% of the data points over the last hour.

Next Generation memory query

let AssessTime = 60m;
let CounterThreshold = 700;
let CounterThresholdPct = 90;
// Alert when a computer's Available MBytes stayed below CounterThreshold for more than CounterThresholdPct percent of the samples within AssessTime
Perf
| where CounterName == "Available MBytes" and TimeGenerated > ago(AssessTime)
| summarize MinMemory = min(CounterValue), MemoryUnderLimit = countif(CounterValue < CounterThreshold), PerfInstanceCount = count(), PctUnder = round(100.0 * countif(CounterValue < CounterThreshold) / count()) by Computer
| where PctUnder > CounterThresholdPct

A sample result set is shown below:

[Table: sample memory query results]

This query approach only alerts when a counter is below a threshold for a percentage of the data points over a specified timeframe. This should result in a much more targeted alert, i.e. it answers the question: when is my memory really a bottleneck?

Monitoring Disk Space

The query below shows a similar type of query focused on free disk space.

let AssessTime = 1d;
let CounterThreshold = 5;
let CounterThresholdPct = 70;
// Alert when a logical disk's % Free Space stayed below CounterThreshold for more than CounterThresholdPct percent of the samples within AssessTime
Perf
| where (ObjectName == "LogicalDisk" and CounterName == "% Free Space" and InstanceName != "_Total") and TimeGenerated > ago(AssessTime)
| summarize DiskFreeUnderLimit = countif(CounterValue < CounterThreshold), PerfInstanceCount = count(), PctUnder = round(100.0 * countif(CounterValue < CounterThreshold) / count()) by Computer, InstanceName
| where PctUnder > CounterThresholdPct

Summary:

The sample queries in this blog post (see the "Next Generation CPU query" and "Next Generation memory query" sections for the queries) should provide extremely actionable alerting for these KPIs for servers. Additionally, these queries can be used for any performance metrics which you gather into Log Analytics!
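
To illustrate reusing the pattern for another metric, here is a sketch adapted to the Memory "% Committed Bytes In Use" counter. It assumes that counter is being collected in your workspace, and the threshold values are only examples:

let AssessTime = 30m;
let CounterThreshold = 80;
let CounterThresholdPct = 70;
// Alert when committed memory usage stayed above CounterThreshold percent for more than CounterThresholdPct percent of the samples within AssessTime
Perf
| where ObjectName == "Memory" and CounterName == "% Committed Bytes In Use" and TimeGenerated > ago(AssessTime)
| summarize MaxCommitted = max(CounterValue), OverLimit = countif(CounterValue > CounterThreshold), PerfInstanceCount = count(), PctOver = round(100.0 * countif(CounterValue > CounterThreshold) / count()) by Computer
| where PctOver > CounterThresholdPct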

P.S. I owe a huge shout-out to Thomas Forbes for his development of the CPU query contained in this blog post. Way to go Thomas!

UPDATE: Updated on 6/1/21 with functional enhancements to the queries that Thomas put together.

Toronto, ON / TheNewswire / March 5, 2019 Quisitive Technology Solutions Inc. (“Quisitive” or the “Company”) (TSXV: QUIS), a premier Microsoft solutions provider, today announced the release of their manufacturing case study. This video case study demonstrates how the Microsoft Azure cloud can bring visibility into the supply chain.

To address the growing needs of manufacturing and the pressures of the supply chain, Quisitive’s manufacturing case study demonstrates how leveraging Microsoft Azure, blockchain, artificial intelligence (AI), computer vision, natural language processing (NLP), and Internet of Things (IoT) can quickly address supply chain needs to improve operations and increase the bottom line.

“We are leveraging our extensive background in cloud technologies and our deep experience in the industry trenches to develop a number of assets from case studies to proof-of-concepts that tackle some of the most challenging industry scenarios facing our customers today,” said Scotty Perkins, Quisitive Senior Vice President of Product Innovation.

In 2018, Quisitive unveiled their blockchain proof-of-concept (POC) for the oil and gas industry to highlight how technology can improve producer and midstream operator transparency, visibility and profitability. The POC demonstrates how storing data in a shared environment not only eliminates redundancies but boosts profits for all parties involved in the pipeline process. The Company’s newest manufacturing case study adds to its growing industry solution portfolio.

Quisitive’s industry-focused case studies, POCs and demonstrations are being developed to show organizations how they can use these new technologies to solve complex industry-specific business challenges. Through its proprietary Commonsense Approach, Quisitive can quickly frame the key scenarios and unique value in which the Microsoft Azure cloud can be leveraged in combination with AI, ML, IoT and blockchain to provide quantifiable business value.

“As we think about Quisitive’s growth vision, verticalization is a key to providing our customers a portfolio of solutions that can quickly bring value,” said Mike Reinhart, Quisitive CEO. “The development of industry-focused solutions is a critical component that demonstrates how we think about these complex problems and the role technology integration plays in solving them.”

About Quisitive:
To learn more about Quisitive, visit www.Quisitive.com

For additional information
Tami Anders
VP Marketing
[email protected]

In the first part of this blog series, we introduced our solution for ping monitoring within Azure Monitor via Log Analytics and Azure Automation. In this blog post, we will showcase how we can query the data in Log Analytics, generate alerts in Azure Monitor for ping failures, and visualize this information.

Once ping failures have occurred with the solution shown in the first blog post, we can see the results with the following query:

PingMonitor_CL | project Name_s, TimeGenerated, LastHeartbeat_s

[Screenshot: PingMonitor_CL query results]

The query below returns only the specific systems that have not responded to the ping test, along with when those systems last responded successfully.

PingMonitor_CL
| project Name_s, LastHeartbeat_s
| distinct Name_s, LastHeartbeat_s
| project Name = Name_s, LastHeartbeat = LastHeartbeat_s

If we look at the above results, we see that while the watcher is running every minute, the Log Analytics entries are written every 15 minutes (which makes sense, as that's how we configured the ping monitor by setting SuppressMinutes to 15).

[Screenshot: query results showing entries written every 15 minutes]
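
If you want to confirm that interval from the data itself rather than from the chart, a sketch like the following computes the minutes between consecutive entries for a single device (the device name here is just the example value used in part one of this series):

PingMonitor_CL
| where TimeGenerated > ago(4h) and Name_s == "TestServer"
| sort by TimeGenerated asc
| extend MinutesSincePrevious = datetime_diff('minute', TimeGenerated, prev(TimeGenerated))
| project TimeGenerated, Name_s, LastHeartbeat_s, MinutesSincePrevious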

Generating alerts for ping failures:

Now that we have our underlying queries, it's simple to generate alerts for these conditions in Azure Monitor. We add a new alert rule that queries Log Analytics and fires when the number of results is greater than 0, evaluated over a 15-minute period with a 15-minute frequency.

[Screenshot: creating the alert rule in Azure Monitor]
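
The query behind the alert rule can be as simple as the sketch below; the 15-minute window is handled by the alert's period setting, but an explicit time filter is included here for clarity:

PingMonitor_CL
| where TimeGenerated > ago(15m)
| project Name = Name_s, LastHeartbeat = LastHeartbeat_s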

The full configuration for the alert is below.

[Screenshot: full alert rule configuration]

And here’s what we receive when a system is not responding to ping requests.

[Screenshot: alert notification for a system not responding to ping]

Azure Cost breakdown:

Once we get data into Log Analytics, cost is driven by the amount of data sent to Log Analytics and by the number and configuration of alert rules evaluating that data for ping failures.
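
If you want to verify how much billable volume the solution is actually generating, a query along these lines against the Usage table can help (Quantity is reported in MB):

Usage
| where TimeGenerated > ago(30d) and DataType == "PingMonitor_CL"
| summarize IngestedMB = sum(Quantity) by bin(TimeGenerated, 1d)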

Summary: Once we get the data into Log Analytics, it’s very straightforward to generate queries, alerts and dashboards based on this data. The overall cost of this solution should be generally insignificant on a monthly basis while still providing critical information for when systems are down.

Using Azure Monitor to track the availability of systems works extremely well for most configurations, but what about situations where you can't install a Log Analytics agent on the system (for example, when the OS is not supported, or for a device such as a router)? For these use cases, we have found it useful to provide a ping monitor for these types of systems. This blog post provides details on the solution we have developed, which delivers a ping monitor for an extremely low monthly cost.

What’s required:

The architecture that we are using for this solution runs in Azure Automation using a watcher task, and it consists of three runbooks: PingMonitor-Watcher, PingMonitor-Updater, and PingMonitor-Action.

This solution also requires one or more Hybrid Runbook workers where the watcher and action scripts will execute.

Installing the solution:

Pre-requisites: This solution assumes that you already have an Azure Automation account, a Log Analytics workspace, and at least one Hybrid Runbook Worker.

Adding the runbooks:

Once we have our Azure Automation environment, we can easily create the three required runbooks as the PowerShell runbook type with the names defined above (PingMonitor-Watcher, PingMonitor-Updater, PingMonitor-Action). The scripts are available for download here. Once they have been added, you can save and publish them. After they are created, they should look like the screenshot below:

[Screenshot: the three PingMonitor runbooks in Azure Automation]

Defining variables:

Create the following three variables with their appropriate content: PingMonitorDevices (the list of devices to ping, populated by the PingMonitor-Updater runbook), PingMonitorWorkspaceId (the Log Analytics workspace ID), and PingMonitorWorkspaceKey (the Log Analytics workspace key).

Once created these should look like the following:
[Screenshot: the three PingMonitor variables]

Populating the PingMonitorDevices variable:

The PingMonitor-Updater runbook populates the information required to ping the various systems. We run this runbook and provide the devices to ping, along with any settings we want to change from the defaults (such as SuppressMinutes).

In my example, I used a single value of “TestServer” for the device and took the defaults for the remainder; the script then writes the resulting device list as JSON into this variable.

Scheduling the watcher task:

Now that all of the scripts and variables are in place, we can configure PingMonitor-Watcher to run as a watcher task. This is done under Process Automation / Watcher tasks.

We add a watcher task, which I named “Ping Monitor”, with a frequency of 1 minute, and I pointed it to the PingMonitor-Watcher runbook for the watcher and the PingMonitor-Action runbook for the action.

This creates the task, and the last watcher status is displayed in this same pane.

Additionally, you can dig into the various watcher task runs to see if data is being written such as in the example below:
[Screenshot: watcher task run output]

How does this all come together?

So how does this all work once it's installed? The watcher task checks every minute for a ping failure. If it finds a ping failure, it writes the details to the PingMonitor_CL custom log in the Log Analytics workspace. If none of the pings fail, it does not write to the Log Analytics workspace. Once this information is logged to Log Analytics, we can use Azure Monitor to send an alert whenever a ping failure occurs. Additionally, we can surface this information via dashboards in Azure (both of these topics will be covered in the next blog post).
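
Once data starts landing in the workspace, a quick check such as the following sketch shows which devices have logged ping failures and when they last did so:

PingMonitor_CL
| summarize Failures = count(), LastFailure = max(TimeGenerated) by Name_s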

Solution restrictions:

It is important to note that there is a 30-second maximum for the watcher task to complete. Additionally, as mentioned earlier in this blog post, this solution does NOT write data when systems are successfully contacted via ping. We only write data when errors occur (i.e. systems are offline).

Azure Cost breakdown:

To get the data into Log Analytics, this solution uses both Azure Automation runbooks and watcher tasks. The prices for these are below:

Additional readings:

There are two other blog posts with a similar goal (pinging systems via Azure Monitor/Log Analytics). Their approaches focus on running scheduled tasks or running the tasks via an Azure Hybrid Runbook Worker. Those approaches are good as well and would be the best choice if you need ping statistics (response time, etc.), but if the goal is to keep cost down, the approach in this blog post is significantly less expensive to run on a monthly basis.

Summary: If you are looking for a cost-effective method to get notifications from Azure Monitor when systems or devices are not responding, you will want to try this solution out! In the next part of this blog series, Cameron Fuller will show how we can alert on failed ping responses and how this data can be showcased in an Azure dashboard.