Quisitive Pillars of Digital Transformation
Digital transformation made easy.

In a world where digital transformation is no longer optional, it’s essential for companies to identify and prioritize the changes they must make to optimize value, sustain customer engagement, and enable innovation.

 

Often, companies are aware of the opportunities that digital transformation offers, as well as where change is needed within their organization. But without a skilled partner to help navigate the process of digital transformation, many are left paralyzed trying to decide where to begin and where to invest resources.

 

At Quisitive we believe digital transformation is realized when data-driven insights, technology, and people are connected in agile and innovative ways to drive engagement and lasting change. This belief, and the partnerships we build with clients around it, is the key driver for making digital transformation attainable and delivering increased profitability, greater customer engagement and retention, streamlined operations, and faster speed to market.

There are times as young adults when we may think we have everything figured out and know exactly what we want to do for the rest of our lives. That’s great for those who do, but I am not one of those people. For me it’s about having a base and having the confidence that I started my career exactly where I wanted to. Through that, I believe my true life’s passion will be carved from the work I do day-to-day.

“Don’t do work for recognition, but do work worthy of recognition.” – Jackson Brown, Jr.

Think about all of the times in your life you have been told that obsession is bad, or that obsession is weird. The oh-so-common “why do you keep doing that, focus on things that matter,” or something along those lines. We have all heard it, and we have all brushed it off more than once. The truth is, whatever you are obsessing about in that moment (assuming it is a good thing to obsess over, like a hobby) is acquiring all of your attention and drive. Throughout college, I would get flustered at times with busy work and non-major courses, thinking they had no purpose, and would try my hardest to breeze through them just to get them done. It wasn’t until the end of college that I learned to reshape my logic and leverage it in a positive light. Once I did, I suddenly began to appreciate the types of classes I had loathed in the past. I might even say I began to learn more and perform better. The reason I tell you this is that the lesson I learned through that period has carried over with me into the work world, and perhaps it will be the key to determining my future.

Just like with school, tasks in your workplace may be dragged out and drive you crazy at times. It’s the classic “why will this not work; I know I am doing this right.” Then you think on it a little longer, and more often than not you finally figure out your mistake. Or perhaps you are working through a long, drawn-out spreadsheet and have to make small edits that seem to take days to reach completion. My trick here is to simply apply a mental block. Drown out the voice in your head that is complaining and think of the satisfaction you will feel once you have accomplished your work. As I discussed in my last blog, these are the small victories worth living for, especially in your job. It’s the immediate gratification gained from checking one more thing off the list. However, simply doing the work and completing it is not enough.

In addition to applying this mental block to finish your work in a sane manner, take that mental block, twist it around, and mold it into confidence you can use. This is where I like to say one enters their metaphysical world of possibility. Moreover, this is the point where one should begin to obsess over their work, even the smallest of tasks. It’s just like earlier, when we were talking about being weird: when you were younger and obsessed with something, all you received was criticism. Little did those people know, that weird thing was a passion of yours. Even the smallest of work tasks can be a passion of yours as well. Once you reach this point of obsession, your quality of work will increase, you will retain more of what you are doing, and in the end both your company and you will benefit. And perhaps then you may learn to approach even the most mundane of tasks in a more energetic and creative manner.

Once you reach this point of obsession, it sparks two things: imagination and creativity. Theorists say that it takes a minimum of 10,000 hours to master a skill. In order to actively apply 10,000 hours to anything, you must have some level of obsession for that thing, whatever it may be. By applying this to your work you will begin to notice that you are finding passion in areas that may never have crossed your mind previously. You will also begin to take pride in the work you are tasked to do, leading to more opportunity, which in turn will instill passion. And once you are passionate about your work, success will take care of itself in the long run, whether that success is small or large.

For me, I look back on when I first began to practice this and notice a significant change in my goal setting. Although I have not been in the workplace long, this work ethic and obsession is a key driver behind the way I approach my work. I still don’t know entirely what I want to do someday. I know I want to open a business, but I don’t know what kind. What I do know is that right now I am gaining a high level of experience every day. Learning from those who have done it, and seeing the passion that goes into their work, makes me want to do the same if not better. I know that by asserting myself in my work and trying my hardest to reach this point of obsession, I will improve and reach a point of no return in a positive light. I have been working for four and a half months now and I already know that there is no room for someone to simply show up 9 to 5, punch their time card, and go home unseen or unheard. My advice is to actively search for ways to trigger obsession, and the passion that emerges will shape you as a businessperson and as an individual. Climbing the ladder to success can be daunting, and it is okay to slip from time to time (that is a conversation for another time), but if you don’t keep climbing, and climbing in search of passion, you will never reach your destination.

“If you work hard enough and assert yourself, and use your mind and imagination, you can shape the world to your desires.” – Malcolm Gladwell

Do you want to maximize the benefits you can get from your Log Analytics workspace? Start by controlling the amount of data you upload, limiting it to what you really need.

I recently had a subscription where I needed to rein in data usage to the workspace quickly, but I needed to do so in a way that preserved most of the functionality of Log Analytics in OMS. This post discusses the top 10 approaches to cut back the amount of data uploaded to a Log Analytics workspace while maintaining as much functionality as possible:

  1. Find out what’s using most of your data in your Log Analytics workspace
  2. Determine your hourly data addition rate
  3. Identify computers that are not needed in Log Analytics
  4. Increase the intervals on performance counters
  5. Change security & auditing levels
  6. Remove solutions that you can live without (at least temporarily)
  7. Exclude large numbers of security events
  8. Exclude specific systems from network monitoring
  9. Tweak configurations of solutions in System Center Operations Manager
  10. Tuning takes time

1) Find out what’s using most of your data in your Log Analytics workspace

Log Analytics includes a built-in set of views which show usage for the workspace. These are available from the left side using the three bars icon highlighted below.

To tune usage we care about the first three views: Data volume over time, Data volume by solution, and Data not associated with a computer. The first screenshot shows the workspace prior to tuning, which had two systems providing more than 5 GB of data each and the LogManagement solution generating more than 34 GB of data!

After tuning this workspace, the data levels are looking much more in alignment with the size we had wanted for this particular demo environment.

Using these views we can easily identify the total amount of data, the computers contributing the most data, the solutions contributing the most data, and where data is coming in that is not associated with a computer.
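If you prefer a query over the built-in views, a minimal sketch like the following, assuming the standard Usage table and the upgraded Kusto query language (the legacy search syntax would look different), breaks billable volume down by data type and solution:

    // Billable data volume per data type and solution over the last 7 days
    // (Quantity is reported in MB)
    Usage
    | where TimeGenerated > ago(7d)
    | where IsBillable == true
    | summarize TotalMB = sum(Quantity) by DataType, Solution
    | sort by TotalMB desc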

You can also use Wei’s “Free Tier Data Consumption Tracker” to see how much of your quota is being used and what is using the most data (to tweak his solution for workspaces with more than 500 MB see this blog post).

Before I started tuning, the workspace I was working on was at 110%+ of its quota with heavy focus on Security, LogManagement and WireData.

After tuning this workspace, the % utilization has dropped significantly (less than 50%).

This solution makes it even easier to see what is using the most space by solution, data type and more.

From the dashboards above we can see that the largest amount of data being added in this particular workspace is related to LogManagement (20%) and Security (15.3%). By data type, we have lots of data from the SecurityEvent type (14.7%) and Perf (11.9%). These dashboards show us which areas to focus on to minimize the amount of data being sent.

2) Determine your hourly data addition rate

Next we want to determine what our maximum records per hour should be for the workspace size we are working towards. We can use a “search *” query with the time set to the last hour to determine how many records are being written per hour. Initially we were seeing 72K records written per hour. After initial tuning we had this down to 47K records written per hour, as shown below (using the previous query language).

After more tuning of this environment, we are down to 33K records written per hour.

For the free tier our maximum should be just under 40K records per hour to keep it under the 500 MB per day cap. The math extends forward so that:

Cap / records per hour:

500 MB: 40K
1 GB: 80K
10 GB: 800K

Knowing the number of records per hour you need provides a simple target to aim for and helps you track where you are in your data tuning process.
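If you want a repeatable way to watch this number, a sketch along these lines (again assuming the upgraded Kusto query language) counts the records written per hour over the last day:

    // Records written per hour over the last 24 hours
    search *
    | where TimeGenerated > ago(24h)
    | summarize RecordsPerHour = count() by bin(TimeGenerated, 1h)
    | sort by TimeGenerated desc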

3) Identify computers that are not needed in Log Analytics

The built-in usage views and Wei’s solution provide a quick way to see which computers are sending data into OMS (see the “Find out what’s using most of your data in your Log Analytics workspace” section of this blog post for example graphics). If there are computers which you do not want sending data to Log Analytics, start by removing them from OMS, either by removing the directly attached agents or by removing them from SCOM integration. As an example, if you have workstations reporting to OMS that you don’t want there, remove them first; the same goes for any specific servers you don’t want in OMS.
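To see which computers are the chattiest before deciding what to remove, a quick sketch like this works (records not associated with a computer show up with an empty Computer value):

    // Record count per computer over the last 24 hours
    search *
    | where TimeGenerated > ago(24h)
    | summarize RecordCount = count() by Computer
    | sort by RecordCount desc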

4) Increase the intervals on performance counters

If we use a “search *” query we can see which types are the most common in the workspace. In the query below, “Perf” was by far the most common type.

In most workspaces, performance counters represent a significant number of the records gathered into Log Analytics. To check which performance counters these are, we can use a query like this: search * | where ( Type == "Perf" )

This type of query provides a list of the most common object names and counter names.
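To rank the counters directly rather than eyeballing the raw results, a summarized sketch against the standard Perf table (column names assumed from the usual schema) is more direct:

    // Count of collected samples per performance counter over the last 24 hours
    Perf
    | where TimeGenerated > ago(24h)
    | summarize SampleCount = count() by ObjectName, CounterName
    | sort by SampleCount desc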

When we were initially tuning, the following were the heavy perf counter collections. This led us to remove the “Capacity & Performance” solution but to re-investigate it later.

NOTE: For additional background, we did check to see whether we could change these counters directly in the Log Analytics data settings or through making changes to the rules via System Center Operations Manager. The XML configuration for the rule is shown below. We were not able to change these through an override (only enable or disable). The “Capacity & Performance” solution is currently in preview, so I expect that this will change before the production release.

After removing the “Capacity & Performance” solution we had a different set of counters which were generating the most data. Logical disk represented the most data being collected in Perf.

These performance counters were being collected relatively frequently and could be changed easily in settings / data / Windows Performance Counters. For our environment we increased the sample interval to 1500 (25 minutes) for the disk counters and the rest of the counters were increased to 300 seconds (5 minutes).

5) Change security & auditing levels

A great place to minimize the amount of data which is being gathered into Log Analytics is in the Security and Audit solution under security settings. By default this solution will collect all Windows Security and AppLocker event logs. This can be decreased by changing it to either Common, Minimal or None (not recommended).

When tuning the amount of data, start with Common and then decrease to Minimal only if it’s required.

6) Remove solutions that you can live without (at least temporarily)

If there are solutions in your workspace which you do not need and they are using significant amounts of data, start by removing them. For our tuning we ended up removing the “Capacity & Performance” solution (due to the 60-second performance counters), the “Wire Data” solution, and the “Security & Auditing” solution (temporarily).

7) Exclude large numbers of security events

One of the largest changes we needed to make was to exclude some service accounts from logging their security events. For our environment, three service accounts represented almost 200K records, the equivalent of 5 hours of our 40K-per-hour budget.

To work with these I recommend this approach: https://www.catapultsystems.com/stompkins/archive/2016/08/16/filter-which-security-events-scom-sends-to-oms/. You can also potentially remove the server where these service accounts run from the Security & Auditing solution to cut down the amount of data: https://www.catapultsystems.com/cfuller/archive/2015/11/09/targeting-oms-solutions-to-specific-systems-in-operations-manager/
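To confirm which accounts are generating the bulk of the security events before building the filter described in the first link, a sketch like this against the SecurityEvent table can help:

    // Top accounts and event IDs by security event volume over the last 24 hours
    SecurityEvent
    | where TimeGenerated > ago(24h)
    | summarize EventCount = count() by Account, EventID
    | top 20 by EventCount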

8) Exclude specific systems from network monitoring

In our environment there was one server (the main Hyper-V host) which was writing the most data.

To change this we opened the solution and removed the system from being checked as “Use for Monitoring”:

This approach is easy to do directly in the Log Analytics portal.

9) Tweak configurations of solutions in System Center Operations Manager

Sometimes you have to go into SCOM and make some tweaks to get the data levels to where you need them. For our environment we ended up having to create a group to exclude one node (our Configuration Manager server) from collecting its IIS logs due to the volume of data. We created a group called: Exclude_OMS_IISLogs

We applied an override to the “IIS Log Collection Rule” for this group to disable it (set Enabled to false). This effectively excluded this particular system from having its IIS logs collected.

TIP: The easy way to find the relevant rule is to go to the Administration pane in SCOM, open Overrides, and search on the group name used to activate OMS functionality (Microsoft System Center Advisor Monitoring Server Group).

10) Tuning takes time

Watching these types of changes takes time. You make a change and then watch carefully over time how it impacts the % of your quota:

Also continue to watch the number of records which are written hourly to your OMS workspace.

Some tweaks take a few hours to really have an impact.

Or even check again the next morning:

Summary: Hopefully this blog post has given you some ideas and approaches to help you tune the amount of data in your OMS workspace. If you have your own tuning tips for Log Analytics data, post a comment here!

Recently, I took a short getaway with my wife to see a concert and stay overnight in Dallas at a Hilton hotel. Since I’ve been a Hilton Honors member for 10 years, this isn’t the surprising part of the story.

I have the Hilton Honors app on my phone that I normally use to manage my reservations, check my reward levels, and book stays, but recently they have started to roll out digital keys to certain locations. Digital keys use your smartphone as a key card instead of the plastic cards you get at the front desk. I’m always eager to try new technology, so I was quick to choose that option. The convenience is that you can check in ahead of time and get your digital key delivered in advance of your arrival so you can go straight to your room.

This technology isn’t even the part of the story I wanted to focus on. Given that you have to have the Hilton app to take advantage of the key, Hilton has a unique opportunity: they now have a captive audience in their mobile app, and they know when you’re on property through geo-location and app usage.

Being a marketing geek, I was pleasantly surprised that Hilton took advantage of this.

Shortly after arriving, and conveniently timed around 5 pm, I received a push notification in the app asking if I was hungry, and it included a list of the restaurants in the hotel. Given this was a large Hilton property, there were a few options listed.

We were already planning on eating in the hotel for a quick dinner before we went out for the concert, but this was a nice reminder, and it was appropriately timed.

Shortly after we checked out from dinner, I received another message promoting the art exhibit in the hotel, in case we were looking for things to do.

We were on our way to the concert so we left the hotel.

While we were out, I didn’t receive any other messages. When we returned that evening it was late and my phone was silent.

But, when I woke up the next morning there was a message letting me know that breakfast was waiting downstairs.

These aren’t revolutionary activities or even messages, but they are core elements of the hotel experience and a great way for the hotel to drive incremental revenue while expanding the guest interactions.

Sitecore has a tool in its platform called Engagement Plans: automated nurture streams that can extend across multiple channels. Leveraging an engagement plan, you can send automated email messages, personalize the web experience, send messages to your mobile application, and even monitor online purchasing. As I was playing with the app, I was beginning to map out ways to leverage this tool with my customers and integrate it into their digital strategies.

So, the question is, how can you leverage these automated experiences in your brand to make your customers feel more connected, encourage incremental revenue, and drive greater customer experiences?

Here’s a simple way to get started:

  1. Think through your customer journey, and identify points where customers get bogged down, confused, or could use some guidance.
  2. Identify key messages you could deliver automatically that could guide the customer to the next step.
  3. Determine what will trigger that message to be delivered. Is it a customer action? A specific time or date?
  4. Test a simple message to see what data you can collect.

Hilton knew that after customers checked in they probably had an interest in dinner, but with several restaurants on site they wanted an easy way to present the options. They delivered a simple message with the different restaurants, what they offered, and where to find them. They triggered the message based on dinner timing and the fact that I was geo-fenced within their property, so they knew I might be interested. And they could easily compare their restaurant traffic before and after these messages, along with which rooms received a message and which ones made a purchase.

These simple experience enhancements can add up when you start to layer them together into an overall digital strategy. How will you include them in yours?

I had a couple of excellent questions asked about the first blog post in this series, which discussed using Power BI to gather information from an upgraded Log Analytics workspace.

“How do I authenticate to schedule a refresh of data in Power BI for my particular datasets from queries? I have the option and have tried all of them, all failed. Not sure what I’m doing wrong. Any pointers?” – My attempt to address this is in the “Scheduling the refresh of the Log Analytics data in Power BI” section.

“I see the limit of records is set at 2000. When I enter the amount of records I require (25 000) I get an “Error 502 (Bad Gateway)” error in PowerBI as soon as I click “Done” in the advanced query editor. Is there a timeout I need to adjust here to allow a little longer for the data to load? Are there limitations on how many records I can query?” – My attempts to address this question are included in the “Updating the query with an increased # of records” and the “Updating the query to return only one day of data” sections of this blog post.

Scheduling the refresh of the Log Analytics data in Power BI:

Once you have your query in place, you need to provide credentials as we discussed in the previous blog post. To schedule this data to update, we need to publish it next.

In my example I will publish it to my workspace in Power BI.

From here we switch to Power BI web (http://powerbi.microsoft.com), open “my workspace” in the datasets section, and find the name of the dataset which was published.

Use the “Schedule Refresh” option to set when you would like the data to update.

From here I needed to edit my credentials and set them to OAuth2 to authenticate properly.

And then logged in with my account credentials for the OMS workspace.

Now we can determine when we want this to update (either daily or weekly):

You can also add more times to have it refresh on a daily basis.

Right-clicking on the dataset shows when the most recent refresh was and when the next one is scheduled to occur. You can also click on the ellipses and choose to “Refresh Now”.

Updating the query with an increased # of records:

The default query returns 2000 records. This value can be increased, depending on how large the records you return are. For my query I was able to update it to a value of 18,000 before receiving this error message.

This appears to be due to a hard limit of 8 MB on your data size: https://stackoverflow.com/questions/41869170/query-from-powerbi-to-ai-suddenly-fails-with-502-bad-gateway/41878025#41878025

Updating the query to return only one day of data:

To minimize the amount of data, I changed from the original query approach to a time-restricted query which was updated on a schedule as discussed above. [Be aware that when cutting and pasting this query the quotes may substitute incorrectly.]
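For reference, the Kusto query embedded in the exported M script further down looks like this; adjust the counter name, the ago() window, and the limit to suit your workspace:

    // Average CPU per computer over the last day, capped at 2000 rows
    Perf
    | where CounterName == "% Processor Time" and TimeGenerated >= ago(1d)
    | summarize AggregatedValue = avg(CounterValue) by Computer, TimeGenerated
    | limit 2000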

To validate that the scheduling was working, I looked at the results in the “TimeGenerated” field. When the data was originally gathered, the TimeGenerated values went up to 9/5/2017 2:00:00 PM.

After a successful refresh of the data the TimeGenerated field shows more recent data.

(Note: you may have to close and re-open Power BI web to see whether the data has refreshed.)
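A quick way to spot-check freshness from the Log Analytics side is a sketch like this, which simply returns the newest Perf record’s timestamp:

    // Most recent Perf record in the workspace
    Perf
    | summarize LatestRecord = max(TimeGenerated)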

Below is the query which is provided by OMS for Power BI:

/*
The exported Power Query Formula Language (M Language) can be used with Power Query in Excel
and Power BI Desktop.
For Power BI Desktop follow the instructions below:
1) Download Power BI Desktop from https://powerbi.microsoft.com/desktop/
2) In Power BI Desktop select: 'Get Data' -> 'Blank Query' -> 'Advanced Query Editor'
3) Paste the M Language script into the Advanced Query Editor and select 'Done'
*/
let AnalyticsQuery =
    let Source = Json.Document(Web.Contents("https://management.azure.com/subscriptions/63184bfc-089a-4446-bf2a-59a0caa9c013/resourceGroups/mms-eus/providers/Microsoft.OperationalInsights/workspaces/scdemolabs/api/query?api-version=2017-01-01-preview",
        [Query=[#"query"="Perf | where CounterName == ""% Processor Time"" and TimeGenerated >= ago(1d) | summarize AggregatedValue = avg(CounterValue) by Computer, TimeGenerated | limit 2000",#"x-ms-app"="OmsAnalyticsPBI",#"prefer"="ai.response-thinning=true"],Timeout=#duration(0,0,4,0)])),
    TypeMap = #table(
        { "AnalyticsTypes", "Type" },
        {
            { "string",   Text.Type },
            { "int",      Int32.Type },
            { "long",     Int64.Type },
            { "real",     Double.Type },
            { "timespan", Duration.Type },
            { "datetime", DateTimeZone.Type },
            { "bool",     Logical.Type },
            { "guid",     Text.Type }
        }),
    DataTable = Source[tables]{0},
    Columns = Table.FromRecords(DataTable[columns]),
    ColumnsWithType = Table.Join(Columns, {"type"}, TypeMap, {"AnalyticsTypes"}),
    Rows = Table.FromRows(DataTable[rows], Columns[name]),
    Table = Table.TransformColumnTypes(Rows, Table.ToList(ColumnsWithType, (c) => { c{0}, c{3} }))
    in
    Table
in AnalyticsQuery

Summary: To refresh your Log Analytics data in Power BI on a schedule, publish the dataset and then configure it to refresh on the schedule you would like. To avoid the “502 Bad Gateway” errors, limit the duration of time for your query so that it returns less data, but run it more frequently. Thank you to Erik Skov, who was my “phone a friend” for this issue!