How is the weight determined for recommendations within the assessment tools in OMS?
During the IT/Dev Connections conference, an attendee at one of the OMS sessions asked a good question: how are the various weights for recommendations in the assessment categories calculated? This blog post is designed to answer that question through the following major topics:
 What are recommendation weights in OMS?
 How recommendations are weighted
 What is the formula or equation for the score and weight of recommendations in OMS?
 Do the weight or the score impact the donut chart sizing?
 What do the values for probability, effort and impact equate to?
What are recommendation weights in OMS?
There are assessments currently available for both Active Directory and SQL Server. Examples of these recommendations are shown below:
Each individual recommendation is ordered based upon its weight. The example below shows thirteen recommendations (1 high priority in red, 12 low priority in blue).
As you can see, there are a variety of weights shown for the recommendations (from a value of 0.8 up to a weight of 8.1).
To determine how these are calculated, let's start with a simple example containing a single low priority recommendation. For Active Directory I have a single server, which is shown in this graph. For Active Directory Performance and Scalability there are a total of 7 checks (six passed, one resulted in a low priority recommendation).
If we take 100% as our total, the above is pretty close to a mathematical match: 100/7 = 14.3 versus the 14.9 shown above. So at first glance this looks like a simple equation for determining the weight of a recommendation.
If we move to a more complex example, we can see that for Active Directory Availability and Business Continuity we have a total of 75 checks (71 passed, 2 low priority recommendations, 2 high priority recommendations).
Again if we take 100% as our total, 100/75 = 1.33. The high priority items at 1.4 seem to fit this math, but the low priority items at 0.9 and 0.7 do not appear to. This will make sense as we get further into this blog post.
How recommendations are weighted
As we dig deeper into the documentation we can gain more insight. The following is a subset from: https://technet.microsoft.com/en-us/library/mt484102.aspx:
What we can take away from this:
 The higher the probability the higher the weight
 The higher the impact the higher the weight
 The lower the effort the higher the weight
If we break these down a little further, we can take each recommendation and see its probability, impact, and effort levels for these modifiers. The first item on the list is high impact, moderate probability, and moderate effort. If we assume that moderate does not change the score, the high impact likely increases it from 1.33 up to 1.4.
The second high priority item on the list is also high impact, moderate probability, and moderate effort. It shows a weight of 1.4 as well.
The third recommendation on the list is high impact, low to moderate probability, and low effort. This likely means the value is increased by the impact, decreased slightly by the probability, and increased due to the low effort associated with the change. It shows a weight of 0.9.
The fourth item is high impact, very low probability, and moderate effort. Its weight is 0.7.
The above rules show how probability, impact and effort factor into the overall weight. In the next section of the blog post we will explain the formula used to provide this calculation based upon probability, impact and effort.
What is the formula or equation for the score and weight of recommendations in OMS?
A score is calculated based on the following formula: Score = (1 + Probability) x Impact – Effort
The weight is calculated based upon the following: Weight = Individual recommendation Score / Sum of all recommendation Scores within that Focus Area
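To make the two formulas concrete, here is a small Python sketch. The level values plugged in below are illustrative guesses on my part (hypothetical numbers, not confirmed OMS constants), and the weight is expressed as a percentage of the focus area, matching how 100% is treated as the total in the earlier examples.

```python
# Sketch of the OMS scoring formulas described above.
# The numeric inputs are assumed values for illustration only.

def score(probability, impact, effort):
    """Score = (1 + Probability) x Impact - Effort"""
    return (1 + probability) * impact - effort

def weights(scores):
    """Weight = individual score / sum of all scores in the focus area,
    shown here as a percentage of 100%."""
    total = sum(scores)
    return [s / total * 100 for s in scores]

# Three hypothetical recommendations in one focus area.
focus_area = [
    score(0.5, 21, 5),   # moderate probability, high impact, moderate effort
    score(0.1, 21, 5),   # very low probability, high impact, moderate effort
    score(0.3, 13, 1),   # low to moderate probability, moderate impact, low effort
]
print([round(w, 1) for w in weights(focus_area)])  # -> [43.8, 29.9, 26.3]
```

Note how the weights always sum to 100% within a focus area, which is why adding or removing recommendations shifts every weight in that area.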
Do the weight or the score impact the donut chart sizing?
The donut charts are a simple percentage calculation based on counts and are not related to the weights. In the example below we have a total of 7 items: six green and one blue. If we take the number of passed recommendations, divide it by the total number of recommendations, and multiply by 100, we receive the appropriate percentage. For the item below this is (6 / 7) * 100 = 85.71, which rounds to 86%.
The donut chart formula is: (number of recommendations / total number of recommendations) * 100
As expected this also works for the earlier example, where 71/75 * 100 = 94.66, which rounds to 95%.
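As a quick sanity check, the donut calculation can be sketched in a couple of lines of Python:

```python
# Donut chart percentage: a simple count-based ratio, unrelated to weights.
def donut_percentage(passed, total):
    return round(passed / total * 100)

print(donut_percentage(6, 7))    # (6/7)*100  = 85.71 -> 86
print(donut_percentage(71, 75))  # (71/75)*100 = 94.66 -> 95
```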
What do the values for probability, effort and impact equate to?
The above, however, assumes that impact, probability, and effort each have numeric values associated with their level, such as the following (the numbers below are not actual values, just my initial thoughts on how this weight calculation may occur):
Probability:
Very High 90%
High 70%
Moderate 50%
Low to Moderate 30%
Very Low 10%
Effort:
Very High 34
High 21
Moderate 5-13
Very Low 1-3
Impact:
Catastrophic 89
High 21
Moderate 5-13
Very Low 1-3
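If those guessed levels were encoded as lookup tables, the score formula from earlier could be applied directly. Everything below is an assumption layered on an assumption: the tables hold my guessed values, and any range-style values are collapsed to rough midpoints purely for illustration.

```python
# Guessed level-to-value tables (my assumptions, not documented OMS constants).
PROBABILITY = {
    "Very High": 0.9, "High": 0.7, "Moderate": 0.5,
    "Low to Moderate": 0.3, "Very Low": 0.1,
}
# Range-style values collapsed to rough midpoints for illustration.
EFFORT = {"Very High": 34, "High": 21, "Moderate": 9, "Very Low": 2}
IMPACT = {"Catastrophic": 89, "High": 21, "Moderate": 9, "Very Low": 2}

def score_for(probability, impact, effort):
    """Score = (1 + Probability) x Impact - Effort, using the guessed tables."""
    return (1 + PROBABILITY[probability]) * IMPACT[impact] - EFFORT[effort]

# First high priority recommendation from the walkthrough:
# high impact, moderate probability, moderate effort.
print(score_for("Moderate", "High", "Moderate"))  # (1 + 0.5) * 21 - 9 = 22.5
```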
Summary: I owe a huge thank you to the Microsoft product team who provided several key pieces of information which were critical to this blog post. These included:
 The method to calculate the score discussed above: Score = (1 + Probability) x Impact – Effort
 The method to calculate the weight as discussed above: Weight = Individual recommendation Score / Sum of all recommendation Scores within that Focus Area
 The explanation of how donut charts are calculated.
 Each of the values shown in the "What do the values for probability, effort and impact equate to" section.
From a high-level perspective, the following are good takeaways when determining the scoring:
 The higher the probability the higher the weight
 The higher the impact the higher the weight
 The lower the effort the higher the weight
Or, to show this graphically, here is an example from the recent Midwest Management Summit event (thank you to Pete Zerger for developing this!):
For details see Microsoft's explanation at: https://technet.microsoft.com/en-us/library/mt484102.aspx