Paradox of Choice

When executing a risk assessment, the scales/standards used to define the ‘severity’ and ‘vulnerability’ values for any given hazard [to determine the risk] are a key starting point.  Get the scales wrong, and the whole effort is likely for nothing: after several regulatory (or internal) observations on the trot, your organization will lose confidence in the idea that a ‘risk-based approach’ beats conventional decision-making.  The organization will likely revert to the practice of generational pass-down of knowledge via storytelling, much like we [humanity] have done for millennia.  Entertaining, but incredibly inefficient.  For example: ‘we did it like this before, and never received an observation’… sound familiar?  Cool story bro!  Ugh, please get me on the first flight back to Austin.

There are several key points to understand when designing assessment scoring scales.  In this blog entry, however, we will cover just one key design element: simplicity.  As humans, we are prone to the cognitive challenge known as the “Paradox of Choice”.  How many of you have struggled to decide what to eat at the Cheesecake Factory?  As of today, they have over 250 potential choices on the menu – which one is good, which one is the best, what should I order?  Impossible to determine.

This decision-making problem is the Paradox of Choice.  Present a human with a large number of potential choices, and two things happen:

1. Mental exhaustion leading to paralysis

2. Dissatisfaction with the end result (as a result of second-guessing)

Maybe this is why I avoid the Cheesecake Factory like the plague?  Probably not only for that reason, but I digress…

Within a highly regulated environment, where we deal with life/death decisions, choosing the wrong option is scary, so we strive to find the “right answer”.  Unfortunately, there are no right/wrong answers in assessing risk: predicting the future (or reconstructing the past) performance of a machine or human is an impossible task – all we can do is get close to the right answer (hence my 100% belief in qualitative over quantitative assessments).  I consistently come across scoring scales with 5 or more potential options for both severity and vulnerability (vs. the core three: high/medium/low).  These folks have fallen for the fallacy that “more is better” – somehow more scientific, more compliant?  Wrong, that’s old school.  In this case, less is better.  For the critical-thinking side of your brain to kick into high gear, it needs 1) energy, and 2) confidence.  Presented with too many choices, it shuts down and hibernates until recovered.
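To make the point concrete, a three-level scale can be expressed as a simple lookup table.  This is an illustrative sketch only – the level names and the combination rule below are my assumptions for demonstration, not a prescribed QRM matrix:

```python
# Illustrative sketch of a 3x3 risk matrix (high/medium/low).
# The combination rule here is an assumption for demonstration only.

LEVELS = ["low", "medium", "high"]

# Risk as a direct lookup: (severity, vulnerability) -> risk.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "high",
}

def assess_risk(severity: str, vulnerability: str) -> str:
    """Combine a severity rating and a vulnerability rating into a risk rating."""
    if severity not in LEVELS or vulnerability not in LEVELS:
        raise ValueError("ratings must be one of: " + ", ".join(LEVELS))
    return RISK_MATRIX[(severity, vulnerability)]

print(assess_risk("high", "medium"))  # high
```

Nine total combinations – an assessor can hold the whole thing in their head.  A 5x5 scale gives 25 combinations, and the hair-splitting between neighboring levels begins.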

So what can we do about it, Pete?  I argue in this blog that three things will happen as a result of simplification of the scoring scales (high, medium, low):

1. More participation in the QRM program due to less mental exhaustion

2. More confidence in data governance outcomes due to less second-guessing and wondering if the wrong choice was made (…should I have ordered the tacos…?)

3. Easier inspections – people know what they do, and [more importantly] why they do it.

Pete
