The first answer is probably the most controversial but, from a public policy point of view, the most important: the Ceiling Effect, or the Regulatory Compliance Theory of Diminishing Returns. Without a doubt, this is the one result of all the research into regulatory compliance that has caused the most sleepless nights for researchers and administrators. Yet it is the kernel of everything related to regulatory compliance, and many of the changes suggested after its discovery and publication flow from it. When it was first proposed back in the 1970s and 1980s, it was looked upon as heresy because it went against all regulatory thinking at that point. Of course there was a linear relationship between regulatory compliance and program quality; but the empirical data did not support this predominant paradigm. The data clearly demonstrated that full, 100% regulatory compliance did not guarantee that programs were of the highest quality.
Wow, that was a revelation. It had always been assumed that as regulatory compliance increased, program quality would increase in the same proportion. Very honestly, that was the hypothesis back in the 1970s, and it would have been so much simpler if that were the case. Much of what follows would never have occurred, because there would not have been support for it. But it did not work out that way. The data back then, and the data to this day, clearly indicate that regulatory compliance has limitations when it comes to identifying program quality. Licensing via regulatory compliance will ensure health and safety, but it will not guarantee quality of programming. This is an important distinction, and one that is pertinent to all industries impacted by regulatory science, not just the human services.
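The ceiling effect described above can be pictured with a toy model: a hypothetical quality score that tracks compliance up to a threshold and then flattens. The curve and its 70% ceiling are invented for illustration only, not fitted to any real licensing data.

```python
# Toy illustration of the ceiling effect (invented curve, not real data):
# predicted quality rises with compliance up to a hypothetical ceiling,
# then flattens, so 100% compliance predicts no more quality than 85% does.

def toy_quality(compliance: float, ceiling: float = 0.7) -> float:
    """Map a compliance rate in [0, 1] to a predicted quality score in [0, 1]."""
    return min(compliance, ceiling) / ceiling

for rate in (0.35, 0.70, 0.85, 1.00):
    print(f"compliance {rate:.0%} -> predicted quality {toy_quality(rate):.2f}")
```

The point of the sketch is only the shape of the curve: above the (hypothetical) ceiling, differences in compliance carry no information about quality, which is why a linear assumption fails.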
So what are some of the key questions, and their respective answers, based upon this paradigm-shifting discovery of a ceiling effect in regulatory compliance data? The first that will jump out at you has to do with "one size fits all versus a more targeted or differential approach". If there were a linear relationship between regulatory compliance and program quality, one size fits all would work just fine. But when a ceiling effect is present, a more targeted or differential approach is warranted, one in which specific rules/regulations/standards are recognized as having a differential impact on the overall program. Clearly this opens the door to risk assessment analysis and to predictor analysis via key indicators. Neither approach would be necessary if all rules were created equal and administered equally; but they are not. So, as a licensing administrator, you need to take that into account: weight rules, and look for rules that statistically predict overall regulatory compliance.
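A minimal sketch of what such a key-indicator screen might look like, using entirely invented rules, programs, and thresholds (real key-indicator methodology uses proper statistical tests on large samples; this only conveys the idea of ranking rules by how well each one predicts overall compliance):

```python
# Hypothetical key-indicator screen: for each rule, how often does
# compliance with that single rule agree with a program's overall status
# (substantial compliance vs. not)? All data below are invented.

RULES = ["supervision", "ratios", "immunization", "paperwork"]

# Nominal licensing data: 1 = in compliance, 0 = violation.
programs = [
    {"supervision": 1, "ratios": 1, "immunization": 1, "paperwork": 0},
    {"supervision": 1, "ratios": 1, "immunization": 1, "paperwork": 1},
    {"supervision": 0, "ratios": 1, "immunization": 0, "paperwork": 0},
    {"supervision": 1, "ratios": 0, "immunization": 1, "paperwork": 1},
    {"supervision": 0, "ratios": 0, "immunization": 0, "paperwork": 1},
    {"supervision": 1, "ratios": 1, "immunization": 0, "paperwork": 0},
]

def overall_ok(p, threshold=0.75):
    """Substantial compliance: meets at least `threshold` of all rules."""
    return sum(p.values()) / len(p) >= threshold

def agreement(rule):
    """Share of programs where this one rule matches overall status."""
    return sum((p[rule] == 1) == overall_ok(p) for p in programs) / len(programs)

ranked = sorted(RULES, key=agreement, reverse=True)
for rule in ranked:
    print(f"{rule}: agreement {agreement(rule):.2f}")
```

In this invented sample, a single rule tracks overall status far better than the others, which is exactly the property a key indicator is selected for.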
Another major issue with regulatory compliance, which adds to the difficulty of making licensing decisions and deciding how best to enforce rules, is that the regulatory compliance data distribution is so skewed that it is very difficult to distinguish the high performers from the mediocre performers. The data are not normally distributed, as is the case with most program quality metrics. With regulatory compliance metrics (RegalCMetrics), it does not work that way, and one will have difficulty sharing with the general public who the best performers are. Plus, the data are all nominally measured; in other words, a program is either in compliance or out of compliance with each rule. From a statistical point of view, there is not much you can do with that: raw regulatory compliance violation data are not very useful. However, there is a workaround called the Regulatory Compliance Scale, which places the regulatory compliance violation data into categories or buckets that are more logical from a licensing point of view (an idea addressed in several previous blog posts).
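The bucketing idea can be sketched in a few lines. The category labels and cut-points below are invented for illustration; the actual Regulatory Compliance Scale may define its categories differently.

```python
# Hedged sketch of bucketing nominal violation counts into an ordinal
# compliance scale. Labels and cut-points are hypothetical, not the
# published Regulatory Compliance Scale definitions.

def compliance_category(violations: int, total_rules: int) -> str:
    """Map a violation count onto an ordinal compliance category."""
    rate = 1 - violations / total_rules
    if rate == 1.0:
        return "full compliance"
    if rate >= 0.98:
        return "substantial compliance"
    if rate >= 0.90:
        return "mid compliance"
    return "low compliance"

print(compliance_category(0, 100))   # full compliance
print(compliance_category(1, 100))   # substantial compliance
print(compliance_category(7, 100))   # mid compliance
print(compliance_category(25, 100))  # low compliance
```

Turning the dichotomous per-rule data into an ordinal scale like this is what makes the violation data usable for comparisons and trend analysis.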
So where does that leave us? From a public policy point of view, licensing administrators have a big decision to make regarding the issue of full versus partial regulatory compliance in order to grant a regular license. Based upon the empirical evidence, it would appear that being in substantial but not full regulatory compliance is sufficient for being granted a regular license. But that is a major public policy change.
The paradigm shifts from one that is continuous to one that is more discrete and dichotomous, in the following ways: "do things well versus do no harm" and "a strength-based versus a deficit-based model". Both are important, but they do change how you approach your monitoring of programs. Obviously these dichotomies fit the "program quality versus regulatory compliance" distinction mentioned earlier, which is at the heart of what we are trying to accomplish. One should build upon the other and be continuous; it should be a linear relationship, but the ceiling effect prevents this from happening and the relationship is more non-linear. And so we are searching for that sweet spot: the right combination of risk aversion and statistical predictors of regulatory compliance.
This is what led to the Quality Rating and Improvement Systems (QRIS) movement and the proliferation of these systems: frustration that licensing systems just were not doing the job of balancing health and safety with program quality. And it was a good move; states did not have the appetite to take that on within their licensing systems, so a new approach had to be created. But now we need to think in a more integrative monitoring frame of reference and combine these two systems into one more effective and efficient approach, such as an Early Childhood Program Quality Improvement and Indicator Systems Model (ECPQI2M), which balances risk assessment (risk predictor rules) with program performance (quality indicators). I will address the ECPQI2M in greater detail in an upcoming blog post and demonstrate how it fits within the various program monitoring approaches.
We need the ability to distinguish the top performers from the mediocre performers as clearly as we can distinguish the top performers from the non-optimal performers. We need to balance our gatekeeper role with more of an enabler role; to balance risk and performance, structural and process quality.
These are really tough questions, and many of the answers are difficult to digest, but based upon the past 50 years of regulatory compliance/licensing measurement and research, we are gradually finding our way. A paradigm shift is occurring in which, as a field, we are moving from an absolute, one-size-fits-all approach to a more relative, differential one.