New Licensing Measurement/Regulatory Compliance Tools for Licensing Administrators and Regulatory Scientists

In a previous blog post, I presented the ceiling effect/diminishing effect, the regulatory compliance scale, and the program quality indicators scale. In that post, I said I would be doing additional data mining of the very rich database that was created in Canada and used to generate these two new tools: the regulatory compliance scale and the program quality indicators scale. Here are some of the insights from that deeper dive into the database.

The ceiling effect/diminishing effect appeared when the regulatory compliance scores were compared to the environmental rating scale scores: quality scores showed the typical plateau as programs moved from substantial to full (100%) regulatory compliance. However, no such plateau appeared when the program quality indicators scale scores were compared to the regulatory compliance scores; the relationship between the two was closer to linear. Why might that be? Reviewing the content of the program quality indicators scale suggests a better balance in how quality is determined. Remember, the program quality indicators scale is the result of previous key indicator research involving licensing, accreditation, professional development, and quality rating and improvement systems. It may offer licensing administrators a more balanced approach to infusing program quality into their licensing systems. In fact, I would go so far as to say that the program quality indicators scale could be used as a screener tool for measuring program quality across the board. That is something I have refrained from suggesting in the past, but the new scale makes it a realistic possibility.
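The plateau versus linear contrast can be sketched numerically. The sketch below uses invented quality-score means (they are not from the Canadian database) to show how the gain in mean quality per step up in compliance shrinks under a ceiling effect but stays roughly constant under a linear relationship.

```python
# Hypothetical illustration of the ceiling effect: environmental rating
# scale (ERS) means plateau at high compliance, while program quality
# indicators (PQI) means keep rising. All numbers are invented for
# demonstration -- they are not results from the actual database.

compliance_levels = ["low", "mediocre", "substantial", "full"]

ers_means = {"low": 2.1, "mediocre": 3.4, "substantial": 4.8, "full": 4.9}  # plateaus
pqi_means = {"low": 1.5, "mediocre": 3.0, "substantial": 4.5, "full": 6.0}  # roughly linear

def step_gains(means):
    """Gain in mean quality score at each step up in regulatory compliance."""
    return [round(means[b] - means[a], 2)
            for a, b in zip(compliance_levels, compliance_levels[1:])]

print("ERS gains:", step_gains(ers_means))  # final gain collapses -> ceiling effect
print("PQI gains:", step_gains(pqi_means))  # gains stay constant -> linear
```

The last ERS gain (substantial to full) is near zero, which is exactly the plateau described above; the PQI gains do not shrink.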

I could also see the program quality indicators scale serving as a public policy enhancement when used in conjunction with Caring for Our Children Basics, which I have proposed all licensing administrators adopt as their baseline for regulatory development and implementation. Using the two in tandem would be a win-win: the ultimate application of the key indicator methodology, addressing basic health and safety together with program quality in a differential monitoring approach. It would make for a very cost-effective and efficient monitoring system.
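The differential monitoring logic described above can be sketched as a simple decision rule: screen on the health-and-safety baseline first, then on program quality, and assign monitoring depth accordingly. The threshold and the visit labels are hypothetical assumptions for illustration, not definitions from either tool.

```python
# A minimal sketch of a differential monitoring decision rule, assuming a
# Caring for Our Children Basics pass/fail baseline and a hypothetical
# 0-10 program quality indicators (PQI) score. Cut point is invented.

def monitoring_visit(cfoc_basics_compliant: bool, pqi_score: int) -> str:
    """Return the type of visit under a differential monitoring approach."""
    if not cfoc_basics_compliant:
        return "full compliance review"            # basic health & safety comes first
    if pqi_score >= 7:
        return "abbreviated key-indicator visit"   # earned reduced monitoring burden
    return "standard monitoring visit"

print(monitoring_visit(False, 9))  # full compliance review
print(monitoring_visit(True, 8))   # abbreviated key-indicator visit
print(monitoring_visit(True, 4))   # standard monitoring visit
```

The cost savings come from the middle branch: programs that clear both screens receive shorter visits, freeing resources for full reviews where they are needed.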

Another insight from my deep dive into the database is that raw violation frequency data are not a useful metric in licensing measurement. The frequencies need to be grouped into more logical categories or buckets, such as full, substantial, mediocre, and low regulatory compliance, which is more consistent with the licensing research. Measured at the nominal level, the frequency data simply do not work: the data are discrete rather than continuous, and the relationship between regulatory compliance and program quality appears essentially random when they are used. Put the same violation frequency data into the regulatory compliance scale, however, and it works very well in distinguishing among the various levels of program quality. See my previous blog posts introducing the regulatory compliance scale and how it can be used.
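The recoding step above amounts to mapping raw violation counts onto ordinal buckets. Here is a minimal sketch; the cut points are hypothetical placeholders, since the regulatory compliance scale defines its own categories.

```python
# Sketch of recoding raw violation frequencies into the ordinal compliance
# buckets mentioned above (full, substantial, mediocre, low). The cut points
# used here are invented for illustration only.

def compliance_category(violations: int) -> str:
    """Map a raw violation count to an ordinal compliance bucket."""
    if violations == 0:
        return "full"
    if violations <= 2:
        return "substantial"
    if violations <= 5:
        return "mediocre"
    return "low"

counts = [0, 1, 3, 7, 0, 2]
print([compliance_category(c) for c in counts])
# -> ['full', 'substantial', 'mediocre', 'low', 'full', 'substantial']
```

Once counts are recoded this way, comparisons against quality scores use an ordinal scale consistent with the licensing research rather than raw discrete frequencies.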

I plan to continue my deep dive into the database and see what other insights I can glean from the data. For now, I wanted to share these initial insights because I think they can be put to immediate use. Both the regulatory compliance scale and the program quality indicators scale are available to licensing administrators and regulatory scientists; both are contained in previous posts on this blog. I encourage you to try them out. I was genuinely surprised by how robust and useful they were, and they really do make a difference in the analyses.

About Dr. Fiene

Dr. Rick Fiene has spent his professional career improving the quality of child care at the state, national, and international levels. He has researched and published extensively on the key components of improving child care quality through an early childhood program quality indicator model encompassing training, technical assistance, quality rating & improvement systems, professional development, mentoring, licensing, risk assessment, differential program monitoring, and accreditation. Dr. Fiene is a retired professor of human development & psychology (Penn State University), where he was department head and director of the Capital Area Early Childhood Research and Training Institute.
This entry was posted in RIKInstitute.
