A Brief History of Licensing Measurement

The history of licensing measurement and regulatory compliance has a rather long lineage, yet the field is still in its infancy in terms of development. In the early stages, most licensing visits and inspection results were recorded in anecdotal or case records, with licensing staff documenting their findings in the style of social work note-taking. Measurement was largely qualitative, with very little quantitative measurement beyond basic demographics: number of clients, number of caregiving staff, and the like. This qualitative approach worked well when there were few programs to be monitored and there were sufficient licensing staff to do the monitoring and conduct the inspections.

This all started to change in the 1980s, when Instrument Based Program Monitoring (IPM) was introduced and began to be adopted by state licensing agencies throughout the United States. (As a footnote, this brief history pertains to the USA and does not cover other countries, although the Canadian provinces have followed a similar route.) The impetus for introducing an IPM approach was the tremendous increase in early care and education programs during the 1960s and 1970s. Licensing staff found it difficult to keep up with the growing number of programs in their monitoring efforts, and a more effective and efficient methodology was needed to deal with these increases.

A very influential paper published in Child Care Quarterly in 1985 introduced IPM along with Licensing Key Indicators, Risk Assessment (Weighting), and Differential Monitoring (Abbreviated Inspections). The paper outlined these methodologies and their use by a consortium of states to test the viability of this new approach to licensing measurement, regulatory compliance, and program monitoring. The terminology has changed over the decades: in 1985, "weighting" was used rather than "risk assessment," and "abbreviated inspections" rather than "differential monitoring," "targeted monitoring," or "inferential monitoring." These terms can be, and have been, used interchangeably over the years, but the 1985 paper introduced them as weighting and abbreviated inspections.

In the early 1990s, the risk assessment methodology was used to develop Stepping Stones to Caring for Our Children, the comprehensive national health and safety standards for early care and education (ECE) programs in the USA. This was a major step in the effort to develop national voluntary standards for child care in the USA.

It was during this time that two other very significant discoveries were made about licensing data distributions: 1) Licensing data are extremely skewed and do not follow a normal curve. This has a significant impact on which statistics can be used with these distributions and how data analyses are performed; for example, dichotomizing the data is warranted. 2) Regulatory compliance data are not linearly related to program quality measures; instead, quality plateaus at the substantial and full regulatory compliance levels. The data appear to follow the law of diminishing returns as compliance moves from substantial to full (100%) regulatory compliance. This finding has been replicated in several studies and has been controversial, because it has supported issuing licenses to programs with less than full compliance with all rules/regulations/standards. These two discoveries have shaped developments in licensing measurement ever since.
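The first discovery can be illustrated with a small sketch. The scores below are hypothetical, invented purely for illustration; they mimic the pattern described above, in which most programs cluster near full compliance with a thin tail of low scorers, and show why dichotomizing (e.g., substantial compliance versus not) is a reasonable analytic choice for such a skewed distribution. The 95% cutoff is an assumed threshold, not one taken from the research.

```python
import statistics

# Hypothetical compliance scores (percent of rules met) for 20 programs:
# heavily clustered near 100% with a thin low tail, i.e., strongly skewed.
scores = [100, 100, 99, 99, 98, 98, 98, 97, 97, 96,
          96, 95, 95, 94, 93, 92, 90, 85, 70, 55]

# In a left-skewed distribution the median exceeds the mean,
# a quick signal that normal-theory statistics are a poor fit.
print(f"mean={statistics.mean(scores):.2f}, median={statistics.median(scores)}")

# Dichotomize into substantial compliance (>= 95%, an assumed cutoff) vs. not.
substantial = [1 if s >= 95 else 0 for s in scores]
print(f"substantial compliance rate: {sum(substantial) / len(substantial):.2f}")
```

With these invented numbers the median (96) sits above the mean (about 92.4), and the dichotomized rate summarizes the distribution without assuming normality.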

In the new century, as states began to adopt the various methodologies, it became necessary to have a standardized approach to designing and implementing them. The National Association for Regulatory Administration (NARA) took up this role and in 2000 produced a chapter on Licensing Measurement and Systems, which guided states and provinces in valid and reliable means of designing and implementing these methodologies. In 2002, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) conducted a very important study, publishing the Thirteen Indicators of Quality Health and Safety along with a Parent's Guide to accompany the research. This publication further helped states as they revised their licensing and program monitoring systems for inspecting early care and education facilities based upon the indicators it identified. Both publications have been distributed widely throughout the licensing world.

During the first decade of the new century, Stepping Stones to Caring for Our Children went through a second edition. This publication and the ASPE publications were very useful to states as they prepared their Child Care and Development Fund (CCDF) plans based upon Child Care and Development Block Grant (CCDBG) funding.

From 2010 to the present, many major events have helped to shape licensing measurement for the future. Caring for Our Children Basics (CFOCB) was published and immediately became the default voluntary early care and education standards for the ECE field; CFOCB combines the risk assessment and key indicator methodologies. Three major federal publications dealing with licensing and program monitoring strategies were issued by the Department of Health and Human Services/Administration for Children and Families together with the United States Department of Agriculture (HHS/ACF/USDA), the Office of Child Care (OCC), and the Office of the Assistant Secretary for Planning and Evaluation (ASPE). These publications will guide the field of licensing measurement for years to come. The Office of Head Start developed and implemented its own Head Start Key Indicator (HSKI) methodology. And in 2016, CCDBG was reauthorized, with differential monitoring included in the legislation as a recommended approach for states to consider.

Most recently, the Office of Head Start has been revising its monitoring system to provide a balance between compliance and performance. This revision will go a long way toward balancing regulatory compliance and program quality. There has also been experimentation with an Early Childhood Program Quality Indicator instrument that combines licensing and quality indicators into a single tool. These two developments help to break down the siloed approach to measurement, in which licensing and quality initiatives are administered through separate and distinct systems, such as licensing versus professional development systems versus quality rating and improvement systems. A paradigm shift toward an Early Childhood Program Quality Improvement and Indicator Model has been proposed; it should help to make licensing measurement more integrated with other quality initiatives.

The licensing field continues to refine its measurement strategies while building a national and international regulatory compliance database. More and more is being learned about the nuances and idiosyncrasies of licensing data, such as the move from a nominal to an ordinal data system. For example, NARA and the Research Institute for Key Indicators (RIKI) have entered into an exclusive agreement for the future development of licensing measurement strategies via differential monitoring, key indicators for licensing and program quality, and risk assessment approaches. Several validation studies have been completed to test whether the various methodologies work as intended. A significant Office of Planning, Research, and Evaluation (OPRE) Research Brief, which developed a framework for conducting validation studies of quality rating and improvement systems, has been adapted for use in licensing measurement.

For additional updates on licensing measurement, please check out and follow these RIKINotes blog posts; there are, and will be, many examples of licensing measurement enhancements. Although much of the research on licensing measurement has been done in the ECE field, the methodologies, models, systems, and approaches can be utilized in any human service arena, such as child residential or adult residential services. Also, the NARA Licensing Curriculum chapter has been developed into a full-blown course; for additional information, please visit: https://www.naralicensing.org/key-indicator-facilitated-dialogues

About Dr. Fiene

Dr. Rick Fiene has spent his professional career improving the quality of child care at the state, national, and international levels. He has done extensive research and publishing on the key components of improving child care quality through an early childhood program quality indicator model encompassing training, technical assistance, quality rating and improvement systems, professional development, mentoring, licensing, risk assessment, differential program monitoring, and accreditation. Dr. Fiene is a retired professor of human development and psychology (Penn State University), where he was department head and director of the Capital Area Early Childhood Research and Training Institute.
