Eleven code measures to help you figure out if your inspection programs are working.
NFPA Journal®, January/February 2009
By Jennifer Flynn, John Hall, and Casey Grant
Next to direct fire suppression, the adoption and enforcement of fire-related codes and standards have been the principal strategies fire departments employ to reduce the frequency and severity of unwanted fires. Enforcers rely on building inspections during construction and periodically thereafter to ensure compliance. But how well does that work?
Until recently, there were few, if any, measures to evaluate the effectiveness of fire prevention, specifically building inspections that enforce fire-related codes and standards. In order to correct this, NFPA, in conjunction with the Fire Protection Research Foundation (FPRF), developed a portfolio of code compliance effectiveness measures.
Inspection goals, tasks involved, and liability
The general goal of code compliance activity is simple: making sure that every provision of every requirement is complied with in every applicable property. Fire inspections are designed to identify and remove hazards to create a safer community.
Inspections typically occur in all properties undergoing construction or renovation, and are performed routinely in all non-home properties in the community, including common areas in apartment buildings. The majority of fires and fire deaths occur in one- and two-family dwellings or in non-common areas of apartment buildings, neither of which is typically inspected after construction, unless the property is being renovated. According to the U.S. Fire Administration’s National Fire Incident Reporting System and NFPA’s annual survey of national fire statistics, properties and areas subject to fire inspection are involved in about 10 to 25 percent of structure fires, less than 5 percent of deaths, less than 10 percent of injuries, and 20 to 30 percent of property damage.
While inspection programs share the same general goal, methods of achieving that goal can vary greatly. How inspections are conducted, who performs them, how often they are conducted, and enforcement styles are only a few of the decisions that must be made when constructing and implementing an inspection program. One possible approach that would promote general consistency through the inspection community, and one that our report, Measuring Code Compliance Effectiveness for Fire-Related Portions of the Code, examines in detail, is an institutional approach based on measures of exceeding code requirements through a conceptual program referred to as Leadership in Life Safety Design (LLSD).
The Station Nightclub and the Cocoanut Grove fires are examples of major fires resulting in many deaths and injuries that occurred in buildings that had passed inspection. A fire with large loss of life raises awareness and questions about what went wrong and who was at fault. When a fire occurs and inspection practices are found to be negligent, the fire or building inspector, the fire department, the fire marshal, and the municipality can be held liable.
Inspectors must correctly inspect a facility, issue a correction or cite a violation when one is found, follow through to ensure the building owner corrects the violation, and maintain complete and accurate records. However, it is not clear whether a community is obligated to inspect a property in the first place, absent some reason to believe that there are violations present or legal requirements, such as state regulations on frequency, that mandate routine inspections.
Proposed effectiveness measures
Our code compliance report identifies 11 core measures designed to address the goals of fire inspection and to build on innovative practices identified in the existing literature and in interviews with fire marshals, fire chiefs, and inspectors. Measures include calculations, estimates, lists, and matrices.
Some measures require that certain assumptions and definitions be made. With consistent assumptions and definitions, a department can compare its performance from year to year. However, only with standardized assumptions and definitions can a department compare its performance to that of other departments. It is up to the department to set target results for each measure.
The effectiveness measurement methodology mirrors three stages of program evaluation:
Outcome evaluation type, or measures of fire loss,
Impact evaluation type, or measures of the presence of hazards, and
Process evaluation type, or measures of quantity and quality of service delivered.
Outcome measures most clearly quantify program success, focusing on what is important to society. Process measures respond best to changes in program management, focusing on what inspectors do. And impact measures fall in between. The best approach is to use all three types.
Three stages of proposed effectiveness measures
1. Structure fire rate
The philosophy behind the structure fire rate, which is the base measure used in every study of fire inspection performance, is that performing inspections lowers the fire rate. A low structure-fire rate is most desirable and suggests that an inspection program is effective because fires are not happening.
To calculate this rate, you must identify the number of structure fires that occurred in inspectable properties and the number of inspectable properties in the community or jurisdiction. Then you must calculate structure fires per 1,000 inspectable properties, using five-year averages to compensate for small numbers of fires per year.
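As a rough sketch, the calculation above can be expressed in a few lines of Python; the fire counts and property total below are hypothetical:

```python
def structure_fire_rate(annual_fire_counts, inspectable_properties):
    """Structure fires per 1,000 inspectable properties, averaged
    over the supplied years (ideally five, per the measure)."""
    avg_fires = sum(annual_fire_counts) / len(annual_fire_counts)
    return avg_fires * 1000 / inspectable_properties

# Hypothetical example: five years of fire counts in inspectable
# properties, and 4,200 inspectable properties in the jurisdiction.
rate = structure_fire_rate([12, 9, 15, 11, 13], 4200)
print(round(rate, 2))  # fires per 1,000 inspectable properties
```

A department would track this rate year over year; a falling rate is consistent with, though not proof of, an effective inspection program.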
2. Presence and severity matrix
The presence and severity matrix is a list of all fires in a given time frame that resulted in direct property damage of at least $25,000. Inspections mostly target hazards that make fires more severe, not more likely, so inspection success will generally manifest as fewer, smaller serious fires. Once costlier fires are identified, the matrix displays the major hazards associated with each fire.
For fires that resulted in property loss of at least $25,000, you must identify what hazards were present at the time of the fire that could have been prevented by an inspection and the severity of each hazard’s contribution to fire cause, spread, and loss.
You must create a rating system that identifies hazard levels. Consider including indirect loss, such as business interruption costs. This measure can also be modified to include questions about whether an inspection took place before the fire and whether the inspection was within the target cycle time. Finally, you can link this measure to Measure 4, “Number of Violations per Inspection,” which can use the same major-hazard groups selected for focus.
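One way to sketch such a matrix is a list of fire records with a severity rating for each major hazard group. The hazard groups, rating scale (0 to 3 here), and fires below are all illustrative, not from the report:

```python
# Illustrative major-hazard groups a department might select for focus.
HAZARD_GROUPS = ["blocked exits", "impaired sprinklers", "improper storage"]

# Hypothetical fires: each qualifying fire (direct loss >= $25,000) carries
# a 0-3 severity rating for each hazard group's contribution to the loss.
fires = [
    {"fire_id": "F-101", "loss": 180_000,
     "hazards": {"blocked exits": 2, "impaired sprinklers": 3, "improper storage": 0}},
    {"fire_id": "F-102", "loss": 45_000,
     "hazards": {"blocked exits": 0, "impaired sprinklers": 0, "improper storage": 1}},
]

def matrix_rows(fires, threshold=25_000):
    """Yield (fire_id, hazard ratings) rows for fires at or above the loss threshold."""
    for f in fires:
        if f["loss"] >= threshold:
            yield f["fire_id"], [f["hazards"][h] for h in HAZARD_GROUPS]

for fire_id, ratings in matrix_rows(fires):
    print(fire_id, ratings)
```

Rows could be extended with fields for indirect loss, whether an inspection preceded the fire, and whether it fell within the target cycle, as the measure suggests.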
3. Value per additional inspection
The value per additional inspection is an estimate of the dollar value for one additional inspection, by major property group. With literature reviews and interviews, we identified two approaches, one based on prevention effects and the other on mitigation effects.
You must identify the annual fire loss total and the total number of occupancies in the community, using the formula:
Value of one annual inspection = (fire loss per year) × (% of loss preventable by inspection) / (number of occupancies)
Calculate this measure for each major property type, and link to Measure 5, “Percentage of Preventable Fires,” for that component of the equation. Both measures use the same framework for judging preventable fires or fires that are amenable to mitigation.
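The formula above translates directly into code; the loss total, preventable share, and occupancy count below are hypothetical figures for a single property type:

```python
def value_per_inspection(annual_fire_loss, share_preventable, num_occupancies):
    """Dollar value of one additional annual inspection for a property group:
    (fire loss per year) x (share of loss preventable) / (number of occupancies)."""
    return annual_fire_loss * share_preventable / num_occupancies

# Hypothetical figures: $2,000,000 annual fire loss, 30% of loss judged
# preventable by inspection, 500 occupancies of this property type.
print(value_per_inspection(2_000_000, 0.30, 500))  # dollars per additional inspection
```

Running the calculation separately for each major property type, as the measure directs, shows where an added inspection buys the most loss reduction.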
4. Number of violations per inspection
This measure tells you how many violations were found per inspection, whether overall or in specific categories such as sprinkler-related or safe-evacuation-related violations. It indicates how much of the inspection workload involves recurring hazards, suggesting that conditions are not safe between inspections.
Low numbers indicate that problems found during previous inspections have not returned, but these numbers must be audited for quality assurance. A high number of violations suggests that communication may need improvement; an educated property manager will know not to put hazards back in place.
When you have done this, link to Measure 2, “Presence and Severity Matrix,” which can use the same major-hazard groups.
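A minimal sketch of the rate calculation, using made-up inspection records and illustrative violation categories:

```python
# Hypothetical inspection records: each entry lists the violations found
# during one inspection, tagged by category (categories are illustrative).
inspections = [
    ["sprinkler", "evacuation"],
    [],
    ["sprinkler"],
]

def rate_per_inspection(inspections, category=None):
    """Violations found per inspection, overall or for one category."""
    if category is None:
        total = sum(len(found) for found in inspections)
    else:
        total = sum(found.count(category) for found in inspections)
    return total / len(inspections)

print(rate_per_inspection(inspections))               # overall rate
print(rate_per_inspection(inspections, "sprinkler"))  # sprinkler-related only
```

Keeping the category tags aligned with the major-hazard groups in Measure 2 makes the two measures directly comparable.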
5. Percentage of preventable fires
This is a measure of how many fires, out of all those that occurred, could have been prevented by inspection or education. This requires assumptions and definitions about what is preventable and which education program could have prevented the fire from happening. The measure is still experimental, making it risky to use for comparisons of different inspection programs. Future research is needed.
For every fire that occurs, you should identify whether it could have been prevented and identify whether prevention could have been achieved through inspection or education. Compare these numbers to the total number of fires that occurred during the same time period, and link to Measure 3, “Value per Additional Inspection,” which can use the same framework for judging preventable fires or fires amenable to mitigation.
6. Percentage of fires with pending, uncorrected violations at time of fire
The percentage of fires that occurred while violation corrections were pending is an indicator of the effectiveness of violation follow-ups. A high number suggests negligence, which is a serious liability issue.
To figure out this measure, identify the number of fires that had pending or uncorrected violations at the time of the fire and compare this number to the total number of fires during the same period.
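This measure, like Measures 7, 8, 10, and 11, reduces to a simple percentage of a count against a total; a sketch with hypothetical counts:

```python
def percentage(count, total):
    """Share of fires (or inspections) meeting a condition, as a percentage.
    Returns 0.0 when the total is zero to avoid division errors."""
    return 100 * count / total if total else 0.0

# Hypothetical: 4 of 37 fires this period occurred with pending,
# uncorrected violations on file.
print(round(percentage(4, 37), 1))
```

The same helper serves the later percentage measures by swapping in the relevant count and total, such as overdue inspections against all inspections performed.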
7. Percentage of properties not inspected
It’s possible that a fire will occur in a building that was not on file but should have been. This measure indicates how many fires occurred in properties that should have been inspected but were not, because the fire department did not know about them.
For this measure, you must identify the number of fires that occurred in properties subject to inspection but not listed in the files as needing to be inspected and compare this number to the total number of fires that occurred during the same period.
8. Percentage of inspections not completed in target cycle
Target cycle time refers to inspection frequency. For example, a fire department may have a target cycle time of one year for inspecting schools. This measure indicates the effectiveness of the inspection program in achieving set target cycles.
For each major property type, identify the number of inspections performed in properties that were overdue for inspection based on the department’s target cycle for inspection frequency. Then compare this number to the total number of inspections performed during the same period.
9. Building systems and features without completed inspection
This measure identifies building systems and features in new construction projects for which inspection and approval were not completed. A building system or feature would be added to this list if no timely inspection occurred.
First, list building systems and features that are typical hazards that inspectors look for during new construction projects. Then record the number of systems and features from the list for which inspection was not completed, as well as the number of systems and features for which approval was not issued.
10. Percentage of certified inspections
The percentage of certified inspections is a proxy for evaluating inspector quality. It indicates how many inspections have been completed by individuals who have obtained all necessary certifications. It may be appropriate to analyze initial inspections and follow-up inspections separately or to separate assignments in other ways that relate to differences in required certifications.
To obtain the percentage of certified inspections, identify the number of inspections conducted by inspectors who are up to date on all necessary certifications, by major property type. Then compare each to the total number of inspections performed during the same period. A list of necessary certifications should be developed to support the measure.
11. Percentage of inspections by full-time inspectors
The percentage of inspections conducted by full-time inspectors is another measure of inspector quality. Literature and interviews suggest that full-time inspectors perform the highest-quality inspections. If this is true, then a high percentage for this measure indicates a high level of inspection quality in the community.
For each property type, identify the number of inspections full-time inspectors performed and compare this number to the total number of inspections performed during the same period.
How these measures lead to successful code compliance programs
The interviews undertaken for this project underscored some old cautionary rules about management: managers get the information they ask for, which is not always the information they need or the information they intended to ask for. In most communities, management only requests cost and workload information. They don’t ask for evidence of impact or quality outcomes, and because it costs money to obtain such information, this information is rarely collected.
For those innovative fire marshals who take the initiative to collect and analyze such information, it is not clear whether their ability to make a case and gain support for their management initiatives and decisions improves. Perhaps code compliance effectiveness information would be more widely collected if it were connected to higher-level demands, such as fire department certification protocols.
Additional research is needed to provide materials to help inspectors collect and use these effectiveness measures and to enhance their inspections in ways anticipated by the design of the measures. For example, standardized materials would help inspectors use best practices on the educational and motivational aspects of inspections. The report identifies a number of software innovations that can help capture and store data that could make it easier to measure effectiveness and to use these results in planning inspections.
Following up these findings will help more fire departments measure and improve the effectiveness of their inspections and related code compliance activities.
A copy of the project report is available at www.nfpa.org/foundation under Reports.
Jennifer Flynn is a research analyst with NFPA’s Division of Fire Analysis and Research, of which John Hall is division director. Casey Grant is program director for the Fire Protection Research Foundation.