StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.

Standards and Evaluation of Healthcare Quality, Safety, and Person-Centered Care


Authors

Michael Young 1; Mark A. Smith 2.

Affiliations

1 Marian University; 2 Jefferson Medical College, Philadelphia, PA

Last Update: December 13, 2022.

Continuing Education Activity

The function of quality management (QM) in general, and of healthcare quality management (HQM) in particular, is to improve quality, as defined herein, through data analysis and the identification of root causes (independent variables) of events. Healthcare is a service offered to persons who are simultaneously patients in a traditional sense and customers in a modern sense. This article focuses on delivering health services and products in both senses. Although some persons tout American healthcare as having the highest quality, considerable data indicate, on the contrary, that American healthcare professionals (HCPs) have much room to improve the quality of their services. US national standards are determined by the federal legislature, the federal courts, the President, and the organizations that report to these entities. The article discusses quality regulation, tools for analyzing quality retrospectively and in real time, and strategies for using interprofessional teams to deliver quality healthcare.

Objectives:

Identify the factors that contribute to healthcare quality.

Determine the tools used in delivering quality in other industries that healthcare professionals and healthcare managers do not regularly use.

Assess the bodies that regulate and promote quality.

Communicate the tools for achieving consensus in healthcare teams.

Introduction

Healthcare is a service offered to persons who are simultaneously patients in a traditional sense and customers in a modern sense. This topic focuses on delivering health services and products in both senses. Therefore, this topic primarily refers to patients, who are always simultaneously customers, as "patient-customers." Healthcare providers (HCPs) should keep this concept in mind while reading this topic and delivering healthcare.

Delivering quality services and goods to consumers is a field of study, and this topic serves only as an overview. The history of quality delivery in healthcare can be traced to the earliest texts concerning physician ethical obligations, which are reviewed elsewhere.[1][2]

In the modern sense of evidence-based medicine, objectively measuring and tracking data to determine whether quality has been delivered or how to improve quality, Ernest Codman (1869-1940) may be the most notable American pioneer. Codman, a surgeon who began practicing medicine in 1895, advocated for record-keeping that enabled retrospective analysis of how processes affected outcomes. He also advocated for practice standardization and helped lead the founding of the American College of Surgeons (ACS) and its Hospital Standardization Program, which became the Joint Commission on Accreditation of Healthcare Organizations. Codman, like Ignaz Semmelweis, the pioneer of antisepsis in hospitals, is a historical example of a physician being rejected by other physicians for insisting that the status quo was detrimental to patients. He resigned from the Massachusetts General Hospital due to his colleagues' rejection of his quality measures and continued to practice and advocate for quality measures elsewhere.[3]

The deliberate study, quantification, and practical application of data to achieve quality in business originated outside of healthcare. While working in telecommunications engineering, Walter Shewhart (1891-1967) pioneered modern statistical process control (SPC). W. Edwards Deming (1900-1993), a physicist by training who became a business consultant and a professor of statistics and business, expanded Shewhart's work and has been called the originator of quality management.[4] Deming is best known for influencing automobile manufacturing quality control, but healthcare professionals also adopted his ideas.[5] Quality control measures in modern healthcare often have some relationship to the work of 1 or more of these 3 individuals.

Function

The function of quality management (QM) in general, and of healthcare quality management (HQM) in particular, is to improve quality, as defined below, through data analysis and the identification of root causes (independent variables) of events. QM is a simpler and historically earlier version of the scientific method (SM). QM shares with the SM the goal of increasing objectivity and reducing subjectivity in decision-making. Both QM and the SM involve:

Defining a problem or issue of interest

Quantifying 1 or more independent variables exposed to a process or event

Analyzing how outcomes differed after exposure, and

Concluding what the measurements mean

These steps can be referred to as a quality process cycle. Different persons divide the cycle into different numbers of steps, such as Plan-Do-Study-Act (PDSA) or Define-Measure-Analyze-Improve-Control (DMAIC). Although persons performing QM can, like scientists, design experiments that eliminate bias and confounding variables to achieve level 1 evidence, QM analyses fall below level 1 evidence because they do not attempt to eliminate all possible forms of bias and confounders. The goal of QM is usually to determine which of 2 processes is better or when a process should be altered to reach an objective, not to quantify precisely how much better 1 process is compared to another or precisely how strong a relationship exists between an independent variable and a dependent variable. Whereas the SM emphasizes attempting to disprove a null hypothesis, identifying and eliminating data collection biases, and quantifying the degree of mathematical certainty for a finding, QM uses simpler ways to modify a process to achieve a more desirable outcome.

QM emphasizes process monitoring and control. According to Deming, quality managers should evaluate every part of a process that can result in delays and inconsistencies between individuals and between systems. In addition to using lists of processes, quality managers use lists of priorities. The list of Taiichi Ohno, who is credited with the development of the Toyota Production System and Lean manufacturing, had 3 items to eliminate:

Muda (futility)

Mura (inconsistency), and

Muri (overburdening)

Managers may not implement QM because of the following restrictions:

They cannot obtain data (ie, measure structures, processes, or outcomes).

They cannot design an experiment to show whether a new process is superior to a current process.

They are not able to quantify statistical confidence in differences between groups.

They cannot perform plan-do-study-act (PDSA) cycles in the time allotted for their workdays while performing requirements assigned to them by their superiors and do not want to hire someone who can.

They are overly tasked with "putting out" instead of "preventing" fires (they are distracted by their daily routines and issues and do not focus on larger or longer-term objectives).

They work for superiors who do not see a benefit adequate to the costs of performing PDSA cycles.

They are not concerned with how well things could be performed and instead think, "If it ain't broke, don't fix it," or are concerned only that a bottom line is met somehow (the ends justify the means).

Issues of Concern

Defining Quality

Deming defined quality in business as the delivery of a predictable, uniform standard in services or goods, with the standard both suited to and defined by the customer. Although the concept of quality specifies consistency, processes must also adapt to retain quality based on developments in knowledge and external factors. Such factors include new customer demands, the discovery of better practices developed elsewhere, or new rules from regulatory bodies. Deming considered quality to involve minimizing costs, but he viewed cost reduction as a result of other quality management efforts, not as a primary aim in itself.

Quality can be defined using the following equation:

Quality = process outcomes x customer satisfaction, where process outcomes are hard data, and customer satisfaction is based on perceptual data

Philosophically and linguistically, it can be argued that cost is irrelevant to quality.

Value, in equation form, can be defined as:

Value = quality/costs
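As a minimal, purely illustrative sketch of how these 2 equations might be operationalized, the following Python snippet combines hypothetical, normalized scores. The variable names and the 0-to-1 scaling are assumptions made for illustration, not part of any published standard.

```python
# Hypothetical, normalized inputs (the 0-to-1 scaling is an assumption for illustration only)
process_outcome_score = 0.92   # hard data, eg, a risk-adjusted success or survival rate
customer_satisfaction = 0.80   # perceptual data, eg, a mean survey score rescaled to 0-1
relative_cost = 1.25           # cost index relative to a benchmark (1.0 = benchmark cost)

# Quality = process outcomes x customer satisfaction
quality = process_outcome_score * customer_satisfaction

# Value = quality / costs
value = quality / relative_cost

print(f"Quality index: {quality:.2f}")  # 0.74
print(f"Value index:   {value:.2f}")    # 0.59
```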

Healthcare quality includes all factors contributing to patients' health status and their acceptance of the care received. This includes hard, factual outcomes, such as mortality, morbidity, and complications, and soft outcomes, such as patient experience. Patients evaluate quality and value based on what they perceive, whether that perception is accurate or inaccurate. Perceived quality depends on the patient's observations about what input (effort) the HCP makes, what the actual output (outcome) of the care is, and how much (time, money, and effort) they sacrificed to obtain it.

To maximize value, healthcare managers (HCMs) must find ways to increase positive hard outcomes and softer perceptual measures while decreasing overall costs (see Graph. Average Costs to a Managed Care Organization). Hard and soft quality measures can be studied concerning certain 'domains' (including effectiveness, efficiency, timeliness, safety, equity, and customer-centeredness) or via other models (such as the 1 based on the work of Avedis Donabedian, discussed below).[6]

Evaluating Healthcare Quality

Although some tout American healthcare as having the highest quality, considerable data indicate that American HCPs have much room to improve the quality of their services.

The Commonwealth Fund, which freely reports its data analysis on its website, published its most recent study of eleven first-world countries' healthcare metrics in 2017. The United States placed first in preventing inpatient death after a stroke, tied for first in 5-year breast cancer survival, third in preventing inpatient death after a heart attack, and tied for third in 5-year colon cancer survival. However, the US placed last in health care outcomes (quality) regarding:

All the other disease-specific outcomes it measured

All-cause death amenable to healthcare intervention, and

Every measure of population health

Thus, unless a person bases their assessment on healthcare outcomes for several isolated disease processes, they should conclude that Americans do not come close to having the highest healthcare quality in the world.

The American public has approved a healthcare system that incentivizes new, often unproven technology and defensive medicine and that largely restricts healthcare coverage for people under age 65 to employment-related options. Given that this system costs Americans far more per citizen than the system of any other first-world country, the value of US healthcare likely does not rank among the top fifty countries in the world.

In the 1960s, following Deming's and Codman's lead, Avedis Donabedian began advocating that healthcare quality managers (HQMs) should quantify healthcare processes and measure healthcare structures and outcomes.[4] In other words, Donabedian argued that HQMs should measure behaviors and the completion of tasks (such as the rate of HCP compliance in completing checklists and the time HCPs spend performing specific tasks) in addition to tracking infrastructure items (such as computers available by the bedside and syringes used in a procedure) and patient disease state outcomes (such as morbidity and mortality). He wrote:

Most quality studies suffer from having adopted too narrow a definition of quality. They generally concern themselves with the technical management of illness and pay little attention to prevention, rehabilitation, coordination, and continuity of care or handling the patient-physician relationship.

In doing so, Donabedian called for HCPs and HQMs to investigate numerous healthcare delivery processes besides HCP technical proficiency to determine where process changes could improve quality. Since Donabedian's writing, the assessment of healthcare quality has often been separated into 3 primary buckets:

Structure: resources required to supply healthcare, including human and inanimate physical resources

Process: methods, behaviors, and strategies

Outcome: measurable results, which are more of a gold standard barometer than structure or process (see the definition of quality above)

In general, structure measurements evaluate the baseline resource commitments that determine whether or not a healthcare entity should be delivering care in the first place. Process measurements are often binary (yes/no) measures based on national standards, and outcome measures are usually evaluated as rates.

Care for acute myocardial infarction (AMI) patients is discussed as an example: Structural measurements include how well an emergency department is staffed with HCPs who are acquainted with the diagnosis and initial management of AMI and how well a catheterization laboratory is staffed with HCPs and equipment capable of providing diagnostic and interventional services. Process factors include the behaviors that enable the patient to quickly be sent for coronary artery catheterization, which can be measured in sum as the door-to-wire (patient arriving at the institution to cardiologist crossing the patient's coronary artery lesion with a wire) time or can be measured in smaller divisions of the overall process. A standard for door-to-wire time in the United States has been suggested as 90 minutes or less.[7]

Outcomes, the actual results of care, can be measured as the success rate of opening acute coronary occlusions, the average improvement of cardiac function measured by cardiac output, or the 24-hour death rate (the number of patients with AMIs who die within 24 hours divided by the total number of AMI patients treated).
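These process and outcome measures can be computed directly from routinely collected timestamps and dispositions. The following Python sketch is illustrative only; the field names, the sample data, and the application of the suggested 90-minute door-to-wire standard are assumptions for this example, not a reference implementation of any registry's methodology.

```python
from datetime import datetime

# Hypothetical AMI cases: arrival time, wire-crossing time, and 24-hour vital status
cases = [
    {"arrival": datetime(2022, 1, 5, 14, 0),  "wire": datetime(2022, 1, 5, 15, 10),  "died_24h": False},
    {"arrival": datetime(2022, 1, 9, 2, 30),  "wire": datetime(2022, 1, 9, 4, 15),   "died_24h": True},
    {"arrival": datetime(2022, 1, 12, 9, 45), "wire": datetime(2022, 1, 12, 10, 50), "died_24h": False},
]

# Process measure: proportion of cases meeting the suggested 90-minute door-to-wire standard
door_to_wire_min = [(c["wire"] - c["arrival"]).total_seconds() / 60 for c in cases]
met_standard = sum(t <= 90 for t in door_to_wire_min) / len(cases)

# Outcome measure: 24-hour death rate = deaths within 24 hours / total AMI patients treated
death_rate_24h = sum(c["died_24h"] for c in cases) / len(cases)

print(f"Door-to-wire times (min): {[round(t) for t in door_to_wire_min]}")  # [70, 105, 65]
print(f"Met 90-minute standard:   {met_standard:.0%}")                      # 67%
print(f"24-hour death rate:       {death_rate_24h:.0%}")                    # 33%
```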

Many healthcare organizations (HCOs) have heeded Donabedian's call, such as the Commonwealth Fund, the Leapfrog Group, and the National Institute of Standards and Technology. Persons in the US government monitor reports from these groups concerning their data. In the 20th and (primarily) 21st centuries, in response to the data and the observation that American healthcare quality lags behind that of many other countries, federal and other institutions began regulating certain facets of healthcare quality delivery.

Obligating Quality From Healthcare Providers

Obligations to provide quality (and value) to patient-customers derive from a hierarchy of standards, listed in subjective order of binding authority:

Legal standards for healthcare services and billing include statutes passed by legislatures, executive orders declared by the President and governors, and common laws issued by judges.

National medical organization standards for healthcare services include those of medical societies and organizations that monitor healthcare facilities, such as The Joint Commission (TJC).

Local (eg, hospital-level) policy standards for healthcare services and billing

Other ethical norms (eg, heuristics for ethical decision-making, such as those proposed by Beauchamp and Childress or Jonsen, Siegler, and Winslade).[1]

Healthcare Quality Standards

National standards in the US are determined by the federal legislature, the federal courts, the President, and the organizations that report to these entities (primarily the President).

Since 1953, the US Department of Health and Human Services (DHHS) has been led by an official who reports directly to the President as a Cabinet member. The DHHS is the executive branch department responsible for the portion of American healthcare that is federally controlled (much of American healthcare is not federally regulated). The DHHS consists of numerous appointed officers (such as the surgeon general) and eleven operating divisions, which include the Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), the Centers for Medicare & Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ). The AHRQ, which also interacts with the US Congress, oversees a network of Patient Safety Organizations (PSOs) and the Network of Patient Safety Databases (NPSD). It also funds the United States Preventive Services Task Force (USPSTF), created in 1984, which recommends which screening exams improve healthcare quality enough for the US government to pay for them or to require private insurance companies to pay for them. HCPs can submit information about individual patient adverse outcomes and rates to PSOs, which return feedback to HCPs on preventing future patient safety events.

The US Congress creates healthcare laws under the leadership of the Health, Education, Labor, and Pensions Committee in the Senate and the Ways and Means Committee in the House of Representatives. Congress usually creates laws intended to improve healthcare quality in response to reports by other organizations (discussed further below).

The US federal court system has made several decisions that have established national healthcare law, such as legalizing abortion in the first trimester of pregnancy, legalizing the withdrawal of life-sustaining care, and legalizing physician-assisted suicide if previously legalized by state statute. Its standard for evaluating healthcare quality is the language of pre-existing federal law. Of the 3 branches of government, the federal court system has the least impact on creating healthcare quality standards based on the concepts of quality promoted by Shewhart, Deming, and Donabedian.

The US executive and legislative branches interact with multiple independent, non-profit organizations when making policies on healthcare standards. Two examples of such organizations are the National Committee for Quality Assurance (NCQA) and the National Quality Forum (NQF).

The NCQA was founded by Margaret O'Kane in 1990 to publish evidence-based quality standards. It has since become the primary voluntary quality accreditation program for individual physicians, health plans, medical groups, and healthcare software companies. Its primary contributions include maintaining the Healthcare Effectiveness Data and Information Set (HEDIS) and the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. Health insurance plans achieve accreditation status by meeting the HEDIS criteria. The NCQA and AHRQ later developed other CAHPS surveys for doctors, nursing homes, long-term care, home health, and dialysis centers.

In partnership with the Hospital Quality Alliance, CMS began using a hospital rating and reporting system (HCAHPS, sometimes called Hospital Compare) in 2002. The CAHPS surveys are used by health insurance plans, including CMS' Medicare plan, the largest American health insurance plan, to obtain patients' feedback about services they have received. Insurance plans can financially penalize HCOs that have poor feedback. Scores from some CAHPS surveys are publicly available for consumers to use when selecting 1 HCO or HCP over another.

The National Quality Forum (NQF) is a non-profit membership organization established in 1999. It comprises over 400 for-profit and non-profit organizations that collect data and endorse healthcare standards. That is to say, the NQF does not generally create its own standards but endorses standards set by other entities, which then gain legitimacy as national standards. Membership organizations include consumers, health plans, medical professionals, employers, government, public health, pharmaceutical, and medical device organizations.

Regulation of Quality in Hospitals and Other Healthcare Organizations (HCOs)

HCOs must undergo periodic reviews and submit data to demonstrate that their HCPs are meeting quality standards to bill federal and state health insurance programs for services. Federal agents can "deem" HCOs to have met quality standards via direct investigation. However, federal and state governments more commonly allow independent agencies that set quality standards (such as Det Norske Veritas (DNV), the Healthcare Facilities Accreditation Program (HFAP), and The Joint Commission (TJC)) to act on their behalf to create and oversee HCO quality of care standards.

Given the complexity of this topic, the discussion focuses on TJC standards and does not address the nuances of non-uniformity among the standards created by TJC, other independent accrediting bodies, and the federal government. TJC publishes standards for HCPs in its Patient Safety System guidelines, which were last updated in January 2018. The standards are divided into 4 sections; all sections are titled "LD" (for leadership), followed by .01, .02, .03, or .04, and then followed by an additional set of numbers, such as LD.01.01.01. TJC says that each accredited HCO must:

Have its leaders create and maintain a culture of safety and quality throughout the organization

Be a learning organization (an organization that requires its personnel to learn continuously)

Encourage blame-free reporting of system and process failures and encourage proactive risk assessments by HQMs

Report certain adverse events, close calls, and hazardous conditions to TJC

Have an organization-wide patient safety program that includes performance improvement activities

Collect data to monitor quality performance

Hold persons accountable for their responsibilities. On this point, TJC states: "… a fair and just culture holds individuals accountable for their actions but does not punish individuals for issues attributed to flawed systems or processes."

Ironically, TJC does not hold itself accountable in many regards; it gives the HCOs that it accredits leeway in interpreting TJC standards, such that 1 TJC-accredited HCO may interpret a standard differently than another, and the 2 HCOs may use different standards. Furthermore, if a patient or a physician has a complaint regarding a TJC-accredited institution, TJC may refuse to act on the complaint if the institution has made a minimum effort to adopt any standard. More than 20% of US hospitals do not report to TJC (ie, do not follow its standards), which charges a fee for HCOs to become accredited; these hospitals either remain unaccredited or choose accreditation from a competing organization.

Regulation and Setting of Quality Standards

Individual state medical boards and court systems uphold standards for HCPs, with medical boards primarily enforcing HCP ethics standards and courts judging issues of HCP negligence in adherence to state laws.

Most healthcare practice standards are not established through the regulatory processes described above but are determined by guidelines from national medical specialty organizations. There is much overlap in this process. For example, organizations that publish standards for HCPs who practice vascular medicine in the United States include the following:

Society for Vascular Surgery (SVS)

American Venous Forum (the latter 2 organizations work in conjunction with the American College of Surgeons (ACS))

Society of Interventional Radiology (an organization that coordinates with the American College of Radiology)

American College of Cardiology

These organizations sometimes interact to set quality guidelines. The organizations may agree or disagree regarding what a standard should be; some facets of vascular medicine quality may be addressed by only 1 of these organizations.

While not 'regulation' per se, the federal government and private insurance companies incentivize HCPs to meet quality measures through pay-for-performance (P4P) programs that reduce the pay of HCPs who do not meet certain quality thresholds. The current CMS iteration of P4P is the Quality Payment Program (QPP). Most HCPs who bill CMS for services rendered receive payment tied to meeting criteria in the QPP's Merit-based Incentive Payment System (MIPS), which is 1 of 2 payment programs within the QPP. MIPS redistributes payments from HCPs who do not meet the criteria to HCPs who do meet the criteria; the criteria are extensive and are listed at: https://qpp.cms.gov/mips/quality-requirements. These requirements involve many types of structures, processes, and outcomes. Over time, CMS has provided HCPs more flexibility in meeting quality measures while increasing the overall requirements. Since CMS started HCP P4P in 2006, it has increased the percentage of HCP revenue from treating Medicare patients that is tied to P4P criteria to as high as 10%. This value is likely to continue to increase. CMS and private insurance companies also reimburse hospitals and other healthcare facilities using measures related to quality (P4P).

Healthcare Quality Management (HQM) and Healthcare Quality Managers (HQMs) Standards

Authority for determining standards in HQM is even less established and more fragmented than authority for determining standards in medical ethics and in medical and surgical care practices. HCPs with training in HQM comprise a tiny fraction of HCPs overall, including the HCPs who establish standard-of-care guidelines in the national societies listed above. Several quality management organizations that directly impact American HQM are listed below.

The Institute for Healthcare Improvement (IHI) is an independent non-profit organization (and PSO) started in 1991 by Donald Berwick, a pediatrician and administrator of CMS under President Barack Obama. The IHI is primarily an educational and advocacy organization; it published its Framework for Clinical Excellence in 2017.

The National Association for Healthcare Quality (NAHQ) is an independent non-profit organization established in 1976. It is the only organization offering a national certification for HQM professionals and is accredited (by the Institute for Credentialing Excellence).

The American Health Quality Association (AHQA) is a non-profit organization established in 1984 that advocates for HQM standards and encourages participation from third-party healthcare organizations. However, it does not publish standards or guidelines.

The American National Standards Institute (ANSI) is an independent non-profit membership organization established in 1918 that primarily endorses quality standards in technology, including health information technology. It formally serves as the American member of the International Organization for Standardization (ISO), an independent non-profit organization headquartered in Geneva, Switzerland, that publishes standards for technology-centered businesses. The ISO advocates for quality improvement using:

Process-focused approaches

Customer-focused outcomes

Coordination and engagement of involved persons/parties

Data collection, and

Evidence-based decision-making.[8]

In summary, healthcare quality regulation is fragmented between government agencies, agencies established to determine quality standards for HCPs themselves, agencies established to determine quality standards for HQMs themselves, and other agencies with their own agendas in healthcare quality.

Having discussed the setting of quality standards by organizations with nationwide reach, the discussion now turns to creating and analyzing healthcare quality.

Terms In Quality Delivery

The field of quality management (QM) has its own terminology, some of which derives from Japan, where Deming and others practiced QM to enable Japanese companies to manufacture goods and services, such as cars and electronic devices, with higher quality than those produced in the United States after World War II. Because QM is a branch of applied statistics and research, many concepts and terms derive from those fields, eg:

Continuous (also known as variables or quantitative) data vs. categorical (also known as attributes or qualitative) data types

Gaussian data distribution vs. Poisson, polynomial, or skewed data distributions

Relative risk and effect size

Adroitness in the practice of QM requires a grasp of statistics, which are not discussed here. Instead, quality management terms are introduced. The reader is advised to consult more comprehensive texts on quality management and statistics for more details on terminology and applications within the field.

Retrospectively Analyzing Causes (Independent Variables)

Persons performing QM devote considerable effort to analyzing all the steps in a process and looking for kaizen (opportunities for improvement). In performing this task, they often create lists and charts, many types of which are briefly explained below.

Supplier-input-process-output-customer (SIPOC, also known as COPIS) charts divide a process into 5 components that must be considered when determining how to improve the process to achieve quality in the customer's eyes.

Gap analysis is the process of identifying where specific barriers in a process can be removed. It explores the details of why present outcomes differ from desired outcomes. During gap analyses, people often communicate using a flow chart.

Flow charts illustrate how individual processes affect a larger overall process. Flow charts can aid a subjective evaluation of which individual processes can or should be improved to prevent errors and variation in outcomes or should be eliminated to reduce waste. A fishbone diagram, also called an Ishikawa diagram, is a flow chart designed to facilitate the analysis of causes and effects. Instead of organizing a complicated process linearly where each factor or process leads to another that leads to another, individual contributing factors are grouped into larger categories.

Aristotle provided an example of classifying causes in this method in his books Metaphysics and Physics by defining 4 causes of any 1 thing:

Efficient cause (its maker)

Final cause (its teleological purpose)

Formal cause (its design)

Material cause (its substance)

Modern attempts at classifying causes include such alliterations as 'patron/patient, people at work, provisions, place, and procedures' and 'methods, men, machines, and materials.' HQMs and HCPs often use fishbone diagrams when discussing causes of adverse patient outcomes during root cause analyses (described below).
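Before a fishbone diagram is drawn, it can be drafted as a simple mapping from cause categories to contributing factors. The categories and causes below are hypothetical examples for a delayed door-to-wire time, not content from any published root cause analysis.

```python
# Hypothetical fishbone (Ishikawa) outline for one effect, grouping contributing causes
# under broad categories (here: people at work, provisions, place, and procedures)
fishbone = {
    "effect": "Door-to-wire time exceeded 90 minutes",
    "people at work": ["Interventional team not in-house overnight", "Unclear activation responsibility"],
    "provisions": ["Single catheterization laboratory", "ECG machine unavailable at triage"],
    "place": ["Emergency department distant from the laboratory"],
    "procedures": ["No standing order for an immediate ECG", "Sequential rather than parallel consent and transport"],
}

# Print the diagram as an indented outline, one "bone" per category
print(fishbone["effect"])
for category, causes in fishbone.items():
    if category == "effect":
        continue
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```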

The causes of problems can be anticipated or reacted to; different QM techniques, such as failure modes effects analysis (FMEA) and root cause analysis (RCA), are available for anticipation or reaction.

FMEA is a method to design or redesign a process in anticipation of problems. The quality manager:

Lays out a complex process as a flow chart or as a table broken into individual sub-processes

Defines how frequently each sub-process tends to be problematic

Elucidates the various manners by which the sub-processes can fail, and

Assesses how easily a minor problem in a process can be detected before it results in a serious adverse outcome.

To improve the overall process and outcome, the sub-processes that fail often, are easy to fix, and are difficult to detect before resulting in a catastrophe should be prioritized for correction.
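One common way to operationalize this prioritization, assumed here for illustration, is to score each sub-process for how often it fails (occurrence), how serious the consequences are (severity), and how hard a failure is to detect before harm occurs (detection), and then rank the sub-processes by the product of the scores, often called a risk priority number. The sub-processes and 1-to-10 scores below are hypothetical, and this scoring convention is a widely used FMEA technique rather than a requirement stated in the text above.

```python
# Hypothetical FMEA table: each sub-process scored 1-10 for occurrence, severity, and detection
# (higher detection scores mean the failure is harder to catch before it causes harm)
subprocesses = [
    {"name": "Triage ECG obtained",           "occurrence": 4, "severity": 9,  "detection": 3},
    {"name": "Catheterization lab activated", "occurrence": 2, "severity": 10, "detection": 7},
    {"name": "Consent documented",            "occurrence": 6, "severity": 4,  "detection": 2},
]

# Risk priority number (RPN) = occurrence x severity x detection; higher RPNs are corrected first
for s in subprocesses:
    s["rpn"] = s["occurrence"] * s["severity"] * s["detection"]

for s in sorted(subprocesses, key=lambda s: s["rpn"], reverse=True):
    print(f"{s['name']:<30} RPN = {s['rpn']}")
```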

Root Cause Analysis and Action (RCA2, or just RCA) works backward from an outcome (usually an unwanted outcome, but the outcome could be the desired outcome) to its "root" causes, stopping short of reaching the unmoved mover postulated by Aristotle in the Metaphysics. RCAs at HCOs are usually held as meetings involving all persons identified as having a role in processes that led to the outcome. The primary function of an RCA is to identify gaps between the expected process/outcome/structure and the actual process/outcome/structure. RCA can be facilitated using Ohno and Sakichi Toyoda's 5 Why's Technique and James Reason's Swiss cheese model.[9] In 2015, the National Patient Safety Foundation (which merged with the IHI in 2017) published its opinion on when and how RCA should be performed in the medical setting. CMS published instructions on its website for HCPs on performing RCAs. TJC requires accredited HCOs to perform RCAs whenever 1 of 24 adverse events (called sentinel events) occurs.

The techniques discussed thus far do not require the use of data, but data can improve objectivity in analysis. HQMs can increase data capture for actions not captured automatically by electronic health records (EHRs) by having HCPs document processes they are involved with using checksheets or checklists (which can be converted into checksheets and uploaded into the EHR as needed).

The Oxford Centre for Evidence-Based Medicine (OCEBM) and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) systems categorize the quality of evidence based on the quality of data collection and analysis. OCEBM lists 5 levels, whereas GRADE lists 4 levels. Level 1 evidence is a randomized controlled trial or meta-analysis of randomized controlled trials. GRADE emphasizes that data evaluation based on meta-analyses may be no better than analysis of isolated studies. Unlike OCEBM, GRADE also does not consider experts' opinions to qualify as a level of evidence.

Although quality managers should make decisions based on the relative quality of the evidence available, it is not feasible for them to perform randomized controlled trials or cross-sectional studies in most circumstances. Quality managers can use other tools, such as histograms (or other bar charts) and scatter diagrams, to detect trends and relationships between independent and dependent variables when data are available in day-to-day practice. For categorical data, a Pareto chart (a type of bar chart) can be created to determine the priority in 'where the cause is' for a problem. In other words, a Pareto analysis orders the categories of data to show which category or categories are most associated with an outcome of interest.
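A Pareto analysis of categorical data can be produced in a few lines of code: count each category, order the categories from most to least frequent, and report cumulative percentages. The delay categories below are hypothetical and are used only to show the mechanics.

```python
from collections import Counter

# Hypothetical categorical data: the recorded reason for each delayed case
delay_reasons = [
    "lab occupied", "transport", "lab occupied", "activation delay", "lab occupied",
    "transport", "activation delay", "lab occupied", "consent", "transport",
]

# Pareto analysis: order categories by frequency and accumulate their share of all delays
counts = Counter(delay_reasons)
total = sum(counts.values())
cumulative = 0
for reason, n in counts.most_common():
    cumulative += n
    print(f"{reason:<18} {n:>2}  ({cumulative / total:.0%} cumulative)")
```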

Categorical and continuous data can be analyzed retrospectively with a moderate passage of time between the time of events and the time of their evaluation. However, quality improvement can (and should) be performed using prospective data collection to allow evidence-based decision-making as quickly as possible (in real-time or near real-time). In the healthcare industry, patients' lives are at stake.

Analyzing Processes In Real-Time To Detect And Modify Causes (Independent Variables)

Processes can be monitored in real-time or near-real-time when collecting data by comparing outcomes in 2 groups (control and experimental groups) or by comparing outcomes in 1 group before and after an intervention. The data can be presented in table format or by plotting the input or output variables over time in various line charts, particularly run and SPC charts (see figure).

A run chart is a line chart that enables the collected data points to be compared to a measure of central tendency (usually the data's mean) that the data have had in the past up to the most recent data point collected. Run charts can be used to evaluate fluctuations in categorical or continuous data (usually the latter) so that quality managers can quickly identify when a process has recently begun to deviate from the past trend. This allows the quality manager to attempt corrective action when a trend is undesired or to evaluate whether a recent corrective action has resulted in a new desired trend. Shewhart devised a way to convert a run chart into a more specialized illustration of a process trend once enough data had been collected to enable a statistical analysis of data variation from the measure of central tendency.
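A run chart can be generated from any series of process measurements by plotting the values in time order against a center line. The Python sketch below uses matplotlib (assumed to be available) and hypothetical monthly door-to-wire times plotted against their mean; the data are illustrative only.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly mean door-to-wire times (minutes), in time order
times = [82, 78, 85, 80, 76, 88, 91, 79, 84, 95, 97, 99]
mean_time = sum(times) / len(times)

# Run chart: data points in time order with a center line at the mean,
# so that a recent deviation from the historical trend is easy to see
plt.plot(range(1, len(times) + 1), times, marker="o", label="Door-to-wire time")
plt.axhline(mean_time, linestyle="--", label=f"Mean = {mean_time:.0f} min")
plt.xlabel("Month")
plt.ylabel("Minutes")
plt.title("Run chart (illustrative data)")
plt.legend()
plt.show()
```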

In other words, statistical process control charts (also called SPC charts, control charts, or Shewhart charts after their inventor) are run charts that require some additional statistical manipulation. SPC is the method most commonly used by QMs to make data-driven decisions with a degree of statistical certainty about processes at their institution. Therefore, some attention is given in detail here, although actual steps for incorporating SPC into clinical practice are not discussed.

SPC charts used in clinical practice usually provide a threshold of statistical 'certainty,' called a control limit or a process limit, for determining when data points diverge from the measure of central tendency. When the threshold is breached, the quality manager should search for the new disruptive independent variable (special cause variation). A small variance in the data that does not reach a high degree of statistical probability should be accepted as being due to prior fluctuations in the already established independent variables (termed random variation or common cause variation). Data analysis using SPC charts enables decision-making using a 'level of evidence' equivalent to a non-randomized, incompletely matched or unmatched prospective cohort study.
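A minimal sketch of the control-limit idea is shown below, assuming control limits set at 3 standard deviations from the baseline mean; this is a simplified convention (a true individuals chart would use the average moving range), and the data and threshold are illustrative, not a substitute for selecting the correct SPC chart subtype.

```python
import statistics

# Hypothetical weekly process measurements (eg, mean door-to-wire time in minutes)
data = [82, 78, 85, 80, 76, 88, 84, 79, 83, 81, 86, 112]

# Center line and control limits from the baseline period (all but the newest point),
# using mean +/- 3 standard deviations as the special-cause threshold
baseline = data[:-1]
center = statistics.mean(baseline)
sd = statistics.stdev(baseline)
upper, lower = center + 3 * sd, center - 3 * sd

latest = data[-1]
if latest > upper or latest < lower:
    print(f"Special cause variation: {latest} falls outside ({lower:.1f}, {upper:.1f})")
else:
    print(f"Common cause variation: {latest} falls within ({lower:.1f}, {upper:.1f})")
```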

In both types of comparison, relative risks and effect sizes can be determined, although their values are incompletely reliable due to bias and confounding variables. In a stable population of patient-customers, such as those who make return visits to a clinic (eg, patients with peripheral arterial disease returning for medical or other rehabilitative non-invasive therapies), data can be obtained pre-intervention and post-intervention on the same patient-customers. This enables a crossover study format, with patient-customers acting as their own controls, without interrupting the clinic's business operations or incurring costs such as those for new patient-customer recruitment. Given that making evidence-based interventions to improve outcomes is the primary goal of HQM (not achieving statistical certainty), analysis of internal organization data using SPC charts can help accomplish that goal in most circumstances.

SPC requires the user to select from many different chart subtypes, each designated for a specific use depending on the properties of the data being collected. SPC charts used in clinical practice usually incorporate the data's standard deviation as a value moving across time and compare the actual data to values expected in a Gaussian distribution (ie, a bell curve).[10] Determining how the actual data's distribution differs from a Gaussian distribution can be a relevant step in the analysis (and can facilitate selecting the best SPC chart to use). However, SPC charts can detect nonrandom process outcomes without the chart creator first having to verify that the assumptions of models based on Gaussian statistics are met. SPC charts are also more effective when 1 or more problems and the data collection methods for the problem(s) have already been established. When the problem(s) or the cause(s) of the problem is (are) not known, the QM should instead use flow charts, Pareto charts, probability plots, scatter diagrams, histograms, or affinity diagrams to guide the initial quality improvement measures.

In theory, all HQMs should:

Collect data, in a form amenable to statistical analysis, on all possibly relevant independent variables pertaining to the processes for which they are responsible for maintaining quality.

Perform statistical analysis using confidence levels and confidence intervals.

Monitor all recurring processes where corrective managerial actions are time-sensitive by performing run charts.

Switch from run charts to SPC charts for processes that meet the theoretical criteria for appropriately interpreting the SPC chart.

Of the many goals that HQMs and HCPs should have concerning the above-described techniques, the discussion is further centered around the following 4: