Objective: The aim of this project was to understand how patient trust in U.S. hospitals changed in the lead-up to and during the COVID-19 period. The primary metric used was the Net Promoter Score (NPS), a widely used indicator of consumer loyalty and trust across many industries.
Data Source: The data was derived from the HCAHPS survey, which evaluates patient perspectives on hospital care, covering report years 2015 to 2023.
Why NPS?
The special emphasis on NPS was clear to me from the beginning. Only two measures in this dataset can be treated as dependent/target variables: overall satisfaction and willingness to recommend. Reasoning about it logically (and later performing separate regression/path analyses), I confirmed that overall satisfaction acts as a mediator between the other patient satisfaction measures (independent variables) and willingness to recommend: willingness to recommend depends on the level of overall satisfaction, not vice versa. Being familiar with the willingness-to-recommend metric from my daily work, I recognized it as the Net Promoter Score (NPS), widely used to measure trust and loyalty. By taking the difference between the top-box percentage (Promoters) and the bottom-box percentage (Detractors), I derived the NPS metric from this dataset.
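To make the derivation concrete, here is a minimal Python sketch. The column names and values are hypothetical; the underlying data reports the percentage of patients who would definitely recommend the hospital (top box) and the percentage who would not (bottom box).

```python
import pandas as pd

# Hypothetical column names for the recommendation distribution:
# top box = Promoters ("would definitely recommend"),
# bottom box = Detractors ("would probably/definitely not recommend").
df = pd.DataFrame({
    "hospital": ["A", "B", "C"],
    "promoters_pct": [72, 68, 75],
    "detractors_pct": [5, 8, 4],
})

# NPS = % Promoters - % Detractors
df["nps"] = df["promoters_pct"] - df["detractors_pct"]
print(df)
```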
Why the focus on the pandemic?
I started by analyzing the trend of various patient satisfaction indicators from 2015 to 2023. This allowed me to identify if, and when, indicators changed. I saw that 2021 was the year when everything changed. Recognizing that the report year differs from the data collection year (it lags by one year), it became apparent that report year 2021 actually represented 2020, the onset of the COVID-19 pandemic. From there, the idea was born to look into how patient satisfaction measures and NPS changed, and whether COVID-19 truly had an impact.
What does the project title mean?
The title holds significant weight for me, as it's probably the most important element in capturing readers' attention and engagement. I tried to weave in elements of the pandemic and the notion of recommendation, all within a healthcare framework. After many hours of thinking, I finally came up with Pandemic Prescription: Are U.S. hospitals losing their recommended dose of trust? The word "prescription" suggests a set of recommendations or solutions designed to address the challenges faced by U.S. hospitals during the pandemic era. Just as a patient gets a prescription to address their health concerns, this report provides a "prescription", or roadmap (the final section of my report), for hospitals to improve patient trust and satisfaction. The phrase "recommended dose" ties back to the healthcare theme. Just as a doctor prescribes a recommended dose of medicine to a patient, hospitals rely on a "dose" of trust from their patients to function effectively. By questioning whether hospitals are losing this trust, the title highlights the central theme of the analysis: exploring whether patient trust has been affected by the COVID-19 pandemic.
How is the project structured?
The project starts at a high level and then drills into depth based on the findings from that high-level view.
Section 1: The pandemic’s impact: A shift in patient satisfaction indicators
This section shows a historical trend analysis of various patient satisfaction indicators from 2015 to 2023. On the left, a historical trend graph is displayed, where all metrics except the NPS are de-emphasized to spotlight the NPS, since it is the focus of the entire project. Years during the pandemic, starting from 2021, are distinguished with dashed lines. On the right, I present the change in each indicator during the pandemic relative to previous years, emphasizing three notably declining indicators. The COVID-era NPS values are presented in a separate bar chart to contextualize the percentages. This design helps readers derive pre-pandemic values by adding back the p.p. differences (for example, a COVID-era NPS of 65 with a -3 p.p. change implies a pre-pandemic NPS of 68).
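For illustration, here is a sketch of that chart design in matplotlib, assuming a hypothetical national trend file with one column per indicator:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical file: report year as index, one column per indicator plus "nps".
trend = pd.read_csv("hcahps_national_trend.csv", index_col="year")

fig, ax = plt.subplots()
for col in trend.columns:
    if col != "nps":
        ax.plot(trend.index, trend[col], color="lightgrey")  # de-emphasized metrics
ax.plot(trend.index, trend["nps"], color="tab:blue", linewidth=2, label="NPS")

for year in range(2021, 2024):          # pandemic-era report years
    ax.axvline(year, ls="--", color="grey")
ax.legend()
plt.show()
```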
In this section, I don't claim the pandemic is the reason for the NPS drop. Instead, I suggest it might have impacted patient views. In the next section, I use statistical methods to further explore this idea.
Section 2: A holistic understanding of key drivers shaping patient trust
In this section, I aimed to identify which indicators truly affect the NPS, since not every underperforming indicator necessarily impacts it. Through a correlation and quadrant analysis, I assessed how each indicator relates to NPS and how each performs. This helped pinpoint the main contributors to patient trust. Three indicators stood out as important but underperforming (the top-left quadrant). To add more context, I also charted how these indicators shifted during the pandemic versus before it.
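A minimal sketch of the quadrant analysis, assuming a hypothetical state-year file where "importance" is proxied by each indicator's correlation with NPS and "performance" by its average score:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical frame: one row per state-year, one column per indicator plus "nps".
df = pd.read_csv("hcahps_state_year.csv")
indicators = [c for c in df.columns if c not in ("state", "year", "nps")]

summary = pd.DataFrame({
    "importance": [df[c].corr(df["nps"]) for c in indicators],  # correlation with NPS
    "performance": [df[c].mean() for c in indicators],          # average score
}, index=indicators)

# Quadrant boundaries at the medians: top-left = important but underperforming.
x0, y0 = summary["performance"].median(), summary["importance"].median()
ax = summary.plot.scatter(x="performance", y="importance")
ax.axvline(x0, ls="--")
ax.axhline(y0, ls="--")
for name, row in summary.iterrows():
    ax.annotate(name, (row["performance"], row["importance"]))
plt.show()
```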
After identifying the most important factors for NPS/trust, I used a multivariate regression analysis to test their significance against the NPS as the dependent variable. I used the state-level data and transformed it so that I could perform the regression analysis, resulting in 458 unique state-year observations. To test whether the pandemic had an impact, I also included a dummy variable in the model (1 = COVID-19 era, 0 = pre-COVID-19 era). Initially, I explored more complex models with interaction terms (i.e., where the effect of one variable depends on the level of another). However, despite their complexity, these models did not offer a significant improvement in fit over simpler models. Diagnostic plots, including a 4-in-1 plot, were examined to ensure the assumptions of regression were met, such as normally distributed residuals, linearity, and homoscedasticity. The final model was chosen for its simplicity and interpretability, without compromising the accuracy of insights. This section concludes with the regression results and a discussion of how the included variables influence NPS/trust.
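A hedged sketch of the model setup using statsmodels; the predictor names (indicator_1 to indicator_3) are placeholders for the three indicators identified in the quadrant analysis:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# Hypothetical columns; 458 state-year observations.
df = pd.read_csv("hcahps_state_year.csv")
df["covid_era"] = (df["year"] >= 2021).astype(int)  # report year 2021+ = COVID era

# Additive model; interaction terms (e.g., indicator_1:covid_era) were tested
# but did not improve fit enough to justify the extra complexity.
model = smf.ols(
    "nps ~ indicator_1 + indicator_2 + indicator_3 + covid_era", data=df
).fit()
print(model.summary())

# Residual diagnostics, standing in for a 4-in-1 plot:
sm.qqplot(model.resid, line="s")                    # normality of residuals
plt.figure()
plt.scatter(model.fittedvalues, model.resid, s=10)  # linearity / homoscedasticity
plt.axhline(0, ls="--")
plt.show()
```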
Section 3: Key states to watch - identifying drifters and thrivers
In this section, I wanted to focus on a state-level comparison. I calculated the average (national) decrease in NPS and used it as a "benchmark" to identify which states are drifters and thrivers. Drifters are states with a notable negative deviation in NPS scores after the onset of the COVID-19 pandemic. Thrivers, on the other hand, are states that, despite the challenges brought by the pandemic, managed to either maintain their previous NPS scores or even improve them (2 states had no change and 2 states improved). By combining the deviation from the national average with the difference in NPS during versus before the pandemic, I was able to identify states that notably diverged from the "norm".
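A minimal sketch of the benchmark logic, again with hypothetical column names:

```python
import pandas as pd

# Flag "drifters" and "thrivers" by comparing each state's NPS change
# (COVID era vs. pre-pandemic) to the national average change.
df = pd.read_csv("hcahps_state_year.csv")
df["era"] = df["year"].ge(2021).map({True: "covid", False: "pre"})

by_state = df.pivot_table(index="state", columns="era", values="nps", aggfunc="mean")
by_state["change"] = by_state["covid"] - by_state["pre"]

national_change = by_state["change"].mean()  # the national "benchmark"
by_state["deviation"] = by_state["change"] - national_change

drifters = by_state[by_state["deviation"] < 0].sort_values("deviation")
thrivers = by_state[by_state["change"] >= 0]  # maintained or improved NPS
```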
Section 4: Roadmap to rebuilding patient trust
The last section wraps up the three previous sections by focusing on recommendations that are specific, relevant, and directly tied to the challenges I've identified. The first three challenges derive from the core insights of the report. I introduced a fourth one, even though it's not data-driven: through the HCAHPS documentation, I observed inconsistencies in feedback collection timing, ranging from 48 hours to 6 weeks post-discharge. Such variability might skew patients' memories, potentially compromising the accuracy and comparability of feedback.
Why did I overlook response rates?
While analyzing the data, I saw a trend of diminishing response rates. I chose not to place emphasis on this aspect, and here's why. A common misconception is equating higher response rates with better survey quality. However, the quality of a survey isn't decided by its response rate but by its methodology and sampling techniques; these determine whether results can be extended to a broader population. For instance, a survey with a 10% response rate built on random sampling can be more reliable than one with a 90% rate drawn from a self-selected group, because its findings generalize better. It's not just about how many respond, but who responds and how representative those responses are. HCAHPS collects over 2 million responses annually. Even with diminishing response rates, this volume ensures a broad representation of patient experiences and keeps the estimates statistically robust.
In light of this, I looked into the data to explore the correlation between response rates and the NPS. I found that there wasn't a strong relationship between the two metrics, as indicated by a correlation coefficient of r = 0.13. This result confirmed my decision not to focus on response rates in my analysis.
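The check itself is a one-liner, assuming a hypothetical response_rate column in the same state-year frame:

```python
import pandas as pd

# Pearson correlation between response rate and NPS; the reported value was ~0.13.
df = pd.read_csv("hcahps_state_year.csv")
r = df["response_rate"].corr(df["nps"])
print(f"r = {r:.2f}")
```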
Tools used
I used Python for the regression analysis, quadrant analysis and some of the charts. I used PowerPoint for overall design.