Lessons Learned From COVID-19 Modeling Project: Q&A With David Rubin

Published on July 12, 2021 in Cornerstone Blog

David Rubin, MD, MSCE

Editor’s Note: At the beginning of the COVID-19 pandemic last year, PolicyLab launched a model to forecast COVID-19 transmission, called COVID-Lab, using real-time data about physical distancing and weather for as many as 821 counties throughout the country. The data from the weekly models informed government response plans at the local, state, and federal levels, and the mainstream media widely reported the findings. Weekly modeling updates have stopped as the COVID-19 situation has improved, but the team will update the models at important moments this summer and fall. We talked with David Rubin, MD, MSCE, director of PolicyLab, a Center of Emphasis in the Research Institute, to recap the project, share the lessons learned, and discuss how the models can be used in the future.

Tell us what motivated you to launch this project at CHOP. What made CHOP the ideal place to conduct this type of research?

I work in population science, and over the years, we’ve developed a lot of science around quantitative methods, particularly around local area variation in services to families. People often do research at a macro level, not thinking about how public health is really local. Most of the decisions that guided this pandemic were local decisions. To some degree, there was national coordination and universal masking. But, in terms of the day-to-day response and the impact on schools, on families, and on decision-making, we needed a locally responsive model that incorporated not just the transmissibility of this virus, but known behavioral and environmental contexts that influence transmission at the community level.

We were very adept and had a lot of skill, both from a biostatistical and epidemiological perspective, to be able to do that. It began like an early virtual water cooler conversation; it was like all hands on deck. We brought together an interdisciplinary group of researchers with different backgrounds and expertise. It was sort of serendipity and capability. We developed this model, which I believe to this day has been one of the better models in the country in terms of navigating the local context and how that’s influenced people’s journeys through the pandemic.

How was this project able to come together so quickly?

This is a testament to the value of the Research Institute, and the relationships built among a strong health services research group including PolicyLab, Clinical Futures, the Biostatistics and Data Management Core, the Department of Biomedical and Health Informatics, etc. There’s already a collaborative aspect here. We have the ability to bring those people together for specific issues, in this case, the COVID-19 pandemic. It may be lateral to what they do in their primary research, but collectively, we build something bigger than the individual parts. That’s what happened last March. It was out of a sense of purpose and professional friendships that this all came together. People were pretty motivated around the same cause, which was to help people navigate through the pandemic with actionable data.

What are some of the major research lessons learned as a result of this project?

Number one is that there was a proclivity to think about the transmissibility of this virus just from the perspective of the strain or the virus and the viral properties. I think an important lesson was that the transmissibility day to day, or what we call the instantaneous reproduction number, is really influenced by a lot of factors.
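To make the concept concrete: the instantaneous reproduction number (R_t) is commonly estimated as new cases divided by the "total infectiousness" of recently infected people, weighted by the serial-interval distribution (in the style of Cori et al., 2013). The sketch below is purely illustrative, not the PolicyLab COVID-Lab model; the case counts and serial-interval weights are made-up example values.

```python
# Illustrative sketch of a simplified instantaneous reproduction number
# (R_t) estimate. NOT the COVID-Lab model; inputs are hypothetical.

def instantaneous_rt(incidence, si_weights):
    """Estimate R_t as today's cases divided by the weighted sum of
    recent cases (the population's 'total infectiousness')."""
    rts = []
    for t in range(len(si_weights), len(incidence)):
        # Weight recent cases by how likely someone infected s days
        # ago is to transmit today (the serial-interval distribution).
        infectiousness = sum(
            incidence[t - s] * si_weights[s - 1]
            for s in range(1, len(si_weights) + 1)
        )
        rts.append(incidence[t] / infectiousness if infectiousness else float("nan"))
    return rts

# Hypothetical daily case counts and a toy 4-day serial-interval distribution
cases = [10, 12, 15, 20, 26, 34, 45]
weights = [0.2, 0.4, 0.3, 0.1]  # probabilities; must sum to 1

print(instantaneous_rt(cases, weights))  # values above 1 indicate growing transmission
```

In this framing, the behavioral and environmental factors Rubin describes (less distancing, weather) act on transmission by shifting the day-to-day case counts, so the same virus can show very different R_t values in different communities.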

It certainly came up this spring when everyone was blaming the variant strains over the winter and early spring for the increased transmission we saw throughout the country. Our data would lead us to believe that may have been a component, but the larger component was that people were returning to normal lives. They were distancing less. People were fatigued. So, there are environmental and behavioral factors that can influence the transmissibility of the virus to a much greater degree than the virus’s innate properties alone. That was an important cornerstone of our model.

In terms of interpreting our data, I think we learned to be less prescriptive over time, to help people understand what the data were showing and where the uncertainties were, and to provide people with a range of choices. We learned to appreciate that people — policymakers, school leaders, communities, individuals and families — come from different perspectives of risk tolerance, and they needed to understand the data to make their own informed choices.

Can this model be utilized for future outbreaks and pandemics?

We’ve set up a methodology that will allow us to jump out of the gate a lot quicker. If there’s clean, reportable data, we could actually pivot to influenza or to other seasonal viruses or seasonal pandemics and reestablish this model. We have a number of peer-reviewed papers that are coming out, including on our methodological work, which I hope will form some of the foundation that leads the way, not just for our group but for other groups, to think about forecast models and pandemics in the future.

Read more about other lessons learned on PolicyLab’s blog.