Through the ideas42 Seminar Series, we invite leading scholars to share their insights and what inspires their exploration of human behavior.
We were pleased to welcome Stefano DellaVigna, the Daniel Koshland, Sr. Distinguished Professor of Economics and Professor of Business Administration at the University of California, Berkeley. His recent paper, RCTs to Scale: Comprehensive Evidence from Two Nudge Units, compares the effects of behavioral science interventions reported in academic studies with those from trials run at scale by government nudge units. After giving a talk to the ideas42 team, Stefano was kind enough to share some of his thoughts on behavioral science:
What do you think researchers (or implementing organizations) need to do differently to incorporate behavioral science into programs and products more effectively?
One point suggested by our analysis is not about the exact design of the intervention, but about making sure that the intervention is well powered statistically. Our results suggest that one should aim for enough statistical power to detect effects as small as 1.5 percentage points; in practice, many interventions are run with lower statistical power, which increases the chance that findings will not replicate. This may mean running fewer treatment arms, or working hard to reach a larger population.
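To give a sense of what powering a study for a 1.5 percentage point effect implies, here is a minimal sketch of the standard two-proportion sample-size calculation (normal approximation). The 15% baseline take-up rate is an illustrative assumption, not a figure from the interview or the paper, and the function name n_per_arm is hypothetical:

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(p_control, lift_pp, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-proportion z-test
    (normal approximation) to detect an absolute lift of `lift_pp`
    percentage points over a control take-up rate of `p_control`."""
    p1 = p_control
    p2 = p_control + lift_pp / 100.0
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)            # target power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(np.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2))

# Assumed 15% baseline take-up: detecting a 1.5 pp effect at 80% power
# requires roughly 9,300 people per arm.
print(n_per_arm(p_control=0.15, lift_pp=1.5))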
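Under these assumptions, roughly 9,300 people are needed in each arm, and every additional treatment arm adds that many again, which is why fewer arms or a larger population is often the practical answer.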
In your view, how could we encourage researchers to share null or negative results?
A very important point. I think that journals should be open to precisely estimated null effects, but it will take time to get there. I have two suggestions. One, it may be easier to publish null results as part of a bundle of interventions, some of which may be null and others not. For example, Angela Duckworth and Katy Milkman have a great project on megastudies that effectively does this, bundling a number of interventions, even by different authors, in one setting and in one paper.
My second thought on this is very close to my heart — with Eva Vivalt, we are launching a platform for people to get expert forecasts of the results of their interventions. In many cases, comparing the result to the average expert prediction makes the finding not a null, but a rejection of the mean expert forecast.
The United States recently expanded unemployment insurance dramatically. What can behavioral economics tell us about how to design unemployment insurance?
There are many dimensions to this, but I would keep in mind that we probably do not want a drastic drop-off in the UI benefit level once the support legislated by Congress ends. A good place to start is understanding how people actually approach job search, which is quite different from what standard models suggest.