Safety Not Guaranteed: International Races for Risky Technologies

Eoghan Stafford, Robert F. Trager, Allan Dafoe

November 2022

Abstract

The great powers appear to be entering an era of heightened competition to master security-relevant technologies in areas such as AI. This is concerning because deploying new technologies can create substantial shared risks, such as inadvertent crisis escalation or uncontrolled proliferation. We analyze a strategic model to determine when states deploy technologies before learning how to minimize their risks. When competitors are moderately adversarial or the technology laggard is not very capable, the laggard does not use a risky technology unless it catches up to the technology leader. By contrast, if competitors are highly adversarial and the laggard is closer to the leader’s capability level, the laggard is willing to cut corners to gamble for advantage, so that the shared risk falls if the laggard catches up. Further, when competitors are not deploying the riskiest technologies, steps to make those technologies safer will be attenuated or reversed by risk compensation.

Preliminary. Please do not cite or distribute without permission. We are grateful for outstanding research assistance by Ben Harack and Maximilian Negele and for very helpful feedback from Eric Gartzke, Nadiya Kostyuk, and attendees of our presentations at the Future of Humanity Institute, the Centre for the Governance of AI, Dartmouth College, and the American Political Science Association 2021 Annual Meeting.