Dilemmas in AI Regulation: An Exposition of the Regulatory Trade-Offs Between Responsibility and Innovation

April 6th, 2022

Abstract. In this paper, we map out key strategic and normative dilemmas that regulators must navigate in governing the development and application of AI. We identify three such dilemmas. The first and most fundamental concerns the trade-off between Responsibility and Innovation: governments must decide how to balance the imperative of regulatory safety against the imperative to develop AI and proliferate its use. This first dilemma generates two more specific ones: the choice between Horizontal and Vertical regulation, and the formulation of risk assessment methods. We argue that these latter decision points depend fundamentally on the first: the structure of AI regulation should follow from the relative weighting that regulators give to the competing imperatives of Responsibility and Innovation. The task of this paper is to map out these decision points and to foreground their normative assumptions and implications.