Elon Musk called for some form of regulation of AI development as early as 2017. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could affect his own industry, but believed the risks of going entirely without oversight were too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk said the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid ... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[125][126][127]

Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence was in its infancy and that it was too early to regulate the technology.[127] Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly combined with some form of warranty.[128] Developing well-regulated weapons systems is in line with the ethos of some countries' militaries.[129] On October 31, 2019, the United States Department of Defense's (DoD's) Defense Innovation Board published a draft report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the DoD, with the aim of managing the control problem in all of the DoD's weaponized AI.[130]

Regulation of AGI would likely be influenced by the regulation of weaponized or militarized AI, i.e., the AI arms race, itself an emerging regulatory issue. Any form of regulation will likely be shaped by developments in leading countries' domestic policy towards militarized AI (in the US, under the purview of the National Security Commission on Artificial Intelligence[131][132]) and by international moves to regulate an AI arms race. Regulation of research into AGI focuses on the role of review boards and on encouraging research into safe AI, together with the possibility of differential technological progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or of conducting international mass surveillance to perform AGI arms control.[133] Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[133] AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications, combined with active monitoring and informal diplomacy by communities of experts and a legal and political verification process.[134][135]