Editor's note: This piece was authored by Cassandra Shand.
The AI revolution is transforming the policymaking landscape, and policymakers must adapt if they are to address the challenges it creates.
As Google CEO Sundar Pichai commented, AI has the potential to impact “every product of every company.” Recent AI advancements have raised concerns across industries, with over 27,000 individuals—including researchers, academics, and industry experts like Elon Musk and Steve Wozniak—signing an open letter published by the Future of Life Institute. The letter called for a temporary moratorium on training AI models more powerful than GPT-4 due to concerns over unforeseen consequences for humanity.
However, voluntary restraint on AI development is unlikely. Without government intervention, the incentives for rival AI firms, and for countries with less developed AI capabilities, to build more powerful models will outweigh public calls to restrict development.
This is where governments must step in, yet the current policymaking landscape is ill-equipped for the task. Policymakers lack the nuanced understanding of AI technology needed to craft regulations that benefit society while fostering the industry’s growth. To remedy this, we need more collaboration between technologists and policymakers.
The extensive integration of AI, AI-enhanced products, and AI-affected industries calls for targeted yet comprehensive policy measures that reflect an understanding of AI capabilities, industry landscapes, and potential effects. Policymakers must also consider AI’s dual utility for both civilian and military entities and continuously update policies to reflect global changes in the AI landscape. This could involve drafting and adopting international agreements or treaties related to AI.
While the US National Institute of Standards and Technology has issued its AI Risk Management Framework 1.0, this is only the first of many steps toward well-crafted AI policy. More regulatory efforts will likely follow, including retroactive, industry-specific AI restrictions as problems with AI adoption arise.
Optimal regulation should be proactive, much like the UK’s latest AI policy framework, fostering innovation and healthy competition in the AI industry while motivating companies to invest in their own in-house AI safety and ethics expertise. For AI firms, this would likely involve the robust hiring of philosophers, policy minds, and interdisciplinary scholars who could help temper unsafe AI applications at the company level.
At the policy level, we must address the deficit of technologists in the AI regulatory space. Policymakers and technologists must work together to develop policies that demonstrate a deep understanding of the technologies being regulated.
In addition to policymaker collaboration with AI firms, a concerted government or institutional effort to make AI policy careers more lucrative could help attract financially driven talent to the policy arena. Defining roles for technologists interested in public service could be a good place to start. We need to attract and invest in visionary talent with an eye for technology trends who can help anticipate future policy challenges in tech. Each of these goals could be advanced by building new internships, fellowships, or scholarships that help technologists learn about the policy space and how best to apply their talent and insights.
In the long term, society can lay the groundwork for a flourishing policy community by encouraging interdisciplinary studies in policy and technology, and fostering expertise in both fields. Universities and companies can contribute by valuing public contributions made by technologists in the policy space. Universities should prioritize academic, policy, and industry collaboration, particularly in AI, and stress the importance of policy as a career field that directly impacts the tech space. After all, a career rooted in understanding AI will likely withstand any economic shocks brought on by AI advancements. Americans should also value technology expertise in the voting booth, which may help build a foundation for tech-knowledgeable legislation.
Lawmakers and society at large should recognize that technology policy is no longer a task just for the nerds, but for any citizen wishing to make a lasting impact on the future of humanity.
Cassandra Shand is a Ph.D. candidate at the University of Cambridge and a Young Voices Innovation Fellow. Twitter: @CassandraShand.