AI Oversight or AI Overlooked? The 2024 Act’s Ambitious Start and Critical Misses

Published in The SCIF · National Security Institute
Apr 29, 2024


By Jeffrey Wells, NSI Visiting Fellow

The bipartisan Future of AI Innovation Act, introduced in the U.S. Senate in 2024, sets out an ambitious framework for managing rapid advances in artificial intelligence (AI). This significant legislative effort aims to guide AI development toward a more ethical and controlled path while building public confidence in policymakers. Its introduction marks a pivotal step toward harnessing AI's potential for societal benefit while addressing the challenges this transformative technology poses, establishing a structured approach to governmental oversight and public sector AI initiatives.

One of the Act’s significant strengths is its comprehensive strategy to promote innovation while mitigating risk. On the innovation side, it proposes a national AI research resource to democratize access to AI research and development tools, which could lower barriers to entry and spur innovation across sectors and regions. On the risk side, the Act takes a proactive stance, acknowledging that unchecked AI can invade privacy, entrench discrimination, and spread misinformation. By calling on the Commerce Department to establish an “Artificial Intelligence Safety Institute” and setting voluntary ethical standards for AI development, the legislation attempts to curtail malicious uses of AI and keep the technology aligned with societal values and norms.

However, the Act’s failure to address the private sector’s pivotal role in AI innovation and regulation is more than a minor oversight. The private sector, dominated by a few influential tech giants with superior financial resources, talent, and innovative capability, drives AI’s future. That outsized influence on AI’s trajectory demands urgent attention, yet the Act takes only a vague stance on the matter. Explicit regulatory guidelines are needed to ensure that private sector involvement benefits society rather than causing harm.

The Act’s reliance on voluntary compliance and self-regulation by private entities is a significant concern. This seemingly progressive approach could enable the misuse of AI in surveillance, the spread of misinformation, and the exacerbation of social inequalities, risks that undermine the Act’s objective of guiding AI toward an ethical path. To secure the public’s safety and trust, legislation and government policy must prioritize data privacy and consumer protection through strict, enforceable regulations.

To address these deficiencies, the Act should incorporate several enhancements:

● Mandatory Transparency Measures: Companies should be required to disclose the design, intent, and functionality of their AI systems publicly. This step would address the opaque nature of many AI systems, enhancing public and regulatory understanding and fostering trust.

● Robust Data Protection Laws: Stringent data privacy regulations are needed to oversee the collection and use of consumer data by AI systems. Strong data protection is vital to prevent privacy breaches and the manipulation of personal data.

● Accountability Standards: Clear legal frameworks should hold companies accountable for their AI systems, particularly when those systems cause harm or deviate from ethical standards.

The Future of AI Innovation Act is a critical response to rapidly advancing AI. Still, it requires significant reinforcement to regulate the decisive role of private corporations effectively and to achieve the legislation’s aim of guarding against the overpowering influence of large tech companies. The Act must extend beyond recommendations: lawmakers should urgently establish clear, stringent rules that address the pressing needs of the AI era and ensure that AI advancements benefit all.

Jeffrey R. Wells is a Visiting Fellow with the National Security Institute at George Mason University’s Antonin Scalia Law School, the Chief Security Officer for #AfghanEvac, and a Truman National Security Project Fellow.
