Can Limiting AI Power Through Hardware Control Lead To A Safer Future?

According to experts, one recommended strategy for making artificial intelligence (AI) safer is to regulate its “hardware”: the chips and data centers that power AI technologies, commonly referred to as “compute.” The proposal, from a collaboration involving prominent institutions including the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and OpenAI, calls for a global registry to track AI chips. It also introduces “compute caps,” limits intended to keep research and development (R&D) capacity from concentrating in a handful of nations and companies.
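To make the registry-plus-caps idea concrete, here is a minimal sketch in Python of how a chip registry with a per-owner compute cap might work. Everything here is illustrative: the report does not specify an implementation, and names such as `ChipRegistry`, `register`, and the cap value are invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the report's "global registry" and
# "compute cap" ideas; class, field, and method names are invented
# here, not taken from the report or any real system.

@dataclass
class Chip:
    serial: str   # unique hardware identifier
    flops: float  # peak compute, in FLOP/s
    owner: str    # registered operator (lab, company, state)

@dataclass
class ChipRegistry:
    cap_per_owner: float                # compute cap per owner, in FLOP/s
    chips: dict = field(default_factory=dict)

    def owner_compute(self, owner: str) -> float:
        """Total registered compute currently held by one owner."""
        return sum(c.flops for c in self.chips.values() if c.owner == owner)

    def register(self, chip: Chip) -> bool:
        """Register a chip only if the owner stays under the cap."""
        if self.owner_compute(chip.owner) + chip.flops > self.cap_per_owner:
            return False  # registration refused: cap would be exceeded
        self.chips[chip.serial] = chip
        return True

# Example: an arbitrary cap of 1e21 FLOP/s per owner.
registry = ChipRegistry(cap_per_owner=1e21)
print(registry.register(Chip("H100-0001", 1e15, "LabA")))  # True
```

The design point worth noting is that enforcement happens at registration time: a chip that would push its owner over the cap is simply refused, a software analogue of controlling supply at the point of sale.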

This approach focuses on the physical infrastructure of chips and data centers, which the authors argue is more practical to regulate than the intangible realms of data and algorithms. Haydn Belfield, a co-lead author from the University of Cambridge, emphasizes the central role of computing power in AI R&D, noting that an AI supercomputer is a vast network of AI chips consuming substantial amounts of power.

The report, authored by a team of 19 experts including ‘AI godfather’ Yoshua Bengio, draws attention to the staggering growth in the computing power demanded by AI models: the largest AI models now require 350 million times more compute than they did thirteen years ago. The authors argue that this exponential increase underscores the urgent need for governance to prevent both the centralization and the uncontrolled expansion of AI. Moreover, given the significant power consumption of some data centers, regulatory measures could also mitigate AI’s growing impact on energy grids.
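As a rough check on what a 350-million-fold increase over thirteen years implies, a few lines of Python recover the implied doubling time. This back-of-the-envelope arithmetic is ours, not the report’s:

```python
import math

growth = 350e6  # 350-million-fold increase, per the report
years = 13

doublings = math.log2(growth)                 # ~28.4 doublings in total
months_per_doubling = years * 12 / doublings  # spread over 156 months

print(f"{doublings:.1f} doublings -> one every {months_per_doubling:.1f} months")
# 28.4 doublings -> one every 5.5 months
```

In other words, frontier compute has been doubling roughly every five to six months, far faster than the roughly two-year cadence of Moore’s law, which helps explain why the authors see compute as a natural point of governance.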

Professor Diane Coyle, another co-author, highlights the advantages of hardware monitoring for maintaining a competitive market. She suggests that monitoring hardware would help competition authorities curb the market power of major tech companies, creating space for innovation and new market entrants.

In a parallel with nuclear regulation, the report proposes policies to improve global visibility of AI computing, allocate computing resources for societal benefit, and restrict computing power to mitigate risks. Belfield sums up the report’s key message: those regulating AI should focus on compute, the source of AI’s power, rather than attempt to govern AI models once they are deployed.

The nuclear analogy also raises hard questions about applying similar measures to AI. Who would lead a central agency responsible for limiting chip supply? Who would mandate such an agreement, and how would it be enforced? The report further asks how to prevent entities with strong supply chains from benefiting at the expense of their competitors, and it weighs the geopolitical landscape, particularly countries such as Russia and China and regions such as the Middle East, underscoring the need for global cooperation in regulating AI.

The report, which runs to more than 100 pages, offers insight into these complex issues and signals that the avenue is worth exploring. The comparison with nuclear power, however, carries a sobering implication: it may take a significant disaster before safety sentiment hardens into regulatory reality.
