A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report that calls for regulation of “compute” – the hardware that underpins all AI – to help prevent artificial intelligence misuse and disasters.

Experts say that the largest AI models now use 350 million times more compute than they did thirteen years ago.

Other technical suggestions made by the report include “compute caps” – built-in limits on the number of chips each AI chip can connect with – and distributing the “start switch” for AI training among multiple parties, allowing a digital veto of risky AI before it is fed data.
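To make the first idea concrete, the sketch below shows how a connectivity cap might be checked in cluster-management software; the cap value, data structures, and function names are illustrative assumptions, not anything specified in the report.

```python
# Illustrative sketch of a "compute cap": a per-chip limit on how many
# other chips it may be networked with. All values here are assumptions.

MAX_PEERS_PER_CHIP = 8  # hypothetical built-in fan-out limit

def over_cap_chips(links: dict[str, set[str]]) -> list[str]:
    """Return chip IDs wired to more peers than the assumed cap allows."""
    return [chip for chip, peers in links.items()
            if len(peers) > MAX_PEERS_PER_CHIP]

# Chip "a0" connects to nine peers, so it violates the cap.
topology = {"a0": {f"b{i}" for i in range(9)}, "b1": {"a0"}}
print(over_cap_chips(topology))  # -> ['a0']
```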

The researchers argue that AI chips and data centers offer more effective targets for auditing and AI safety governance, as these assets have to be physically possessed, whereas the other elements of the “AI triad” – data and algorithms – can, in theory, be endlessly duplicated and disseminated.

Experts point out that the powerful computing chips needed to run generative AI models pass through a highly concentrated supply chain, dominated by just a handful of companies – making the hardware itself a strong intervention point for risk-reducing AI policies.

The report was authored by nineteen experts and co-led by three University of Cambridge institutes – the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy – together with OpenAI and the Centre for the Governance of AI.

“Artificial intelligence has made staggering advances in the past decade, much of which has been enabled by the rapid increase in computing power applied to training algorithms,” said Haydn Belfield of Cambridge’s LCFI, a co-lead author of the report.

“Governments are rightly concerned about the potential consequences of AI, and are looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips in large data centers, often the size of several football fields, consuming tens of megawatts of electricity,” Belfield said.

“Computing hardware is visible, quantifiable, and its physical nature means constraints can be imposed in a way that may soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” began in earnest, with the amount of “compute” used to train the largest AI models doubling roughly every six months since 2010. The largest AI models now use 350 million times more compute than they did thirteen years ago.
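Those two figures can be sanity-checked with a few lines of arithmetic. The calculation below is our own illustration, not from the report; it shows that a six-month doubling cadence over thirteen years lands within the same order of magnitude as the quoted 350-million-fold increase.

```python
import math

# Back-of-the-envelope check of the growth figures quoted above.
years = 13
doubling_months = 6

doublings = years * 12 / doubling_months   # 26 doublings in 13 years
growth = 2 ** doublings                    # ~6.7e7, i.e. tens of millions-fold
print(f"{growth:.2e}")                     # 6.71e+07

# The 350-million-fold figure implies a slightly faster cadence:
implied = years * 12 / math.log2(350e6)    # ~5.5 months per doubling
print(f"{implied:.1f} months per doubling")
```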

Government efforts around the world over the past year – including the US Executive Order on AI, the EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud computing market is dominated by three companies, known as “hyperscalers”: Amazon, Microsoft and Google.

Co-author Professor Diane Coyle, from Cambridge’s Bennett Institute, said: “Monitoring the hardware would greatly help competition authorities keep a check on the market power of the biggest tech companies, and so open up the space for more innovation and new entrants.”

The report provides “blueprints” of possible directions for computational governance, highlighting the analogy between AI training and uranium enrichment.

“The international regulation of nuclear supply focuses on a critical input that has to go through a long, difficult and expensive process,” Belfield said. “A focus on compute would allow AI regulation to do the same.”

Policy ideas fall into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; and enforcing limits on computing power.

For example, a regularly audited international AI chip registry, requiring chip producers, sellers, and resellers to report all transfers, would provide precise information on the amount of compute held by nations and corporations at any one time.

The report even says that a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling.”
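As a rough illustration of what such a registry might track, here is a toy in-memory model combining a transfer ledger with per-chip identifiers; the schema, field names, and chip ID format are all our assumptions, since the report does not specify a design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Toy model of a chip registry: every transfer is logged against a
# unique per-chip identifier, so the latest holder can always be found.

@dataclass
class Transfer:
    chip_id: str       # unique identifier assigned to each chip
    seller: str
    buyer: str
    timestamp: datetime

@dataclass
class ChipRegistry:
    ledger: list[Transfer] = field(default_factory=list)

    def record_transfer(self, chip_id: str, seller: str, buyer: str) -> None:
        self.ledger.append(
            Transfer(chip_id, seller, buyer, datetime.now(timezone.utc)))

    def current_holder(self, chip_id: str) -> str | None:
        """Latest buyer on record for the chip, if any transfer exists."""
        buyers = [t.buyer for t in self.ledger if t.chip_id == chip_id]
        return buyers[-1] if buyers else None

registry = ChipRegistry()
registry.record_transfer("CHIP-0001", "FabCo", "CloudCorp")  # hypothetical names
print(registry.current_holder("CHIP-0001"))  # CloudCorp
```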

“Governments already track many economic transactions, so it makes sense to monitor a commodity as rare and powerful as an advanced AI chip,” Belfield said. However, the team pointed out that such approaches could lead to a black market in undetectable “ghost chips”.

Other proposals to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if large-scale compute investments are made without sufficient transparency.
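A simple form of such reporting would be a compute threshold above which a cloud provider must notify regulators. The sketch below uses the 10^26-operation trigger from the 2023 US Executive Order as its example value; the function name and interface are our own assumptions, and the report itself does not fix a number.

```python
# Toy illustration of threshold-based reporting by a cloud provider.
# The 1e26 figure mirrors the reporting threshold in the 2023 US
# Executive Order on AI; everything else here is illustrative.

REPORTING_THRESHOLD_FLOP = 1e26

def requires_report(total_training_flop: float) -> bool:
    """Flag training runs large enough to warrant notifying regulators."""
    return total_training_flop >= REPORTING_THRESHOLD_FLOP

print(requires_report(5e25))  # False: below the assumed threshold
print(requires_report(2e26))  # True: large enough to report
```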

“Compute users will engage in a mix of beneficial, benign, and harmful activities, and determined groups will find ways to get around restrictions,” Belfield said.

“Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These may include physical limits on chip-to-chip networking, or cryptographic technology that allows AI chips to be remotely disabled in extreme circumstances.

One proposed approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
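A minimal sketch of such a multiparty “start switch” follows: training compute unlocks only when a quorum of independent keyholders consents. A real system would rest on cryptography such as threshold signatures; the quorum size, party names, and logic here are purely illustrative.

```python
# Toy multiparty "start switch": compute unlocks only with a quorum of
# approvals from recognized keyholders. Purely illustrative logic; a real
# scheme would use cryptographic threshold signatures, not a counter.

REQUIRED_APPROVALS = 3  # assumed quorum

def can_start_training(approvals: set[str], keyholders: set[str]) -> bool:
    """Unlock only if enough distinct, recognized keyholders consent."""
    return len(approvals & keyholders) >= REQUIRED_APPROVALS

keyholders = {"regulator", "lab", "auditor", "host-country"}  # hypothetical
print(can_start_training({"regulator", "lab"}, keyholders))             # False
print(can_start_training({"regulator", "lab", "auditor"}, keyholders))  # True
```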

AI risk mitigation policies could also prioritize compute for research likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that pool compute resources to tackle global problems.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully developed proposals, and that they all carry potential downsides – from the risk of leaking proprietary data to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including exclusion of small-scale and non-AI computing, regular review of compute thresholds, and attention to privacy protection.

Belfield added: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution.

“If computing remains unregulated, it poses serious risks to society.”

Source: University of Cambridge


