
A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” – the hardware that underpins all AI – to help prevent artificial intelligence misuse and disasters.

Other technical proposals offered by the report include “compute caps” – built-in limits on the number of chips each AI chip can connect to – and distributing a “start switch” for AI training across multiple parties, allowing a digital veto over risky AI before it is fed data.

The researchers argue that AI chips and data centers offer more effective targets for auditing and AI safety governance, as these assets have to be physically possessed, while the other elements of the “AI triad” – data and algorithms – can in theory be endlessly copied and disseminated.

Experts point out that the powerful computing chips needed to run generative AI models are built through highly concentrated supply chains, dominated by just a handful of companies, making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published on February 14, was authored by nineteen experts and jointly led by three institutions at the University of Cambridge – the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER), and the Bennett Institute for Public Policy – together with OpenAI and the Centre for the Governance of AI.

“Artificial intelligence has made staggering advances in the past decade, much of which has been enabled by the rapid increase in computing power applied to training algorithms,” said co-lead author Haydn Belfield, from Cambridge’s LCFI.

“Governments are rightly concerned about the potential consequences of AI, and are looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips in large data centers, often the size of several football fields, consuming tens of megawatts of electricity,” Belfield said.

“Computing hardware is visible, quantifiable, and its physical nature means constraints can be imposed in a way that may soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” began in earnest: the amount of compute used to train the largest AI models has doubled roughly every six months since 2010. The largest AI models now use 350 million times more compute than those of thirteen years ago.
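Those two figures are mutually consistent, as a quick back-of-the-envelope check shows: a 350-million-fold increase over thirteen years works out to a doubling time of roughly five and a half months. The sketch below is a minimal check; the numbers are the report’s, the arithmetic is not.

```python
import math

# Figures quoted above: compute for the largest models grew by a factor
# of ~350 million over thirteen years.
growth_factor = 350e6
years = 13

# Number of doublings implied by that growth factor.
doublings = math.log2(growth_factor)           # ~28.4 doublings

# Implied doubling time in months, close to the "every six months" quoted.
doubling_time_months = years * 12 / doublings  # ~5.5 months

print(f"{doublings:.1f} doublings, one roughly every "
      f"{doubling_time_months:.1f} months")
```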

Over the past year, government efforts around the world – including the US Executive Order on AI, the EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have started to focus on compute when considering AI governance.

Outside of China, the cloud computing market is dominated by three companies, known as “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities keep the market power of the biggest tech companies in check, and so open up space for more innovation and new entrants,” said co-author Professor Diane Coyle, from Cambridge’s Bennett Institute.

The report provides “blueprints” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “The international regulation of nuclear supplies focuses on a vital input that has to go through a long, difficult and expensive process,” Belfield said. “A focus on compute would allow AI regulation to do the same.”

Policy ideas fall into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; and enforcing limits on computing power.

For example, a regularly audited international AI chip registry, requiring chip producers, sellers, and resellers to report all transfers, would provide precise information on the amount of compute held by nations and corporations at any given time.

The report even suggests that a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling.”
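For illustration only, a registry entry could be as simple as a per-chip record keyed by that unique identifier, with an append-only transfer log. The sketch below is hypothetical; the report does not specify a data format, and every name in it is invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChipRecord:
    """One hypothetical registry entry for a single AI chip."""
    chip_id: str                       # unique identifier added to the chip
    owner: str                         # current holder (nation or corporation)
    transfers: list = field(default_factory=list)  # append-only audit trail

    def transfer_to(self, new_owner: str) -> None:
        """Record a reported transfer without erasing history."""
        timestamp = datetime.now(timezone.utc).isoformat()
        self.transfers.append((self.owner, new_owner, timestamp))
        self.owner = new_owner

# Example: a producer reports selling a chip to a cloud provider.
chip = ChipRecord(chip_id="CHIP-0001", owner="ExampleFab")
chip.transfer_to("ExampleCloud")
print(chip.owner, chip.transfers)
```

Summing such records per owner is what would give auditors the picture of compute held by nations and corporations described above.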

“Governments already track many economic transactions, so it makes sense to increase the monitoring of something as rare and powerful as an advanced AI chip,” Belfield said. However, the team points out that such approaches could lead to a black market in untraceable “ghost chips.”

Other proposals to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive investments in compute are made without sufficient transparency.

“Users of compute will engage in a mix of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” Belfield said. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These may include physical limits on chip-to-chip networking, or cryptographic technology that allows AI chips to be remotely disabled in extreme circumstances. One proposed approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
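A minimal sketch of such a multiparty unlock, assuming three hypothetical parties and a unanimity threshold: the report does not prescribe a mechanism, and a real system would enforce this cryptographically (for example with threshold signatures) rather than in application code.

```python
# Hypothetical designated parties and approval threshold.
REQUIRED_PARTIES = {"regulator", "chip_vendor", "cloud_provider"}
THRESHOLD = 3  # unanimity: every designated party holds a veto

def unlock_compute(approvals: set[str]) -> bool:
    """Permit a risky training run only with enough valid sign-offs."""
    valid = approvals & REQUIRED_PARTIES  # ignore unknown signatories
    return len(valid) >= THRESHOLD

print(unlock_compute({"regulator", "chip_vendor"}))                    # False
print(unlock_compute({"regulator", "chip_vendor", "cloud_provider"}))  # True
```

With the threshold set to the number of parties, any single party can withhold consent, which is the digital veto described earlier in the article.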

Policies to mitigate AI risk could prioritize compute for research most likely to benefit society, from green energy to health and education. This could even take the form of major international AI “megaprojects” that pool compute resources to tackle global problems.

The report’s authors are clear that their policy proposals are “exploratory” rather than fully fledged, and that each carries potential downsides, from the risk of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular review of compute thresholds, and attention to privacy protection.

Belfield added: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the driving force behind the AI revolution. If compute remains ungoverned, it poses severe threats to society.”

The report is titled Computing Power and the Governance of Artificial Intelligence.
