British Standards Institution Publishes 1st Guide on How to Manage AI Safely


Breakthroughs in artificial intelligence are being made every day, but a British body has just published the first guide to managing the technology safely.

The British Standards Institution (BSI) has published the first international standard on how to safely manage artificial intelligence amid growing concerns about its potential risks.

The document sets out how to establish, implement, maintain, and continually improve an AI management system and gives advice on safeguards.
Officially known as BS ISO/IEC 42001, it is designed for corporations and government departments and costs £187 for non-members of the BSI, a national standards body which was founded in 1901 and now operates in 195 countries.

The BSI’s chief executive, Susan Taylor Martin, said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.”

‘Opportunity to Harness AI’

She added: “The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology which, in turn, offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world.”

Ms. Taylor Martin said BSI wanted to be at the forefront of “ensuring AI’s safe and trusted integration across society.”

In November 2023 Bill Gates predicted AI would accelerate discoveries in a way never seen before, and Australia’s Commonwealth Scientific and Industrial Research Organisation predicted AI would become the “new normal.”

Judy Slatyer, head of Australia’s National AI Centre, said last month: “In 2024, hundreds of millions of people will be using the increasingly advanced features of AI in their everyday work and life. Already some 54 percent of global consumers are using AI every day and we’ll see this increase significantly next year.”

The BSI said it published the new standard amid an ongoing debate about the need to regulate AI, which has been spurred by the release of tools such as ChatGPT.

In a press release, the BSI said the threats posed range from “AI being used to create malware for cyber attacks” to a “potentially existential threat to humanity, if humans were to lose control of the technology.”

Last year Michael Cohen, a doctoral candidate in engineering science at Oxford University, warned against "ploughing ahead" with certain types of AI that could have potentially disastrous consequences for the human race.

Mr. Cohen had earlier told a committee of MPs there was a risk of a "dystopian future" in which AI takes over the world and humans are wiped out, akin to the plot of the film "The Terminator."

Britain hosted the first global AI safety summit in November, where world leaders and executives from major tech firms met to discuss the implications of the cutting-edge technology.

At the summit, Prime Minister Rishi Sunak announced he would be creating the world’s first AI safety institute, which would build on the work of the Frontier AI Taskforce, a research team dedicated to evaluating the risks posed by the technology.

Mr. Sunak highlighted the task force’s progress in securing “privileged access” to the proprietary technology models of leading AI giants such as Google DeepMind, Anthropic, and OpenAI.

“We are doing far more than any other country to keep you safe from AI threats,” said the prime minister.

Self-Driving Cars Among Areas Benefiting From AI

Scott Steedman, director general for standards at BSI, said, “Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI.”

But he said, “Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy.”

“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards,” added Mr. Steedman.

PA Media contributed to this report.
