UK Government launches AI self-assessment tool

The UK government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.

The questionnaire can be used by any organization that develops, provides or uses services relying on AI as part of its standard operations, but it is aimed primarily at smaller companies and start-ups. The results tell decision makers where the strengths and weaknesses of their AI management systems lie.

How to use AI Management Essentials

Now available, the self-assessment is one of three parts of a tool called “AI Management Essentials”. The other two parts are a rating system that provides an overview of how well the business manages its AI, and a set of action points and recommendations for organizations to consider; neither has been released yet.

AIME is based on the ISO/IEC 42001 standard, the NIST framework and the EU AI Act. Self-assessment questions cover how the company uses AI, how it manages the associated risks and how transparent it is with stakeholders about them.

SEE: Delaying UK AI rollout by five years could cost economy £150+ billion, Microsoft report finds

“The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organizational processes in place to enable the responsible development and use of these products,” according to the Department for Science, Innovation and Technology report.

When completing the self-assessment, input should be gathered from employees with both technical and broader business knowledge, such as a CTO or software engineer and an HR business manager.

The government wants to embed the self-assessment in its procurement policies and frameworks to drive assurance practices in the private sector. It also wants to make it available to public sector buyers to help them make more informed decisions about AI.

On November 6, the government opened a consultation inviting businesses to provide feedback on the self-assessment; the results will be used to refine it. The rating and recommendation parts of the AIME tool will be released after the consultation closes on January 29, 2025.

The self-assessment is one of many planned government initiatives for AI assurance

In a paper published this week, the government said AIME will be one of many resources available on the “AI Assurance Platform” it wants to develop. This will help businesses conduct impact assessments or review AI data for bias.

The government is also creating a terminology tool for responsible AI to define and standardize key AI assurance terms, with the aim of improving communication and cross-border trade, particularly with the U.S.

“Over time, we will create a set of accessible tools to enable basic good practices for the responsible development and deployment of AI,” the authors wrote.

The government says that the UK’s AI assurance market — the sector providing tools for developing or using AI safely, which currently consists of 524 firms — will grow the economy by more than £6.5 billion over the next decade. This growth can be attributed in part to increasing public confidence in the technology.

The report adds that the government will work with the AI Safety Institute — launched by former Prime Minister Rishi Sunak at the AI Safety Summit in November 2023 — to promote AI assurance in the country. It will also award funding to expand the Systemic Safety Grants programme, which currently has up to £200,000 available for initiatives that develop the AI assurance ecosystem.

Legally binding legislation on AI safety is coming in the next year

Meanwhile, at the Financial Times’ Future of AI Summit on Wednesday, Peter Kyle, the UK’s technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by implementing the AI Bill within the next year.

At November’s AI Safety Summit, AI companies — including OpenAI, Google DeepMind and Anthropic — voluntarily agreed to allow governments to test the safety of their latest AI models before public release. Kyle was first reported to have pitched his plans to put the voluntary agreements into law to executives of prominent AI companies at a meeting in July.

SEE: OpenAI and Anthropic sign deal with US AI Safety Institute, hand over frontier models for testing

He also said the AI Bill would focus on the large ChatGPT-style foundation models created by a handful of companies and would turn the AI Safety Institute from a DSIT directorate into an “arm’s length government body”. Kyle reiterated these points at this week’s summit, according to the FT, stressing that he wants to give the Institute “the independence to act fully in the interests of British citizens”.

In addition, he pledged to invest in advanced computing power to support the development of frontier AI models in the UK, in response to criticism over the government scrapping £800 million of funding for an Edinburgh University supercomputer in August.

SEE: UK government announces £32m for AI projects after funding for supercomputers scrapped

Kyle stated that while the government cannot invest £100 billion alone, it will work with private investors to secure the necessary funding for future initiatives.

A year in AI safety legislation for the UK

A raft of agreements and legislation committing the UK to developing and using AI responsibly has emerged over the past year.

On October 30, 2023, the Group of Seven countries, including the UK, created a voluntary AI code of conduct consisting of 11 principles that “promote safe, secure and trustworthy AI worldwide.”

The AI Safety Summit, at which 28 countries committed to ensuring the safe and responsible development and deployment of AI, kicked off just a few days later. Later in November, the United Kingdom’s National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, and international agencies from 16 other countries released guidelines on how to ensure security when developing new AI models.

SEE: UK AI safety summit: Global Powers make ‘landmark’ pledge to AI safety

In March, the G7 countries signed another agreement committing to explore how AI can improve public services and boost economic growth. The agreement also covered the joint development of an AI toolkit to ensure that the models used are safe and reliable. The following month, the then Conservative government agreed to work with the US in developing tests for advanced AI models by signing a Memorandum of Understanding.

In May, the government released Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, ability to reason, and autonomous capabilities. It also co-hosted another AI safety summit in Seoul, at which the UK agreed to work with other nations on AI safety measures and announced up to £8.5 million in grants for research into protecting society from AI’s risks.
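For readers curious what an Inspect evaluation looks like in practice, the sketch below defines a single toy task using the open-source inspect_ai Python package. The task name, sample question and choice of scorer are illustrative assumptions rather than anything taken from the government’s own evaluation suites, and the exact API may vary between package versions.

# toy_inspect_task.py — a minimal, illustrative Inspect evaluation task
# (assumes the open-source `inspect_ai` package; the question and scorer below are hypothetical examples)
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def core_knowledge_check():
    # One hand-written sample; real safety evaluations use much larger datasets.
    dataset = [
        Sample(
            input="Which body publishes the ISO/IEC 42001 AI management standard?",
            target="ISO",
        )
    ]
    return Task(
        dataset=dataset,
        solver=generate(),   # ask the model for a direct answer
        scorer=includes(),   # mark correct if the target string appears in the answer
    )

A task like this would typically be run from the command line against a chosen model, for example with “inspect eval toy_inspect_task.py --model <provider/model-name>”.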

Then, in September, the UK signed the world’s first international treaty on AI alongside the EU, the US and seven other countries, committing to adopt or maintain measures that ensure the use of AI is consistent with human rights, democracy and the law.

And it’s not over yet: alongside the AIME tool and report, the government announced a new AI safety partnership with Singapore through a Memorandum of Cooperation. It will also be represented at the first meeting of international AI safety institutes in San Francisco later this month.

AI Safety Institute Chair Ian Hogarth said: “An effective approach to AI safety requires global collaboration. This is why we place such emphasis on the International Network of AI Safety Institutes, while also strengthening our own research partnerships.”

However, the US has moved further away from AI cooperation with its recent directive limiting the sharing of AI technology and mandating protection against foreign access to AI resources.
