SELF-REGULATION by artificial intelligence (AI) developers “cannot be the only step” taken to make sure the technology is safe in Scotland, according to an expert.
Four of the major forces in AI development – Google, Microsoft, OpenAI and Anthropic – launched the Frontier Model Forum on Wednesday, an industry body intended to oversee the safe development of the technology.
Membership of the body will require organisations to be working on the most advanced AI technology, defined as “frontier” models, with the goal of mitigating the dangers these models might pose.
Dr Anil Fernando, a professor of video coding and communications at Strathclyde University whose work involves researching AI and machine learning, said a “holistic approach” is needed to regulate the technology in Scotland.
He said: “The Frontier Model Forum can be taken as a positive step forward, but it cannot be taken as the only step forward – its effectiveness will depend on how well it complements government regulations and how seriously the member companies commit to its objectives.
“A holistic approach to AI regulation that combines industry collaboration, government involvement and public engagement will be essential for achieving comprehensive AI safety.”
He said that, without proper regulation, AI “can be highly dangerous”.
He continued: “AI systems have the potential to be biased in decision-making, invade privacy, or cause harm to individuals and society.
“The big tech companies are driven by market competition and profit. Hence, self-regulation cannot be taken as a replacement for government action. It could lead to overlooking potential risks to gain a competitive advantage.”
The founding companies of the forum have stated that collaborating with governments and policymakers is one of their four main aims, alongside advancing AI safety research, identifying best practices and helping to develop applications to tackle humanity’s greatest challenges, such as climate change and cancer.
Speaking on the role of the Scottish Government in regulating AI, Innovation Minister Richard Lochhead said: “The regulation of AI is reserved to the UK Government, which has set out a non-statutory regulatory approach, in contrast to the EU’s much more ambitious EU AI Act.”
The EU AI Act is one of the most substantial attempts to regulate AI, establishing four different risk classifications for the technology, with the most severe being “unacceptable risk”. This would include, for example, models which can manipulate people or those which rate members of society based on their behaviour or socio-economic status.
AI systems falling into this category would be banned outright by the legislation.
The UK Government released its A Pro-Innovation Approach To AI Regulation white paper on March 29 this year, establishing its short- and long-term strategy around AI.
It has committed to working with industry and regulators, establishing partnerships and setting out a regulatory framework, but has not committed to any legislative proposals.
Lochhead added: “We recognise the transformational potential of AI, as well as the opportunities and risks it could bring to the Scottish economy and society. The Scottish Government is working with the Scottish AI Alliance to take targeted actions, within the limits of devolved powers, to make Scotland a leader in the development and use of trustworthy, ethical and inclusive AI.
“We have also published a Digital Economy Skills Action Plan, to ensure our workforce has the skills to deliver economic prosperity for all of Scotland.”
The Scottish AI Alliance, a partnership between AI and data science hub The Data Lab and the Scottish Government, was launched in 2021 and “tasked with the delivery of Scotland’s AI strategy”, according to its website.
On Tuesday, the group launched a Communities Call Out, encouraging Scottish community groups, networks, charities and other organisations to voice their thoughts and concerns regarding AI in Scotland, with the aim of discovering and considering how the technology might impact individuals’ lives.
It comes after the White House held talks last week with these top AI organisations, as well as other technology companies, and secured commitments to introduce a number of safeguards into their technology.
These included introducing watermarks on AI-generated content to make it clearly identifiable as “fake”, prioritising research on the risks of AI and sharing information with the US Government, among other commitments.
Fernando said that watermarking AI-generated content was a “step in the right direction”, but that it alone was not sufficient to address the multitude of risks posed by AI, such as “data privacy protection, intellectual property rights and adherence to ethical guidelines and societal norms”.
He pointed to six key areas where action should be taken at the governmental level to minimise the growing risks of AI technology:
• Establish independent advisory bodies to monitor the development of AI technologies, assess their impact on society and provide guidance on responsible AI practices.
• Create industry standards and certification to ensure AI systems meet safety, fairness and reliability criteria.
• Reinforce data protection laws to safeguard individual rights.
• Allocate funds for research and development focused on addressing the issues and risks associated with AI technology.
• Promote public awareness and education on AI to support better understanding and informed decision-making in relation to the technology.
• Pursue international collaboration to establish common ground for addressing the cross-border challenges of AI.
A UK Government spokesperson said: “As set out in our AI regulation white paper, our approach to regulation is proportionate and adaptable, allowing us to manage the risks posed by AI whilst harnessing the enormous benefits.
“Our approach relies on collaboration between Government, regulators, and business.
"Additionally, the AI taskforce has also been equipped with an initial investment of £100 million to manage the safe development and deployment of AI.
“Further to that, the UK will host the first major global summit on AI safety this autumn."