
UK AI Strategy: Ambitious Plan Needs Funding - Marc Warner

September 24, 2021

U.K. Launches National AI Strategy

The United Kingdom this week unveiled its first national strategy for artificial intelligence. The plan commits the government to a decade-long effort to enhance the nation’s AI capabilities, directing resources toward crucial areas such as skills development, attracting talent, increasing compute power, and improving data accessibility. The initiative has generally received positive feedback from the country’s technology sector.

Funding Concerns Remain

However, questions have arisen regarding the government’s genuine commitment to establishing the U.K. as a “global AI superpower,” particularly in light of the absence of a concurrent funding announcement.

Further clarity is anticipated with the upcoming spending review scheduled for October 27th. This review will outline public spending plans for the next three fiscal years.

Industry Perspective on Government Support

Ahead of the spending review, TechCrunch spoke with Marc Warner, CEO of U.K.-based AI startup Faculty. Warner emphasized that the government must demonstrate a serious, long-term commitment, backed by appropriate funding, to foster the U.K.’s capabilities and keep it globally competitive. He did, however, acknowledge the “genuine ambition” the government has shown in its support for AI.

Faculty's Investment in Talent

Warner’s company, which recently secured $42 million in growth funding, has initiated an internal educational program. This program aims to attract PhD-level researchers and cultivate the next generation of data scientists. Notably, Warner also serves as a member of the U.K.’s AI Council, an expert advisory body that provided input on the development of this strategy.

Strategy Assessment

“I believe this is a remarkably sound strategy,” Warner stated to TechCrunch. “It exhibits a genuine ambition, which is somewhat uncommon in governmental initiatives, and it correctly identifies several key areas requiring improvement.”

“The central challenge – and it’s a substantial one – is the current lack of associated financial figures.”

“Therefore, while the strategy contains many promising elements in principle, its practical implementation is entirely dependent on being supported by the necessary funding. Furthermore, it requires a firm commitment from the broader government to ensure high-quality execution of these initiatives.”

Risk of Unfulfilled Potential

Warner cautioned that a potentially “serious strategy” risks losing its momentum if it isn’t accompanied by a commensurate – and potentially “world-beating” – level of funding.

“The outcome will be determined by the spending review, but it appears that if a genuinely robust strategy is developed, it could easily devolve into a more commonplace one due to a reluctance to commit sufficient funding or to prioritize execution and bring these plans to fruition.”

Funding Level Recommendations

When questioned about the desired level of government funding to realize the strategy’s long-term objectives, Warner asserted that the U.K. must set ambitious goals and compete on a global scale.

“We can examine the commitments made by other nations to their AI strategies, which range from hundreds of millions to low billions of dollars,” he suggested. “If we are truly committed to global competitiveness – as the strategy intends, and as we should be – we must at least match, if not surpass, the funding levels of these other countries.”

“Ultimately, the determining factor is the priority assigned to this initiative. If the government intends to deliver on its ambitious strategy, it must be a high priority,” he concluded.

Securing Skilled Professionals

Elaborating on the strategy’s detailed plans for bolstering the U.K.’s position in AI, Warner emphasized the critical importance of a skilled workforce.

“Within a highly technical domain such as AI, access to qualified personnel is paramount. A worldwide contest exists for these specialists. It appears the government acknowledges this reality and is poised to implement measures ensuring the U.K. possesses the necessary talent – both through skills development and streamlined visa processes.”

He continued, “We greatly value the opportunity to draw upon the expertise of individuals globally to address significant challenges. Facilitating the recruitment of these individuals – whether by universities, charities, corporations like ours, or even governmental bodies – represents a substantial advancement.”

“It is encouraging to see computing and data science receiving due consideration,” Warner noted, while discussing further aspects of the strategy. “These two areas are fundamentally essential to the machine learning techniques that underpin contemporary AI. Governmental efforts to enhance accessibility in these fields are undoubtedly beneficial.”

“The proactive consideration of the potential long-term risks associated with AI is both innovative and fundamentally crucial,” he stated.

“Furthermore, the strategy demonstrates a candid assessment of the U.K.’s comparatively slower rate of AI adoption among both businesses and as a nation. Hopefully, acknowledging this gap and dedicating focused attention to its resolution will prove effective – overall, the strategy is remarkably well-conceived.”

The strategy also addresses the necessity of establishing “defined regulations, grounded in ethical principles, and a supportive framework for innovation” concerning AI. However, the U.K. currently trails in this area, as the European Union presented its AI Regulation proposal earlier this year.

When questioned on AI regulation, Warner advocated for rules tailored to specific applications.

Domain-Specific Regulation of AI

A key consideration in AI governance is whether regulation should focus on the technology itself or on its specific applications. Regulating artificial intelligence in the broad has been likened to regulating steel without considering its end use, whether construction or weaponry.

This analogy highlights the dilemma: overly lenient regulations risk misuse, while overly strict rules stifle innovation. Therefore, a nuanced approach is necessary.

Effective AI regulation is therefore widely expected to be implemented on a domain-specific basis, with rules varying depending on how the technology is used.

Varying Regulatory Needs

The need for tighter regulation is naturally higher in sensitive areas like healthcare, where AI is used for critical tasks such as medical diagnosis. Conversely, applications in less critical domains, like e-commerce, may require fewer restrictions.

Government strategies are increasingly recognizing the importance of tailoring regulations to the specific context of AI deployment. This domain-specific focus is considered a sensible and pragmatic approach.

Ensuring AI is developed and deployed responsibly, safely, and for the betterment of society remains a paramount concern.

EU Framework and UK Data Protection

The European Union’s proposed risk-based framework for AI regulation already incorporates a domain-specific approach, categorizing applications based on their level of risk. Regulatory requirements are then adjusted accordingly.

However, the EU proposal is still being assessed, and further study is needed to evaluate how effective it will be in practice.

Alongside AI regulation, the U.K. government is also considering reforms to its data protection framework. These reforms include potential changes that could lessen existing protections for personal information.

Data Usage and Legitimacy

Concerns have been raised that these reforms could weaken privacy standards. It is vital that all AI applications adhere to both legal and ethical norms.

Data usage must be transparent and acceptable to individuals: the test is whether people, if they were fully aware of how their data was being used, would still feel comfortable with it.

The Validity of Data Practices

Faculty’s AI business operated, under a different corporate name, before the U.K.’s implementation of the EU General Data Protection Regulation (GDPR), and the preceding regulatory framework was largely comparable. Current regulations therefore do not appear to have held back the company’s potential as a valuable and expanding U.K. AI enterprise.

Considering this, could the government’s inclination to lessen the degree of data protection afforded to U.K. residents – under the premise that such action would foster innovation – actually prove detrimental to AI businesses that rely on user confidence for growth? Furthermore, any U.K. AI companies seeking to operate within the European Union would still be obligated to adhere to GDPR standards.

Warner posited that “GDPR isn’t without flaws,” suggesting this is widely acknowledged, and rejected the notion of a binary choice. He believes improvements are possible, and that striving for a superior standard is the appropriate course of action.

“There are numerous avenues for refining the regulation of these technologies over time,” Warner continued. “Maintaining a leading standard for legitimacy in the application of these technologies is paramount, particularly for organizations like ours aiming to conduct business in a manner that is broadly accepted and actively supported by society.”

“Essentially, I don’t advocate for compromise, but the situation isn’t simply a matter of adhering to or rejecting GDPR. The issue is far more nuanced.”

It is important to acknowledge the context here: recent years have seen several significant data scandals originate within the U.K.

Faculty, formerly known as ASI Data Science, was deeply connected to the contentious application of data during the U.K.’s Brexit referendum, specifically in the targeting of political advertisements towards voters.

The company has since declared its commitment to abstain from future political engagements.

Political Campaigning and Data Ethics

The corporate rebrand of ASI Data Science came in the wake of disclosures about the data-mining practices of Cambridge Analytica, the now widely discredited firm at the center of a controversy that went global in 2018.

The revelations prompted inquiries from legislators around the world into the influence of data and predictive modeling on voter behavior, and raised concerns that these technologies could be used to manipulate electoral outcomes.

The U.K.’s information commissioner went on to advocate for an “ethical pause” on the use of data and AI tools in political advertising, arguing that large-scale data techniques – which often involve the opaque targeting of voters with personalized political messages – were eroding public confidence in democratic processes.

Brexit Referendum and Data Science

During the 2016 Brexit referendum, Marc Warner worked alongside Dominic Cummings, a former special advisor to the U.K. government who also served as a director of the Vote Leave campaign.

Cummings has consistently emphasized the pivotal role data scientists played in securing the Brexit vote, and in a 2016 blog post he detailed how the campaign made use of data science and AI during the referendum.

AI Regulation in Political Campaigns

Considering this historical context, we questioned Warner regarding his stance on potential regulations governing the use of AI in political campaigning.

While the U.K. government has not yet formally proposed such measures, it is considering amendments to election law, including the introduction of disclosure labels for online political advertising.

Warner responded: “Faculty as an organization is no longer involved in political work – it is not a current focus.”

Pressed further on whether he would support limitations on the use of AI within the political sphere, he reiterated: “From Faculty’s standpoint, we have moved away from politics. The decision regarding appropriate regulations ultimately rests with the government.”

Additional Statement from Faculty

Following the initial publication of this article, the company provided an additional statement for clarification.
