
AI-First Investing: Responsibilities & Considerations

September 15, 2021

The Growing Scrutiny of AI Investment in Sensitive Sectors

Investment in AI-first technology companies focused on the defense sector, including firms like Palantir, Primer, and Anduril, has proven successful for investors. Notably, Anduril achieved a valuation exceeding $4 billion less than four years after its founding.

Many companies developing broadly applicable, AI-first technologies – for example, those specializing in image labeling – derive a significant, though often undisclosed, portion of their revenue from contracts within the defense industry.

Beneficial Applications of AI Technology

Companies that did not initially target the defense sector frequently find their technologies adopted by other powerful institutions – law enforcement, municipal agencies, and even media outlets – in the course of carrying out their functions.

A considerable amount of this work yields positive outcomes. Examples include DataRobot assisting agencies in tracking the spread of COVID-19, HASH conducting simulations for vaccine distribution, and Lilt facilitating access to school communications for immigrant parents within a U.S. school district.

The Potential for Misuse and Ethical Concerns

However, instances of problematic application exist. Reports from The Washington Post and 16 partner media organizations indicate that technology developed by the Israeli cyber-intelligence firm NSO was employed to compromise the smartphones of 37 individuals.

These individuals included journalists, human rights advocates, business executives, and the fiancée of Jamal Khashoggi, the murdered Saudi journalist. The report describes a list of more than 50,000 phone numbers, concentrated in countries known to surveil their citizens and to have used the Israeli firm's services.

Increasing Accountability for Investors

Investors in these types of companies are now facing increasingly difficult inquiries from founders, limited partners, and governmental bodies. These questions center around the potential for excessive power, overreach, or overly broad application of the technology.

While these are often matters of degree, they are not always considered during the initial investment phase. A growing demand for responsible AI application is emerging.

The Core Question: Responsible AI Investment

Since the publication of “The AI-First Company,” and over a decade of investing in these firms, I have spoken with numerous people – CEOs, founders of emerging companies, and politicians – and a recurring question has surfaced: How can investors ensure the responsible deployment of AI by their portfolio companies?

It’s a common response for investors to dismiss this concern as difficult to assess at the investment stage. Startups, by their nature, represent future possibilities. However, AI-first startups operate with a potent force from the outset: tools that amplify leverage beyond conventional limits.

The Amplifying Power of Artificial Intelligence

AI expands human capabilities, not only in physical strength through robotics or data analysis, but also in temporal understanding through predictive modeling. This predictive capacity, coupled with rapid learning, enables swift action.

Like any technology, AI’s potential can be channeled for constructive or destructive purposes. A simple object, such as a rock, can be used for building or as a weapon. Gunpowder can create spectacular displays or deliver destructive force.

Dual-Use Potential of AI Technologies

Similarly, AI-based computer vision models can analyze the movements of a dance troupe or a terrorist organization. AI-powered drones can capture footage of skiers, but also be weaponized.

This article will explore the fundamental aspects, key metrics, and political considerations surrounding responsible investment in AI-first companies.

Understanding Responsibility in AI Investment

Those who provide funding to, and serve on the boards of, companies focused on artificial intelligence bear a significant degree of accountability for the actions those companies undertake.

The impact of investors on company founders is undeniable, regardless of intentionality. Founders routinely seek guidance from investors concerning product development, target customer identification, and deal negotiations.

This pursuit of insight aims to enhance success probabilities and maintain investor engagement, as continued funding may depend on it.

Even when investors believe they are simply acting as a neutral sounding board, their inquiries and advice exert influence over crucial choices, including product features, sales strategies, and pricing structures. Consequently, a responsible investment framework for AI is essential for investors.

Board members are directly involved in shaping key strategic directions, both through legal obligations and practical considerations.

Critical decisions regarding product roadmaps, pricing models, and service packaging are often finalized during board meetings. These choices can have far-reaching consequences, dictating how the underlying technology is deployed.

For instance, decisions concerning exclusive government licensing, the establishment of international branches, or the acquisition of security clearances all fall within the purview of board oversight. Therefore, a dedicated framework for responsible AI investment is vital for board members as well.

Key Areas of Investor & Board Influence

  • Product development and feature prioritization.
  • Target customer selection and market strategy.
  • Deal negotiations and partnership agreements.
  • Licensing agreements, particularly with governmental entities.
  • Establishment of international operations.
  • Security protocols and clearance procedures.

The influence extends beyond simple advice; the very act of posing questions can steer a company's trajectory. A proactive, ethical framework is therefore paramount.

Responsible AI investment necessitates a thorough understanding of potential implications and a commitment to guiding companies toward beneficial outcomes.

Understanding Key Performance Indicators

Taking ownership necessitates a clear understanding of the current situation. It's common for investors in startups to underestimate the importance of monitoring the internal workings of AI-based models. For many software investments, verifying code functionality before deployment is deemed sufficient.

However, AI-first products are characterized by continuous adaptation, evolution, and the generation of new data. Some perceive monitoring AI as an insurmountable challenge. Nevertheless, establishing both metrics and management systems to track the impact of AI-first products is achievable.

Utilizing Metrics for Performance Evaluation

Objective metrics can be employed to determine whether a startup’s AI-based system is functioning as intended or exhibiting undesirable behavior. The specific metrics chosen should align with the modeling technique used, the training data, and the desired outcome of the prediction. For instance, when aiming for a specific target, evaluating true/false positive/negative rates is crucial.

In healthcare applications, sensitivity and specificity can provide valuable insights into the effectiveness of a diagnostic product. Does the product accurately identify a sufficient number of diseases to justify the associated costs and potential discomfort of the diagnostic procedure? A detailed explanation of these metrics, along with a comprehensive list for consideration, is available.
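To make these metrics concrete, here is a minimal sketch of computing sensitivity and specificity from raw binary predictions. The helper names and the example labels are hypothetical, purely for illustration:

```python
# Illustrative sketch: deriving sensitivity and specificity from
# true/false positive/negative counts for binary labels (0/1).

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    """True positive rate: of all actual positives, how many were caught."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def specificity(tn, fp):
    """True negative rate: of all actual negatives, how many were cleared."""
    return tn / (tn + fp) if (tn + fp) else 0.0

# A hypothetical diagnostic model's predictions vs. ground truth.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(f"sensitivity={sensitivity(tp, fn):.2f}, specificity={specificity(tn, fp):.2f}")
```

For a diagnostic product, sensitivity answers "how many sick patients did we catch?" and specificity "how many healthy patients did we correctly clear?" – the trade-off between them is exactly the cost-versus-discomfort judgment described above.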

Managing Model Drift

A machine learning management loop can be implemented to detect models that deviate from real-world accuracy. “Drift” occurs when a model is trained on data that differs from the data it currently processes, and is identified by comparing the distributions of these datasets. Regular measurement of model drift is essential, acknowledging that the world is subject to gradual, sudden, and frequent changes.

Detecting gradual shifts requires metrics collected over time, while sudden changes necessitate near real-time metrics. Consistent measurement at regular intervals is vital for identifying recurring patterns. A machine learning management loop consistently measures the same parameters throughout the model’s lifecycle – from development and testing through deployment and use.
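One way to sketch the drift-detection step of such a loop is to compare the distribution the model was trained on against the distribution it currently sees. The Population Stability Index (PSI) used below is one common choice, not the only one; the bin count and the 0.25 alert threshold are illustrative assumptions:

```python
# Minimal drift-detection sketch: compare training-data and live-data
# distributions with the Population Stability Index (PSI).
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range
    eps = 1e-6  # floor on proportions so empty bins don't blow up the log

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [max(c / len(sample), eps) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions show zero drift; a shifted live distribution
# pushes the index past the (assumed) alert threshold of 0.25.
training = [i / 100 for i in range(100)]
live = [0.5 + i / 200 for i in range(100)]
print(psi(training, training))     # 0.0
print(psi(training, live) > 0.25)  # True
```

Run on a schedule, a check like this supplies the "regular intervals" above: a slow upward creep in PSI signals gradual drift, while a single large jump signals a sudden change in the world the model operates in.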

Addressing Bias in AI Systems

The issue of AI bias presents both ethical and technical challenges. Here, we focus on the technical aspects, addressing machine bias much as we manage human bias: by implementing strict constraints. Defining limits on model predictions, on access to those predictions, on feedback data, and on acceptable uses requires upfront effort during system design, but ensures that appropriate alerts are triggered.

Establishing standards for training data can also increase the likelihood that the model considers a diverse range of inputs. Direct communication with the model’s designer is the most effective way to understand potential biases inherent in their methodology. Once these constraints are established, consider incorporating automated responses, such as alerts or system shutdown.
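A minimal sketch of such constraints might wrap the model so that out-of-bounds predictions raise an alert rather than flowing downstream, with repeated violations triggering an automated shutdown. The model, bounds, and violation threshold here are all hypothetical placeholders:

```python
# Hard constraints around a model's predictions: out-of-range outputs
# raise an alert, and repeated violations shut the model down.

class ConstraintViolation(Exception):
    """Raised when a prediction falls outside the allowed bounds."""

class GuardedModel:
    def __init__(self, model, lower, upper, shutdown_after=3):
        self.model = model                  # any callable returning a score
        self.lower, self.upper = lower, upper
        self.shutdown_after = shutdown_after
        self.violations = 0
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("model shut down after repeated violations")
        score = self.model(x)
        if not (self.lower <= score <= self.upper):
            self.violations += 1
            if self.violations >= self.shutdown_after:
                self.active = False         # automated response: stop serving
            raise ConstraintViolation(
                f"score {score} outside [{self.lower}, {self.upper}]")
        return score

# Usage: wrap a (hypothetical) scoring function with bounds [0, 1].
guarded = GuardedModel(lambda x: x * 0.1, lower=0.0, upper=1.0)
print(guarded.predict(5))  # 0.5 — within bounds
```

The same pattern extends to the other constraints named above: gating who may call `predict` at all, and logging which feedback data is allowed back into training.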

Political Considerations

Providing substantial capabilities to influential organizations can frequently be perceived as endorsing the political factions responsible for their leadership. Such alignment, whether accurate or not, often results in repercussions.

Team members, customers, and prospective investors who favor the opposing political side may choose to disengage. Negative media attention is also a possibility. This potential should be openly discussed internally when deciding whether to collaborate with specific institutions.

Direct Political Impacts

The most immediate political challenges for investors typically emerge when companies engage in work for the armed forces. Notable examples, like Google, have experienced employee protests even concerning the possibility of securing military contracts.

Indirect Political Concerns

Issues relating to individual privacy represent a more nuanced political consideration, hinging on the extent to which they generate pressure to curtail AI applications. For instance, when organizations advocating for civil liberties focus on applications that potentially infringe upon personal privacy, investors may need to contemplate limitations on their deployment.

Broader Industrial Implications

Political issues of a tertiary nature are generally focused on industry-wide effects, such as the potential impact of AI on employment. These are challenging for investors to address, as the societal consequences are often difficult to predict within the timeframe relevant to political decision-making – typically a few years.

A Risk-Based Approach

Prudent investors will continually evaluate all three areas – military applications, privacy concerns, and broader industrial impacts – and establish internal policy priorities for the short, medium, and long term based on the immediacy of the political risks involved.

Taking a Stance

AI-driven companies aiming to foster global peace might ultimately believe they must align with a particular side to exert influence. While a firm position, it can be rationalized through certain ethical frameworks, particularly those prioritizing overall well-being and minimizing harm.

Concluding Thoughts

The duties of investors focused on artificial intelligence are substantial, and the full scope is often underestimated by those new to the field. A complete understanding of the potential consequences of their investments is frequently lacking.

One potential remedy lies in the creation of a robust ethical framework, consistently applied to every investment decision.

A detailed exploration of ethical frameworks hasn't been undertaken here, as their comprehensive consideration requires extensive study. Constructing such a framework, both personally and for organizations, is a complex and time-consuming endeavor.

It is my contention that the expertise of philosophers could be significantly beneficial within AI-driven companies, specifically in the development of these crucial frameworks.

In the interim, investors possessing a foundational understanding of the core principles, relevant metrics, and the political landscape will likely exert a positive influence on those developing this exceptionally potent technology.

Disclaimer: The author holds investments in two companies referenced within this article – HASH and Lilt – through Zetta, a fund where they serve as a managing partner.

#AI investing #artificial intelligence #venture capital #AI ethics #investment responsibilities