Credo AI Launches with $5.5M Funding for Ethical AI Solutions

The Growing Concerns Surrounding AI Ethics
A significant number of people in the artificial intelligence community are apprehensive about its potential consequences. Navrina Singh, previously a product manager at both Qualcomm and Microsoft, is among them. During her time at Microsoft, she saw how Tay, a Twitter bot the company launched in 2016 as an experiment in “conversational understanding,” rapidly began producing racist output, as reported by The Verge.
Instances of AI Malfunction
The incident at Microsoft is just one in a series of AI failures. Research published in 2019 found that an algorithm sold by Optum to identify patients who would benefit from extra medical attention significantly underestimated the healthcare needs of the sickest Black patients. AI systems used for credit scoring have likewise repeatedly been shown to exhibit gender bias.
Challenges in Ethical AI Development
While numerous large corporations have established dedicated teams to address the ethical dilemmas stemming from the extensive data they gather and employ in training their machine learning models, progress in this area has been incremental. Simultaneously, smaller AI-driven businesses, lacking the resources for specialized teams, often proceed without comprehensive ethical oversight.
Introducing Credo AI
Navrina Singh’s company, Credo AI, a software-as-a-service (SaaS) provider, recently announced a $5.5 million funding round led by Decibel, Village Global, and AI Fund.
Credo AI’s Approach to AI Governance
The company’s core offering, as Singh explains, centers on managing complexity to provide clarity. Her team of 15 has built a risk framework that gives organizations visibility into their own governance practices. Credo AI’s value lies less in groundbreaking technology than in addressing a common deficiency, accountability, by providing a centralized dashboard for overseeing data collection and recommending relevant controls, such as incorporating IEEE standards to strengthen safeguards around machine learning models.
The Need for Standardization in AI Ethics
“A common challenge many companies face is the absence of a shared understanding and agreement on what constitutes ‘good’ AI governance,” Singh observes. “Consequently, organizations are actively seeking assistance with standardization.”
Customization and Sector-Specific Considerations
Credo AI’s software isn’t a one-size-fits-all solution, Singh emphasizes. The impact of models varies across organizations, and even within a given industry, companies often have differing goals. “The definition of fairness isn’t uniform across sectors,” Singh explains, citing financial services as an example, where regulations continue to evolve under the federal banking agencies. “What does fairness mean in the context of fraud detection? What does it mean for credit underwriting?”
Collaboration and Value Alignment
Instead of awaiting definitive answers, Credo AI works with companies to define their core values and then provides the tools to govern their AI accordingly, including the ability to add custom metrics and involve stakeholders. “Our aim is to facilitate collaboration between data science, compliance, executive leadership, and risk management teams,” says Singh.
Preventing Negative Outcomes
Credo AI strives to help companies avoid damaging incidents – and potentially more serious repercussions.
The Expanding AI Market
The market opportunity is substantial. According to data released earlier this year by the International Data Corporation (IDC), global revenue for the AI market—encompassing software, hardware, and services—was projected to increase by 16.4% year-over-year in 2021, reaching $327.5 billion. IDC forecasts the market will surpass $500 billion by 2024.
The Future of Ethical AI
As investment in AI grows, the demand for solutions ensuring its proper function and preventing harm will likely increase. Singh envisions Credo AI becoming a recognized standard, signifying a company’s commitment to ethical AI practices.
A Vision for Responsible AI Development
“Our ultimate goal,” Singh states, “is for Credo AI to be synonymous with the development of responsible and ethical AI. That is our primary ambition.”