
X Launches AI-Powered Community Notes Program

July 1, 2025

X to Trial AI-Generated Community Notes

The social media platform X is initiating a pilot program to incorporate AI chatbots into the creation of Community Notes.

Understanding Community Notes

Community Notes, a feature originating during the platform’s time as Twitter, has been expanded upon since Elon Musk’s acquisition of the service. This program allows users to contribute contextual information to posts.

Submitted comments undergo review by other users before being attached to the original post. For instance, a Community Note might be added to an AI-generated video lacking transparency regarding its synthetic nature, or to clarify a potentially misleading statement from a political figure.

Achieving Consensus

A Community Note becomes publicly visible only after it is rated helpful by contributors who have historically disagreed in their past ratings, a consensus mechanism designed to bridge differing viewpoints rather than reward a simple majority.
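The cross-viewpoint rule can be sketched as follows. This is a deliberately simplified toy; X's production system is reported to use a matrix-factorization model over rating histories, and the group labels, thresholds, and function name here are illustrative assumptions only.

```python
# Simplified illustration of "bridging"-based consensus: a note is shown
# only when raters from differing viewpoint groups agree it is helpful.
# This toy checks cross-group agreement plus an overall helpfulness ratio;
# it is NOT X's actual algorithm (names and thresholds are hypothetical).

def note_reaches_consensus(ratings, min_groups=2, min_helpful_ratio=0.7):
    """ratings: list of (viewpoint_group, found_helpful) tuples."""
    if not ratings:
        return False
    # Groups that contributed at least one "helpful" rating.
    helpful_groups = {group for group, helpful in ratings if helpful}
    helpful_ratio = sum(helpful for _, helpful in ratings) / len(ratings)
    # Require agreement across groups, not just a raw majority.
    return len(helpful_groups) >= min_groups and helpful_ratio >= min_helpful_ratio

# Unanimous support from one side alone is not enough:
print(note_reaches_consensus([("left", True), ("left", True)]))  # False
# Agreement across differing groups publishes the note:
print(note_reaches_consensus([("left", True), ("right", True), ("center", True)]))  # True
```

The key design choice mirrored here is that the gate is *who* agrees, not just *how many* agree.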

Impact and Adoption

The success of Community Notes on X has prompted similar initiatives from other platforms, including Meta, TikTok, and YouTube.

Notably, Meta discontinued its reliance on external fact-checking services, opting instead for this community-driven approach.

Potential Benefits and Risks

The effectiveness of utilizing AI chatbots for fact-checking remains uncertain.

These AI notes can be produced using X’s own Grok or through integration with other AI tools via the platform’s API.
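As a rough sketch of what a third-party integration might look like, the function below assembles a note-drafting request for an LLM. The endpoint shape, field names, prompt, and character limit are all assumptions for illustration; X has not published these details, and this is not its documented API.

```python
# Hypothetical sketch of how a bot builder might draft a Community Note
# request for an LLM (Grok or another model reachable via an API).
# Every field name and limit below is an assumption, not X's real schema.

def build_note_request(post_text, model="grok"):
    """Return a request payload asking an LLM to draft a contextual note."""
    prompt = (
        "Write a brief, sourced Community Note adding missing context "
        f"to this post:\n{post_text}"
    )
    return {
        "model": model,       # which LLM drafts the note
        "prompt": prompt,
        "max_chars": 280,     # notes are short; this limit is assumed
    }

request = build_note_request("This video shows a real event.")
```

Whatever the real interface looks like, the draft it produces would still enter the same rating pipeline as a human-written note, as described below.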

Vetting Process

Any note submitted by an AI will be subject to the same verification procedures as those submitted by human users, ensuring a consistent standard for accuracy.

Concerns Regarding AI Accuracy

The application of AI in fact-checking raises concerns, given the documented tendency of AI models to hallucinate, that is, to generate information not grounded in reality.

Human-AI Collaboration

Research published by X's Community Notes team suggests that a collaborative approach between humans and Large Language Models (LLMs) is optimal.

Human feedback can refine AI note generation through reinforcement learning, with human raters providing a final validation step before publication.
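The loop described above can be sketched in a few lines: an LLM drafts a note, volunteer raters give the final verdict, and their judgments are logged as a reward signal for later reinforcement learning. Function names, the voting rule, and the reward values are illustrative assumptions, not X's published design.

```python
# Minimal sketch of the human-in-the-loop flow: human raters make the
# publication decision, and their feedback doubles as an RL reward signal.
# All names and the majority-vote rule are hypothetical.

feedback_log = []  # (draft, reward) pairs kept for future RL fine-tuning

def review_ai_note(draft, human_votes):
    """human_votes: list of booleans from volunteer raters."""
    approved = sum(human_votes) > len(human_votes) / 2
    # Human judgment, not the model's own confidence, is the reward signal.
    feedback_log.append((draft, 1.0 if approved else -1.0))
    return "published" if approved else "rejected"

status = review_ai_note("Context: this clip is AI-generated.", [True, True, False])
print(status)  # published
```

The point of the design is that the AI only ever proposes; humans remain the final validation step, and every decision they make feeds back into training.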

The Goal of the System

The stated objective is not to dictate user opinions, but to foster a system that encourages critical thinking and a deeper understanding of information.

The research emphasizes the potential for a mutually beneficial relationship between LLMs and human contributors.

Potential Pitfalls

Despite human oversight, risks remain, particularly with the allowance of third-party LLMs.

Recent issues with OpenAI’s ChatGPT, which became excessively deferential to users, highlight the danger of an AI prioritizing “helpfulness” over factual accuracy.

Workload Concerns

There is also apprehension that human raters may become overwhelmed by the volume of AI-generated comments, potentially diminishing their engagement in this voluntary role.

Timeline for Rollout

The introduction of AI-generated Community Notes is not immediate; X intends to evaluate the program for several weeks before considering a wider release, contingent upon its success.

#X #Twitter #CommunityNotes #AI #chatbots #misinformation