Unicorn Ideas: Superblocks CEO on AI Prompt Analysis

The Hidden Value in AI System Prompts
Brad Menezes, CEO of Superblocks, a startup focused on enterprise vibe-coding, believes the next generation of billion-dollar startup ideas may be hiding in the system prompts used by today's most highly valued AI companies.
Understanding System Prompts
System prompts are lengthy instructions – often running 5,000 to 6,000 words or more – that AI startups use to guide foundation models from providers like OpenAI and Anthropic. These prompts dictate how the underlying model generates application-specific outputs.
Menezes views these prompts as invaluable resources, essentially representing a comprehensive education in effective prompt engineering.
Variations in Prompt Design
“Each company employs a distinct system prompt, even when leveraging the same foundation model,” Menezes shared with TechCrunch. The goal is to tailor the model’s performance to precise requirements for a particular field or set of tasks.
While not entirely concealed, access to these system prompts isn’t always straightforward. Some AI tools will share them upon request, but public availability is not guaranteed.
Superblocks' Initiative: Sharing System Prompts
To coincide with the launch of Clark, their new enterprise coding AI agent, Superblocks made a collection of 19 system prompts publicly available. These prompts were sourced from popular AI coding tools including Windsurf, Manus, Cursor, Lovable, and Bolt.
Menezes’ announcement on social media quickly gained traction, drawing nearly 2 million views and engagement from prominent tech figures such as Sam Blond and Aaron Levie, a Superblocks investor.
Superblocks recently secured a $23 million Series A extension, bringing its total Series A funding to $60 million. The investment supports its development of vibe-coding tools designed for non-developers within enterprise environments.
Analyzing System Prompts for Insights
We asked Menezes to elaborate on how others can analyze existing system prompts to extract valuable knowledge.
“Our experience building Clark and examining these prompts revealed that the prompt itself accounts for approximately 20% of the overall success,” he explained. This initial prompt establishes the fundamental instructions for the Large Language Model (LLM).
The Importance of Prompt Enrichment
The remaining 80% of the equation, according to Menezes, lies in “prompt enrichment.” This encompasses the infrastructure built around the LLM calls.
This infrastructure includes supplementary instructions appended to user prompts, as well as verification procedures implemented after the LLM generates a response, such as accuracy checks.
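The pattern Menezes describes can be sketched in a few lines. This is a minimal, hypothetical illustration of "prompt enrichment" around an LLM call: context the user never typed is appended to the prompt, and the model's output is verified afterward. `call_llm` is a stand-in stub, not any real provider's API.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical stub; a real implementation would call a model provider.
    return "def add(a, b):\n    return a + b"

def enrich(user_prompt: str, context: dict) -> str:
    # Supplementary instructions appended to the user's prompt,
    # e.g. which enterprise data sources are available.
    lines = [user_prompt, "", "Available data sources:"]
    lines += [f"- {name}: {desc}" for name, desc in context.items()]
    return "\n".join(lines)

def verify(output: str) -> bool:
    # Post-generation accuracy check: here, the response must parse as Python.
    try:
        compile(output, "<llm-output>", "exec")
        return True
    except SyntaxError:
        return False

SYSTEM_PROMPT = "You are a coding agent. Return only valid Python."
enriched = enrich("Write a helper that adds two numbers.",
                  {"crm.leads": "Salesforce lead records"})
response = call_llm(SYSTEM_PROMPT, enriched)
print(verify(response))  # True: the stubbed output parses as Python
```

In this framing, the static system prompt is the 20%; the `enrich` and `verify` steps around the call are where the remaining 80% lives.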
- System Prompts: Lengthy instructions guiding AI models.
- Prompt Engineering: The art of crafting effective prompts.
- Prompt Enrichment: Infrastructure surrounding LLM calls for improved results.
Understanding System Prompts for Large Language Models
System prompts comprise three key elements: role prompting, contextual prompting, and enabling tool use. These components are crucial for effectively guiding LLMs.
It’s important to recognize that while expressed in natural language, system prompts require a high degree of precision. Menezes emphasizes the need for clear and specific instructions, akin to communicating with a human colleague.
The Power of Role Prompting
Role prompting establishes consistency within the LLM by defining both its purpose and its persona. This provides a framework for its responses.
For example, Devin’s system prompt explicitly states, “You are Devin, a software engineer utilizing a genuine computer operating system. You possess exceptional coding abilities, surpassing many programmers in codebase comprehension, clean code creation, and iterative refinement.”
Contextual Prompting: Setting Boundaries
Contextual prompting provides the necessary background information for the model to operate effectively. It establishes parameters and safeguards, influencing cost management and task clarity.
Cursor’s approach illustrates this, instructing the model to “Only invoke tools when necessary, and refrain from mentioning tool names to the user – simply describe the action being taken. Avoid displaying code unless specifically requested. Prioritize reviewing relevant file content before editing, addressing obvious errors, but avoid speculative fixes or excessive looping.”
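Role prompting and contextual guardrails are typically combined into a single system message in the common role/content chat format. The sketch below is illustrative only; the wording is invented for the example and is not any vendor's actual prompt.

```python
# Role prompting: define the model's persona and purpose.
ROLE = "You are an expert coding agent working inside a user's codebase."

# Contextual prompting: guardrails that bound behavior and manage cost.
GUARDRAILS = [
    "Only invoke tools when necessary; describe actions, not tool names.",
    "Do not display code unless the user asks for it.",
    "Read the relevant file before editing; avoid speculative fixes.",
]

system_prompt = ROLE + "\n\nRules:\n" + "\n".join(f"- {r}" for r in GUARDRAILS)

# The assembled system message leads the conversation, before any user input.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Fix the failing test in utils.py."},
]

print(messages[0]["role"])  # system
```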
Enabling Agentic Tasks Through Tool Use
The integration of tool use empowers LLMs to move beyond simple text generation, facilitating more complex, agentic tasks. This involves instructing the model on how to interact with external resources.
Replit’s system prompt, for instance, extensively details functionalities such as code editing and searching, language installation, PostgreSQL database management, and shell command execution.
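Capabilities like these are usually exposed to the model as declared tools. The sketch below uses the JSON-schema style tool definition common to major model APIs; the shell-command tool is a hypothetical example, and the exact schema a given provider expects may differ.

```python
# Declaring a tool the model may call, in JSON-schema style.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_shell_command",
            "description": "Execute a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {
                    "command": {
                        "type": "string",
                        "description": "The command to run.",
                    },
                },
                "required": ["command"],
            },
        },
    },
]

# The tools list is passed alongside the messages; instead of plain text,
# the model can reply with a structured call (tool name + JSON arguments),
# which the surrounding agent executes and feeds back.
print(tools[0]["function"]["name"])
```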
Insights from Analyzing Existing System Prompts
Menezes’ research into existing system prompts revealed differing priorities among developers. Platforms like Lovable, v0, and Bolt prioritize “rapid iteration,” while others, including Manus, Devin, OpenAI Codex, and Replit, focus on enabling the creation of complete applications, though their output often remains in raw code format.
This analysis highlighted an opportunity for Menezes’ startup to cater to a different audience: non-programmers seeking to build applications, provided they could address challenges related to security and access to enterprise data sources like Salesforce.
Early Success and Internal Adoption
Despite not yet reaching a multibillion-dollar valuation, Superblocks has secured significant clients, including Instacart and Papaya Global.
Internally, Menezes enforces a “dogfooding” policy, prohibiting his software engineers from developing internal tools. Instead, they are tasked with building the core product, leading business teams to create agents for their specific needs, such as lead identification using CRM data, support metric tracking, and sales engineer assignment balancing.
“Our strategy is to construct the tools we require rather than purchasing them,” Menezes explains.
Note: This article has been updated to accurately reflect the nature of the most recent funding round and the total Series A amount raised.