Frankly, I hate the term AGI, or Artificial General Intelligence. It's not a thing. I've been around the block enough to know that intelligence is multifaceted. Machines will undoubtedly be able to automate more and more functions, but there's a false sense of there being a discrete point at which that mythical AGI will have been reached. I believe this loose thinking and hype around AGI is a distraction. Instead, let's talk concretely about AI technology, its strengths and weaknesses.
Why AGI is a Distraction for AI Entrepreneurs
The obsession with Artificial General Intelligence creates unrealistic expectations and diverts attention from practical AI implementation strategies. Intelligence isn't a monolithic capability that machines will suddenly achieve at some mythical threshold. Instead, it encompasses diverse cognitive functions that can be automated incrementally and strategically.
Key Insight
Rather than chasing the AGI mirage, successful AI startups focus on building systems that excel at specific, well-defined tasks while acknowledging and compensating for their limitations.
Why Businesses Experiment with AI but Deploy Cautiously
You see this in the enterprise. While consumer adoption of AI has set a record pace, business has been slower to adopt it. Certainly, CEOs and boards everywhere are presenting their company as being "AI first" (or planning to become that), and are experimenting heavily, sometimes with hundreds of use cases. But for all of the mass experimentation going on in enterprise, only a fraction of AI projects actually reach deployment. This boils down to two key challenges:
The Cost Challenge
Cost: The high cost of serving (let alone training) LLMs challenges the economics of business software. This challenge will be dealt with by a combination of two methods. One is using smaller LLMs (the term Small Language Models, or SLMs, is making the rounds): "tiny" sub-7B and even sub-3B parameter models. The other is adopting architectures that are more efficient than the standard transformer.
The Reliability Challenge
Reliability: LLMs produce wonderful output alongside utter nonsense. Imagine writing an investment memo or responding to a customer, and your AI is brilliant 95% of the time but produces garbage the other 5%. That is a showstopper in the enterprise.
The second challenge is particularly acute, and harder to solve. The phenomenon is inherent to the probabilistic nature of LLMs, and it's delusional wishful thinking to expect it to go away, no matter how much effort is placed on things like "alignment" and "guardrails".
AI Systems: Moving Beyond "Prompt and Pray"
I believe that we're going to wean ourselves from what I call our current "prompt and pray" modus operandi. The industry will realize that LLMs are one element of a more comprehensive AI system, which can seamlessly integrate and leverage the strengths of various AI technologies, including LLMs, retrieval, tools, and other traditional code. AI systems will offer greater control, efficiency, and reliability, especially when tackling tasks that require nontrivial reasoning, which most tasks do.
The Architecture of Effective AI Systems
AI systems that combine multiple LLMs and other tools offer a compelling solution. These systems allow for better cost and compute management by intelligently routing tasks to the most suitable resources. For example, a smaller LLM could act as a "router," directing tasks to specialized LLMs or tools for optimal efficiency. These systems can also enhance reliability and quality by incorporating checks and balances during the computation.
System Components
- Multiple specialized LLMs for different tasks
- Intelligent routing mechanisms
- Retrieval systems for grounded responses
- Traditional code for deterministic operations
- Quality control and validation layers
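The routing idea above can be sketched in a few lines. This is a toy illustration, not a production design: the handler functions and keyword matching stand in for what would, in a real system, be calls to specialized LLMs, tools, and a small routing model.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-ins: in a real system each handler would wrap a call
# to a specialized LLM, a retrieval step, or a deterministic tool.
def summarize(task: str) -> str:
    return "[summary model] " + task

def extract_fields(task: str) -> str:
    return "[extraction tool] " + task

def general(task: str) -> str:
    return "[general model] " + task

@dataclass
class Route:
    name: str
    keywords: tuple
    handler: Callable[[str], str]

ROUTES: List[Route] = [
    Route("summarize", ("summarize", "summary"), summarize),
    Route("extract", ("extract", "field"), extract_fields),
]

def route(task: str) -> str:
    """Toy keyword router; a small routing LLM would do this classification
    in practice, falling back to a general model when no route matches."""
    lowered = task.lower()
    for r in ROUTES:
        if any(k in lowered for k in r.keywords):
            return r.handler(task)
    return general(task)

print(route("Summarize this contract"))   # handled by the summary route
print(route("Book a meeting for Tuesday"))  # no match, general fallback
```

The point of the structure is that routing is ordinary code: cheap to run, easy to test, and a natural place to insert quality checks before an expensive model is ever invoked.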
Product-Algo Fit: A Strategic Framework
But as you apply these AI systems to real-world problems, you should approach this wisely. My common advice to AI startup leaders is to strive for "product-algo fit". What I mean is that while AI systems will be a dramatic improvement over barebones LLMs in terms of reliability and efficiency, they will still be imperfect; the underlying uncertainty involved in LLM calls, search, and retrieval will not completely go away.
So as an entrepreneur creating a new product, understand the strengths and weaknesses of the technology, and craft the product in a way that leverages those strengths and compensates for the imperfections. This is what I call "product-algo fit".
Principles of Product-Algo Fit
1. Acknowledge Imperfection: Design your product with the understanding that AI will make mistakes. Build in mechanisms to catch, correct, or mitigate these errors.
2. Leverage AI Strengths: Use AI for tasks where it excels (pattern recognition, content generation, data analysis) while avoiding applications where precision is absolutely critical.
3. Human-AI Collaboration: Design workflows that combine AI efficiency with human oversight and decision-making authority.
4. Gradual Automation: Start with AI assistance and gradually increase automation as confidence and reliability improve.
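Principles 1 and 4 can be combined in a simple gating mechanism: each AI output goes either to automation or to human review based on a confidence threshold, and that threshold is tightened or relaxed as measured reliability changes. The specific numbers below are illustrative assumptions, not recommendations:

```python
def dispatch(confidence: float, threshold: float) -> str:
    """Auto-accept confident outputs; send the rest to a human reviewer."""
    return "auto" if confidence >= threshold else "human_review"

def adjust_threshold(threshold: float, observed_error_rate: float,
                     target_error_rate: float = 0.02,
                     step: float = 0.01) -> float:
    """Gradual automation: lower the bar only while errors stay on target,
    and pull back toward human review the moment they don't."""
    if observed_error_rate <= target_error_rate:
        return max(0.5, threshold - step)   # automate a bit more
    return min(0.99, threshold + step)      # more human oversight

print(dispatch(0.93, threshold=0.90))  # confident enough to automate
print(dispatch(0.71, threshold=0.90))  # escalate to a person
```

Starting with a high threshold means the system begins as AI assistance and earns its way toward automation, which is exactly the gradual path principle 4 describes.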
Practical Implementation Guide
Step 1: Identify Suitable Use Cases
Look for tasks that benefit from AI's strengths while being tolerant of occasional imperfections:
- Content drafting (with human review)
- Data analysis and pattern identification
- Customer query routing and initial responses
- Document summarization and extraction
Step 2: Build Robust AI Systems
Implement the multi-component architecture discussed earlier:
- Deploy specialized models for specific tasks
- Implement routing logic to optimize cost and performance
- Add retrieval systems for factual grounding
- Include validation and error-checking mechanisms
Step 3: Design for Graceful Failure
Accept that AI will sometimes fail and design your product to handle these situations elegantly:
- Implement confidence scoring and uncertainty quantification
- Create fallback mechanisms for low-confidence outputs
- Design clear escalation paths to human operators
- Maintain audit trails for accountability
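The four bullets above can be sketched as a single fallback chain: each attempt returns an answer with a confidence score, low-confidence results fall through to the next mechanism, and anything left over is escalated to a human along with an audit trail. Everything here (the attempt functions, the 0.8 cutoff) is a hypothetical sketch:

```python
from typing import Callable, List, Optional, Tuple

Attempt = Callable[[str], Tuple[str, float]]  # returns (answer, confidence)

def handle(task: str, attempts: List[Attempt],
           min_confidence: float = 0.8) -> Tuple[Optional[str], str, list]:
    """Walk the fallback chain; escalate to a human if nothing is confident."""
    audit = []  # audit trail for accountability
    for attempt in attempts:
        answer, confidence = attempt(task)
        audit.append((attempt.__name__, confidence))
        if confidence >= min_confidence:
            return answer, "resolved", audit
    return None, "escalated_to_human", audit

# Toy attempts standing in for a primary model and a retrieval-grounded fallback.
def primary_model(task): return ("draft answer", 0.55)
def retrieval_grounded(task): return ("grounded answer", 0.91)

answer, status, audit = handle("What is the refund policy?",
                               [primary_model, retrieval_grounded])
print(status, answer)  # the low-confidence draft fell through to retrieval
```

Note that the escalation path is just another rung on the same ladder, so "failure" is a designed outcome with a record attached, not an exception.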
Success Metrics for Product-Algo Fit
Measure success not just by AI accuracy, but by overall system performance:
- Task completion rates (including human intervention)
- User satisfaction and trust levels
- Cost per successful outcome
- Time to resolution or completion
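The first and third of these metrics are straightforward to compute from operational logs; a minimal sketch, with illustrative numbers:

```python
def completion_rate(completed: int, total: int) -> float:
    """Task completion rate, counting human-assisted completions as successes."""
    return completed / total if total else 0.0

def cost_per_successful_outcome(total_cost: float, successes: int) -> float:
    """Total spend (inference plus human review) divided by successful outcomes."""
    return total_cost / successes if successes else float("inf")

# Example: 940 of 1,000 tasks completed (including human intervention),
# at a total cost of $310 for inference and review time combined.
print(completion_rate(940, 1000))             # 0.94
print(cost_per_successful_outcome(310, 940))  # cost per success in dollars
```

Folding human-review cost into the numerator is deliberate: it keeps the metric honest when "automation" quietly leans on people behind the scenes.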
The Future of AI Systems
As AI technology continues to evolve, the companies that master product-algo fit will have significant advantages. They'll build products that users trust, operations that scale efficiently, and businesses that generate sustainable value from AI technology.
The key is to remain grounded in reality about AI capabilities while being ambitious about the problems you can solve. Focus on building systems that work reliably in practice, not ones that chase theoretical perfection.