Imagine this: Your AI assistant reminds you to pick up coffee filters—and suggests a deal at the local shop.
Welcome to the age of ad-supported AI, where your assistant is also a marketer. It’s inevitable, practical, and fraught with challenges.
Marissa Mayer’s vision of ad-sponsored AI chatbots could make these tools universally accessible by turning conversations into contextual ad opportunities.
After all, Google built an empire by pairing searches with ads. Why shouldn’t AI chatbots do the same?
But here’s the twist: unlike search engines, AI assistants don’t just answer questions—they hold entire conversations, remember preferences, and build persistent histories. This context-rich interaction makes them more capable of delivering laser-targeted suggestions. The flip side? Privacy risks escalate. How do we ensure your chatbot doesn’t become an all-seeing billboard for advertisers?
The future of AI assistants lies in balance. Done right, ads could fund free, advanced tools that genuinely help users. Done wrong, it’s a slippery slope into manipulation and data exploitation.
The question isn’t whether this model will happen—it’s how we’ll keep it from crossing ethical lines.
--
If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.
T-Mobile’s $100 million deal with OpenAI isn’t about chatbots making small talk. It’s about hiring AI workflows to replace the grind of managing customer care. Their IntentCX platform anticipates problems, solves them, and even acts autonomously—all while giving human employees the enviable role of looking like geniuses who planned it all.
For years, companies like Palantir sent armies of engineers to map workflows and wrestle data into submission. AI flips this script. It’s no longer just a helpful tool—it’s the engineer. It learns, adapts, and connects the dots faster than your best strategist on a caffeine binge.
AI as the engineer, data as the blueprint.
OpenAI and Anthropic aren’t scaling LLMs just to win a science fair ribbon. They know the model race is plateauing—open-source rivals are catching up, and training costs are dropping like a meme stock. The real play is higher up the stack: embedding AI into enterprise operations and turning siloed data into self-improving, always-on systems that power entire departments.
Take T-Mobile. They’re not just automating customer care—they’re letting AI workflows do the heavy lifting while humans focus on high-value tasks, like rethinking strategy or quietly celebrating fewer meetings. HR, finance, logistics—they’re all ripe for this shift. Imagine AI managing recruitment pipelines, reconciling accounts, or optimizing supply chains in real time, leaving people free to tackle what machines can’t: vision, empathy, and creativity.
While industries like customer care are leading the charge, others—like healthcare and manufacturing—will face unique hurdles, from regulatory constraints to deeply entrenched legacy systems. But the potential is universal: departments won't be staffed anymore—they’ll be powered.
So, what should companies do today? Start with the data. Centralize silos, identify repetitive workflows, and experiment with AI tools that can adapt and evolve. The goal isn’t to replace people—it’s to make them exponentially more effective.
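As a concrete starting point, "identify repetitive workflows" can be as mundane as counting how often the same request category recurs in operational logs. The sketch below is a minimal illustration of that idea, not a description of any real product; the category names and threshold are invented.

```python
from collections import Counter

def find_automation_candidates(tickets, min_share=0.10):
    """Return request categories frequent enough to be worth automating.

    tickets: list of category labels, one per logged request.
    min_share: minimum fraction of total volume to qualify.
    """
    counts = Counter(tickets)
    total = sum(counts.values())
    return [
        (category, n)
        for category, n in counts.most_common()
        if n / total >= min_share
    ]

# Invented ticket volume for a month of customer care.
tickets = (
    ["reset_password"] * 40
    + ["billing_dispute"] * 25
    + ["plan_change"] * 20
    + ["other"] * 15
)
print(find_automation_candidates(tickets))
```

High-volume categories surface first; anything that clears the share threshold is a candidate for an AI workflow, and the long tail stays with humans.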
By 2030, companies won’t buy software to manage tasks. They’ll hire AI workflows to run departments. And if your company isn’t embedding AI into its core operations now, don’t worry—you’ll have plenty of time to reflect on it while your competition eats your lunch.
--
Everyone's building AI services. Every business will have its own AI assistant. But there's a problem: these AIs can't find each other.
They can't work together. Each one is rebuilding capabilities that hundreds of others already have.
In 2017, thinking about conversational AI, we predicted assistants would become gateways to services. That wasn't quite right. The interesting part isn't just the interface – it's how services discover and compose each other.
We're seeing this need emerge now. Meta open-sourced Llama, making sophisticated AI widely available. GitHub Sparks and Replit Agent are showing how natural language can generate working software. But every AI service is still an island – millions of capabilities with no way to find what's relevant. Like the web before PageRank.
What's missing isn't better AI. It's a simple protocol that lets AIs describe their capabilities in both structured and semantic terms.
This creates an entirely new kind of service discovery. Not search indexes. Not app stores. But a capability web, where AIs discover and compose each other's abilities in real time. Without this, we'll have thousands of powerful but isolated AI services, each rebuilding similar capabilities.
The first implementation will look deceptively simple. Perhaps just an actions.txt standard: service capabilities described in a structured format, plus a prompt for LLMs. Jina.ai has a great example of what this may become.
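To make the idea concrete, here is one hypothetical shape an actions.txt file might take, with a minimal parser and a naive capability match. Everything here, the field names, the services, the format itself, is invented for illustration; no such standard exists yet.

```python
# A hypothetical actions.txt: one capability per block, robots.txt-style.
# Field names and services are invented for illustration.
ACTIONS_TXT = """\
service: acme-travel
action: book_flight
inputs: origin, destination, date
description: books commercial flights and returns a confirmation code

service: acme-travel
action: cancel_booking
inputs: confirmation_code
description: cancels an existing flight booking
"""

def parse_actions(text):
    """Parse the hypothetical format into a list of capability dicts."""
    capabilities = []
    for block in text.strip().split("\n\n"):
        cap = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            cap[key.strip()] = value.strip()
        capabilities.append(cap)
    return capabilities

def find_capability(capabilities, query):
    """Toy 'semantic' match: score by word overlap with the description
    and the action name. Real systems would use embeddings or an LLM."""
    words = set(query.lower().split())
    def score(cap):
        cap_words = set(cap["description"].split()) | set(cap["action"].split("_"))
        return len(words & cap_words)
    best = max(capabilities, key=score)
    return best if score(best) > 0 else None

caps = parse_actions(ACTIONS_TXT)
match = find_capability(caps, "I need to book a flight to Tokyo")
print(match["action"])  # -> book_flight
```

The structured fields give callers the invocation contract; the free-text description gives an LLM something to reason over. That two-layer split is the whole point of "structured and semantic terms."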
But whoever builds this protocol won't just enable better AI services. They'll become the PageRank of the AI era – the capability layer of the AI stack.
--
Search engines used to find knowledge. Now they're starting to compute it.
The shift is already beginning: ChatGPT and Perplexity show us glimpses of AI synthesizing the internet's vast content. But that's just the prologue.
The real transformation happens when search stops finding answers and starts computing them.
Think of it as moving from a library to a research lab. Every query will spawn a custom analysis.
Each computation creates knowledge that didn't exist anywhere before.
This isn't just faster search. It's the difference between reading an analyst's report and having a thousand analysts crunch numbers specifically for you. Current engines find and remix the web's knowledge. The next wave will compute new knowledge, tailored to your exact question.
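The retrieve-versus-compute distinction can be shown in toy form: instead of looking up a document someone already wrote, the engine runs a computation over raw data at query time. The dataset, prices, and query here are all invented for illustration.

```python
# Toy contrast: retrieval returns a pre-written answer; computation derives
# a new one from raw data at query time. All data below is invented.

STORED_ANSWERS = {
    "average widget price": "Analysts estimate widget prices average around $10.",
}

RAW_PRICES = {"widget": [9.50, 10.25, 11.00, 10.80], "gadget": [4.00, 4.50]}

def retrieve(query):
    """Classic search: find an existing document that matches the query."""
    return STORED_ANSWERS.get(query, "no document found")

def compute(product, prices=RAW_PRICES):
    """Computed search: derive an answer that exists nowhere as a document."""
    data = prices[product]
    mean = sum(data) / len(data)
    spread = max(data) - min(data)
    return f"{product}: mean ${mean:.2f} across {len(data)} listings, spread ${spread:.2f}"

print(retrieve("average widget price"))  # canned text someone already wrote
print(compute("widget"))                 # fresh figures computed for this query
```

The retrieved answer is frozen at whatever moment it was written; the computed one is regenerated from the underlying data on every query, which is the property the essay is pointing at.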
The opportunity isn't in building better retrieval. It's in creating the infrastructure that lets AI agents turn raw data into fresh insights. That's where the next platform emerges.
We're moving from finding answers to computing them. The search bar isn't dying. It's becoming a truth computer.
--
Something interesting is happening in enterprise software: Generative AI isn't just changing how we sell – it's changing what we're selling.
The shift:
Traditional software = rigid features that need sales to explain
Generative software = adapts itself to each customer's context
Key software capabilities:
Reads company data → suggests specific workflows to automate
Watches workflow patterns → builds automations
Analyzes tech stack → automates integration of workflows into existing systems
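A toy sketch of the middle capability, watching workflow patterns and proposing automations: mine the most frequent repeated action sequence from an event log and surface it as an automation candidate. The event names and log are invented, and real systems would use far richer models; this only illustrates the shape of the idea.

```python
from collections import Counter

def suggest_automation(event_log, window=3, min_count=2):
    """Find the most common n-step action sequence in an event log.

    A sequence repeated often enough is a candidate for a one-click
    automation. Purely illustrative; event names are invented.
    """
    sequences = Counter(
        tuple(event_log[i:i + window])
        for i in range(len(event_log) - window + 1)
    )
    seq, count = sequences.most_common(1)[0]
    return (seq, count) if count >= min_count else (None, 0)

# A user repeats the same three-step chore between unrelated tasks.
log = [
    "export_report", "open_crm", "upload_csv",
    "send_email",
    "export_report", "open_crm", "upload_csv",
    "check_dashboard",
    "export_report", "open_crm", "upload_csv",
]
seq, count = suggest_automation(log)
print(f"automate {' -> '.join(seq)} (seen {count}x)")
```

Software that runs this kind of analysis on its own usage data is what lets it "build itself around" a customer instead of waiting for a sales engineer to spot the pattern.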
This isn't about sales automation. It's about implementation automation - software that understands context and builds itself around it.
The "demo vs. custom implementation" divide is disappearing. When software can reshape itself, every version becomes custom.
Still early, but worth watching how this redefines what we consider an enterprise "product".
--
‘viksit has notes’ is a collection of atomic essays on the future of AI—brief meditations on where it's headed and the changes it's bringing.
It is written from the perspective of an ML engineer and 2x founder who has built search engines, language models, and humanoid robots.