February 11, 2025

AI disrupted how we create content. Crypto will redefine how we value it.

It is abundantly clear that AI slop is here to stay. We’re in an era of cheap, disposable, shallow content so generic that it barely holds anyone’s attention anymore. Worse, the flood of AI-generated slop will only grow, fragmenting our already shattered attention, eroding trust in platforms, and burying the important stuff in an ocean of noise.

Scarcity breeds value

In this world, authenticity and exclusivity become rare commodities. Original data, unique insights, and cutting-edge research will stand out, and those who possess them will own the market for information. Paywalls, exclusive memberships, and private marketplaces aren’t new, but they’ll become more prominent when the web is awash with useless drivel. Human-curated marketplaces will likely be the new norm. Journalism, too, stands to gain, because the demand for real information will spike. Meanwhile, social media will plunge further into disrepute, thanks to algorithmic biases amplifying controversial AI noise.

The machine problem

But there’s another twist. We won’t be the only ones browsing the internet. We’ll have AI-powered agents, as OpenAI’s recent Deep Research launch showed, automating everything from reading the news to scanning for deals on marketplaces. These bots will sift through unimaginable volumes of data and may hit paywalled sources billions of times. Traditional payment rails, built for human pace and volume, simply can’t handle that volume of micro-transactions.

And that is crypto’s moment.

Storage blockchains like Arweave, paired with high-speed networks like Solana, can process enormous volumes of micropayments. Instead of wrestling with clunky subscriptions, agents will pay fractional fees for whatever data lakes they dip into.
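A toy sketch of what that could look like: an agent paying a posted per-query fee out of a wallet instead of holding a subscription. Everything here is hypothetical and settled in memory; a real system would settle the fees on-chain.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    fee: float            # fractional fee per query, in some token
    revenue: float = 0.0

    def query(self, payment: float) -> str:
        # Refuse unpaid requests, in the spirit of HTTP 402.
        if payment < self.fee:
            raise PermissionError("402: payment required")
        self.revenue += payment
        return "requested records"

@dataclass
class Agent:
    balance: float

    def fetch(self, source: DataSource) -> str:
        # Pay the source's posted fee out of the agent's wallet.
        self.balance -= source.fee
        return source.query(source.fee)

lake = DataSource(fee=0.0001)     # a ten-thousandth of a token per call
agent = Agent(balance=1.0)

for _ in range(500):              # 500 calls cost just 0.05 tokens
    agent.fetch(lake)

print(round(agent.balance, 4))    # 0.95
print(round(lake.revenue, 4))     # 0.05
```

At these fee sizes, a subscription model makes no sense and a card network’s per-transaction cost would dwarf the payment itself, which is exactly the gap the post points at.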

A world where machines buy data from machines, all day long, is closer than we think.

And ironically, AI — a force that promises to unlock all information — may end up gating more knowledge than before.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.

February 09, 2025

From parrots to prodigies: Why scaling alone won’t make AI truly smart

While industry leaders like Sam Altman and Dario Amodei tout that more compute, more data, and ever-lower loss are the keys to AGI, much of this messaging seems designed to generate PR buzz and secure funding rather than address fundamental challenges. Scaling has yielded unexpected abilities, such as improved chain‑of‑thought outputs. Yet these gains primarily come from learning the “low‑hanging fruit” — token frequencies, common word pairings, and simple grammatical structures — without fostering deep, algorithmic reasoning.

When asked to derive equations or compute complex metrics, LLMs often produce plausible-sounding but shallow responses, memorizing shortcuts without truly understanding the underlying logic.

LLMs offer emergent behaviors but with limits

LLMs learn in a continuous space, where small parameter adjustments capture statistical patterns rapidly. Techniques like chain‑of‑thought prompting enable them to simulate multi‑step reasoning, but these emergent behaviors are built on heuristics rather than systematic, step‑by‑step deduction. For instance, when challenged to derive the formula for capacitance between two wires or estimate FLOP requirements, many models generate generic, pattern-based answers that lack a clear logical derivation. They excel at regurgitating learned patterns but struggle to organize complex reasoning in a structured, transparent way.
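A deliberately crude illustration of the gap described above: a “model” that memorizes input–output pairs answers its training set perfectly but has nothing to say about unseen inputs, while an explicit rule generalizes to anything. This is an analogy for pattern recall versus algorithmic reasoning, not a claim about how any particular LLM is implemented.

```python
# "Training data": every single-digit addition, memorized as a lookup table.
train = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a: int, b: int):
    # Pattern matching: recall the answer only if this exact pair was seen.
    return train.get((a, b))

def algorithm(a: int, b: int) -> int:
    # Systematic reasoning: apply the addition rule itself.
    return a + b

print(memorizer(3, 4))      # 7 -- seen in training
print(memorizer(123, 456))  # None -- never seen, and no rule to fall back on
print(algorithm(123, 456))  # 579 -- the rule generalizes to any input
```

The memorizer’s loss on its training distribution is zero, yet it has learned nothing that transfers, which is the distinction between lowering loss and acquiring an algorithm.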

Neuro-symbolic approaches: Building and learning from a dynamic world model

To overcome these limits, researchers are exploring neuro‑symbolic methods that blend neural network adaptability with explicit, rule‑based reasoning. The V-JEPA framework, championed by Meta’s chief AI scientist Yann LeCun, exemplifies this approach by processing video to “document the world” in real time. Instead of relying solely on pre‑labeled text data, V-JEPA builds a dynamic internal model that captures interactions and causal relationships. This world model enables the system to derive and explain complex relationships and equations, achieving the kind of structured reasoning that LLMs currently lack but humans excel at.

Beyond brute force: The need for structural innovation

Additional compute and memory may further lower a model’s loss by enabling it to learn even more granular patterns, but they won’t enable it to learn the sophisticated algorithms required for deep reasoning. True AGI demands a fundamental shift: rethinking training objectives and architectures to integrate structured, neuro‑symbolic reasoning with neural learning.

Only by combining scaling with structural innovation can we move from parroting patterns to achieving prodigious, human-like intelligence.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.

February 08, 2025

AI empowers developers: Entering the era of augmentation, not obsolescence

There’s a clamor that AI is going to crash software developer salaries and render them obsolete. Some companies are already saying they won’t hire junior or mid-level engineers this year, or that most of their code is now churned out by AI. But here’s what they’re missing.

Human creativity, spontaneous initiative, and ethical judgment remain irreplaceable.

Consider the advent of spreadsheets. By automating routine calculations, spreadsheets eliminated roughly 400,000 accounting clerk jobs in the US. Yet they also shifted the focus to higher-level tasks — financial analysis, forecasting, and strategic decision-making — creating around 600,000 new roles. In short, spreadsheets didn’t replace humans; they elevated their work.

Similarly, AI is set to take over the grunt work of coding. It will handle the monotonous, repetitive tasks that bog us down, freeing developers to focus on system design, integration, and ethical oversight. The alarmist notion that AI will slash salaries and displace developers oversimplifies a much more nuanced reality. Technology doesn’t simply replace humans — it augments our capabilities and channels our efforts into higher-value roles that demand creativity and strategic insight.

The human edge: Hannah Arendt’s vision of agency

Hannah Arendt, in The Human Condition, wrote: “Action is the only activity that goes on directly between men without the intermediary of things or matter.” The unique, unpredictable nature of human initiative — the spark of creativity and the capacity for ethical judgment — cannot be mechanized. AI may generate code, but it cannot conceive a vision, challenge assumptions, or navigate the moral complexities of modern systems.

Going beyond automation

Far from heralding a future of job loss and salary collapse, AI is poised to elevate our roles. Just as spreadsheets freed accountants to become strategic advisors, AI will empower developers to transcend routine coding. We’re entering an era where our work is defined not by the drudgery of repetitive tasks, but by our capacity to innovate, integrate, and lead. In this brave new world, our creative, high-level contributions ensure that we remain indispensable: not despite technology, but because of it.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.

February 07, 2025

Replicating the calculated madness of human brains: Can we teach AI to make irrational decisions?

Chaos may be the catalyst that pushes AI to the next level.

Ten years ago, I asked a question: Can there be an algorithm for creativity? I defined creativity as “the ability to generate unique and novel explanations for events that can’t be deduced from the past”.

Fast forward to today. While AI has made exponential progress, it’s still trapped in the past — optimizing, predicting, reinforcing patterns based on historical data. We reward AI for getting things “right” and penalize it for deviation. But if every decision is logical, where does creativity come from?

The power of irrationality

Human history is shaped by those who ignored conventional wisdom — founders betting on unproven technologies, scientists challenging dogma, explorers risking everything on a hunch.

Irrationality isn’t random; it’s the engine of serendipity. Our cognitive biases — overconfidence, risk-seeking, contrarianism — have led to paradigm shifts. Space exploration was once seen as reckless. Investing in electricity was a gamble. AI, as it exists today, would never have made those leaps. But what if we built an AI that could?

A system for betting on the impossible

One approach: hybrid models that blend rational analysis with an “irrational module”, inspired by the brain’s dual-process system. System 1 makes intuitive, gut-driven calls; System 2 is slow, deliberate, and rational. An AI trained on both could inject creative risk where caution normally prevails.
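A minimal sketch of that dual-process blend: a “System 2” module that exploits known scores, a “System 1” module that bets on the least familiar option, and a hybrid policy that occasionally lets the irrational module override the rational one. The candidates, scores, novelty values, and the 10% override rate are all made up for illustration.

```python
import random

def rational_pick(candidates):
    # Deliberate: choose the option with the best known score.
    return max(candidates, key=lambda c: c["score"])

def irrational_pick(candidates):
    # Intuitive leap: choose the option least like anything tried before.
    return max(candidates, key=lambda c: c["novelty"])

def hybrid_pick(candidates, epsilon=0.1, rng=random):
    # Mostly exploit known data, but with probability epsilon take a bet.
    if rng.random() < epsilon:
        return irrational_pick(candidates)
    return rational_pick(candidates)

compounds = [
    {"name": "known-analog", "score": 0.9, "novelty": 0.10},
    {"name": "safe-variant", "score": 0.7, "novelty": 0.30},
    {"name": "wild-config",  "score": 0.2, "novelty": 0.95},
]

random.seed(0)
picks = [hybrid_pick(compounds)["name"] for _ in range(1000)]
print(picks.count("known-analog"))  # the large majority: rational choices
print(picks.count("wild-config"))   # roughly 10%: creative bets
```

Readers familiar with reinforcement learning will recognize this as epsilon-greedy exploration; the difference the post argues for is that the “exploratory” module would itself be trained, on the shape of past breakthroughs, rather than picking uniformly at random.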

Imagine an AI for drug discovery. The rational module identifies viable compounds based on known data. The irrational module, trained on past scientific breakthroughs, proposes radical, unexpected configurations. Many might fail. But one might unlock an entirely new class of therapeutics. We saw a glimpse of this with AlphaGo’s legendary Move 37: a move no human would have made, but one that redefined the game.

Balancing chaos and control

Risk comes with failure. An AI trained to take irrational bets must also learn when to pull back. Adaptive safeguards, real-time risk monitoring, and human oversight will be critical.

But I think it’s time to abandon the myth that rationality is the only path to success. Let’s build machines that, like us, take leaps into the unknown and unlock a future we can’t yet imagine.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.

February 07, 2025

The rise of 1 billion casual developers: Software is no longer a product. It’s a medium

For decades, software was built by professionals. Now, a billion people are making it — without realizing they’re developers.

Casual developers aren’t engineers. They’re educators automating lesson plans, small business owners tweaking Notion databases, and retirees building book review apps. They structure data, automate workflows, and generate scripts — programming without writing a single line of code.

Existing tools are behind the curve

Most coding tools aren't built for casual developers; they're built for professionals. Platforms like Replit and Lovable are attracting unexpected users (artists making interactive experiences, 75-year-olds building reminder apps), but the next step is missing: tools that make app creation as intuitive as posting on Instagram.

These people aren’t waiting to become developers. They’re already building. They just need better tools to match their instincts.

Software becomes social

When anyone can create and remix software, apps stop being products. They become remixable, hyper-personalized — more like content than code.

Instead of downloading apps, you’ll stumble upon them in your feed. Just as TikTok changed video and Instagram changed photography, AI will turn software into a medium for everyday expression.

If software becomes expression, who profits?

SaaS subscriptions and app stores won’t work anymore. Software will monetize like content:

  • Creators selling templates and remixable versions.
  • Micro-transactions for custom features.
  • Communities funding the tools they love.

When software shifts from corporations to individuals, the value follows.

The real winners will be the platforms that make creation, discovery, and remixing effortless.

--

If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.