Why AI Won't Save Your PLG Strategy (But It's Changing How I Find It)

I built Cont3xt.dev in a few weeks using AI. That was the easy part. The hard part? I've got 40 users and I'm still figuring out if anyone will actually pay for it.

Here's what nobody tells you about the "AI makes building easy" narrative: AI helped me build the platform fast. It's not helping me figure out if it should exist. My outreach emails go to spam. I'm at 5% visitor-to-sign-up conversion. I'm talking to users one by one trying to understand what they actually need. This is the reality of product-led growth in the AI era - the code is trivial, everything else is still hard.

The Myth Everyone Believes

Here's the narrative floating around right now: AI makes coding fast, therefore you can iterate fast, therefore product-led growth is easier. Build, test, iterate at lightning speed. The future of indie SaaS is here.

This is not accurate. PLG was never bottlenecked by coding speed. It's bottlenecked by understanding human behaviour.

You know what actually takes time? Figuring out why someone signs up but never creates their first rule. Understanding why developers love the concept but don't actually connect their AI tools. Discovering that your perfectly crafted onboarding flow loses 80% of users at step three. These are psychology problems, not engineering problems.

I can rebuild my entire onboarding sequence in a day now. That's genuinely impressive. But I still don't know what onboarding sequence will actually work, because that requires understanding psychology, not writing code. The speed of implementation is irrelevant when you don't know what to implement.

Product-led growth requires understanding activation moments (when users experience value), designing viral loops (how users bring others), optimizing onboarding (the path from curiosity to commitment), and pricing psychology (what pain justifies payment). None of these are coding problems. AI can write the code that implements your hypothesis faster than any human. It can't tell you which hypothesis to test.

I've written about AI for coding before, focusing on individual productivity. But when you're trying to find product-market fit, the bottleneck isn't your coding speed. It's your understanding of the problem.

What I'm Actually Struggling With Right Now

Let me be specific about where I am with Cont3xt.dev, because I think the honest reality is more useful than pretending I've figured it out.

Finding the Activation Moment (I Don't Know Yet)

My current hypothesis: activation means someone makes their first query to the platform through their AI tool. That's when they get value, when the context actually helps them code. But getting there requires signing up, creating rules, installing MCP, connecting to their AI tool, and actually making a query that benefits from context. That's a lot of steps for an unclear payoff.

I'm watching people sign up and create rules, but I don't have visibility into whether they're actually using it. The conversion from "created an account" to "got value" is murky, and that's the metric that actually matters. Everything before that is vanity.
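
The instrumentation gap is fixable, and AI will happily write the fix. As a minimal sketch, assuming a hypothetical funnel_events table and a pgx connection pool (not Cont3xt.dev's actual schema), it looks roughly like this:

```go
package analytics

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// RecordEvent writes one funnel event per user action. The table and event
// names are hypothetical; the point is to log every step between sign-up and
// the first context-assisted query so drop-off becomes visible.
func RecordEvent(ctx context.Context, db *pgxpool.Pool, userID, event string) error {
	_, err := db.Exec(ctx,
		`INSERT INTO funnel_events (user_id, event, occurred_at) VALUES ($1, $2, $3)`,
		userID, event, time.Now().UTC())
	return err
}

// Called from the MCP handler when a query actually uses stored context:
//   _ = RecordEvent(ctx, db, userID, "context_query_served")
```

Writing that takes minutes. Deciding which events actually represent "got value" is the part that doesn't compress.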

Here's what I can rebuild in an afternoon with AI: different onboarding sequences, alternative UI flows for rule creation, new landing page copy, modified feature sets. Here's what AI can't tell me: which step actually matters, why people drop off, what "value" looks like to them, whether I'm measuring the right thing. The gap between these two lists is where the real work lives.

I've rebuilt the onboarding flow three times based on hunches. Each rebuild takes a day. Figuring out if the rebuild worked takes a week of watching user behaviour. AI speeds up the building, not the learning. That's the entire problem with assuming AI solves PLG - it optimizes for the wrong bottleneck.

The Single Engineer vs Team Problem

I'm testing a specific hypothesis: sell to individual engineers who then bring their teams. The reasoning is sound - I'm one person running an indie project, I can't do enterprise sales cycles or six-month procurement conversations. So the bet is classic bottom-up PLG: an engineer tries it, finds it useful, invites their team.

The question I'm wrestling with is fundamental: is context management an individual problem or a team problem? Because if I've got this wrong, the entire product strategy is backwards.

If it's an individual problem, then people should be happy using Cont3xt.dev solo. They'll upgrade for more storage, more rules, more features. The value is personal. But when I talk to users, the value proposition feels stronger when I discuss teams sharing context.

I've tested different landing page copy - "Build faster with your team's context" versus "Never explain your tech stack to AI again." The conversion rates are similar, which either means I haven't found the right framing or I'm attracting the wrong audience. AI helped me test seven different value propositions in two weeks. It didn't tell me which one resonates, because that requires talking to humans and understanding their actual problems.
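
The mechanics of a test like that are the trivial part. Here's a sketch of deterministic variant assignment, assuming an anonymous visitor ID (illustrative, not my exact setup):

```go
package landing

import "hash/fnv"

// Headline variants under test. The copy is real; the assignment mechanism
// below is an illustrative sketch, not the exact setup behind Cont3xt.dev.
var variants = []string{
	"Build faster with your team's context",
	"Never explain your tech stack to AI again",
}

// VariantFor hashes an anonymous visitor ID so the same visitor always sees
// the same headline, which keeps conversion numbers comparable per variant.
func VariantFor(visitorID string) string {
	h := fnv.New32a()
	h.Write([]byte(visitorID))
	return variants[int(h.Sum32())%len(variants)]
}
```

Writing this takes minutes. Knowing which headline belongs in that slice is what takes weeks of talking to people.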

Free vs Paid (Still Figuring It Out)

My current free tier includes basic rule creation and limited queries. Paid adds more storage, team features, priority support. It's a standard freemium model because standard models exist for a reason - they work often enough to be worth testing.

But I don't have enough volume yet to know if this split is right. What's the pain point that triggers an upgrade? Running out of storage? Wanting to share with teammates? Something else entirely? I'm at 40-50 users with maybe 5-10 actively using it. I need more data to know what works, but I need to find the right users first to get that data. Classic chicken-and-egg problem that AI doesn't solve.

The advantage AI provides: I can A/B test pricing pages in hours instead of days. The limitation AI can't overcome: I need people at the pricing page first. Getting people there is a distribution problem, a messaging problem, a product-market fit problem. None of those are engineering problems.

What AI Actually Changed

Let me be clear about where AI genuinely helps, because there is real value here.

Testing Hypotheses is Ridiculously Cheap

I've rewritten the landing page seven times in two weeks. Each iteration tests a different value proposition, different target audience, different messaging angle. The cost per iteration breaks down to 2-3 hours of my time, zero development cost since I'm doing it myself, and low opportunity cost because I can test and learn quickly.

Without AI, each landing page iteration would take a day or more - design, implementation, mobile responsiveness, cross-browser testing. Now it's an afternoon. I rebuilt the onboarding flow completely three times. First version was too complex and lost people. Second version was too simple and didn't explain value. Third version tried to find the middle ground and probably still isn't right. Each rebuild took a day instead of a week.

The tactical advantages compound: I added analytics tracking, modified the sign-up flow, changed the MCP installation instructions, experimented with different rule templates, adjusted the dashboard layout. All things I could test in days instead of weeks or months. But here's the critical limitation - I still needed to know what to test. AI gave me the ability to test fast, but it didn't give me the intuition about which tests matter. That intuition comes from talking to users, watching behaviour, understanding psychology.

The traditional approach: pick one hypothesis, build it carefully, hope it works. If it doesn't, you've spent weeks or months finding out you were wrong. The AI-assisted approach: test ten hypotheses quickly, learn from all of them, keep what works. You're still wrong most of the time - I certainly am - but being wrong costs days instead of months. That's not a minor difference, it's a structural advantage that changes the economics of indie SaaS entirely.

Solo Founder Viability (With Tradeoffs)

I built Cont3xt.dev as a solo founder - backend in Go, frontend in Vue, vector search with pgvector, MCP integration, payment processing, analytics. Previously, this would have required a small team or significant contractor expense. The technical barriers that used to require multiple specialists have collapsed.

AI filled in the gaps in my knowledge efficiently. I'm stronger on backend than frontend, so AI helped with the Vue components and styling. I hadn't used pgvector that much before, so AI helped me implement the hybrid search correctly. I needed to integrate Stripe for payments, and AI made that faster. The technical feasibility of solo founder SaaS has fundamentally changed.
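
To give a sense of what "AI helped me implement the hybrid search" means in practice, here's a simplified sketch of the query pattern: pgvector cosine distance blended with Postgres full-text rank. Table names, columns, and the 0.7/0.3 weighting are illustrative, not the production query.

```go
package search

// hybridSearchSQL blends semantic similarity (pgvector cosine distance via
// the <=> operator) with keyword relevance (Postgres ts_rank). $1 is the
// query embedding, $2 the raw query text, $3 the user ID.
const hybridSearchSQL = `
SELECT id, content,
       0.7 * (1 - (embedding <=> $1::vector))               AS vector_score,
       0.3 * ts_rank(to_tsvector('english', content),
                     plainto_tsquery('english', $2))        AS text_score
FROM rules
WHERE user_id = $3
ORDER BY (0.7 * (1 - (embedding <=> $1::vector)))
       + (0.3 * ts_rank(to_tsvector('english', content),
                        plainto_tsquery('english', $2))) DESC
LIMIT 10;
`
```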

But there's a tradeoff here that most content ignores: being solo means you're learning PLG solo. No co-founder to argue with about target audience. No team to discuss which metrics actually matter. No second opinion on whether that landing page copy makes sense to anyone outside your head. You move fast, but you move fast alone. AI gives you capabilities, but it doesn't give you perspective. That's probably the biggest hidden cost of AI-accelerated solo founder life - the echo chamber risk is real.

Can Afford to Fail (Multiple Times)

The economic shift is straightforward: Cont3xt.dev cost me a few weeks of evenings and weekends. Server costs are minimal at current scale. No salary to pay, no office to rent, no external funding required. If this doesn't work, I can pivot to something adjacent or try a completely different idea. The cost of failure is my time and about £200 in infrastructure expenses.

Old model: Get funding, hire team, spend six months building, launch, find out if it works. Cost of failure: £50-100k and a year of runway consumed. This model forced you to be right on the first or second attempt. New model: Build yourself with AI, launch in weeks, test quickly, pivot if needed. Cost of failure: A few weekends and coffee money. This model lets you be wrong ten times before you need to worry about runway.

This changes the game for indie founders fundamentally. Not because AI makes you right, but because AI makes being wrong affordable. I haven't found product-market fit yet. But I can keep testing because each test is cheap. That's not nothing - it's actually the entire point.

The Cont3xt.dev Hypothesis Log

Let me document what I'm actually testing right now, because the process of finding PMF is more useful than pretending I've found it. This is the messy reality of PLG that most content skips over.

Hypothesis 1: Developers Will Try Free First

The bet: No one buys developer tools without trying them first. A free tier removes friction and gets people in the door. This isn't revolutionary; it's just acknowledging that developers are inherently skeptical of tools they haven't tried themselves - and rightfully so.

What's included in free: Basic rule creation, a generous query limit, MCP connection. I'm betting that a generous query limit matters more than unlimited storage, because queries represent actual value while storage is just a technical constraint.
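
Enforcing that split is, again, the easy part. A rough sketch of a free-tier check, reusing the hypothetical funnel_events table from earlier and an illustrative monthly allowance:

```go
package quota

import (
	"context"
	"errors"

	"github.com/jackc/pgx/v5/pgxpool"
)

// ErrQueryLimitReached is returned when a free-tier user exhausts their
// monthly query allowance.
var ErrQueryLimitReached = errors.New("free tier query limit reached")

const freeMonthlyQueries = 500 // hypothetical allowance, not the real number

// CheckFreeTier counts this month's context queries for a user and rejects
// the request once the allowance is used up. Paid users skip this check.
func CheckFreeTier(ctx context.Context, db *pgxpool.Pool, userID string) error {
	var used int
	err := db.QueryRow(ctx,
		`SELECT count(*) FROM funnel_events
		 WHERE user_id = $1
		   AND event = 'context_query_served'
		   AND occurred_at >= date_trunc('month', now())`,
		userID).Scan(&used)
	if err != nil {
		return err
	}
	if used >= freeMonthlyQueries {
		return ErrQueryLimitReached
	}
	return nil
}
```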

What I'm measuring: Visitor to sign-up conversion (sitting at ~5% currently), sign-up to rule creation rate, rule creation to actual usage. The funnel matters more than any individual metric.

What I'm learning: People sign up more readily than I expected, but getting from sign-up to real usage has more friction than I thought. The gap isn't price, it's something else - probably either value clarity or technical friction in the MCP setup.

Next experiment: Reaching out to people who signed up but didn't create rules, understanding the actual barrier. This requires manual work that AI can't help with - talking to humans, understanding their context, discovering their real objections.

Hypothesis 2: Individual Engineer → Team is the Right Path

The bet: Engineers discover tools, prove value, then bring their teams. I can't do top-down enterprise sales as a solo founder, so this has to work or I need a different product.

How I'm testing: Landing page focuses on individual value ("your context") but highlights team features prominently. Watching whether people create teams organically or stay solo indefinitely.

What I'm seeing: Most users are solo right now. I haven't seen the team adoption pattern just yet. Either I'm too early in the adoption curve or the hypothesis is fundamentally wrong. The data isn't conclusive yet, which means I need more users before I can really know.

The question: Am I solving an individual problem that happens to work better with teams? Or a team problem that individuals won't adopt alone? This distinction matters enormously for positioning, pricing, and feature prioritization.

Next experiment: Direct outreach to users asking if they'd want their team on the platform, understanding what would actually trigger that decision. Again, this is human work that requires conversation and context.

Hypothesis 3: Activation = Using It, Not Just Creating Rules

The bet: Real value comes from AI queries that benefit from your context, not from just storing rules in a database. Storage is a means to an end, not the end itself.

What I'm measuring: Sign-ups → Rule creation → MCP connection → Actual queries. Trying to understand the full funnel and where it breaks down most severely.
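
Given event-level instrumentation like the sketch earlier, the aggregation itself is a single query. The hard part is deciding which events count as a stage. Event names here are the same hypothetical ones:

```go
package analytics

// funnelSQL aggregates the events from the instrumentation sketch earlier
// into per-stage user counts, one column per funnel step.
const funnelSQL = `
SELECT
  count(DISTINCT user_id) FILTER (WHERE event = 'signed_up')            AS signed_up,
  count(DISTINCT user_id) FILTER (WHERE event = 'rule_created')         AS created_rule,
  count(DISTINCT user_id) FILTER (WHERE event = 'mcp_connected')        AS connected_mcp,
  count(DISTINCT user_id) FILTER (WHERE event = 'context_query_served') AS queried
FROM funnel_events;
`
```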

The problem: My visibility into "actual queries" is limited. I can see requests, but I can't see the user's perception of value. Did that query actually help them? Did the context make a difference? These questions require user research, not analytics.

Current funnel: ~5% visitor to sign-up, a higher rate from sign-up to rule creation, and no clear picture of rule creation to actual usage. The gaps in my knowledge here are probably more important than what I do know.

Next experiment: Adding more visible feedback when context is being used. Show users "your rule just helped with this query" in real-time. Make the value explicit rather than implicit.
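
Concretely, that probably means the query response carrying metadata about which rules contributed, something shaped roughly like this (a hypothetical response shape, not the current API):

```go
package mcp

// QueryResult is a hypothetical response shape for the experiment above:
// alongside the assembled context, return which rules contributed so the
// dashboard can show "your rule just helped with this query".
type QueryResult struct {
	Context    string   `json:"context"`     // the assembled context block
	RulesUsed  []string `json:"rules_used"`  // IDs of the rules that matched
	MatchScore float64  `json:"match_score"` // how strongly the top rule matched
}
```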

What's Not Working Yet

Let me be honest: I haven't cracked distribution. Traffic to the site is low - maybe a few hundred visitors per week. Sign-ups are growing but slowly. The math is straightforward: at 5% conversion with a few hundred weekly visitors, I'm getting single-digit sign-ups per week. That's not a path to anything meaningful without dramatic changes.

I'm doing scrappy, manual outreach. Talking to people on X, Reddit, anywhere developers congregate. Finding relevant conversations and adding value before mentioning the product. It's working better than nothing, but it's not scalable. My outreach emails are going to spam - literally ending up in spam folders, not metaphorically. I'm learning to work around this by engaging in communities first, building relationships before asking people to try the product. It's the right approach, but it's slow.

The gap between what I can build and what I can distribute is massive. I can build and iterate quickly on features, onboarding flows, landing pages. I haven't figured out how to get in front of the right people at scale. That's not an AI problem, that's a distribution problem. AI can help me write better copy, but it can't tell me where my users actually are or how to reach them effectively.

I'm testing content marketing through articles like my piece on solving the AI context problem, direct outreach, community participation. Nothing's really taken off yet. The advantage: because iteration is cheap, I can keep testing different approaches without running out of runway. The disadvantage: I still haven't found the distribution channel that actually works at scale.

What I'm Learning About PLG in the AI Era

After a few months of this, patterns are emerging. Some confirm what I suspected; others surprised me. All of them matter more than the speed at which I can write Vue components.

1. AI Speeds Up Testing, Not Learning

I can test 10 different onboarding flows in the time it used to take to test one. That's genuinely valuable and shouldn't be dismissed. But I still need to interpret the results, understand why users behave the way they do, form hypotheses about what might work better. The bottleneck moved from "how fast can I build" to "how fast can I understand." AI solved the first problem completely. It barely touched the second.

I'm talking to users one by one, asking why they signed up, what they were hoping this would do, why they didn't create any rules. These conversations take time and require context that AI doesn't have and can't synthesize. The speed of learning is still human-speed, even if the speed of building is AI-speed. That asymmetry is the entire challenge.

2. The New Advantage: Fail Faster

The old model forced careful planning: build one thing carefully, launch it, hope it works. If it doesn't, you've burned months finding out you were wrong. The new model permits recklessness: build, test, learn, pivot, repeat. Test ten ideas in the time it used to take to test one. The quality of each test might be lower, but the quantity more than compensates.

I'm wrong about most things - my initial landing page copy didn't resonate, my first onboarding flow was too complex, my assumptions about who would use this were off. But being wrong cost me days, not months. So I can afford to be wrong, learn from it, and try something different. This is the actual advantage of AI for indie founders: not that you're more likely to be right, but that being wrong is cheaper. The economic equation of experimentation has fundamentally changed.

3. What Still Takes Time

Some things AI doesn't accelerate at all, and pretending otherwise creates false expectations. Understanding your users requires talking to people, watching their behaviour, understanding their context. This is human work that doesn't compress. Finding the activation moment requires observing patterns over time, interpreting data, making judgement calls. AI can surface data, but it can't tell me what it means in context.

Crafting messaging that resonates requires testing and intuition developed through repeated interaction with your specific audience. I can iterate on copy quickly, but knowing which message resonates requires market knowledge that only comes from time and exposure. Building distribution means finding where your users are, earning their attention, creating content that spreads. These are fundamentally human challenges that exist in the same problem space as they always have.

The code is the easy part now. Everything else is still hard, and in some ways harder because the code being easy creates unrealistic expectations about everything else.

4. The Indie Founder Opportunity

The barrier to entry for testing indie SaaS ideas has collapsed, and that changes who can play the game. You can test ideas that wouldn't get VC funding because they're too niche or too uncertain. If Cont3xt.dev works, great. If not, I pivot to something adjacent. The cost of exploration is manageable.

You can pivot without permission - no board to convince, no investors to update, just direct decision-making based on what you're learning. You can learn in public, writing articles like this one admitting you haven't figured it out yet. That builds an audience of people following the journey, which has its own value independent of the product.

You can afford to be wrong multiple times before you need to worry about runway. Each failed experiment costs me a few days. I've got dozens of attempts before I run out of patience or money. That's a structural advantage that changes the risk calculus entirely.

The Framework

Let me give you something practical if you're in the same boat. The timelines are based on my actual experience and conversations with other indie founders doing similar work.

Traditional Solo Founder Timeline:

  • Idea validation → 3 months (validate thoroughly before building because rebuilding is expensive)
  • MVP development → 6 months (build carefully, limited resources mean you get one shot)
  • First customers → 3 months (hope people find it through organic channels)
  • PMF search → 12 months (pray you have enough runway to figure it out)
  • Total: 24 months, probably need funding to make it work

AI-Assisted Solo Founder Timeline:

  • Idea validation → 1 month (validate while building because rebuilding is cheap)
  • MVP development → 2-4 weeks (AI handles implementation, focus shifts to product decisions)
  • First customers → 1-3 months (still human work, still requires distribution strategy)
  • PMF search → 6-12 months (faster iteration enables more experiments, but understanding markets takes time)
  • Total: 8-16 months, bootstrappable with minimal external capital

What changed: Build time collapsed from months to weeks. This is the dramatic improvement everyone focuses on because it's measurable and obvious.

What didn't change: Understanding users, finding distribution channels, discovering product-market fit. These are still fundamentally human problems that require human timescales. AI can't compress market learning.

The old risk: Running out of money before finding PMF. This forced conservative decision-making and careful resource allocation.

The new risk: Running out of motivation before finding PMF. The financial risk dropped dramatically, but the psychological risk is still there. Cheap failure is still failure, and repeated failure drains motivation regardless of cost.

Old cost of failure: £20-50k and 6-12 months of concentrated effort. This meant you needed to be mostly right on your first or second attempt.

New cost of failure: £200 and 4-6 weeks of part-time work. This means you can be wrong ten times before you approach the old cost of failure once.

Old number of attempts before giving up: 1-2 attempts (ran out of money or patience after significant capital investment).

New number of attempts before giving up: 10+ attempts (run out of patience before running out of capital).

The question isn't "Can I afford to build this?" anymore. The question is "Am I patient enough to find what works?" That's a fundamentally different constraint that favours different personality types and approaches.

The Honest Reality

I don't know if Cont3xt.dev will succeed. I've got 40 users, unclear activation metrics, untested conversion mechanics, and distribution that doesn't scale yet. I haven't found product-market fit. The landing page is on its eighth iteration. I'm still figuring out who my actual user is and what pain I'm actually solving. My outreach emails go to spam folders.

But here's what's different from trying this five years ago: I can keep testing. Each failed hypothesis costs me a week instead of a quarter. Each pivot costs me days instead of months. I rebuilt the onboarding flow three times. I rewrote the landing page seven times. I tested different value propositions, different target audiences, different feature sets. All in the time it used to take to launch once and hope you got it right.

AI didn't give me the answers. It gave me the ability to test cheaper and faster. That's not nothing, but it's not everything either. The hard part of PLG isn't the building - it's the learning. Understanding why someone signs up but doesn't activate. Figuring out what pain is severe enough to justify payment. Discovering how to get in front of the right people at scale. These are human problems that require human solutions.

AI speeds up the iteration cycle dramatically. But it doesn't tell you what to iterate on, and that's the harder problem. If you're building something similar - trying to figure out if your product has legs - the advantage isn't that AI makes you right. It's that AI makes being wrong cheaper. That's a structural shift that changes the economics of indie SaaS, but it doesn't change the fundamental challenge of understanding markets and users.

I'm learning PLG by doing PLG. And the fact that I can test, fail, and iterate this quickly means I might actually figure it out before I run out of motivation. That's the actual change - not that success is easier, but that persistence is cheaper.


Try Cont3xt.dev if you're dealing with AI tools that don't understand your team's context. And if you figure out my PLG strategy before I do, seriously, let me know.


Need help with your business?

Enjoyed this post? I help companies navigate AI implementation, fintech architecture, and technical strategy. Whether you're scaling engineering teams or building AI-powered products, I'd love to discuss your challenges.

Learn more about how I can support you.
