3 traps for unwary technical founders

This article was written by Anh-Tho Chuong, co-founder of Lago.


Being a founder is hard. I experience this on a daily basis.

And it’s even harder when you don’t master a topic you need to deal with. It’s easy to be impressed by a candidate, or an advisor, who name-drops shiny-looking concepts in pursuit of one goal: selling themselves.

It happens with technical candidates who oversell themselves to business founders, and the other way around: technical founders can be fooled by a varnish of knowledge or new concepts.

Here are the most common traps technical founders should not fall into when dealing with a business topic. In other words, if a candidate or vendor sells you one of these topics as the solution to a burning issue, in 95% of cases it’s a red flag.

#1 PR agencies: best way to shorten your runway

If you’re earlier than series B and have no idea how to get press exposure, a PR agency alone won’t help. Their retainers usually start at $10k/month, so costs add up quickly.

PR agencies are usually bad at crafting new angles, iterating on your positioning, and being very proactive. They might help at a later stage to amplify your reach, when you’re established and your messaging is stabilized. But even at this point, you will need 10-20% of a full-time employee to drive them.

If you don’t know where to start: reach out to internal ‘brand and communication leads’ at companies that are in your industry, one or two steps further (if you’re at seed level, learn from someone at a series A startup). They are the ones who actually do the work, and from whom you can learn.


#2 Attribution: best way to debate models instead of growing the company

Some ex-consultants love this topic and want to bring it in when they join a startup. I’ve worked at McKinsey myself, so I can relate: it’s tempting to live in a world of concepts and theory instead of getting down to the nitty-gritty of operational tasks.

What is ‘attribution’?

Attribution models are a way to ‘attribute’ a conversion (usually a sign-up or a sale) to one or several marketing channels: a TV campaign, a Facebook ads campaign, etc.

For instance, if one of your leads interacts with a Facebook ad, then with a Google AdWords ad, then talks to a Sales Executive during an event, and finally signs up after clicking on a LinkedIn ad, how do you determine the influence of each channel on the sign-up?

Is it 100% Facebook? (first touch)
Is it LinkedIn? (last touch)
Is it equally distributed? (¼ for each touchpoint)
The list can go on.
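To make the debate concrete, here is a minimal sketch (not from the article; channel names are illustrative) of how the three models above would split credit for that journey:

```python
# Hypothetical sketch of three common attribution models.
# Channel names below are made up for illustration.
def attribute(touchpoints, model="linear"):
    """Return {channel: credit} for an ordered list of touchpoints."""
    if model == "first_touch":
        return {touchpoints[0]: 1.0}   # all credit to the first interaction
    if model == "last_touch":
        return {touchpoints[-1]: 1.0}  # all credit to the final interaction
    # linear: split credit equally across every touchpoint
    credit = 1.0 / len(touchpoints)
    result = {}
    for channel in touchpoints:
        result[channel] = result.get(channel, 0.0) + credit
    return result

journey = ["facebook_ad", "google_adwords", "sales_event", "linkedin_ad"]
print(attribute(journey, "first_touch"))  # {'facebook_ad': 1.0}
print(attribute(journey, "linear"))       # 0.25 credit per touchpoint
```

Each model gives a different, equally defensible answer — which is exactly why the debate never ends.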

Defining and implementing a custom attribution model takes a lot of debating (debating = time = money) and engineering resources to implement (it’s a data engineering project).

Here’s what I recommend:

a) Keep your global cost of acquisition (CAC) under control, with a simple ratio: global acquisition spend / number of new conversions.

b) Identify ‘no-brainer’ actions: actions that are needed, and that no analysis or complex attribution model will deprioritize.

There are many other topics you can address:

• What does your first interaction with leads look like? Can it be improved? I’m thinking: how hard have you worked on your landing pages’ conversion rates? On your cold email copy? On the SEO ranking of your most viewed pages?

• How about the last touch? Have you tweaked your sales script and tested it? Have you tested different copies of your AdWords ad?

• What’s the satisfaction rate of people in contact with your Sales team at different stages of the funnel?

• If you organize events, what’s the NPS of attendees?

• Do some of your new users drop during onboarding and never come back? Have you tried to fix this?

• What does your activation rate look like after onboarding? Have you defined it and are you monitoring and improving it over time?

✅ Pros:

You can identify no-brainers and focus your efforts on direct, high-impact projects, while keeping your global cost of acquisition under control (i.e. you know you don’t spend more than X€ to acquire a new customer, regardless of which channel contributes most, and that’s ‘good enough’).

❌ Cons:

You don’t have an exact or sophisticated attribution model, nor a visualization of each lead’s journey. That doesn’t matter as long as you’re growing and your CAC is under control.


#3 A/B testing: right move for PhDs, not for startups

What are the prerequisites for a successful A/B test?

1. A large enough testing sample: basically, you need your main user base, a testing sample for version A, and a testing sample for version B. If your testing samples aren’t large enough, the results just won’t be statistically significant. In many cases, companies don’t have a large enough testing sample — I’ve seen vendors selling A/B testing for pricing to seed stage companies.

2. You should have a very specific topic to test: if your version A differs completely from version B, and the test results don’t show a real difference in performance, you won’t even know what to learn from this; and

3. You should have enough time ahead of you: to define two versions, implement them in the front-end and in the back-end, brief your team, and then wait for the feedback loop to complete. If you’re testing pricing and your sales cycle is two months, you may need six months to complete a test: one month of preparation, four months to have a few cohorts, and a few weeks to analyze the results.
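To see how quickly requirement 1 becomes a blocker, here is a back-of-the-envelope sketch (not from the article; baseline and target rates are made up) using the standard normal-approximation formula for a two-proportion test at 95% confidence and 80% power:

```python
# Hypothetical sample-size check for a two-proportion A/B test.
# z_alpha = 1.96 (95% confidence), z_beta = 0.8416 (80% power).
import math

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.8416):
    """Rough per-variant sample size to detect a lift from p_base to p_target."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))  # 8156 per variant
```

Over 8,000 conversions-eligible visitors per variant, on top of your main user base — out of reach for most seed-stage funnels, which is the whole point.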

Can you afford to wait and spend so many resources? Does someone on your team have the data, marketing and project management skills to lead this? Can you staff engineers and product managers on this?

In most cases, the answer is no. I’ve only seen very late-stage companies actually using A/B testing in their product.

How do others do it? They use surveys, polls, interviews, and ‘educated guesses’, and optimize for iteration speed. Is it 100% scientific? Not really. But in a high-ambiguity environment, are your chances of success higher if you iterate continuously for six months, or if you design only two options and wait six months for the verdict?

That’s why A/B testing is great for use cases with a quick feedback loop, such as testing an email subject line or a landing page for an ads campaign, not for the rest. And not for pricing.

Instead, if you need help with your pricing, feel free to reach out.
