Pivot Series ‒ Part 1: Why we moved away from the no-code reverse ETL space

Earlier this year, we shared our pivot experience on Sifted. Since then, I’ve received dozens of messages from other founders asking me for more details. This is the first article in a series that traces our journey.

When we founded Lago, we were operating in a different space from billing: our product was a no-code data tool for growth teams. Therefore, people often ask why we moved away from our initial space.

Why we chose this space

My co-founder Raffi and I met at Qonto. I joined the B2B neobank pre-product and later became VP Growth. Raffi joined shortly after me and led our Data Intelligence and Growth Engineering teams. We grew the company from zero to tens of millions of euros in ARR.

Growth there meant anything related to go-to-market, including paid acquisition, SEO, partnerships, activation, business intelligence, product marketing, branding, public relations and events. Our two main KPIs were our monthly recurring revenue (MRR) and brand awareness.

After the pandemic, we were ripe for a fresh start (we love the early-stage phase) and wanted to build something together. Our collaboration was the starting point. In true Y Combinator fashion, we looked for ideas related to pain points we had experienced first-hand.

We had spent a lot of resources at Qonto to extract, segment and sync customer data from different systems (back-end, front-end, payment data) with our growth tools (e.g. emailing, ads). This work was crucial as a big part of Qonto’s revenue comes from user activity and interchange fees. In short, this means that the more you spend with your Qonto card, the more money Qonto earns.

In fintech, customers can drop off easily, so user activation is key: there are many verification steps to go through (the ‘know your customer’ process: identity and bank verification, document checks, etc.) and each of them is an opportunity to drop off and never use the platform again.

To segment and sync user data, we had implemented an imperfect system based on Segment Personas and a lot of manual work performed by engineers who were fully allocated to go-to-market projects. Their job was to maintain our system and make sure we had clean and accurate data for our marketing campaigns. It was continuous work: we had hundreds of marketing campaigns running simultaneously, and as the company grew, we kept iterating and launching new ones: for a new feature, a new country, a more granular approach for each city, etc. Segment Personas was a robust product, but it was very expensive and not really user-friendly, so we ended up developing an extra layer on top of it.

Even after we left Qonto, other companies kept coming to us to replicate our growth tool stack, but they did not have the engineering resources to do so. We started to design a solution for them and created a landing page describing our project. More than 1,000 people registered on our waiting list. On paper, this opportunity looked like a no-brainer:

• A field in which we were experts;

• A huge space: growth teams handling data, with direct impact on revenue; and

• An audience that we could activate.

So we went for it! In March 2021, we officially incorporated the company, applied to Y Combinator with a Figma prototype and got into the June 2021 batch.

What we built

Here is an extract from our investment memo:

The Problem

All growth teams need to segment and synchronize customer data across dozens of data sources to improve marketing, product experience, and sales, but the traditional tools that do this cost hundreds of thousands of dollars a year, and require dedicated engineers to implement (for example Fivetran, Snowflake, Hightouch, and Census).

Lago solves this by letting growth teams do this themselves, without having to code. We offer a spreadsheet-like no-code interface that lets them manage and plan their audiences without resorting to complex data modeling, SQL, or custom data pipelines.

With our no-code tool, the same tasks of data extraction, modeling, and synchronization can be accomplished in minutes instead of hours. No growth or marketing team says they have enough engineering resources; many have none. So we built Lago for them.

Growth teams' success is about iteration speed. Going through engineers hurts iteration speed and therefore their performance.

Here is the typical setup just to sync customer data to a CRM:

Diagram of a typical data management system
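
To make the diagram concrete, here is a minimal sketch of the kind of script engineers end up writing and maintaining by hand in this setup: pull product usage from the warehouse, reshape it to match the CRM's fields, and push it through the CRM's REST API. All table names, field mappings and the CRM endpoint below are hypothetical; the point is only to show the plumbing involved.

```python
# Hand-rolled "reverse ETL" sync: warehouse -> CRM.
# Table names, field mappings and the CRM endpoint are illustrative placeholders.
import os

import psycopg2   # warehouse client (Postgres-compatible)
import requests   # plain HTTP client for the CRM's REST API

WAREHOUSE_DSN = os.environ["WAREHOUSE_DSN"]
CRM_API_URL = "https://api.example-crm.com/v1/contacts"   # placeholder endpoint
CRM_API_KEY = os.environ["CRM_API_KEY"]


def extract_usage():
    """Pull per-user product usage from the warehouse (extract step)."""
    query = """
        SELECT u.email,
               count(e.id)       AS events_last_30d,
               max(e.created_at) AS last_seen
        FROM users u
        LEFT JOIN events e
               ON e.user_id = u.id
              AND e.created_at > now() - interval '30 days'
        GROUP BY u.email
    """
    with psycopg2.connect(WAREHOUSE_DSN) as conn, conn.cursor() as cur:
        cur.execute(query)
        return cur.fetchall()


def to_crm_payload(row):
    """Rename warehouse columns to the CRM's custom fields (transform step)."""
    email, events_last_30d, last_seen = row
    return {
        "email": email,
        "custom_fields": {
            "events_last_30d": events_last_30d,
            "last_seen_at": last_seen.isoformat() if last_seen else None,
        },
    }


def sync():
    """Push each record to the CRM, one HTTP call per contact (load step)."""
    headers = {"Authorization": f"Bearer {CRM_API_KEY}"}
    for row in extract_usage():
        resp = requests.post(CRM_API_URL, json=to_crm_payload(row),
                             headers=headers, timeout=10)
        resp.raise_for_status()


if __name__ == "__main__":
    sync()  # typically run on a cron schedule
```

Every time a business team wants a new field in the CRM, someone has to edit the query, the mapping and the scheduler, then redeploy: that is the engineering bottleneck described below.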

The Product

Before Lago


• Sales, Growth, and Customer Success teams need to know how the product is used, to adapt their strategy (e.g., Sales approach, customer success answers)

• To do that, the only way was to ask engineers (e.g., data engineers or Salesforce integrators) to extract data from the back-end, transform it so that it fits the 'business tool' (e.g., Salesforce for a CRM, Zendesk for a support tool), and then create a data pipeline to push data to these tools

Results:

• Business teams usually don't have product usage data in their favorite tools (e.g., Salesforce, HubSpot, Zendesk), and have to manage an incredible number of spreadsheets

• When engineers build custom data pipelines to synchronize data, business teams have low flexibility: if the Sales team decides to add a field in Salesforce, for instance, they need to ask engineers to re-synchronize data to the right field and wait for weeks, if not months

With Lago

• In a few clicks, extract product usage data from the DB, transform it in our 'spreadsheet-like' interface, and synchronize it to business tools

• If you add a new field in one of your business tools (Salesforce, HubSpot or Zendesk, for instance), re-sync the relevant data to this field in a few clicks using Lago

Results:

• Less/no engineering time spent on data plumbing to get product usage data in business tools

• Higher agility of business teams, to get relevant data into their favorite tools, and perform their work without engineering bottlenecks

We shipped the product in four months and sold it to our first users for about $500/month, without resorting to paid marketing. The future looked bright!

How and why we lost our conviction

The following is our personal experience; others might interpret the facts differently, and that’s okay.

At such an early stage, the founders’ conviction matters more than anything else. The bottom line is that we progressively lost it between August and December 2021.

What did we learn that changed the equation?

Understanding who our power users were

The first version of Lago enabled users to extract and transform data coming from their data warehouse or database. To help them do so, we had designed a ‘no-code’ user interface (i.e. knowledge of SQL was not required). However, our power users were more technical than we thought they’d be: they were not growth marketers but engineers and business professionals with a technical background ‒ people who knew how to write an SQL query.

Something else was bothering us: our product was used as a data pipeline between our users’ data warehouses and their business tools (e.g. Salesforce, Customer.io), but once the connection was set up, all the value-added work happened either in the data warehouse (e.g. data modeling by engineers) or in the business tool (e.g. managing marketing campaigns). In other words, people did not spend much time using our tool.

Isn’t it the same with ETL tools? Yes, it is. ETLs such as Fivetran, Stitch or Airbyte do data plumbing too, just in the opposite direction: they take data from external applications (e.g. Salesforce) and dump it into data warehouses.

When the data warehouse is the destination (not the source) of the data, the person who extracts the data and the person who uses it are one and the same: an engineer. Once the data is in the warehouse, they can work on complex modeling, clean it and segment it. As engineers see both sides of the process (i.e. data extraction and modeling), they easily understand the value of the ETL tool.

With ‘reverse ETL’ solutions (e.g. Lago v1), an engineer needs to extract the data from the warehouse and synchronize it with a business tool (e.g. a CRM); only then can a business person transform and use it to create marketing campaigns. Therefore, the value chain involves a more varied set of stakeholders, who don’t necessarily see the full potential of the solution because they don’t get the big picture.

To summarize, we had built a no-code tool that was actually used by engineers. Our initial differentiator was our no-code user interface, so we had to find another angle.

Understanding the challenges faced by Growth Marketers

I’ll share another finding here: every marketer says they want to be more data-driven, and most of them will tell you that they’re waiting on engineers to get access to data. That is only partly true.

Marketers often already have access to data, at least in ‘read-only’ mode, and can download it in CSV format. They don’t really explore their options, not because they would need to learn SQL (no need with spreadsheets), but because this would require them to study the whole data structure of the company.

A database usually contains dozens or hundreds of tables, and understanding how they are organized, how they relate to each other and how often they are updated is a huge effort. So marketers say they want to be more data-driven, but they rarely acquire the knowledge required to do so, because the bar is pretty high and because it would add to their existing workload. Due to this lack of knowledge, some Engineering teams don’t trust their Growth teams to handle data and prevent them from accessing the data warehouse, even with a no-code tool. On top of that, most Growth teams don’t feel comfortable manipulating databases: flooding their CRM with distorted data would be a nightmare for them.

Lago was of no help here. We thought a ‘data bootcamp’ for marketers could address this issue, but we wanted to build a software company, not a service company.

What did we miss at the beginning?

We had built the tool we would have liked to have at Qonto, but we overlooked how specific our setup there was. Our Growth team at Qonto combined very rare traits, and unfortunately, we did not find a large enough pool of similar teams:

• Our Growth team had its own internal engineers, so we could run projects on our own. Usually, a Growth team needs to ask for ad hoc engineering resources from the Engineering or Product team;

• We were very data-driven from day 1. I had worked for several B2B SaaS companies before and had pulled all-nighters cleaning data or implementing event-tracking systems. As soon as we had ‘product-market fit’ at Qonto, I started working on a scalable data model; and

• We had a great relationship with the engineering team. Communication was smooth, the VP Engineering did not feel the need to control what we were doing and was willing to give us as much autonomy as possible to succeed (as long as we did not break anything).

We learned a lot in nine months. With this in mind, there were only three options:

  1. Becoming a full dev tool: engineers were already using our tool, and when a company grows, user segmentation projects often fall to the Data Engineering team, so we could become their favorite tool;
  2. Becoming a full marketing tool: we could go further and add ‘marketing’ features (e.g. emailing, ads, etc.) on top of our ‘data segmentation’ features. In other words, we’d create a combination of a data tool + Mailchimp; or
  3. Pivoting: do something else.

Our team was not really excited by options 1 and 2. Option 1 meant becoming a ‘reverse ETL’: we really liked the growth angle, but not so much the data engineering side. Also, the two emerging leaders of this young category, Hightouch and Census, already had similar products and were fighting it out through marketing and sales campaigns. As for option 2, we concluded that an established marketing company like Customer.io would be better placed to develop data sync capabilities and win this space.

As mentioned previously, Lago started with a team. We like building together, so when we came to the conclusion that a hard pivot was necessary, we were okay with that. The most important thing was to stay together.

Were our conclusions obvious in hindsight? Maybe. But everything always seems obvious in hindsight. That’s why Y Combinator pushes you to launch your product as soon as possible: feedback from real users is very different from what you hear in user interviews.

We definitely learned to launch quickly, operate as a team and be resilient with this first iteration. Finding our new product (an open-source billing API for product-led SaaS) was not going to be easy but that’s another story that I’ll cover in the second post of this series.
