- Confluent's S1 filing marks a major milestone on its path to an IPO.
- Apache Kafka emerged as an elegant solution for real-time streaming needs, and Confluent built its business around it.
- Confluent's products are centered around Apache Kafka, with additional proprietary software like ksqlDB, Control Center, REST Server, and Schema Registry.
- Confluent Cloud offers Kafka as a PaaS, simplifying Kafka usage for organizations.
- Concerns exist around Confluent's cash burn rate and the need for substantial capital infusion post-IPO.
Confluent just filed their S1 to IPO. I worked with Confluent starting in March of 2015, and we eventually parted ways. At my company, we continue to work with streaming technologies, including Kafka. I want to share a different opinion on what the filing says, my experiences with Kafka and streaming technologies, and some background stories to help round things out.
Background: The Early 2010s
In the early 2010s, I was still at Cloudera, and a clear trend was starting: companies wanted to do streaming but were limited by the available offerings. The only technology that could handle real-time streaming was Apache Flume. However, it wasn’t the right technology for the majority of use cases, so I went searching for something better suited.
In 2014, I started Big Data Institute. That freed me from only looking at or recommending technologies in the Cloudera stack. I continued my search for technologies that addressed this trend and came upon Apache Kafka and Confluent. Kafka solved these problems far more elegantly than anything I’d seen.
I sought out Confluent and worked with some great people there. So I get to say I worked with Confluent when they were just a few people in an office next to a dentist in Palo Alto. I remember those early days of Confluent fondly.
Given that I’ve worked with Kafka for so many years, I’ve also seen what Kafka lacks. Some of these missing pieces were clear from the beginning. Some, such as enterprise features, were added later. Unfortunately, other features aren’t so easy to add, and I think that’s where one of Kafka’s most significant issues lies.
Thinking back to 2015, virtually nobody had heard of Kafka. I worked in Confluent’s Strata booth, and no one knew about it, so I created a demo to help people understand what Kafka could do. In 2016, I worked in Confluent’s Strata booth again, and there was a slight uptick in people who knew Kafka. By 2017, it was off to the races, and everyone was talking about Kafka.
I think it’s worth noting that Confluent arrived at a point when there was no real competition in the streaming space. Much to my dismay, Cloudera didn’t see the need or demand to create a competitive product. All of that has changed, and now there is a substantial amount of competition (competition that goes unmentioned in the S1).
What Are Confluent’s Products?
The S1 didn’t do a great job of telling the story of Confluent’s products. At their core, everything revolves around Apache Kafka. Early on, I created a video showing how Kafka works. I’d recommend watching it; I made the video as non-technical as possible.
Apache Kafka is open source and not proprietary to Confluent. It is made up of:
- The broker process, which receives, stores, and serves data.
- The client APIs, which software engineers and data engineers use to write their own code against Kafka (a minimal sketch follows this list).
- Kafka Connect, which provides a configuration-based way of getting data into and out of Kafka.
- Kafka Streams, which provides a higher-level stream-processing API for software engineers and data engineers.
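To make the client API concrete, here’s a minimal producer sketch in Java. It’s a sketch under assumptions, not production code: the broker address and the “events” topic are hypothetical placeholders, and real code would also handle delivery guarantees, serialization choices, and errors.

```java
// A minimal sketch of Kafka's producer client API. The broker address and
// the "events" topic below are hypothetical placeholders.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The producer batches records and sends them to the brokers, which
        // store them for any number of consumers to read independently.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }
    }
}
```

A consumer looks similar: it subscribes to a topic and polls for records. This is the code engineers end up writing by hand when they don’t use Kafka Connect or Kafka Streams.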
Confluent’s proprietary software is a mix of closed-source and source-available products:
- ksqlDB (source available), which gives a SQL interface to make it easier to query streams and create new streams.
- Confluent Control Center (closed source), which makes it easier to monitor a Kafka cluster (the Kafka brokers).
- Kafka REST Server (source available), which allows publishing to and consuming from Kafka via HTTP REST calls (see the sketch after this list).
- Schema Registry (source available), which keeps track of Avro schemas for serialization and deserialization.
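As an illustration of the REST server, here’s a hedged sketch of publishing a record over HTTP from Java. It assumes the REST server runs at localhost:8082, that a topic named “events” exists, and that the v2 embedded-JSON format is in use; all of these are assumptions for illustration.

```java
// A sketch of publishing to Kafka through the REST server. The address,
// topic name, and payload are hypothetical placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestPublishSketch {
    public static void main(String[] args) throws Exception {
        String body = "{\"records\":[{\"value\":{\"user\":\"user-42\",\"action\":\"page_view\"}}]}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/topics/events"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // The response contains the partition and offset of each published record.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The appeal is that any language with an HTTP client can publish and consume without a native Kafka client library.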
Not entirely fitting into either category is Confluent Cloud, which provides Apache Kafka as a PaaS. The PaaS model makes it easier for organizations on public cloud providers to adopt Kafka without operating a cluster themselves.
A key distinction is that Confluent Cloud runs the brokers but not Kafka Streams clusters. That gap is meaningful because it still leaves an operational onus on users to run their own Kafka Streams deployments.
Rounding out the lineup are the human-powered offerings: professional services and training. Contrary to Confluent’s marketing message, it still requires substantial effort and data engineering to use Kafka or other streaming technologies. I’ve been teaching and mentoring companies on Kafka since 2015. Companies can expect to invest a substantial amount of time, effort, and development into making their “data in motion.” It is not easy, no matter what the marketing or positioning says.
The S1 talks about Confluent’s community license and how it prevents cloud vendors from using their software without permission. To be clear, the license doesn’t cover Apache Kafka itself; the cloud vendors are free to provide Kafka as a PaaS. The community license stipulation was aimed at the Schema Registry and REST server, but primarily at ksqlDB. It also sidesteps the fundamental question of whether the cloud vendors even wanted ksqlDB. For Amazon, the answer was a resounding no, as they use Flink to provide that functionality and more. In my experience, the license change just causes confusion and problems while providing no benefit. I’ll point out, too, that I don’t recommend our clients use Kafka Streams or ksqlDB either.
In the S1, they talk about their efforts to “Extend our Product Leadership and Innovation” with ksqlDB and other initiatives such as Project Metamorphosis. However, I’m really leery of their putting a great deal of investment into ksqlDB. And while Project Metamorphosis had excellent marketing behind it, the actual changes that came from it were disappointing.
Confluent As A Business
Going through the S1 has been my first look at Confluent’s numbers.
Looking over their 2021 breakdown of customers, there are ~2,000 under $100,000, 500 between $100,000 and $1,000,000, and 60 over $1,000,000. In 2020, they earned $236,000,000 in revenue with 374 customers. Given what I’ve heard of Confluent’s pricing model and other streaming technologies’ pricing, getting to $100,000 is a “starter” level. Another concern is that the “starter” amount will keep a customer satisfied for a while, so Confluent’s plan to land and expand at these customers will be difficult. The customers above $1,000,000 are a different story. Their issues revolve around Kafka’s lack of multitenancy, which often forces them to support multiple clusters instead of a single one. Multiple clusters work around the political and resource-contention issues that a single multitenant cluster would have to deal with technically. Like the starters, expanding at these customers will be difficult because they’re already over-provisioned on their Kafka usage.
Their cash position and burn rate give me some cause for concern: “As of December 31, 2020 and March 31, 2021, we had an accumulated deficit of $406.1 million and $450.6 million, respectively.” They have $205,478,000 in working capital. Unless they can raise a substantial amount of money in the IPO, that limited working capital will be a considerable constraint.
Missing Risk Factors
When I first went through the S1, I skipped right to the risk factors. I believe many risk factors are missing, and investors should know about them.
Kubernetes
In the S1, Kubernetes is briefly talked about in a positive light. Confluent has a Kubernetes operator. What isn’t talked about is how Kubernetes has completely changed the operational game. It’s odd to me how Kubernetes and its effects are almost ignored in the S1. One possibility is that people bypass Confluent Cloud altogether because they can easily start a cluster with Kubernetes.
There’s a concept in operations called cattle and pets. Pets are clusters that need constant operations and supervision. Cattle are clusters that you don’t care about; if one dies, you just spin up a new one. Right now, all Kafka clusters are pets (though I do know of one company using Kafka as cattle). All Hadoop clusters used to be pets, too. That changed with the cloud, and Hadoop clusters became cattle; it was unthinkable five years ago. Kafka with Kubernetes or other technologies could spell the “cattleization” of Kafka. For Cloudera, clusters becoming cattle was a massive problem for its business model. For Confluent, the same issue could prove true.
Single Play
Confluent is a pure play on Kafka. You could argue that’s a good thing or a bad thing; either way, it’s a risk factor investors should know about. In the S1’s diagrams, everything comes in and out of Kafka. That’s true; most architectures will look like that. However, Kafka plays a much smaller role in that data movement than the diagrams suggest. Usually, the bigger and costlier parts of the equation sit outside of Kafka and, therefore, outside of Confluent’s revenue. At scale, different systems and operations will be needed. These other components, such as Apache Flink, abstract out the incoming data, so the data engineer doesn’t care whether the data is coming from Kafka, Pulsar, etc.
Other companies in the space realize the need for multiple technologies to be successful. For example, StreamNative has Pulsar (a competitor to Kafka) and Flink; Datastax has Cassandra and Pulsar; Cloudera has Kafka, Flink, and many others. Databricks, in another strategy, has been adding other products to its lineup to distinguish itself from companies just selling Spark.
Cloud Vendors
The S1 leaves out that the cloud vendors are directly competing with them on Kafka PaaS. For example, Amazon Web Services has its own Kafka PaaS, MSK, a direct competitor to Confluent Cloud. Azure Event Hubs can be used directly with Kafka’s API. There are limitations to Event Hubs, but we’ve used the product with clients, and it worked well.
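To illustrate what “used directly with Kafka’s API” means in practice, here’s a hedged sketch of the client-side configuration, based on the pattern Azure documents: the code stays a standard Kafka client, and only the connection settings change. The namespace and connection string below are hypothetical placeholders.

```java
// A sketch of configuring a standard Kafka client against Azure Event Hubs'
// Kafka-compatible endpoint. The namespace and connection string are
// hypothetical placeholders; check Azure's docs for current settings.
import java.util.Properties;

public class EventHubsConfigSketch {
    public static Properties eventHubsProperties() {
        Properties props = new Properties();
        // Event Hubs exposes the Kafka protocol on port 9093 of the namespace host.
        props.put("bootstrap.servers", "my-namespace.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // Event Hubs authenticates Kafka clients with a connection string passed
        // as the SASL password; "$ConnectionString" is the literal username.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" "
                + "password=\"Endpoint=sb://my-namespace.servicebus.windows.net/;"
                + "SharedAccessKeyName=policy-name;SharedAccessKey=key-value\";");
        return props;
    }
}
```

These properties then feed an ordinary KafkaProducer or KafkaConsumer; the application code itself is unchanged.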
This cloud vendor market is incredibly competitive.
Legacy Vendors
The S1 calls Cloudera and others “legacy products.” Those products aren’t as legacy as Confluent would like: Cloudera is selling Kafka too, and TIBCO is creating a streaming product based on Apache Pulsar. There’s a Sun Tzu quote somewhere about underestimating your competition.
Other Competitive Technologies
There are many direct competitors to Kafka. Some of these are proprietary, and others are open source.
In my opinion, Apache Pulsar is Kafka’s biggest competitor. The competition has become even more apparent as Splunk and Datastax have chosen Pulsar to power their next-generation systems. I’ve written a high-level comparison. Kafka’s missing features are a big deal, as many of Kafka’s latest features have been in Pulsar for a while. However, I don’t believe Kafka will replicate every one of Pulsar’s features due to architectural limitations.
There’s also Redpanda, which has created its own broker that is compliant with Kafka’s wire protocol. The company says it vastly increases performance and reduces Kafka’s operational issues.
Another competitor is Pravega from Dell. They’ve put effort into the hardware and software needed for streaming applications. (Update: Pravega doesn’t require Dell’s hardware and is standalone software.)
As I look through our own clients at Big Data Institute, none of them uses Confluent’s products, but some use Kafka. They use Event Hubs with Kafka clients, Amazon’s MSK, self-supported Kafka clusters, or Apache Pulsar. This use of the technology without paying the vendor echoes the trend I saw at Cloudera, where people used Hadoop without buying Cloudera’s products.
Kafka’s API and Binary Protocol
The real $64,000 question on Confluent is how easy it would be for a customer to move from Confluent’s Kafka to another company’s Kafka product. A variation of the same question is how easy it would be to take existing Kafka code to a completely different broker technology that supports Kafka’s wire protocol, such as Pulsar (Kafka-on-Pulsar), Event Hubs, or Redpanda. Of course, there will be technical limitations and workarounds, but will those be enough to keep a customer on Confluent?
I break out Kafka’s API from its binary protocol because each enables a different way someone could replace Confluent. One option is to rewrite the entire product to use a new technology. Another is to recompile the code against another technology’s API that is compliant with Kafka’s; the recompiled code then uses a new broker, an option Apache Pulsar provides. The third is for the new technology to support Kafka’s binary wire protocol, in which case the client doesn’t have to make any code changes (the Event Hubs configuration sketch above is an example of this path).
In my opinion, Kafka’s API and binary protocol will live longer than Kafka itself, so we’ll see them become a quasi-standard that others program to. That longevity breaks my heart a little, as Kafka’s client APIs aren’t the greatest.
How Easy Is It?
Confluent markets Kafka as being easy. ksqlDB makes it easy. Confluent Cloud makes it easy. The reality is different, and as word gets out about the actual level of difficulty, revenue growth and new customer acquisition will get harder. Far more data engineering goes into getting a data product ready for dissemination with Kafka than just spinning up a Confluent Cloud cluster.
In another essay, I talked about streaming data being 15 times more complex than small data. This complexity comes from the technical challenges of doing data processing at scale and in real time. Using Kafka can make some parts easier, but the overall challenge doesn’t become dramatically easier. The problem is that customers come in expecting an easy path and quickly find themselves drowning in the complexity of distributed streaming. That leaves us with the industry-standard 85% failure rate and an unhappy customer blaming their vendor for the problems.
Marketing Challenges
Confluent boasts some excellent marketing. At our clients, we’re finding that both management and individual contributors are aware of Confluent’s products, though their knowledge is cursory and comes from the marketing positioning. Right now, Confluent and Kafka are cool, and that makes marketing all the easier. But that will change, as it does with all technologies.
One big marketing venue for Confluent has been Kafka Summit, where they showcase their new products and customer stories. We analyzed Kafka Summit’s keynote addresses and found that only a few speakers weren’t a Confluent employee, partner, or customer. This kind of domination won’t be possible going forward, as other companies will need to be given keynote slots.
Microservices
Confluent is really pushing Kafka as the messaging backbone for microservices. Using Kafka with microservices can be a good use case. However, it has an unintended consequence we dealt with at Cloudera: engineers without distributed systems experience creating distributed systems. That mismatch caused all sorts of problems with expectations, skills, and value delivery.
If microservices as an architectural pattern go out of style, what will Confluent replace the messaging use case with? I have no doubt there will be new patterns, fads, and waves. The significant variable will be how big a stretch it is to fit Kafka into those use cases.
Partner Relations
The S1 talks about the need to maintain good relationships with partners. I’ve heard this has improved, but as recently as two years ago, Confluent had a widespread reputation as a poor partner. The reputation stemmed from Confluent positioning its product as the solution to everything, including the partner’s product, both in the sales process and in conference talks. It has left hard feelings among partners who aren’t going to forget those slights.
Is Confluent Cloudera 2.0?
That leaves me with the nagging question: will Confluent be Cloudera version 2.0? There’s a lot here that reminds me of Cloudera: a big opportunity (I think the $50 billion number is too generous), intelligent people, and a hot product. Can Confluent succeed where Cloudera hit major bumps?
It reminds me of two conversations I had with two Cloudera executives.
In 2016, Amr Awadallah, Cloudera’s CTO, told me Confluent was going around saying that Kafka was all you needed; you didn’t need S3, MapReduce, or a database. Kafka did it all. I brushed it off, thinking it was two vendors fighting a proxy war in their messaging. In my opinion, no one would say Kafka did it all, since you’d obviously need more technologies. It turned out that, yes, Confluent was saying Kafka was all you needed. That started my path of vehemently disagreeing with Confluent and our drifting apart as partners. I would tell people which claims I knew to be incorrect in the real world.
Another 2016 conversation was with a Cloudera sales executive. It mystified me that Cloudera was nowhere in the streaming space. That person said they were seeing some demand for Kafka, but Kafka’s deal sizes were just too small to be meaningful. Going back to Confluent’s breakdown of $1,000,000-and-above customers, the bar for a meaningful deal at Cloudera was much higher. The Cloudera person even said they would throw in Kafka for free to get a more substantial Hadoop deal.
I’ll end this post with one final story: why I decided not to join Confluent when I could have been in the first ~20 hires. Simply put, they don’t have a big enough piece of the puzzle. I had always considered them an acquisition target rather than a future public company. My time at Cloudera colored this perception; Cloudera had to wean itself off Hadoop and onto other technologies to survive. Had Kafka not become what it is now, it would have been a different story. Thinking back to 2015, it surprises the hell out of me that Kafka went from entirely unknown to this sexy technology when it’s just data infrastructure. I think this story is relevant as we look at Cloudera’s troubles and its eventual move to go private. It’s a tough business out there, and Kafka is just one small but essential piece of the puzzle.
If you would like to ask more questions about Confluent, Kafka, or streaming technologies, please use my contact form.
Full Disclosure: I don’t own any Confluent shares. I am an advisor for StreamNative.