CapRover Interview 2
Having a shark in the tank doesn't mean that you cannot have smaller fish swimming in the tank as well.
In this episode, we welcome Kasra Bigdeli, Engineering Manager at CapRover. He shares how he built an open-source version of Heroku that features one-click apps. He explains how he designed it to be newbie-friendly: easy to navigate and full of contextual help. Kasra also talks about using only organic techniques to market CapRover, and how its community is growing naturally, now with more than a hundred one-click apps.
---
It's interesting timing to welcome Kasra Bigdeli onto the show. Before we start, I wanted to mention a project that I've been involved in, which is OpenFeature. You can learn more about that at OpenFeature.dev. It's an interesting project that's going through the CNCF sandbox, creating an SDK standard for feature flagging.
Whether people want to get involved is up to them, but we're looking for people to help. We're getting some traction on the project and are about to start designing the client-side specification. If you're interested in learning about that, or feature flagging in general, check out OpenFeature.dev. With that done, welcome, Kasra.
Ben, thanks for having me here.
It's great to chat finally. This is one episode that I've been wanting to do for a while. We tried for a while. Your family got bigger, which was one of the reasons that we didn't chat a bit earlier in 2023. Do you want to tell us a little bit about yourself and the project that you've been working on?
I started this as a pet project several years ago. Let me give you a little bit of background about why the project started in the first place. I had a ton of side projects, and every time I wanted to deploy a backend project, I used Heroku. Heroku was good: git push, and your website and API are live. It was super easy, but for pet projects and side projects, I figured it was not sustainable.
At some point, I found myself paying over $150 a month for random projects getting probably less than ten requests per day. It didn't make any sense. I looked into how Heroku works under the hood, and that's when I discovered the world of Docker. Docker Swarm was being introduced back then and was gaining popularity.
I started looking into that, and I essentially built an open-source version of Heroku. You've got the UI. You can deploy apps. Heroku used to call them add-ons, but in CapRover's world, we call them one-click apps. You essentially have a list of apps: databases, MySQL, anything. You pick one, input a few configuration values, hit deploy, and there you go. You've got your app running.
Since then, the project has gained a lot of popularity. It has always been my side project, so I never looked into commercializing it. At this stage, it's grown so much that I'm trying to find avenues to grow the project even further and potentially commercialize it moving forward.
For full disclosure, I am a CapRover user as well. I've got a bunch of side projects and things like open-source uptime monitors, all manner of stuff. I've got 8 or 9 applications running on a little server somewhere. I'm a happy user of yours and I love your platform. To back it up a bit, what you described is potentially a monumental amount of work.
Heroku laid the groundwork with buildpacks and then Docker came along, but gluing all of that stuff together is a serious undertaking. How did you go about deciding how you could get something usable that wasn't going to have you underwater for two years before you had something beneficial to someone like me?
From the start to the first release, the very first version of the project took me about six months, including learning Docker and Docker Swarm. You know how they say the best way to learn something is to teach it? I went one step further: the best way to learn something is to build something with it and let others use it.
In the process, after I released the first version, I realized, "Oh, this is how people are actually using this." I had to go back to the drawing board, change a lot of the design, and deploy the next version. This is actually an interesting segue into the release cycle. The platform is so stable now that the last major version was released years ago. There have been no bugs, nothing. There are a few design decisions I made early on that contributed to it ending up so stable.
What do you think those were?
One of the most common pitfalls of open-source projects is that everybody wants to derail the project and move it towards their specific use case. Everybody thinks their use case is very common, so it's important to hold your ground. If you listen to everybody, you're going to end up with a Frankenstein-type project that's confusing and hard to navigate.
One of the most common pitfalls of open source projects is that everybody wants to derail them and move them towards their specific use case. If you listen to everybody, you will end up with a Frankenstein-type of project.
Otherwise, it feels like you're sitting in the cockpit of a 747. You don't want that. You want to abstract away everything complex, exposing it only to advanced users. You do want to let users make tweaks, but you don't want to surface that to the beginner. The way I built the project, in terms of the product and UI, involved a lot of thought. I'm very much a product person.
The way I built this was to think of the user who installs it for the first time as a newbie. They don't know anything about Docker or server management, so everything, like port mapping and storage, should be plain and simple. Everything has a little question mark icon. You hover over it and get a short description, and you can click through to the full description or contextual help.
For the more advanced things: Docker has over 50 different knobs for each container. You can customize the maximum reserved RAM or CPU, the type of logging driver, everything. Imagine if I had to bring all of them into the UI. The UI would have been super complex. Instead, I said, "This is not going to be the main use case." Heroku is a proven platform. Everybody's using it.
Whatever Heroku surfaces to the user, I surface too, plus a little more, because you own the VPS and can do things like port mapping and persistent directories, which are not possible in the Heroku model. Other than that, I abstracted everything away. If you do want to customize your container, you can, but not through the UI: go ahead and use the Docker command line. Docker is maintained by hundreds of engineers, if not thousands.
You can do that, and CapRover respects it. CapRover says, "I know about this small subset of configurations, and I'm going to override it when you update your configuration, but anything else that I don't know about, if you modify it, I'm not going to touch it. I respect your decision." This was one of the main, if not the most important, things that helped me, as a solo maintainer of this project, keep it stable for the past years.
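As a concrete sketch of that escape hatch: a CapRover-managed app is ultimately a Docker Swarm service, so knobs the UI doesn't expose can be tweaked with the plain Docker CLI. The service name below is an assumed example (CapRover conventionally prefixes app services with "srv-captain--"), and the memory reservation is just one of the many flags this works for:

```shell
# Inspect the Swarm service that backs a CapRover app.
docker service inspect srv-captain--my-app --pretty

# Tweak a knob the CapRover UI does not surface, e.g. a memory
# reservation; CapRover leaves settings it doesn't manage untouched.
docker service update --reserve-memory 128M srv-captain--my-app
```

The same pattern applies to CPU limits, logging drivers and the other container-level options mentioned above.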
Was that a common request, people saying, "I want to be able to do" some esoteric thing, you know?
If you look at the GitHub issue list, you'll see a ton of them popping up. I shut them down, but I keep a mental note somewhere. From time to time, I search the GitHub issues for common patterns. If I see that there is a real need for a particular thing, I'll go ahead and add it. The latest one that pops into my mind is customizing commands for containers. Two containers could have the same image but different commands. This is very common, and it's something I'm thinking about: how to make it a little easier for users to modify.
In terms of how the project started, I've loved the idea of renting a little server from DigitalOcean, Hetzner or somewhere quite cheap. If you buy a physical server from someone like Hetzner, you can get an insane amount of performance. The agency that I was running had dozens of applications running on a single machine: staging stuff, testing stuff and things like that. I've been interested in and followed other projects. Flynn shut down. Dokku is another one that's very popular but doesn't ship with a nice user interface the way CapRover does. What was it that made you sit down and spend months writing it rather than going, "I'll just use Dokku," for example?
Curiosity, first of all. For context, I never studied computer science. In university, I got my Master's in Structural Engineering. I had a Bachelor's in Mechanical Engineering and I learned software by reading. Even when I learned programming, I was always curious about how this thing works. I started digging into server-side programming at some point.
It was always nagging at me: "What is this black magic? How does this whole thing work?" That curiosity has helped me a lot in my career and my full-time job, because not a lot of people know the underlying technologies behind the systems they work with, like Kafka and databases. So curiosity was one of the main factors. Also, I tried a few of these platforms. I don't think I tried Flynn, to be honest. It was almost dead when I started, but I tried Dokku. Dokku back then was fairly command-line heavy. Dokku's slogan when it started was, "Heroku in a hundred lines of Bash." It was a bunch of helper scripts, and I wanted something with a graphical user interface that was easy for beginners to play with.
I believe this is right, though it might not be: my memory is that Dokku targeted buildpacks first, and Docker was a second-class citizen within Dokku for a while. As Docker rapidly became the complete standard for shipping, deploying and running software, the project spent quite a lot of work re-engineering so that Docker became a first-class citizen. With CapRover, it's just a Docker image, and that's what you're going to get.
I wasn't being especially clever or anything like that. By the time I started, Docker had already become the de facto standard for containerized applications, so I decided to use Docker and build on it. A buildpack is just a bunch of scripts; you can technically convert any buildpack to Docker. It's essentially another schema for a Dockerfile.
Since you released version 1, the project has gained over 10,000 stars, which is a great achievement. I missed that milestone. Did you put a lot of energy and time into marketing the project and getting the GitHub URL out there, or did that happen quite naturally?
No, everything was organic. This was supposed to be a pet project. I wasn't expecting it to become this big. When I first launched the project, I put it on Hacker News. I told my wife that I'd be super happy if I got 500 stars in 2 months. I reached that milestone in two days on Hacker News. It was great and thrilling.
In terms of time, my last job was a lot lighter than my current full-time job, so I had a lot of time to spend on this project and grow it. Luckily, the project has become so stable that I don't need to spend a lot of time adding new features. I'm on paternity leave at the moment, even. Hopefully, I'll get some time to add some more features moving forward.
There's one thing you mentioned about the interface: that you wanted to abstract away a lot of the more unusual parts of the Docker interface. But there's one very important user interface widget that separates you from Dokku in another way, which is the instance count when you're looking at an application. For those of you reading who don't know what I'm getting at, Dokku, as far as I understand it, was designed around the idea that you wouldn't take on orchestration. I like and understand the design decision of saying, "We're not going to do load balancing and self-healing and design this thing to trivially scale to 10,000 requests."
The second thing is that, for most people, you're going to want to run one server. It's simplest to do that, and you'll get good uptime anyway because you don't have to worry about all these additional moving parts. Most of your self-hosted stuff is going to get ten requests a day. But you decided not to do that. You decided to go right in and include that as a paradigm. Flynn, which shut down, was YC-funded and went down that VC path.
We used that for a while and it was pretty cool, but it could eat itself pretty easily. It felt quite fragile. You mentioned Docker Swarm. Do you want to talk a little bit about the relevance or impact of the decision to take all of that on and how you feel about it? A lot of people, myself included, never quite understood what Docker Swarm was designed to achieve, because it got swept away by Nomad and, more so, by Kubernetes.
Docker Swarm lost the battle in terms of the orchestration layer. When I started the project, I looked into using plain Docker or Docker Swarm. Docker Swarm provided a much cleaner interface for an application like the one we're talking about. I don't need to care about load balancing, IPs and health checks; everything is done automatically for me. If the system restarts, the scheduler will reschedule all these tasks on different nodes for me, and I don't need to care about it. The design decision was an artefact of its time.
Docker Swarm was gaining a lot of popularity as a natural hand-in-hand component with plain Docker. I figured this was going to be the way to go, and much easier. In terms of the implementation, it was super easy, and it provided me with a lot of knobs to manipulate and ship a ton of features. Having said that, Swarm lost the battle, and I don't think Docker Swarm is going to be the future.
Another type of issue kept popping up years ago, when Docker Swarm started to go downhill: everybody was saying, "CapRover is dead. Docker Swarm is not going to be maintained." Here we are years later, and Docker Swarm is still here. It's going to be here for the foreseeable future. It's not going to be the main player in the market, but for the type of users CapRover is targeting, Docker Swarm will do fine. As long as it's maintained with security patches, we probably don't need any additional features at all.
As long as the security holes are patched and they don't introduce any major bugs, Docker Swarm is here to stay for CapRover. Long-term, maybe we work on a Kubernetes variant of CapRover, but that's going to be a huge rewrite. My second beef with Kubernetes is something I tweeted about: if you want to have a small garden in your backyard, you don't need a tractor or an excavator. A bunch of hand tools will do fine.
It'll do more damage if you bring a tractor into your backyard to do some small gardening. It's a similar story with CapRover and Docker Swarm. I've played with a few mini Kubernetes variants. They all seem to be heavier than Docker Swarm, which defeats the purpose of CapRover being a super simple platform. I have a Raspberry Pi on which I run four applications with CapRover. I don't know if you can do that with Kubernetes. Maybe yes, maybe no. If I remember correctly, the minimum RAM footprint for the smallest Kubernetes distribution is about 150 megabytes. That's not small, and it's a lot for a Raspberry Pi.
When I deploy an application in CapRover and up the instance count from 1 to 2, what happens behind the scenes?
CapRover simply asks Docker Swarm, "Go create two instances of this." There are a bunch of rollout strategies that CapRover decides for you automatically, but it allows you to override them. It only allows multiple instances, with a start-first rollout, if your application does not have any persistent data. The reason is that otherwise, if your application is a database or anything that writes to a file, and you have two instances writing to the same directory and file, it's going to cause data corruption.
Taking that persistence part out of the equation: for simple, stateless applications, it simply asks Docker Swarm, "Go increase the instance count to 1 or 2," whatever you want it to be. It's then the job of Docker Swarm to figure out, if you have multiple nodes, where to put the second instance. Swarm has an internal DNS that round-robins between the different containers.
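Under the hood, that instance-count control maps to something like the following Swarm commands. This is a sketch; the "srv-captain--my-app" service name is an assumed example of CapRover's naming convention:

```shell
# Scale a Swarm service to two replicas; Swarm decides node placement.
docker service scale srv-captain--my-app=2

# Equivalent form, also setting the rollout order so the new task
# starts before the old one stops (the "start-first" strategy).
docker service update --replicas 2 --update-order start-first srv-captain--my-app
```

Swarm's built-in routing mesh and internal DNS then spread incoming requests across the replicas, which is why no separate load balancer needs configuring.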
Every single request you send is round-robined to a different container. Sometimes you do have specific needs, though. One of our users DMed me: they're running a container doing some ML work, and they want it to run on a node that has a large GPU. This is common for ML applications. You want to increase the number of instances because it takes a while for the application to process each request.
They reached out and said they didn't see anything for that in the UI. They don't, because it's not a common use case, but look up "docker service update". It allows you to indicate specifically which nodes you want this to run on. You can label your GPU nodes and put a constraint on the service to say, "This service can only be deployed on GPU-enabled nodes." There are a ton of configuration knobs in Docker Swarm. They'll do fine for small to medium-sized applications. If you're talking about a company that will have over 100 or 150 employees in 2 years, CapRover is not going to be the right choice.
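That node-pinning trick can be sketched like this. The node name, service name and the "gpu" label are all arbitrary examples, not CapRover defaults:

```shell
# Label the node that has the GPU hardware.
docker node update --label-add gpu=true worker-node-1

# Constrain the service so its tasks only land on GPU-labeled nodes.
docker service update --constraint-add "node.labels.gpu==true" srv-captain--my-app
```

The scheduler will then only place that service's tasks on nodes matching the constraint, while other services continue to spread across the whole cluster.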
Do you have any idea what percentage of people running CapRover are doing it like I am: on a single node, never having touched that instance count or set up a second VM and had them talk to each other and all that stuff?
I don't, because we don't have any analytics in CapRover just yet. That's one of the things I'm going to add in the next version. I've got to be very careful with it. I don't want to come across as logging events without user consent, so I have to make it very explicit. These are the types of things I'm going to add in the future.
In terms of the progression of the project and the growth of the community, another thing that makes it very different from a project like Dokku is the one-click apps. There's an ever-growing list of applications, databases and services, and it's become the project's flagship feature. Did that community come about naturally? You've got a lot. There are maybe 100 or so?
More than that. It's insane, to the degree that I had to add a clause to the submission checklist: "Please don't submit requests for any application that has fewer than 1,000 stars." There are a ton of interesting projects with hundreds of stars. I cannot keep up. Every day, 3 or 4 requests come in.
People were using it as a bit of a growth hack thing for their smaller projects.
Surprisingly, that's not a lot of the requests. People want to use one particular project they're self-hosting and think, "Why is this not in the one-click apps? Let me add it." That's one of the other things I'm redesigning at some point. It all comes back to every decision I make: I think about, "How can I make this easier for myself to maintain?" We have over 100 apps. It's impossible for me to maintain all of them myself. We do have the capability to add third-party repositories.
I didn't know that. That's interesting.
You do have the option, and a lot of people use it. There are a bunch of third-party repositories where people host their own one-click apps and add them to the list. Instead of having people commit to the main repository, I want to make third-party repositories a bit more prominent: have people add their URL to a list of third-party repositories, and then show sections. This is the official repository, and every time you search, you get results from the different repositories. Then I don't need to care about security holes in a third-party repository; that's the owner's responsibility. That's coming in the future.
Another question I was curious whether you knew the answer to: how common is it for people to run commercial production workloads on it as their orchestration platform?
I thought it was uncommon but I'm proven wrong. I've been getting a lot of requests from folks over the past few years, specifically in 2022, asking for a lot of commercial support and features that are only needed in commercial settings. For example, multi-user support. I did not even consider this. It turns out that a lot of people are interested in this.
I was opposed to that idea in the beginning because, with the design of CapRover, I didn't want to create the false premise that you can have multiple users and essentially build your own Heroku, deploy it and have random people sign up. At the end of the day, a user has access to a container that can have a mapped volume. That mapped volume lets them hook into the Docker socket, run Docker commands and read the host's files.
The use case is pretty valid, though. There are tons of startups using CapRover as their deployment platform with, let's say, about 10 employees, and 1 of the employees leaves. Today, they have to change the shared password every time someone leaves the company, whereas they could assign different passwords to different people. Yes, if they want to, those people can mess up each other's applications and read secrets from other applications. But the good thing is that when someone leaves, you can invalidate their account so they can no longer access the server.
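The isolation caveat Kasra describes comes down to the Docker socket: any user who can attach a host volume to a container can mount the Docker socket and, from inside the container, drive the Docker daemon, which runs as root on the host. A hypothetical illustration (the images and paths are just examples):

```shell
# A container that mounts the host's Docker socket can control the
# daemon and, through it, start new containers with the host's
# filesystem mounted, i.e. it has effective root on the host.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
  docker run --rm -v /:/host alpine ls /host/etc
```

This is why per-user accounts on a single CapRover server are about revocable access, not hard isolation between users.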
That brings us to what you talked about at the start of our conversation, which is the future and potentially, trying to build a sustainable project and business around it. How much time and thought did you put into that? Is it something that you've had a clear path towards but haven't pulled the trigger on or is it something that you're not quite sure? For multiple users, that sounds like a perfect gateway for like, “That's part of a commercial plugin,” or something like that.
In terms of having a clear plan, the answer is no. In my many years in tech, I've never had a completely clear idea of what I'm doing; I try to figure it out as I go. In terms of the implementation, though, I'm pretty close. I've completed the paid feature and I'm playing with it personally to see how it works. There are a few things that are going to be in the first version of the commercial offering: two-factor authentication, webhooks, email notifications for build failures and build successes. Simple things.
This is going to be the main premise of the first version, and moving forward there will be additional features: multi-user support, health monitoring, you name it. The way I'm going to build this is similar to how I built CapRover. I'm not going to go ahead and build an enormous system and then have people use it. I'm going to use feedback from the community to build more features in the future.
Have you considered how you're going to approach that from a licensing point of view, or an operational point of view? Operationally, this can be a real can of worms as well.
Hopefully not. I'm very careful about what I'm supporting in the paid version. It's going to be self-serve, so hopefully there won't be a lot of operational issues. If there are, these are paid users. I've been donating my time for free for the past few years, so I'll happily spend some time understanding their use cases. Ben, my motto for any project, even at my full-time job, and I tell the engineers on my team this: "If you're being asked a question three times, it is your fault. The first time, answer the question. The second time, write it down in a wiki or make the API clearer. If you're being asked the same question a third time, something's wrong. It's a bad design."
This is a principle that I follow as well. If I see a common pattern of requests or confusion coming up, then I have to rethink my design. It can be as dumb as adding a tooltip, which is the last resort, but hopefully it's a better design. To answer your question about operational overhead: if it's huge, that means I have a huge paying user base, which I'm fine with. It's a good problem to have.
The key is to resolve it in the long term. I don't mind having to fix a ton of broken pipes and walls in the first few weeks or months. The key is for me to fix those things so they become self-explanatory and self-resolving. A lot of that will help make the project profitable in the future. The first part of your question, about licensing, is an area where I don't have a lot of expertise, so I'm going to reach out to a few experts for help on that one.
Given the space it's in and the problem it's solving, there are different ways you could approach that from a licensing and copyright point of view. Another thing I was curious about: Docker Swarm, and Nomad from HashiCorp. Multibillion-dollar companies are built around some part of Kubernetes. These are huge, potentially winnable markets. I'm assuming you're either politely declining or deleting regular emails from VCs wanting to invest in it and turbocharge it. They probably see dollar signs. How have you dealt with that?
Going back to when I first launched the project, I got an email from Andreessen Horowitz. I was shocked, like, "What is going on here? This was supposed to be a pet project." To answer your question, it's not that simple. VCs are not going to pour money into my pocket and say, "Go and build this." I want to understand where I'm going with it. I want to have a viable business and a clear path forward before I reach out to VCs.
At this point, it's an idea for a commercial version. In terms of addressable markets, the other question I typically get is, "How do you expect CapRover to survive when you have Kubernetes and all these large players that are going to eat you alive?" It's been several years and they haven't eaten me alive. The reason is that yes, you have Salesforce, but you also have hundreds of thousands of WordPress agencies creating eCommerce websites. Some of them are pretty large and sizable.
Just because you have a super large player, a shark, in the tank doesn't mean you cannot have smaller fish swimming in the tank as well. They're smaller compared to Salesforce, but they're large. I was surprised: I was talking to someone using CapRover who manages eCommerce generating about $20 billion worth of revenue every year. It's huge when you think about this running on WordPress, and they're not using Salesforce. There is a market for CapRover that doesn't quite collide with all the Kubernetes and Nomad stuff.
That's what my agency was doing, and doing well, with Dokku, but it was always a bit of a wrestle. If things broke, you'd need to know the command-line incantations to get it back under control. If you're running 300 eCommerce websites, your Heroku bill would be horrible, and even your Fly.io bill would probably be pretty bad too. Those folks aren't getting written about or going on Hacker News, but they're a huge part of the world we live in.
People don't see them. You've got skyscrapers, super tall and visible, but there are not that many of them. Then you've got a ton of smaller buildings. Altogether, they are massive, much larger than those skyscrapers.
Has anyone tried to add a multi-tenancy layer to it and start selling that?
I've seen projects trying to do that. None of them were successful. It's not about the idea but the implementation. Every time you see one of these unicorns, there were probably 100 other "unicorns" introduced even before that one; it's about the execution. Just because those projects weren't successful, I don't think it says anything about the need for multi-tenancy.
None of the unicorn projects that focused on the idea alone have succeeded. It must be about the implementation.
You'd probably be setting yourself up for some difficult problems technically as well if you were to try and do that, especially in terms of isolation.
The isolation part is one reason I'm not going after true multi-tenancy. For folks in the same company, it's like having the same sudo access to the server. Everybody technically has root access, but once they leave the company, you can revoke their access and be done with it. At least in the first version, I'm not going to try to limit access.
Meaning, if you and I have different accounts on the same server, I can mess with your application if I want to. It's not going to be easy: I'd have to do some clever volume mapping, call the APIs, get your services and try to SSH into them. But I'm not going to block that, because that's not my user. The users requesting multi-user support are engineers in the same company. They don't care. As long as an account is active, it's okay for it to have access to other applications. It's not going to be visible in the dashboard, but if they want to, they can technically access it.
In terms of the project and the community around it, how much help have you been getting over the years? Is it something you're still doing primarily by yourself, or have some people started taking a more active role in issue triage, trying to reproduce issues, and things like that?
Definitely, though I don't have a single person who's been with me for a long time. There are always periods, 3 or 6 months, where one person becomes super active. They help out, even fixing bugs. We talked about the success of the project and all the good design decisions I made. Let's also talk about the not-so-good decisions.
People came along and said, "You're doing this the old way. The new way is not to use class components in React. Let's write it in the functional style." Before that, people complained about the CLI tool being written in an old style. Nothing was wrong with it; it was just old-style. Somebody offered to rewrite it, and I said, "That's good. Go ahead and rewrite it."
They rewrote it. It was fine, but then they left the project, and at this point I have no idea how that whole thing works. That's why, fast forward, when a lot of people complained, "The React part is using the old style," well, there's nothing wrong with it. Instead of having a 10-line definition per view, I have a 20-line definition. It's boilerplate code I don't need to write, but it's not the new cool thing, and that's fine.
We made very similar decisions when Flagsmith was a tiny little project. We could have kept the React code base up to date with whatever the latest version of the interface was, but we were like, "It works. It doesn't crash a browser. It wouldn't look any different. It might be a fraction faster for people using it." Funnily enough, after four years, Kyle, who wrote the front end, was like, "I can't look at this code anymore. It's driving me nuts." There's been so much thought put into it. We took a lot of lines of code out, which for any engineer is pretty much the holy grail. We were very specific about it: unless there's a high or critical security issue in a library, it's a total anti-pattern to start refactoring that stuff.
You do want to rewrite your application every few years or so, not necessarily because your framework is old, but because you've made a lot of patches to the system. It's a piece of cloth with a hundred patches on it. It doesn't look great. Rebuild it and make it cleaner, because by now you have a better understanding of what it's supposed to do.
When you started the project, you had a general idea, and you had to make a lot of modifications to make it do what you wanted. I'm in the same mindset with regard to refactoring. That's one of the mistakes I made: at some point, I have to rewrite the client-side library so it's more self-explanatory. It might be old, but I like code that explains itself. Fewer lines of code doesn't necessarily mean better.
That is one good reason to do that. If it's got so many patches on it, it's hard for someone to submit a pull request without having to reason about it forever. That is a good argument. You mentioned getting a ton of stars on a successful Hacker News submission. What other things have you done or decided not to do that you think were successful or helped propel the project?
Engaging with the community. The Slack group was a good idea. A lot of people said, "Why don't we move to Discord? Discord is the new cool thing." I keep it steady. The Slack group was good. I do have an active Twitter account, which is good. I follow mentions of CapRover every now and then to understand what's going on. Hacker News is interesting.
Pretty much every 1.5 to 2 years, somebody posts about CapRover. It looks like people are rediscovering CapRover. It gets 200 to 300 upvotes and hits the top spot again and again. Other than that, that's pretty much it. I didn't invest time and energy into growing the popularity of the project. It's word of mouth at this point.
In terms of the future, are you still trying to figure out what that looks like or have you got some firm plans you want to execute?
For commercialization, I've already built it. It's fully done and working in my environment. I'm playing with it for a little bit. I'm going to add more features, polish it, release the first version and see what it does. The first version is going to be a terrible failure but hopefully, I'll get to learn from it. That's the hope. Also, I'll build more features on top of it. Essentially, it opens an avenue for users who are requesting features: "Would you be willing to pay if you had that feature?" It makes it much easier. It also helps the open-source version because it helps it be maintained and become a more sustainable project moving forward.
The first version of any open source project will be terrible and a failure. Be sure to learn from it and find out which features to build on top of it.
It sounds like we've got an opportunity for a follow-up chat in that case because that step is quite big for something that you've been working on for several years and have invested thousands of hours in. That's quite a big step. There are a lot of interesting things and problems that occur when you take that step, I'm sure.
Even with simple things like the licensing and analytics that I'm adding, I have to be very careful. With open-source projects, you don't get a second chance. You go wrong and you're dead. The number of hours that I spent on the whiteboard figuring out how to approach it is 10X more than the time that I spent on the keyboard coding it.
Kasra, it has been super fascinating chatting. For those interested, there's a good subreddit called Self-Hosted. That's a great subreddit for learning about and discovering applications and things. I want to thank you for the project because I looked at the apps that I'm running on my CapRover. I've got Bitwarden, Drone CI, Git and Firefly, which is a cool personal finance tracker. I've got my wife's Pilates website. I've got an uptime monitor, an SMTP mail forwarder and Miniflux, which is an RSS reader. I use the vast majority of those every day. Thank you for your time but also for the project because I have a lot of fun fiddling around with stuff. Sometimes things break. That's not your fault. It's mine. I have to wonder why I'm staying in bed and doing what I do as a day job. You know the feeling.
Here's a fun fact. I learned about many self-hosted applications via the PRs that people submit to our one-click repository. I was like, "This is super cool. I didn't know that there is an open-source version of Google Photos." I forgot what it's called but it is now one of the one-click apps. This whole self-hosted thing is crazy. One-click apps make it super easy for you.
They do. I love it. Thanks again. I'm going to put a note in my diary to see where you guys are at in a few months, and maybe we can catch up again. Thanks, Kasra.
Thank you.
Important Links
An engineering leader with a demonstrated history of delivering world-class quality products. Currently leading the team behind in-app and out-of-app messaging on UberEats; previously built Uber's financial products.
Also led the Android team behind the second-largest online dating app in the world at the time, POF.