NanoVMs
This whole idea of having an operating system that runs many different programs on one computer is completely outdated and doesn't reflect reality whatsoever. The question is, "What would an operating system built in 2020 to serve up software look like?" That's where NanoVMs comes in with the unikernel concept. Today, its CEO, Ian Eyberg, takes us on a deep dive into unikernels and how they function. Tune in to discover how to efficiently manage virtual machines and create a truly agile environment for your software development.
---
I'm super interested to be talking to Ian Eyberg from NanoVMs. They have an interesting product. We had a brief chat and decided we were going to try to go a little bit deeper on technicalities here, because I'm interested in it and it's my show. Ian, welcome. Do you want to give us a little bit of background about yourself and your project?
My name is Ian Eyberg. I'm like a lot of people. I have been hacking since age 10 or 11. I have always been interested in the intersection of operating systems and security. One of the topics we are talking about is unikernels.
Tell me what they are, and what the commercial side of the company does and how it functions.
To set the stage, let's first define what they are not. Most computers that people use run general-purpose operating systems; Windows, Linux, and Mac all fall into that category. Unikernels are fundamentally different because they are systems designed to run one and only one program, and in 99% of cases that's inside a virtual machine, although there are some bare-metal types out there.
The rationale behind that is, why would you do this? Look at a lot of the systems that we deploy software on. I'm talking about Linux. Linux came out in '91. It was probably built on a 386 at the time. This was about 10 years before VMware and about 15 years before Amazon EC2. It was built at a different time.
Even all the way back in 1991, Unix itself had been out for years. The entire architecture was already old, because when Unix was created, it was built on computers like the PDP-7 and PDP-11. Those machines took up entire walls. They cost $500,000. There are reasons why Unix has this concept of multiple users running multiple programs on the same physical computer.
It's night and day compared to how we do software now. Nowadays, you often can't even fit the software on one computer. For a lot of people, their company's database might span many different servers. If you work at a large tech company like Uber or Airbnb, they don't have one database server. They have thousands.
This whole idea of having an operating system that runs many different programs on one computer is completely outdated and doesn't reflect reality whatsoever. The question is, “What would an operating system look like for something in 2020 to serve up the software?” That's where we come in with a unikernel concept.
Let's take a step back because I want to understand general-purpose operating systems. I grew up with 8-bit computers and 16-bit computers. An 8-bit computer, as far as I understand, was doing one thing; it had one process running in memory. 16-bit computers on the 68000 had some semblance of multiple processes running at the same time. The same with Windows 3.1 and 95. As time went on, those machines stopped crashing as much, because there was more technology being brought into the kernels of those operating systems.
Advancements like virtual memory and things of that nature started protecting things. You weren't directly writing to a physical memory address anymore.
It's interesting because I hadn't thought about that. I do remember being in my university lab when the administrator told everyone off for running a graphical mail client, because we were using up all the memory on a minicomputer. It was a massive, loud thing in a room. From that point of view, the direction of travel has been that computers have become more specialized and do fewer things. What you are talking about in terms of unikernels would be specifically for server-class workloads?
That's one very large distinction as well. The computers that we are chatting on right now, I'm on a Mac right now. I have God knows how many programs installed on this machine. There are probably a few hundred of them running right now. That's a very different experience than the server-side system. On the server, generally speaking, you are going to find that one core application that somebody wants to run per instance. Assuming it's virtualized or in the cloud, then you split it up into all the various instances.
For the web server, for instance, you might have 4 or 5 app servers load-balanced behind something, and then you might have your database split up into a couple of different shards and replicated, so that's another set of servers. The server environment is extremely different from the desktop environment. Another implication is that all the unikernels I'm familiar with are only meant to address server workloads.
It's also interesting because it makes me think that, back in the day, you would go into a data center, cut your hands putting a server in a rack, and get cold. You would have a ton of crap running on the server that you weren't interested in. It feels like the direction of travel in that regard has been to remove as much stuff from your server as possible, because there are security risks with it and there are performance risks with it.
From my point of view as a layperson, Docker images are getting to the point where they are trying to trim themselves down as much as possible, to where they have as few processes running as possible. Is a unikernel the logical end state of that way of thinking, or does it address something other than that?
There's a trimming down to what is necessary to run, so there's a little bit of that, although the unikernel goes even deeper down the rabbit hole in terms of performance and isolation. Starting at the top level, if you go to Amazon right now and spin up an Ubuntu instance, whatever the latest Ubuntu is, there are going to be 100 different programs running automatically without you installing anything.
There are going to be about half a dozen different compilers on there. There are going to be thousands of libraries that you may or may not be using. There are going to be tens of daemon-related users. There's a lot of stuff that comes on these default distros. It's the same thing on Google Cloud: the first image they want you to spin up is Debian, whatever version it is. It's the same problem.
Some people think that you can use something like Alpine, Puppy Linux, or one of the smaller distros. Unikernels take it much further than that. Even comparing something like Alpine with a strict seccomp profile attached to it to a unikernel, you still have this concept of users. You still have this concept of interactivity with the actual instance.
I can still SSH into Alpine and start running lots of different commands and installing whatever I want. There is no facility to do that on a unikernel. There is no SSH. There is no shell. It's just that one process. To quickly define that, when I say one process, it's a little bit more tightly defined than one program. I could say one program, but something like Postgres, for instance, has many processes.
There's a whole class of software written in the '90s that is inherently multi-process and makes use of message passing, IPC, shared memory, and all that stuff. That reflects the types of computers that people were working on back then. There's nothing wrong with it; it's just different. A unikernel is a single process with as many threads as your hardware will support.
Going back to our example on Amazon, when I set up Ubuntu on an EC2 small, I have one hyperthread that I have access to. Even though it looks like 100 programs are running at the same time, that's not true. What's going on is the CPU is flipping back and forth between all 100 programs so fast that we can't tell the difference. From the computer's standpoint, that is massively slow to do. The unikernel gets a lot of performance speedup because there's only that one process with its threads.
You are saying that these are normally run inside something, like there's a host or manager process that's managing them?
A unikernel doesn't make use of an init system or orchestration software like Kubernetes. This is another major difference, because when we deploy a unikernel to Amazon, we are not deploying a Linux AMI, then installing orchestration software on top, and then running the unikernel. What we are doing is taking your application, Python, Node, or Go, whatever it's written in, taking that one application, and creating an AMI out of it. When you boot it up, the application is the only thing that's running. There is no Linux inside. There is no container. There's none of that stuff in there. It's just the application running.
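As a rough sketch of what that workflow can look like with the open source OPS tool discussed later in the conversation (the binary name is a placeholder, and exact flags may differ between OPS versions):

```bash
# Build an ordinary Linux binary; no unikernel-specific changes to the code.
GOOS=linux go build -o myserver main.go

# Run it locally as a unikernel under QEMU, forwarding guest port 8080.
# (Flag names follow the OPS docs at the time of writing and may vary by version.)
ops run myserver -p 8080
```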
The code that makes up that unikernel, is that some part of the Linux kernel, or is that something completely different?
No. We have made a Linux binary-compatible unikernel. What that means is that all those different syscalls that libc functions and all these different libraries are written against, we implement with the same interface, so all your software basically works out of the box. That was a big stumbling block for a lot of earlier unikernel implementations, because they were using alternative libcs and hot-patching all these syscalls and so forth.
One thing you will find in the unikernel world is that every single implementation has an opinion. There are ten different unikernels out there that I'm aware of, and they all have slightly different takes on how to do things. One of those takes was erasing the syscall boundary. That has security issues, and security is one of the reasons we are using unikernels to begin with, so we kept that boundary in place, although we got rid of the heavy process-to-process context switching and many other things, and we also have a single address space.
You sat down and started a new project with the goal of emulating the entire low-level platform of Linux.
I wouldn't use the term emulating which usually implies performance issues, but yes. That's exactly what we did. Nanos is a kernel we wrote from the bootloader all the way up. It's open source under Apache2 on GitHub so anybody can do whatever they want with it. That's the main open source project that we work on.
The reason I mentioned emulation is that it sounds like the process is a little bit similar to writing an emulator. Is that fair to say, like a game emulator?
At the end of the day, depending on the architecture, whether it's x86 or ARM, we do conform to that machine standard. You can't see them, but I have a stack of Intel manuals that's about yay high. We don't have to support all of that; we only have to support maybe this much, but it's still a beast of an architecture.
The cool thing is that we are not writing most of the emulation code. We are reusing hypervisors that already do that perfectly well. What we are doing is writing the guest OS layer. When people say public cloud, maybe some newer developers don't understand this, but public cloud is essentially virtualization with an API on it. That's all it is.
AWS was built on Xen. Their newer stuff is running on a forked KVM. Google Cloud is built on KVM. Oracle has their own stack. Microsoft has Azure. These are all different hypervisors that we can target and deploy to. The hypervisor does some emulation, and some of it gets passed through to the underlying hardware, especially with some of the newer stuff like Nitro from AWS and the ENA network driver. That's talking directly to the custom silicon that Amazon built themselves. It's the same thing with their storage going over NVMe. Some of it is emulated and some is not.
You are building software on top of these things like KVM.
The hypervisor for us is the base layer. We assume that it's always going to be deployed as a VM. Whether you are in the cloud or in a private data center, we don't care. We assume that it's going to be a VM.
Some of the stuff that lives below the hypervisor connects to you directly, and some of it is wrapped, so you are talking through the hypervisor.
Sometimes we have to add drivers and so forth, but that's another huge difference between Nanos and Linux. More than half the Linux code base is pure drivers. Why is that? They support 30 million network adapters. They support half a dozen CPU architectures. They support all this hardware. One of the big ideas with unikernels is that we only do VMs, so all that support and all those drivers, we don't have to deal with. There are literally only a handful of different things we need to deal with: a network card, a storage device, and a clock. That's about it. There are 2 or 3 hypervisors that we support, so it's low single digits of drivers, maybe a little over ten in total.
You are shedding code left, right, and center here.
It's not just the drivers, even though that's half of Linux. It's things like users, for instance. The whole purpose of having a user is this idea of interactivity: I'm going to SSH into a server and run ls and all this stuff. We call those commands, but those are separate programs. The whole idea of a shell is to run many different programs.
Shells are a completely foreign concept to unikernels. They simply don't exist. Let's say you had a web server that you are deploying as a unikernel. You could throw in a webpage that does the equivalent of an ls if you want or need that functionality, but I would argue that very few people need that if all that's there is a web server.
The whole concept of running programs other than the one that's running is what we are getting rid of. Diving into multiple-process support, a lot of newer developers get confused over processes versus threads, but again, we support as many threads as the underlying hardware gives you. It's the idea of multiple processes that we get rid of. A lot of people wonder, "Why can't you build a custom Linux kernel? Why can't you patch it out?" I don't think people understand how much code exists to support that.
Take the whole idea of a scheduler, for instance. We talked about those 100 programs running on Ubuntu. How does Linux decide which program gets its five milliseconds of CPU time? Scheduling is a whole beast right there. We talked about access rights. With users, you need permissions, access rights, and all this stuff. If you gut multiple processes and users, all of a sudden all those permissions go out the door. There's so much stuff that gets thrown out the window when you isolate things in this manner.
What's the history of that as a concept? Was there an implementation that sparked the market or how did it get off the ground?
We did not invent this concept at all. It's an idea that's been stuck in academia forever. You could trace a lot of it back to the mid-to-late '90s; there were quite a few different research projects going on at that time. I don't know where Cambridge is in relation to London, but a lot of it came out of there.
That was entwined with a lot of the microkernel stuff, even though that's a little bit of a different approach. It wasn't until 2008, 2009, and 2010 that you started seeing a lot more people coming out with this design. There were the OSv people from Cloudius Systems; Cloudius ended up pivoting into ScyllaDB, which is a Cassandra replacement. You then had Rumprun and so forth.
That was one guy asking the question, "How do I debug kernel drivers without having to reload the kernel every single time?" He spent ten years on that. You then had a group called Unikernel Systems. They wrote a ton of papers on the subject. They were all focused on OCaml and Mirage. That's one thing that you will see in the various implementations out there.
There are two camps of unikernels. One is language-specific, and the other is what we would call POSIX-compliant; those can run anything. In my view, that was one reason why those earlier projects were never able to capture the zeitgeist. Here in the US, I can name two companies that run OCaml. OCaml is a great language. It's the same thing with Haskell; I can name one company that runs Haskell, in Portland. When you create projects like this, it's interesting, but you can't expect a lot of traction in terms of adoption when you are using languages like that.
You mentioned a bunch of functional languages. Are there technical reasons why they lend themselves to this?
You have to look at why those people wanted to use those languages. A lot of this was coming from the provable-systems territory: can we mathematically prove that this system is going to function correctly? There are a lot of industries that want that. Think airplanes. There are certain computers out there that we want to function correctly: airplanes, cars, and things like that. Things that can kill you, basically. There's always research going on there, and I think that's where some of it came from. With functional languages, it's much easier to prove whether something is correct or not. The other thing was that focusing on one language makes the whole system much easier to deal with.
They are in academia and people are building implementations of them. You guys decided on a more general-purpose, POSIX-compliant approach. I'm thinking about this conversation in the context of me, a retiring Python developer. OCaml is a relatively obscure language. I'll piss off the Haskell people, but it's not that popular either. It's not the tool that you immediately go to when you want to write software generally.
The most popular language in the world is JavaScript. I'm not going to speak to JavaScript's good or bad points. I'm saying that there are 30 million developers in the world, and if you learned to code in the past year or two, more than likely you are using JavaScript or something like it.
We knew that there was no way this technology was going to see the light of day if we couldn't make something that not only supports all languages but where the end developer doesn't have to do anything special to use it. Our goal is that even non-developers should be able to use this technology. Let's say you are a DBA, or maybe some other type of sysadmin who doesn't code; you should still be able to install common software like Redis, MySQL, or whatever it is and use it without getting into the weeds at all.
You started off fairly audacious. Were you scratching an intellectual itch, or was there an obvious business here?
I was working on performance monitoring software when I started reading some of these initial papers. I was reading the papers and thinking, "This is interesting." Booting in single-digit milliseconds and stuff like that. I was diving into it and exploring the ecosystem, and Rumprun and OSv were the two projects I was interested in at the time. There were a handful of others, but those were the most interesting ones.
I was also writing a lot of Go at the time and I was like, "Why hasn't anybody written a Go one?" It turns out that Go hard-codes all its syscalls in its own assembly inside the language runtime, which allows it to do some of the static linking it does. That was one reason why nobody had ported Go to a unikernel. The Go Rumprun project got started so Go could run on Rumprun.
Rumprun was a project started by Dr. Antti Kantee. It was a project that he originally built for NetBSD to debug kernel drivers without having to reboot the kernel each time. That's where it grew out of. There's a lot of interesting work there. He was reaching the point where he was the only contributor and maintainer.
He got to a point where he was like, "I want to brew beer. I don't want to screw with this. I have been doing this for years." It's totally understandable. The more we learned about the ecosystem, some of the challenges, and what it takes to build something like this, the more we realized that we needed to start from the ground up. That's what we started doing.
One of the most popular myths out there is that a unikernel is not an operating system, that you are somehow running an application without an operating system. That is not true at all. There's a lot of functionality going on underneath. It's just purpose-built to do this one thing versus doing everything under the sun.
Look at Linux: Linux can run in cars and microwaves. Linux is used for TVs and desktops. It's used for everything. I'm not knocking Linux at all, but we were thinking, "We want to take server-side applications that are running in the cloud. What's the actual stuff that we need to run, and can we keep these unikernel characteristics, the performance and the security? Are you going to get much of those benefits out there?"
The answer is yes. There's performance. A lot of the applications we test run twice as fast on Google. They run three times as fast on Amazon. We are not doing any magic there; it's the architecture that is giving that performance. Security is such a huge deal. I come from the security space. That's what interested me in unikernels to begin with. It was the whole idea of, "I can't run other programs on the damn thing."
As an attacker, that's the whole reason for breaking into the server. A lot of people are like, "They are breaking my software." They couldn't care less about your software. They want to install a crypto miner. They want to dump the database. It always takes other programs. You could still use ROP gadgets or whatever to build out your payload, but I have never seen a MySQL dump built out of ROP gadgets.
It's always something like a fork and a shell. That's where the security starts to come into play, because a lot of these payloads become incredibly difficult to do anything intelligent with. We have articles where we are running REST web servers on Google Cloud. We can inject faults so that when a certain request comes in, it crashes the entire instance.
Not just the application but the entire instance, and then you can immediately shoot it a new request and it will respond. The web server is back up again because it takes milliseconds to boot. What's interesting from a security perspective is that the memory layout has completely changed. The whole layout is completely different in a matter of milliseconds. That would screw with an attacker's head, because when they are trying to attack your machine, it's all about the memory layout.
The two main benefits from my point of view are that it's way more secure and it's probably going to be more performant, and I get both of those things for free. The performance is because you have stripped the car of everything: all the back seats have come out, and the soundproofing. It's lighter, so it's faster. For the security, you are delegating some of that security to the hypervisor as well.
The other thing is that modern-day hardware has a lot of actual hardware-based protections built into it. We have traditionally relied on a software, application-layer approach to security, and that's because of these multiple-user and multiple-process systems. You pop onto a Linux system and there are half a dozen users and certain files: thou shalt not touch. It's funny, because having worked at all these companies for all these years, the very first thing people do when they get on these systems is sudo su, and they are root anyway.
I have got three Raspberry Pis downstairs and they all log in as root.
Even on the default systems that you are greeted with on Amazon and Google, you can immediately sudo up. That's the thing. People are treating these VMs as individual application hosting areas, because it's so hard to manage software at scale if you don't do that. I know 20-person companies that give Amazon $20,000 a month, and that's the low end. Lyft said that they gave them $300 million over the course of five years. It paints a picture of how much software is out there, how much is being consumed and produced. We need a much better way of dealing with it. The older ways are getting to a breaking point.
How different is the experience compared to dockerizing your application? Are they analogous in any way? I have got my REST API that's written in Python. I don't care; I want it to run cheaply, efficiently, performantly, and reliably. Other than that, I couldn't care less. For me, Docker is pretty amazing.
I can't say enough good things about it, but it feels like a great solution that started from a big ball of wool, because it's like, why am I setting UID 6001 halfway through the Dockerfile? It's some hack because that user hasn't got the right permission on that file. It's two paradigms fighting each other. You basically run one thing anyway; you have to make an effort to get a Dockerfile to run more than one program.
One thing that we didn't realize when we first got started was that removing all these layers and provisioning the way we do on Amazon, Google, and so forth simplifies a lot of things. You take your Docker container and throw it in Kubernetes or wherever. Over the past decade, we have seen all this stuff built up.
Go look at the CNCF graphic of the ecosystem. There are a billion ways to do this and that, and you wonder why that is. Networking, for instance: they don't use the underlying networking that the cloud gives them, even though 99% of Docker and Kubernetes users are on the cloud. They create underlay networks and overlay networks. The exact same thing with storage, the same thing with security, the same thing with all these different layers; they have created multiple layers.
You can talk about how the performance goes down, but you have also introduced all this complexity. When we deploy unikernels, we create AMIs every single time you hit the deploy button. It sounds expensive, but we can push to Amazon in about twenty seconds. That's your website running in twenty seconds. That AMI boots up on its own disk image, and it's using the virtual NICs that Amazon, Azure, or Google gave you. You can configure as much as you want, but you are using that base layer that they give you.
The same thing with storage. If I run a database, I'm attaching an EBS volume. I'm using the volumes that they give me; I'm not creating another layer on top. It becomes so much easier to deploy them, to debug them, and to interact with them in general. Debugging, I will give you an example. Early on, we were stress-testing a database in production and it would crash for lots of different reasons. It became very easy for us to clone that VM as it was running and download it. Since it was only 20 megs, we sat there, attached GDB to it, and would instantly find, "It crashed because of this." It opens up new paths of interacting with these systems too.
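As a hedged sketch of that deploy flow on one of the clouds (project, zone, and bucket values are placeholders, and the config keys and flags shown here are approximate; check `ops help` for the version you have installed):

```bash
# Illustrative only: build a disk image from the binary and boot it as a plain
# cloud VM, on the cloud's own virtual NIC and volumes, with no extra layers.
cat > config.json <<'EOF'
{
  "CloudConfig": {
    "ProjectID":  "my-project",
    "Zone":       "us-west2-a",
    "BucketName": "my-ops-images"
  },
  "RunConfig": { "Ports": ["8080"] }
}
EOF

ops image create myserver -c config.json -t gcp      # create and upload the disk image
ops instance create myserver -c config.json -t gcp   # boot an instance from that image
```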
I do remember we hit an issue. We were running App Engine Flex, which is basically their managed-VM product. There was a process that Google shipped with the App Engine instances that was doing APM logging, and there was a memory leak in it. They upgraded it, and one day the VM restarted because of a security patch. The application failed because the VM had run out of memory. This can't be real. It's Google, and it was. Nothing to do with me. I didn't even know, because this process has to run; it has to ship that logging data out of my VM into the rest of GCP.
The myth was broken a little bit at that point, because people were throwing stuff into a container or virtual machine and then hoping it works. In terms of developer experience, I have got a Django application. It's not mega-high volumes of traffic, but it's a fair amount; it's more than I'm used to. What would my experience of going through that process be?
The answer is different depending on what type of language you are using, and I will clarify why. Languages that are typically compiled are your Go and your Rust, or compiled bytecode like the JVM. They all have access to threads. In the case of a language like that, where you have access to multiple threads, I say you vertically scale. You spawn as heavy an instance as you need in two commands and you are off to the races.
It's different for the interpreted languages: your Rubys, Pythons, Node.js, and all that stuff. These languages came out in the mid-'90s. This was pre-SMP commodity servers, pre even good Linux threads. These languages are inherently single-process, single-threaded. In the past, when I worked at a Rails shop, we would spin up an Nginx and five app servers behind it, and Nginx would talk to those; that's how you scale. You do the same thing here, but in this case you are spinning up individual instances behind a load balancer, whether it's an ELB or a proxy. There are lots of different ways to skin the cat, but it's the same thing, except each of those interpreters is on its own instance. That's how it works.
So instead of my Docker container scaling itself, or telling Kubernetes to run ten copies of it, I tell Amazon to run ten copies.
Today you might have one big beefy instance with ten app servers on it and Nginx or whatever in front of it. That's what you would do now. In the unikernel world, because again these are single VMs, each one of those gets its own instance. I might give it an EC2 small, micro, or whatever is necessary; it depends from app to app, and then you spin those up behind your load balancer. It's a different way of thinking. A minimal sketch of the pattern follows.
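This sketch assumes the same illustrative image and config from earlier; instance naming and flags are placeholders, and the load balancer is whatever you already use (an ELB, Nginx, HAProxy, and so on):

```bash
# Illustrative only: boot several copies of the same single-process image,
# one interpreter per VM, and put them behind your existing load balancer.
for i in 1 2 3 4; do
  ops instance create myapp -c config.json -t aws
done
# Then register the resulting instances as targets in your ELB target group,
# or list them as upstreams in an Nginx/HAProxy config, as with any group of VMs.
```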
At what point do you guys start and stop? You start where the hypervisor hands off. In terms of the orchestration for provisioning these AMIs and such, that's a slightly different layer in the stack. I'm used to thinking, "I want ECS Fargate," and Fargate does that for me, but at a different level.
We are right at the VM layer. If we go to Google, Google is running KVM on one of the bazillion servers they have. We give them a disk image and they orchestrate KVM underneath. They give it network cards and storage, and then we are sitting there in that VM on that instance. That's where we start. KVM boots off that disk image, the boot sector knows where to start, and it goes from there. It loads up through stages 2 and 3 into the kernel, and then we exec your ELF and it's off to the races.
Let's take Python support as an example. When the Python organization released 3.9, is that something that you then have to do work to support?
There are two ways to do that. OPS is a tool that we made that will create disk images out of whatever you give it. If it's a Go binary, then all you have to do is give it that ELF file. If it's Python, a Python install can have like 1,000 different files in it, because you have /usr/local/lib/python, you have some shared libraries, you have .pyc files, and you have all the stuff that comes with a distro. For 3.9, we either create that package for people, where we play the package maintainer; just as you might do an apt-get install of Python 3.9, you can load the OPS package and get Python 3.9.
That's what the vast majority of people do, because they are not used to building these packages to begin with, but you are more than welcome to build your own if you are okay doing the configure, make, make install and knowing which libraries to put in the package. Different people have different dependencies. If you are trying to run something like Instagram, you might need ImageMagick in your distribution. If you know that you need to put that in there, then by all means go ahead and do that. Otherwise, you might use a stock install that other people have.
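As an illustrative sketch of the packaged route (the exact package name varies by OPS release, so listing what's available first is the safe move):

```bash
# Illustrative only: use a prebuilt interpreter package instead of building your own.
ops pkg list | grep -i python      # see which interpreter packages are available
ops pkg load python_3.9 -a app.py  # bundle app.py with the packaged interpreter and run it
```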
For interpreted languages, is it more analogous to running a different Python runtime?
At the end of the day, Nanos the kernel is one file. In the case of a compiled binary, your application is that ELF file. There's the interp entry where the loader knows where to go in, and it knows which libraries are linked to that file, so it can load those up and resolve them. The same thing happens with Python. The Python interpreter is the actual ELF file. Python comes in and says, "I need all this stuff in /usr/local/lib. Load that up. I want you to load all this other stuff." Once it's all loaded, it can start running the user's code, which is the code in the form of Python. At the end of the day, it's a script that gets compiled and interpreted.
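You can see the same mechanics with standard ELF tooling on any Linux box; nothing here is OPS-specific, and the interpreter path is just an example:

```bash
# Standard ELF inspection, shown for illustration.
readelf -l /usr/bin/python3 | grep -A1 INTERP   # the program-interpreter (dynamic linker) entry
ldd /usr/bin/python3                            # shared libraries the loader must resolve
```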
Before we run out of time, tell us about the project from an open source point of view and then the company from a commercial point of view. Where are you guys at the moment?
There are a handful of open source projects, but the two main ones are Nanos, the kernel, and then OPS, which is found at Ops.city and is the compilation tool. It's hard to call it an orchestration tool because it doesn't do that. It's more like a Terraform versus a Kubernetes, if that makes sense. Neither of those is a great comparison, but it's closer to Terraform than it is to K8s.
That's the open source side. It's free. It's Apache 2. Anybody can use it for whatever they want. We are not doing any of the business licensing stuff, and I don't expect we will, because we don't see the same problems arising. The company itself provides support, bug patching, and feature development. We do a lot of feature development; people are always asking for random crap we have never even thought of.
One popular thing is the concept of klibs, which is added functionality that you can plug in without having to modify your program's source. People are asking for a lot of this because of all the Kubernetes sidecars and that sort of thing; that's where a lot of those requests are coming from. We do custom development for that.
We have customers that give us money for workloads that they have had in production for years now. There's a company in France that I'm aware of that's been using us for years. They only reached out when something broke. The company's main focus right now is developing the tooling and the ecosystem for it.
For most of the customers that come to you, is it mainly the performance or the security benefits?
Security is probably the main thing, but simplicity probably comes even before the performance. It removes a lot of the random ops work that people do. I'm very comfortable living in the shell. I still use Mutt for reading mailing lists. I'm extremely comfortable in the shell, but I realize that the vast majority of developers out there are not, and that's fine.
That's one of the things that unikernels do well. They remove all that ops complexity. I mentioned there is no shell; your application is either working or it's not working. It's binary in that respect. It's not, "Which process is opening up 1,000 connections that I've got to dig through everything to figure out, or which log rotation is not working and filling the disk, preventing me from SSHing into the system?" All those sorts of problems go away because of the architecture. A lot of people like it simply for the simplicity.
Do you chat with the AWSes and GCPs of this world? Are there things that you need them to do to enable this?
We haven't chatted too much with them. We can support any public cloud, and we support all three big ones, AWS, Google, and Azure, well. It all works not only well but fast, and has all the different benefits. Still, there are things that they could do to make the experience a lot better.
For your customers or for both?
For both. There are a handful of things where we have solutions and there's software that fills the gap, but it would be a lot better if they would do certain things differently. The longer these things are not dealt with, the more we come in and make our own solution. It is what it is. It could be more business for us, or it could be the incumbents developing better integrations. It's hard to look into the crystal ball and figure out where a lot of this stuff is going. I would say that we are starting to fill in some of these gaps that we are hoping the incumbents deal with.
If you are reading this and you have got a Node application or a Go server and you are like, "I want to get my server running on Nanos," where would you guide them and what should they look at first?
I would go to Nanos.org and try the examples out. OPS is nothing to install; it's like 10 to 15 megs. After you install it, you can deploy a "Hello, world" to Google or Amazon in twenty seconds with two commands. You create the image, which is near-instantaneous, and then you deploy it. Try it out and you will instantly understand what these things are and how they work. You will have a much better idea of what the hell they are than from reading some articles. Reading articles and watching videos is great, but it's nothing compared to trying things out.
I thought it was going to be quite interesting talking to you, and it's been super interesting. I'm blown away by the audacity of what you must have been facing in terms of the scale of the software that you were planning on building. Thank you so much. I'm going to see what happens with my test of the platform. Thank you very much. Take care.
Important Links
Building safer and faster systems at NanoVMs.