L8ist Sh9y Podcast: Sastry Malladi on Edge-ification and Real-World IoT Deployments

Sastry Malladi, CTO of FogHorn, sits down with Latest Shiny Podcast to discuss all things edge computing, edge-ification and real-world IoT deployments. Listen to the in-depth discussion below.

 

Highlights:

  • FogHorn Data Aggregation Technology for Devices
  • Edge is Constrained
  • Containers are Standard
  • IoT Standard Environment
  • Autonomy at the Edge
  • Edge Architecture Tradeoffs
  • Machine Learning

 

About L8istSh9y Podcast
The L8istSh9y podcast is a non-commercial series of discussions with global industry leaders in Edge, DevOps, Security, Open Source and more. It is run by Stephen Spector (Edge Gravity by Ericsson) and Rob Hirschfeld (RackN) to share the latest technologies and ideas from industry thought leaders.


Full Transcript:

Stephen Spector:  [00:00] Hello everyone. Welcome to another edition of "Latest Shiny Podcast." This is your host, Stephen Spector. Of course, with me is Rob Hirschfeld. Good afternoon, Rob.

Rob Hirschfeld:  [00:10] Afternoon, Stephen.

Stephen:  [00:12] We have a new guest today from a new company, again, finding new people to listen to, which is fantastic. Before we recorded this podcast, we discovered a perk for working at StubHub. Rob and I are both looking at the StubHub employment page right now that we did not know about.

[00:33] Let me introduce Sastry...

Rob:  [00:34] It's certainly better for your waistline than the perks at Ben and Jerry's.

Stephen:  [00:41] That's correct. Let me introduce you to Sastry Malladi, who is the CTO of FogHorn. Welcome to the podcast.

Sastry Malladi:  [00:50] Thank you. It's been a pleasure.

Stephen:  [00:52] Besides commenting on your prior history at StubHub, why don't you tell us a little about yourself, and tell us a little bit about FogHorn? We'll jump into edge and see where the conversation goes.

Sastry:  [01:06] I am a technologist by nature. I've been in the technology business for 30-plus years, at many different companies, both small and big, sometimes my own startup, self-funded as well. I've worked at a lot of big companies. It started from operating systems, to distributed computing, to app servers: IBM, Oracle, eBay, Dolan, and so on.

[01:30] For the last three-plus, close to four, years, we have been building this company called FogHorn, based in Silicon Valley here. It's an atypical way to start a company. It's seed-funded by a seed fund company called The Hive in Palo Alto.

[01:47] The way it works is that The Hive bootstraps companies by providing some seed funding, maybe hiring a couple of engineers to [inaudible] PLCs, and then finally going and hiring, effectively, the founders for the company, the CTO and the CEO. That's how myself and David King, our CEO, got hired.

[02:06] What we do is provide edge computing, edge intelligence software, especially focused on the industrial side of things. I know IoT, edge, and all of these are buzzwords these days.

[02:17] More specifically, we provide solutions for real-time analytics, data processing, and machine learning in constrained environments, especially PLCs and small devices, detecting problems or solutions for the problems. We'll go into more detail, I'm sure. That's where we are.

[02:37] We have a shipping product. We're based out of Sunnyvale here in California. We do have an offshore setup as well. We have a global presence, so lots of customers across the globe.

Stephen:  [02:48] Thank you. This is exciting. In our area of focus on edge, which I love, there's a difference, because you're really building, it sounds like, a gateway. The purpose of the software you build is to aggregate devices, then make decisions locally, and then send the data off. Is that a fair assessment?

Sastry:  [03:13] Yes, exactly. Not necessarily specific to a gateway, per se. Our core capability is building software that can be installed on an existing device that the customers may have. In some cases, they may choose to put in a new gateway, whether it's an x86-based IoT gateway or something like a Raspberry Pi.

[03:37] In many other cases, they may not have the opportunity to add those devices. They will ask us to install this on their existing PLCs. PLCs are actually quite powerful too, and in some other cases, embedded systems.

[03:51] That's why the footprint of our software matters: how much memory, CPU, compute, and storage you would need. Then you do all of the processing locally, so that you don't have to incur the cost of sending all of the data to the cloud.

[04:06] You may not even have connectivity. You have security issues, but more importantly, the real goal here is to identify the issues before it's too late, so the customer has a chance to fix them and then optimize their yields or reduce their scrap, whatever the case might be.

Stephen:  [04:23] Wow. I wanted to break this down quite a bit for listeners. One thing I want to do is, if you're not familiar with the term PLC, it's an industrial controller, Programmable Logic Controller. They're usually fixed‑function computers that have very limited capabilities, although that was 20 years ago when I was playing with PLCs.

[04:46] I'm sure that they're much more generic now. One of the questions I have for you is, if you're talking about embedded software in the device, does it end up being a library? Is it containerized? How do you take a piece of software and then fit it into a PLC environment, which is a very, very constrained system?

Sastry:  [05:12] Right, and that's the core. Two things. First of all, thank you for explaining the PLC. That's exactly right, although some of the modern PLCs coming out these days are a lot more generic than the constrained ones from 20, 30 years ago.

Stephen:  [05:30] [laughs] When I was doing them, it was ladder logic programming, yes.

Sastry:  [05:32] Exactly. And people still do that?

Stephen:  [05:33] I was in kindergarten. We had some PLCs to help manage sandwich time.

Sastry:  [05:35] That's exactly right. Now, to answer your question about how we really make it work, there are two things.

[05:45] One is, as you alluded to, we containerize the software, so we are operating system and hardware agnostic, in the sense that we can work on any chipset, hardware, and operating system. We containerize and ship those containers.

[05:59] The main reason it's able to run even in those environments is our secret sauce. We've got several patents on how we build it and how you do reactive flow programming on a number of sensor-data streams that are coming in, including some of the modern sensors like video and audio.

[06:16] Because it's fairly easy for customers to install a video camera these days, or an acoustic sensor, or a vibration sensor. That's a lot of data.

[06:25] What we've built is a highly innovative CEP (complex event processing) engine combined with a machine learning engine that can run in that kind of small footprint. We're talking about hundreds of megabytes for the most common use cases, with almost no storage, and it's able to do reactive programming and data analysis on that.

[06:45] Then, on top of that, we take that container and ship it, so it's easier from a deployment perspective.
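FogHorn's engine itself is proprietary, so as a rough illustration only, here is a minimal Python sketch of the general idea described above: reactive, bounded-memory pattern detection over a sensor stream. All names and thresholds are invented for the example.

```python
# Minimal sketch of reactive pattern detection over a sensor stream.
# This is NOT FogHorn's engine, just an illustration of evaluating an
# expression incrementally, with bounded memory, as events arrive.
from collections import deque

class SlidingWindow:
    """Keep only the last `size` readings so memory stays bounded."""
    def __init__(self, size):
        self.values = deque(maxlen=size)

    def push(self, value):
        self.values.append(value)

    def mean(self):
        return sum(self.values) / len(self.values) if self.values else 0.0

def detect(stream, threshold=80.0, window=10):
    """Yield an alert whenever the rolling mean crosses the threshold."""
    w = SlidingWindow(window)
    for reading in stream:          # react to each event as it arrives
        w.push(reading)
        if len(w.values) == window and w.mean() > threshold:
            yield {"alert": "rolling mean exceeded", "mean": w.mean()}

# Example: a simulated temperature stream.
for alert in detect([70, 75, 78] * 4 + [85] * 12):
    print(alert)
```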

Stephen:  [06:51] It makes a lot of sense. Because it's containerized, you can then move it into different environments and different topologies.

Sastry:  [07:02] That's exactly right.

Stephen:  [07:04] You're also able to just scale up. If you have a machine that has more capability, then it could become a full gateway from that perspective.

Sastry:  [07:09] It is. In fact, that's a great question, by the way. Some of our customers, especially in the manufacturing plants where you have lots of processing in the manufacturing machines, they have a choice. If they wanted to install this software in each of these machines, in the PLCs individually and then locally process them, that's fine, too.

[07:28] Some, especially in the plant environment, they may have a NOC, a Network Operation Center, locally available, where they may be able to put a slightly bigger gateway where they connect all of the 20 different machines and then process them all together in the same gateway.

[07:43] Software scales up very nicely. We've solved the harder problem first, which is, how do you scale it down and run in a smaller footprint? Scaling up is relatively easier.

Stephen:  [07:53] One of the things that you said that I'm glad to come back to is that you described edge in a way that I really liked, which is that it's a constrained environment.

Sastry:  [08:09] Absolutely.

Rob:  [08:10] When we look at what edge is about, the big difference is that there are constraints in an edge environment, because you need low latency, or closeness, or physical proximity. There are reasons all those are the constraints. That, to me, is the defining characteristic.

Sastry:  [08:25] You are precisely right. Those are the constraints that we work with because that's the reality. That's what customers are faced with today.

Rob:  [08:35] Does that mean that when a customer's building logic on top of your platform for the edge, it's something that gets delivered in that container? How do you make sure that the environment has the latest code? Have you solved the code distribution problem?

Sastry:  [08:52] Yeah. Remember, because we containerize the software (we use, for example, Docker, but it could be any other container environment, too), any time we want to update the software, upgrade it, or supply a patch, a security patch, or the next version of it, it's simply a matter of pushing the new images into our container repository.

[09:14] For the customer who is willing to apply that, we have given them a pseudo-installer, an updater, if you will. That's what keeps running on the device and can pull those new images from the repo. That's our own software.
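As an illustration of the pull-based update loop described here, the following is a minimal sketch using the Docker SDK for Python. The registry, image name, and poll interval are hypothetical, and error handling is omitted; it is not FogHorn's updater.

```python
# Sketch of a pull-based updater: poll a private registry for a newer
# image and restart the container when one appears. The image name and
# poll interval are hypothetical; error handling is omitted for brevity.
import time
import docker  # Docker SDK for Python (pip install docker)

IMAGE = "registry.example.com/edge/analytics:latest"  # hypothetical repo

def run_updater(poll_seconds=3600):
    client = docker.from_env()
    while True:
        current = client.images.get(IMAGE).id     # image we're running now
        pulled = client.images.pull(IMAGE).id     # latest image in the repo
        if pulled != current:
            # Replace the running container with one from the new image.
            for c in client.containers.list(filters={"ancestor": current}):
                c.stop()
            client.containers.run(IMAGE, detach=True,
                                  restart_policy={"Name": "always"})
        time.sleep(poll_seconds)
```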

[09:27] You might also be asking about what runs on top of our software. Obviously, our customers use our tools. We've got a number of authoring, deployment, management, and configuration tools with which they configure things.

[09:40] For example: What kind of sensors do they have in their environment? What kind of expressions or patterns might they be looking for? What kind of models do they want to deploy? We've got tools for that. They can do that anytime they want.

[09:51] They bring up our tool, update their model or change their model, change their configuration. They can do all of that without us getting involved.

Rob:  [10:00] Basically, you're using the container as your software delivery mechanism, which makes a ton of sense to me. I build, test, develop, integrate with the devices that I want, then I commit that into my hub (it doesn't have to be a public Docker hub). That then propagates the software to the field.

Sastry:  [10:23] That's exactly right. We don't put it in a public Docker hub. We have our own private hub, but exactly right. We test it and commit it for different types of chipsets, devices, and hardware architectures, and then the customers simply pull it from there.

Rob:  [10:39] That's a really good use of containers. One of the things that's fascinating to me in this model is that containers have gotten so ubiquitous that you're counting on edge devices being able to run containers.

Sastry:  [10:53] That's exactly a good point you bring up, Rob. Most of these existing devices, even though they're constrained, are running some flavor of Linux (Ubuntu or some other version), or some flavor of Windows, or some flavor of real-time [inaudible] operating system.

[11:11] Almost all of them, including, in fact, the latest Windows now support the notion of containers. That has never been a problem so far.

[11:18] The only environment where the container type of stuff did not work, which most recently one of our largest partners and customers has asked for, is getting this up and running on a handheld, like an iOS or Android type of device. Docker or otherwise, containers don't quite work well in that environment, where there's a notion of an app.

[11:44] We cannot package our software as an app there in those environments. Other than that, in almost all environments, containers have worked really, really well.

Rob:  [11:54] This is fascinating. With what you're building, you're creating a software distribution model, very cloud-like, very comfortable for people who are used to that type of environment, but then being able to position this intermediate piece from an IoT perspective.

[12:10] To me, it still sounds like you're building a gateway. You're just making a gateway that can be embedded in the device where you need it to be.

Sastry:  [12:17] Well, if you're calling it a gateway, just to be clear, we only sell software. We only produce and sell software. We do have a lot of hardware partnerships with Dell, HP, ADLINK, Cisco, and so on. They test our software and, in fact, sell it preloaded on their gateways, [inaudible] and all of that, but we only make software.

[12:39] If you are referring to a gateway as a piece of hardware, no, we don't sell or make hardware. But if it's the piece of software that you're referring to as a gateway, sure. We provide more than a container.

Stephen:  [12:53] Let's talk for a second about what a gateway is. In a lot of previous podcasts about edge, we've made industry-sweeping comments about IT infrastructure at the edge delivering advanced functions: collect data, process data, decide whether to forward it.

[13:15] What type of software do you think is going to be normal in an IoT environment? Just generically and then let's figure out how people are going to deploy it and manage it.

Sastry:  [13:28] Let me describe what almost all of these different verticals that we run into in an IoT environment have in common. They always have some sensors measuring different things: temperature, pressure, velocity, whatnot. They've got other types of sensors, like video, audio, and so forth, that we talked about.

[13:43] One layer of the software is what we call the data ingestion layer. You've got to have a protocol-specific data ingestion layer, for example for OPC UA, Modbus, MQTT, BACnet, or ZigBee. It doesn't matter what those different sensor protocols are.

[14:00] One set of containers is simply connecting to these sensor streams, ingesting data, decoding it, enriching it, and publishing it onto the system.
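As a sketch of what one such protocol-specific connector might look like, here is a minimal MQTT ingestion loop using the open-source paho-mqtt client (1.x-style API). The broker address, topics, and enrichment fields are placeholders, not FogHorn code.

```python
# Sketch of one protocol-specific ingestion connector (MQTT flavor),
# using the paho-mqtt client library (pip install paho-mqtt, 1.x API).
# Broker address, topics, and enrichment fields are hypothetical.
import json
import time
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)        # decode the raw sensor payload
    reading["ingested_at"] = time.time()     # enrich with metadata
    reading["source"] = msg.topic
    # Publish the enriched event onto an internal bus for the processing
    # layer (here, simply another topic on the same broker).
    client.publish("internal/enriched", json.dumps(reading))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.local", 1883)  # hypothetical local broker
client.subscribe("sensors/#")                 # all raw sensor topics
client.loop_forever()
```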

[14:09] Then there is this other set of containers, which is really where the rubber meets the road, where we're doing real-time processing as the data is coming in. After the data is enriched, we're applying analytics, applying CEP, applying different pattern detections, applying different kinds of regressions, models, and so on.

[14:25] In other cases, deep learning, too. When video is involved, you can also do neural nets and deep learning.

[14:29] Believe it or not, we do all of that, and it still runs on this small equipment. That's what we call edge-ification. We'll get to that in a second.

[14:35] Then there is a third layer on top of that: once you process this data and derive some insights that are useful to the customer, what do you do with them?

[14:46] This is what we call the publication layer, a set of containers in that layer where you can take that information and publish it to other external systems, whether that's Microsoft's IoT Hub, Google IoT Core, or even AWS Greengrass, and so on.

[15:00] You can do all of that, but many a time customers simply take those insights and then use our SDK to automate taking actions in their PLCs.

[15:09] This is generally the IoT software: there is an ingestion component, there is a processing component, and then there's a publication component. There is also an optional storage layer, a local time-series database, if you want it and have local disk space to store some information.

[15:24] This is the general piece of software. The whole thing put together is really what we're calling the IoT platform on the software side. Now, it has to run somewhere. It could be running in a PLC. It could be running in a gateway. It could be running in an embedded system. That varies with the customer's scenario.
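To make the three layers concrete, here is a toy composition in Python: ingestion decodes and enriches, processing turns enriched events into insights, and publication forwards only the insights. All names and the over-temperature rule are illustrative, not product APIs.

```python
# Toy end-to-end composition of the three layers described above:
# ingestion -> processing -> publication. Names are illustrative.
def ingest(raw_events):
    """Ingestion layer: decode and enrich each raw event."""
    for e in raw_events:
        yield {"value": float(e), "unit": "degC"}

def process(events, limit=100.0):
    """Processing layer: apply a rule and emit insights only."""
    for e in events:
        if e["value"] > limit:
            yield {"insight": "over-temperature", "value": e["value"]}

def publish(insights):
    """Publication layer: forward insights to an external system
    (stubbed here as printing; could be MQTT, an IoT hub, etc.)."""
    for i in insights:
        print("-> publishing:", i)

publish(process(ingest(["98.6", "101.2", "104.9"])))
```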

Rob:  [15:39] It seems like if you can move it into some more powerful gear, then you could add more collaborative logic. You could start saying, "When I have data from these sensors, I could do some integrations. I could do some analytics, and then I could make decisions." Is part of the goal here to be able to have more autonomy at the edge?

Sastry:  [16:02] That is correct. The goal for doing this, number one, the most important, is that they want to find the insights before it's too late. They want to be able to leverage those insights to optimize their cost, whether it's scrap reduction, yield improvement, predictive maintenance, or simply security issues, whatever the case might be. That's the ultimate goal.

[16:22] Now, there are obviously many cases where they might need to combine the information that's coming live from these sensors with some existing MES or some other historical data that might be stored in other data systems.

[16:36] The way we make that happen is simple: because for us everything is a stream, we write components for all these different external systems, wherever the data might be sitting, to ingest that as just one more stream. For the customer, it doesn't make any difference where the data is coming from.

[16:52] We can also do it in such a way that it's distributed. Let's say you've got a wind farm. You've got hundreds of wind turbines running. Whatever insights you learn from one turbine, you can apply to the next one as well.

[17:06] We have this notion of a distributed edge where you can actually share the information between these edges without having to go back all the way to the cloud, because you may not even have the connectivity, and the latency could be an issue, not to mention the cost.

Stephen:  [17:20] That makes a lot of sense. Having a platform that enables you to do those things really translates into creating a valuable environment.

Sastry:  [17:29] That's exactly right. We are a flexible, valuable platform which customers can leverage to either just do some local processing in one edge, or connect a number of [inaudible] . We've got customers.

[17:42] For example, we run this on top of elevators, millions of elevators, to do predictive maintenance. You wouldn't even know it. An elevator has, for example, accelerometer, temperature, and pressure sensors, all of these things. Most of the time, these days, elevators are not sold as an asset; instead, they're sold as a service contract, just like software these days.

Stephen:  [18:01] What about the floor?

Sastry:  [18:02] Exactly. [laughs] When they go [inaudible] , you make a call to get a maintenance guy to come in. It's going to cost you a lot. These companies are spending tons of money.

[18:12] What we do now is put our little software onto the controller sitting on top of the elevator, connect all of these different sensors, and do predictive maintenance and proactively alert the customer: in how many days is this thing likely to fail? The door is not opening, or something else is failing. That actually saves them a lot of money.
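As a rough picture of the kind of local check involved (not the actual product logic), here is a minimal rolling z-score anomaly detector in Python. The sensor name, window size, and threshold are hypothetical.

```python
# Rough sketch of local anomaly detection on an elevator controller:
# flag readings that drift far from the recent baseline (z-score).
# Sensor names and thresholds are hypothetical.
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    def __init__(self, window=50, z_limit=3.0):
        self.history = deque(maxlen=window)
        self.z_limit = z_limit

    def check(self, reading):
        alert = None
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_limit:
                alert = f"anomaly: door-motor vibration {reading:.2f}"
        self.history.append(reading)
        return alert

detector = DriftDetector()
for r in [1.0, 1.1, 0.9] * 10 + [5.0]:
    alert = detector.check(r)
    if alert:
        print(alert)   # alert the operator before the door actually fails
```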

[18:31] The possibilities are endless where we're applying all of this. This is the point.

Stephen:  [18:37] I like that you're talking about edge and doing analytics at the edge in terms of profits and savings, because I think that people underestimate just how much efficiency, savings, and, really, reliability you're going to get from having these environments.

[18:58] What I guess I'm curious about, let me take a step back. There's a fundamental question I've had in some of these conversations that comes back to, "Are we going to push decisions very, very close to the edge?" Like at the top of the elevator? Or does it make sense at some point that you're going to have a building data center that is the edge instead of each elevator?

[19:25] How does a person designing that system make the trade-off to move from a Raspberry Pi or a NOC riding up and down on the floors to servers in the basement?

Sastry:  [19:39] This is a hybrid, hierarchical approach. In the example that you're talking about, elevator itself, a building may have multiple elevators.

[19:47] Each elevator has its own sensors. Whatever logic you need to apply locally to derive some insights or do failure prediction, it's most optimal to do it locally there, without having to send all of the raw data every single time the elevator moves up and down, whatnot.

[20:03] However, the building may have many elevators. The building may have other systems: HVAC systems, other energy consumption, lighting systems, and so on. Each of them has its own local edge processing.

[20:15] Then you will have a local building edge, so to speak, which can take the insights, the results, from each of these different edges within that building, then consolidate that and say, "Look, how is this building performing overall?" Somebody wants to know that.

[20:31] They may not have to go to each and every individual elevator, but to where the actual data is produced. If you think about big data processing, that's exactly the concept: process where the data is, rather than moving the data to where the compute is. The same idea has a lot of benefits in real life.

Stephen:  [20:49] Here's where I start thinking: you're describing fixed sensor systems. I can easily jump to a hotel where there are cameras in the halls.

[20:59] There are cameras in the elevator bays. You could actually use the video analytics coming off of those streams to detect that people are walking to the elevators, and actually call elevators in advance and be predictive. How do we get to that level of analysis and integration into an edge infrastructure?

Sastry:  [21:21] We're already doing that. In fact, I won't name the customer, but one of our largest customers came up with this notion: there's a huge campus, a huge building.

[21:28] There are so many doors to get in. There are so many elevators to get in. For each employee, each person walking in, based on their badge, they already know the best location to get in.

[21:39] They're trying to correlate and connect the information from the cameras (which door someone is entering), the badge (what's the identity of the person), and what elevator they need to go into, to automatically open it for them and press that button for them.

[21:53] We are already doing that. That's another use case, but these are not mutually exclusive. I was giving an example where, for individual assets, if you're trying to do asset performance optimization, you have an edge there. In the example that you're providing, the asset is the building, or the asset is the person moving.

[22:10] Therefore, you have to correlate the information across these different sources. We're doing that as well.

Stephen:  [22:15] Does that then become hierarchical? What you're describing is you might have the system in the elevator. Then it's going to forward relevant data to a building system. Then the building system might forward up to the campus system, and then the campus system's going to forward into some global analytics.

[22:33] Actually, it might split the streams. Some goes to the elevator people. Some goes to the video people. Some goes to security. Is that the vision of all these systems being federated up?

Sastry:  [22:45] That's exactly right. In a way, that is much more optimal.

[22:48] Like you said, the elevator locally processes all of this information, because that's terabytes to petabytes of data per day. It's a lot of data. There's no reason to send all of the raw data anywhere.

[23:01] If there is an insight or some information that it derives, saying that something is wrong, something is about to go bad, or whatnot (or that everything is OK, too), that information is what goes to the building edge, if you will.

[23:14] All such elevators in that same building will send to the same building edge, which can then use that information and say, "Look. Overall, how is this building performing? How is each elevator doing? How is the energy consumption? How is the HVAC performing? Is there an alert?"

[23:27] The building edge can then send to the building operator to say, "Look, this is how the building is doing."

[23:32] It's absolutely federated and hierarchical, in a way that's much more optimal and actually provides real-time insights. The user has the option to optimize their costs.
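A toy sketch of that federation pattern: each edge node processes raw data locally and forwards only derived insights to its parent. All names and thresholds are illustrative.

```python
# Toy sketch of hierarchical edges: each node processes raw data
# locally and forwards only insights to its parent. Names illustrative.
class EdgeNode:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.insights = name, parent, []

    def process(self, raw_readings):
        # Terabytes of raw data would stay here; only a summary moves up.
        worst = max(raw_readings)
        if worst > 0.8:  # hypothetical "something is wrong" threshold
            self.forward({"from": self.name,
                          "status": "degraded",
                          "score": worst})

    def forward(self, insight):
        if self.parent:
            self.parent.insights.append(insight)

building = EdgeNode("building-7")
elevator = EdgeNode("elevator-3", parent=building)

elevator.process([0.2, 0.4, 0.95])  # raw data never leaves the elevator
print(building.insights)            # only the insight reaches the building edge
```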

Stephen:  [23:46] Now I see why PLCs matter, because a lot of these old building systems are PLCs. You're trying to build this aggregated viewpoint of an environment from multiple sources. You need to be able to make a device that was never designed for this more intelligent about edge aggregation and being part of a collective system.

Sastry:  [24:09] That's exactly right. If you look at manufacturing plants, oil and gas, oil rigs and all of that, completely remote, they have these existing PLCs and existing control systems. We got into that.

[24:20] I'll give you one other interesting example, if I may: transportation. One of our customers is the largest North American locomotive operator. For example, there are 5,000 [inaudible] running in North America alone. I heard this stat three, four years ago: if they were to increase the average speed of a locomotive by one mile per hour, they would save about $10 million a day. That's a lot of money.

Rob:  [24:45] That's a lot of money.

Sastry:  [24:47] Anyway, I didn't really believe it until this company actually showed me the facts. What they had asked us is, "Look, we've got all these fuel inefficiencies. We've got wear and tear and all of these problems to detect, and all of that with all the sensors. The locomotive is running out in white space, in the middle of nowhere. We've got all this information."

[25:07] Before our solution got in place, what happened was: when the train stops at some station or some location, the data gets offloaded and, maybe, they do something with it. Whatever problem had happened, it already happened.

[25:19] Now, what they told us is the locomotive engine already has a built-in computer. They said, "You can't put in any other computer. No gateway, no Raspberry Pi, nothing. However small it is, we already have a built-in computer in the engine, and we'll give you a [inaudible] card.

"[25:34] Now, put your software there, connect to these sensors and raise to the engine operator who's actually in the engine compartment, in the car. When there is an issue, raise an issue so that person can act."

[25:44] That's what we did. This is another example where edge makes a lot of sense: you can't send all this information anywhere; it would be too late.

Rob:  [25:52] This is what's fascinating to me: when you're describing edge in this case, that's definitely an edge use case, but the edge use case has to be integrated with broader data analytics, which is what my next question is about.

[26:07] It's about how you do machine learning. Is machine learning a component of this? What's the data flow for getting data to build the model and then sending the models back?

Sastry:  [26:17] Got it. It's a good question. Machine learning is part of this. The data processing happens through a combination of three different layers in our system at the edge. One is what we call the enrichment layer, which is filtering out all of the noise, and extrapolating or interpolating when there are missing values, because sensors do go bad sometimes.

[26:38] That's followed by a complex event processing engine that we built from the ground up, which can run in just a couple of megabytes of memory and do pattern detection based on all these different signals coming in from the sensors, followed by a machine learning engine where you have multiple options.

[26:57] Especially if the data is not that big or the failures are frequent, you can pick one of these regressions, anomaly detections, or random forests.

[27:05] With any of these algorithms that we include with the system, you can train on the data locally there (although not many people actually do that, because the data volume is quite high), or you can pre-train your model in an offline environment using your favorite tool, bring that model into our tool, and push that out as part of the edge environment.

[27:24] You can do that either with traditional Python-based models, or you can also do it in PMML, the Predictive Model Markup Language, especially if you're developing your models in Spark ML, R Studio, and so on. You don't have to worry about that. You can still use your existing tools. No problem.

[27:41] Then you export all those model descriptions into PMML, re-import that, and then we do the right stuff to generate our CEP expressions.
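As an illustration of the import-and-score step, here is a minimal sketch using pypmml, one open-source PMML reader (FogHorn's own tooling differs). The model file name and input fields are hypothetical.

```python
# Sketch of importing a PMML model exported from Spark ML / R and
# scoring a reading locally. pypmml is used here only as an
# illustration; the model file and field names are hypothetical.
from pypmml import Model  # pip install pypmml (requires a Java runtime)

model = Model.load("failure_model.pmml")     # hypothetical exported model
result = model.predict({"temperature": 72.4, "vibration": 0.31})
print(result)  # e.g., a predicted class or failure probability
```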

[27:53] What we also do in the process: a lot of the time, when people are developing machine learning models in the cloud or an offline environment, they don't pay too much attention to how much compute they're using. How many layers am I building? What are my weights? How long is it taking? Because the compute is almost always available for them in the cloud. The environment is elastic.

Rob:  [28:11] [laughs]

Sastry:  [28:12] That's not the case at the edge. Therefore, we came up with a process called edge-ification, which involves a number of steps. Believe it or not, by doing this edge-ification process, you get more accurate results at the edge, because the fidelity of the data is a lot higher at the edge compared to the cloud; you're almost always down-sampling when you send data to the cloud.

[28:37] We do this optimization: reduction of the weights, of the number of layers. We take all of the pre-processing and post-processing aspects of the machine learning model out into CEP expressions, which run a lot faster, among a number of things. If you want, we can go into more detail.
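FogHorn's edge-ification pipeline is proprietary, but one public example of the same general idea, shrinking a trained model so it fits constrained hardware, is TensorFlow Lite post-training quantization. A minimal sketch, with a hypothetical model path:

```python
# One public example of the general "shrink the model for the edge"
# idea: TensorFlow Lite post-training quantization. This stands in for
# the concept only; it is not FogHorn's edge-ification process.
import tensorflow as tf

# Assume a trained Keras model saved at this hypothetical path.
model = tf.keras.models.load_model("failure_classifier.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_bytes = converter.convert()

with open("failure_classifier.tflite", "wb") as f:
    f.write(tflite_bytes)  # typically ~4x smaller; runs on small CPUs
```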

[28:52] The bottom line is, we give customers the flexibility to build the models however they want, in whichever format they want, then come to our tools, go through our deployment tool process, and deploy to the edge. Those three layers will take care of the rest.

Rob:  [29:09] Wow. This is something people need to think through when they design data cascades. Do you see a need for machine learning acceleration?

[29:23] I'm trying to put my IT-infrastructure-at-the-edge hat on for a second. If I came in and said, "Look, I actually can give you a data center that has a whole bunch of TPM or GPU processing," not TPM. What am I thinking? GPUs. Hardware that can actually do some number crunching. Can you then take advantage of that? Do processing analytics before you offload?

Sastry:  [29:53] That's exactly right. You probably meant TPU, the Google TPU.

Rob:  [29:56] That's it.

[29:57] [laughter]

Rob:  [29:57] TPM is the security component for hardware.

Sastry:  [30:00] That's right.

Rob:  [30:01] That's deep in my hardware life.

Sastry:  [30:03] I can see that. Yes, we can leverage that. If your edge device happens to have a GPU, a TPU, or even an FPGA, or some faster processor, we can certainly leverage that.

[30:16] In fact, Intel, one of our investors and a partner, came up with this framework called OpenVINO, which is an accelerator for machine learning models. We also run on top of that, where we can effectively leverage faster, higher-clock-speed processors underneath.

[30:35] The short answer is yes. If that device happens to have this chipset, we can leverage that.

Stephen:  [30:43] Sastry, this is where I always come in. I have to stop Rob.

Rob:  [30:50] No.

Stephen:  [30:49] It's a standard process. This has been a really good conversation. Just really appreciate you joining us. Your feedback has been great.

[31:01] If any of our listeners are interested in reaching out to you, your company, what's the best place for them to go?

Sastry:  [31:10] They should first go check out foghorn.io. That's our website. They can always request additional information there. They can send an email to info@foghorn.io. Of course, we all get word of that. If there are any specific people who want to reach out, they can contact me directly, too.

Stephen:  [31:29] Sastry, thank you for joining us. Rob, thanks for another outstanding set of questions. It's really useful. And to our listeners...

Rob:  [31:36] Love when we go deep.

[31:36] [laughter]

Stephen:  [31:37] To our listeners, hopefully you found this really useful. We do. As I always say, if you have a company or someone we need to reach out to, let us know. We're happy to bring them on as a guest.

[31:49] Sastry and Rob, thank you again for joining us today.

Sastry:  [31:53] Well, thank you. Thank you [inaudible] . It's been a pleasure. Appreciate it.



Transcription by CastingWords