Let’s get started. Again, my name is Mike Kelly, CTO of Blue Medora. Blue Medora is an IT performance monitoring company. We’re entirely focused on separating the data collection piece of IT performance monitoring, which is what we do, from the data analysis piece, which is what our partners do.
To give you a little background on why this is important: what we’ve seen over the past five to ten years are some very significant advances in the monitoring platforms themselves, in their adoption of machine learning, in new vendors coming into the space, and in existing vendors making big improvements to their products. But while those platforms are innovating at the core platform level, what we’ve seen over and over again is that they tend to neglect, or be a little slow on, the integrations piece.
Consider all of the technologies inside your data center. It’s very difficult for a company, whether a new vendor or an existing one, to support all of them and get that data into its analytics platform. There are a couple of reasons. One is that it may not be a core competency for that company; another is that a given technology’s vendor may be a competitor. But the customers using those platforms really want to expand the aperture so they can see everything inside their data center from whichever monitoring platform they happen to be using. They want consistent quality across all of these integrations, which is an area where we see things lacking quite a bit. They want the integrations updated frequently so they’re always current. And they want to understand the relationships between the components in the stack. Those are some of the challenges our customers see in the monitoring space.
This is where we fit in. We have a library of over 150 integrations across the data center, from databases to compute, network, storage, and hardware, to cloud services like AWS, Azure, and others. We provide a layer that delivers that data to nearly a dozen specific IT performance monitoring platforms. Because monitoring integrations are all we do, we do them very well. What we’ve found is that if you do the integrations very well, there’s a cascading impact, a significant impact on the platform itself. You can go a lot further if you understand, for example, the relationships between all of the data center technologies you’re monitoring than you could if they were separate, or if the data were flat and lacked those dimensional components.
I mentioned we have 150 integrations. Our core focus is connecting monitoring engines to the customer’s IT stack, wherever that may be, so customers can use the platform of their choice to monitor their entire data center. And we don’t only provide breadth; it’s not just that we cover all of the data center technologies. We also have depth that matches any of the technologies out there, almost always far beyond what other monitoring companies are doing. We’re very responsive to updates: APIs are changing constantly, and most of our integrations are updated multiple times per year. In many cases we have a relationship with the vendor, so we have a zero-day release. If a new API comes out, we’re beta testing it alongside the vendor, and we ship support the same day the API is released. The intent is to keep expanding that.
People who are just hearing about Blue Medora often find this confusing: we’re not a monitoring platform company. We work with monitoring platforms to expand their visibility into your data center.
Q. What exactly is the customer data exposed through your API?
A. They don’t have to expose it through their own API.
Q. Is that really the goal?
A. Yeah, and there are a couple of different variations, but almost always we’re partnering with both the platform and the technologies themselves. We’re getting each technology’s API behind one consistent interface, as opposed to each one being separate, and then it’s a matter of understanding the technology and keeping the integration up to date.
One of the things you’ll find in monitoring is that when you try to integrate with another platform, you may be using a technology, or an integration, that hasn’t been updated. Within a couple of years, very quickly, it’s out of date and essentially useless. So unless somebody is on top of it and revving it constantly, you find yourself out of luck.
Q. And that’s part of the value proposition you provide? You’re maintaining exact versions, keeping everything up to date?
A. You don’t have to worry about figuring that out. The next big piece, and one of the things you rarely see, is understanding how these systems relate to one another. To give you a simple example: you may have a database running on a VM, with its data stores on a storage array, running on, say, a Dell compute system. A lot of monitoring systems would just monitor all of those separately, in silos, so you know the performance of each individual one. What we do is say: here’s the performance of the database, and I’m going to link that to the virtual machine it’s running on, so you can see the performance of the virtual machine. We’re going to link that virtual machine down to the compute it’s running on, and its data stores to the storage array they’re connected to. Having visibility into all of that is a lot more valuable than separating them out and looking at them individually.
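To make the linking idea concrete, here is a minimal sketch of how a monitored stack could be represented as a topology rather than as silos. All class and resource names here are illustrative assumptions, not Blue Medora's actual data model.

```python
# A tiny resource-topology sketch: each monitored component is linked to the
# components it runs on, so a query about the database can walk the whole stack.

class Resource:
    def __init__(self, kind, name):
        self.kind = kind          # e.g. "database", "vm", "host", "datastore"
        self.name = name
        self.metrics = {}         # latest metric values for this component
        self.children = []        # downstream resources this one runs on

    def link(self, child):
        """Record that this resource depends on / runs on `child`."""
        self.children.append(child)

    def walk(self, depth=0):
        """Yield (depth, resource) for the full stack beneath this resource."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

# Build the example stack from the text: database -> VM -> compute -> storage.
db = Resource("database", "orders-db")
vm = Resource("vm", "vm-042")
host = Resource("host", "dell-compute-01")
array = Resource("datastore", "storage-array-7")

db.link(vm)
vm.link(host)
host.link(array)

for depth, res in db.walk():
    print("  " * depth + f"{res.kind}: {res.name}")
```

With the links in place, a slow database can be traced straight down to the host and array beneath it instead of correlating four separate dashboards by hand.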
So we’re going to talk about two things specifically today. We do monitoring integrations for both on-prem systems and cloud systems. First, we’ll talk about our vRealize Operations Management Packs: we’ve taken over 60 of those integrations, packaged them into management packs, and bundled them together with vRealize Operations. They provide visibility into all the other areas of the data center that VMware doesn’t cover out of the box. Our customers love working with vRealize, and they want to use it for more than just VMware; they want to use it for their entire data center. When you add in Blue Medora, and what we call our True Visibility Suite, you open up that aperture. You can see everything inside your data center, and you can extend it out to AWS, Azure, or other cloud environments. You get a single view of everything in your environment.
Q. A quick question on the management packs. There are a lot of service providers that use vCloud Director and vROps as their management, monitoring, and operations suite. VCD 9.5 opened up the extensibility framework in the HTML5 UI. Do you have any intentions or thoughts about taking the functionality you’re providing in the management pack and letting it integrate directly with the VCD API, as opposed to going into vROps and from there?
A. It’s a really good question. We’ve explored it in the past, and VCD has come a lot further now, so it’s another area we’re looking at much more closely, because we do see it as a big opportunity for expansion. I had talked to Abu Midori as part of the vCloud program as well. We are on the price list in that scenario and have dealt with several multi-tenant configurations, but primarily those use cases stay within vROps, within that realm. Perhaps that’s something we could revisit now that there’s more functionality.
Q. I know a lot of sites that have more than one monitoring tool for various reasons: one for the data center, one for their applications. You listed a bunch of the ones that are generally used. What’s the licensing like if I’m feeding more than one? Say I get the management packs for vRealize Operations, but I also want to use the integrations with New Relic. Is it just one price, or is it buy-as-you-go? How does this work? Because it sounds like I’d need to do it piecemeal, and that would be a really painful thing.
A. Yeah, that’s a great question, and this is it. We’ll talk a little more about it further on, but yes, this is something we bundle together. In the vast majority of cases, there’s an option to get these extensions for all the platforms you want under a single contract.
The other product we want to talk about today is BindPlane. This is a newer product for us; it’s been out less than a year. It’s a SaaS application, the first of its kind, and we call it monitoring integrations as a service. The concept is that, particularly for cloud or SaaS monitoring solutions, we want to make it as simple as possible to manage the lifecycle of your integrations, and to give you a single source for all of them.
So we’ve taken all the integrations we already had available. Under the covers, this is the exact same solution: these are libraries we’ve been using for vROps and other platforms for years, and they’re upgraded regularly. Now, using BindPlane, we can send that same data to a number of SaaS solutions: New Relic, Wavefront, Microsoft OMS, Google Stackdriver, and others. It becomes very easy, and you do it with one integration, one agent. You put one BindPlane agent inside your environment, collect data for everything you want to monitor, and then send it out to any platform you’re planning to use. We’ll dig more into that in just a minute.
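The "one agent, many destinations" model described above can be sketched roughly as follows. The platform names come from the text; the functions themselves are hypothetical stand-ins, not BindPlane’s real API.

```python
# Sketch of collect-once, fan-out-everywhere: a single collection pass feeds
# every configured monitoring platform.

def collect_metrics():
    # A real agent would poll databases, hypervisors, storage, and so on.
    return {"db.queries_per_sec": 1200, "vm.cpu_percent": 73}

sent = []  # record of (platform, metrics) deliveries, for demonstration

def make_sender(platform):
    def send(metrics):
        # Stand-in for each platform's real ingestion endpoint.
        sent.append((platform, metrics))
    return send

destinations = [make_sender(p) for p in ("New Relic", "Wavefront", "Stackdriver")]

metrics = collect_metrics()      # collected once
for send in destinations:        # delivered everywhere
    send(metrics)

print([platform for platform, _ in sent])
# ['New Relic', 'Wavefront', 'Stackdriver']
```

The point of the design is that the expensive part, polling the monitored systems, happens once per interval regardless of how many platforms consume the data.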
Q. Will it work with non-SaaS options? Because then I could use BindPlane to do exactly what I was yapping about before…
A. Yeah, it will work with those. The challenge for customers, or what we’re hearing from customers, is that if something is on-prem, they’d rather not send the data up to the cloud and back down. But if that’s not an issue, then we can use it.
Q. BindPlane works by not just having a collector, but also sending the data up to a SaaS, and you send it on from there?
A. Exactly, yes.
Q. That piece is missing. In a lot of cases, you’re absolutely right, that would not be welcome, but ultimately it’s not a technical challenge.
A. It’s really just whether the customer would like that.
Q. It’s a security challenge, so yes, it is a technical challenge. Is there some way of controlling which targets or metrics go up to your service, so that some can go while others don’t?
A. Sure, it’s fully configurable. Everything that you see, all of those integrations, we can turn on and off. You can do that at the integration level, or at what we call the resource level. For a database, for example, the resources might be the database instances, the queries, the different nodes, maybe the tables. You can turn each of those off. Or you can go all the way down to an individual metric.
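The integration/resource/metric hierarchy of toggles described in the answer might look something like the sketch below. The structure, names, and integration keys are assumptions for illustration, not BindPlane’s actual configuration format.

```python
# A three-level toggle tree: integration -> resource -> metric.
# Switching anything off cuts the whole branch below it.

config = {
    "postgresql": {                                  # integration level
        "enabled": True,
        "resources": {
            "instances": {"enabled": True,
                          "metrics": {"connections": True,
                                      "buffer_hit_ratio": True}},
            "queries":   {"enabled": False},         # whole resource off
            "tables":    {"enabled": True,
                          "metrics": {"row_count": False}},  # one metric off
        },
    },
}

def is_collected(integration, resource, metric):
    """Walk the toggle hierarchy; anything switched off disables the branch."""
    i = config.get(integration, {})
    if not i.get("enabled", False):
        return False
    r = i.get("resources", {}).get(resource, {})
    if not r.get("enabled", False):
        return False
    return r.get("metrics", {}).get(metric, True)    # default: collected

print(is_collected("postgresql", "instances", "connections"))  # True
print(is_collected("postgresql", "queries", "duration"))       # False
print(is_collected("postgresql", "tables", "row_count"))       # False
```

This also illustrates the security angle from the question: a metric that is never collected can never egress the network.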
Q. Different countries define PII very differently. China, for example, now defines IP addresses, user names, and other things as PII, and others are even more stringent than that. With that in mind, can you tokenize, redact, encrypt, or otherwise transform the data before it gets pushed up into your cloud, so that this actually meets those requirements?
A. There are a couple of ways we solve that. We don’t have redaction capabilities yet, but we do allow you to eliminate data: as opposed to tokenizing it, you can drop it entirely. And in some specific cases, like query monitoring, we do allow something like tokenization: anything inside the query is eliminated and just comes up as a token.
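The query-tokenization idea, stripping the literal values out of a captured SQL statement before it leaves the network, can be sketched with a couple of regular expressions. These patterns are deliberately simplistic and illustrative; a production implementation would need a real SQL tokenizer.

```python
import re

def tokenize_query(sql):
    """Replace string and numeric literals in a SQL statement with '?'
    so the query shape is preserved but no user data rides along."""
    sql = re.sub(r"'(?:[^']|'')*'", "?", sql)       # string literals
    sql = re.sub(r"\b\d+(?:\.\d+)?\b", "?", sql)    # numeric literals
    return sql

print(tokenize_query("SELECT * FROM users WHERE email = 'a@b.com' AND id = 42"))
# SELECT * FROM users WHERE email = ? AND id = ?
```

The tokenized form is still useful for performance analysis, identical queries collapse to one shape, while the PII in the literals never egresses.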
Q. It would be really cool if you had a configurable JSON-type thing that I could set up and just say, “run this filter with these rules for all my data,” so I have one place to control it all. It has to happen before the data even egresses my network, and if you just do it for me, compliance doesn’t know what you’re doing, so they can’t say it’s actually valid. Then we have to go back to you and back again, and it becomes a very long tail. But if you had something we could control, it would be very easy: here are my rules, have fun.
A. Yeah, that’s a great point. PII has become incredibly challenging these days, with GDPR and all. That would make it a lot easier.
Q. I could say “tokenize this, remove that,” right? Let’s prepare those rules. Okay, going back to your SaaS offering: to get the monitoring information to the analytics engine back end, is it real time? What type of delay are we talking about?
A. Great question. From the time of collection to the time it actually hits the platform is roughly one second, in that range. The time for the platform to then analyze it depends on which one we’re sending to: some are almost instant, and some take maybe five minutes for the data to show up inside their environment.
Q. Are you doing anything to control the amount of data that flows from the on-prem environment into this?
A. Great question, yes. That’s where the fine-grained controls come in, because that can be a challenge. It depends on how much you want. We provide potentially a lot of data; that’s part of what we’re doing. But we also let you just pick. Maybe you only want a few of the key performance indicators; it’s really easy to say, that’s what I’m going to do, I’m going to expose just those. There’s the data being sent out of your network, there are network traffic costs, and there are also costs on the platform side for consuming those metrics. On a SaaS platform you’re going to pay for every single one, and potentially it’s a lot of duplicate data: if your database is running hot, your virtual machine is probably running hot as well, while your physical server right now might not be doing anything. We’ll dedupe, though that’s not quite the right term; we reduce the amount of data that signals the same problem.
Q. To ask the question a different way: are you processing the data before you send it out?
A. We do a decent amount of processing, though I wouldn’t quite call it that. There’s processing in terms of what we reduce things to; that’s the primary case, where we’re saying, just keep an eye on a limited subset of what we collect. And then there’s a lot of processing that actually adds data, which may be the opposite of what you’re talking about: that’s where we add relationship data or metadata. That can be turned on and off as well.
There’s also the whole concept of capacity definitions. That’s all evaluated locally before anything is shipped up, so for anyone trying to stitch together a set of rules about what’s going on, the capacity definition happens first, before the data ever leaves. Also, to build on the concept of filtering data: if you’re collecting on a storage array, it doesn’t really make sense to grab the individual metrics on the individual disks unless you’re in a very small environment. If you turn that off, we won’t make those calls to the APIs at all; we basically skip that particular function. So it’s all configurable, and it really reduces the load on the monitored system’s API.
Q. So if I actually wanted that data for whatever reason, maybe forensic analysis, can you dump it from BindPlane to a local storage device that I can then pump up at a later date?
A. I missed the very beginning of that question.
Q. I may want that data because performance metrics are often a really early warning sign for security and other technology issues. When a lot of my systems go nuts, it really means I need to do something. But I may not want all that detail sent, because it’s just going to cost me a lot; I want it stored locally and only pumped to where it needs to go when I need it, like pumping historical data into an analysis when the incident response team needs it. Collecting old data after the fact is impossible, but I may not want it in my performance analysis tool; I want it for my incident response people to be able to do forensics.
A. What we have right now is on what we call the collector, which is where we’re actually gathering all the data locally. It has a cache that’s configurable in terms of how much data you want it to keep there, and that’s essentially what you’re describing. It’s not designed for forensics, but it would basically work that way. It’s only limited by resources, so it comes down to how big the box is.
Q. Several terabytes would be fine, so that would solve that part of the problem. How do I then take that data and inspect it, or move it into something else so I can do this forensic analysis?
A. By default, once it gets to a certain size, that cached data is written to disk.
Q. So that would be available in files on disk, essentially log files that are available to you. Okay, thank you.
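The collector-side cache described in that exchange, a bounded in-memory buffer that spills to plain files on disk, could look roughly like this. The class, file naming, and threshold behavior are assumptions for illustration, not BindPlane’s actual implementation.

```python
import json
import os
import tempfile

class MetricCache:
    """Keep recent samples in memory up to a configurable limit, then spill
    the buffered batch to a numbered JSON file that stays inspectable later."""

    def __init__(self, spool_dir, max_in_memory=1000):
        self.spool_dir = spool_dir
        self.max_in_memory = max_in_memory
        self.buffer = []
        self.spool_count = 0

    def add(self, sample):
        self.buffer.append(sample)
        if len(self.buffer) >= self.max_in_memory:
            self.flush()

    def flush(self):
        """Write the buffered samples to disk and reset the buffer."""
        path = os.path.join(self.spool_dir,
                            f"metrics-{self.spool_count:06d}.json")
        with open(path, "w") as f:
            json.dump(self.buffer, f)
        self.buffer = []
        self.spool_count += 1

cache = MetricCache(tempfile.mkdtemp(), max_in_memory=2)
cache.add({"metric": "cpu", "value": 0.7})
cache.add({"metric": "cpu", "value": 0.9})   # hits the limit, spools to disk
```

Because the spooled batches land as ordinary JSON files, an incident response team could pick them up after the fact, which is the forensics use case the questioner was describing.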