In any organization with more than a small handful of IT staff today, it’s uncommon to find that they’re using a single visibility tool to monitor their IT infrastructure and applications. What’s more likely is that the operations team relies on many disparate tools to get an accurate read on their infrastructure. From an efficiency standpoint, using multiple tools instead of just one is probably not ideal; so why would anyone do this?
The unfortunate reality is that most operations tools serve a single purpose or a small subset of the purposes required by the organization. For example, a storage array manufacturer may also offer a monitoring suite to keep tabs on the storage platform. While that’s helpful in a small way, trying to manage a plethora of single-purpose operations tools across a large enterprise is nearly impossible, and it is at best an operationally cumbersome way to perform those disparate functions.
TV personality Alton Brown is famous for his distaste for single-purpose kitchen utensils. He believes they’re a waste of space, and that you should own tools that serve many functions and cover the widest possible breadth of cooking and baking needs. There’s a case to be made that the same sentiment applies to operations analytics software in the data center: the more data an organization can collect, parse, and analyze with a single tool, the more helpful that tool can be.
“The only unitasker allowed in my kitchen is a fire extinguisher.” – Alton Brown
Besides the operational overhead of IT administrators needing to use ten different consoles to check ten different tools – IT staffing isn’t free! – there are other reasons to strive for consolidating operations analytics data.
The key word for understanding the value of centralized operations analytics is context. While silos of operations information can be somewhat useful in accomplishing a specific goal, those single-purpose tools have a very narrow focus. The difference between being reactive and being proactive is the ability to put all of the available operations data into a system that can provide the correlations and contrasts that show where legitimate areas of interest are and where real problems may lie.
Centralized Analytics with vRealize Operations
If a single tool is going to pull in data from throughout the infrastructure and process it into something meaningful, it needs to be pretty powerful. One example of such a tool is VMware’s vRealize Operations. This tool has a good reputation for being extensible and robust, and many VMware customers have this tool packaged into their Enterprise License Agreement. If they’re already paying for it, they just need to take advantage of that idle power.
Unfortunately, vRealize Operations can’t do it all. It would be impossible for any one manufacturer to create a tool that interacts correctly with every other manufacturer’s products. So the vRealize Operations architecture includes a construct called Management Packs, which allows pluggable software modules to be installed that provide additional data and insight to the vRealize Operations engine.
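The pluggable-module idea can be sketched in a few lines of code. To be clear, none of the class or method names below come from the actual vRealize Operations Management Pack API; they are invented solely to illustrate how a central analytics engine can stay vendor-agnostic while plug-ins supply the vendor-specific data collection.

```python
# Hypothetical sketch of a pluggable collector architecture. These names are
# illustrative assumptions, NOT the real vRealize Operations API.
from abc import ABC, abstractmethod


class MetricCollector(ABC):
    """Contract every plug-in module fulfills so the engine can poll it."""

    @abstractmethod
    def source_name(self) -> str:
        """Identify the infrastructure component this plug-in covers."""

    @abstractmethod
    def collect(self) -> dict[str, float]:
        """Return a flat mapping of metric name -> current value."""


class StorageArrayCollector(MetricCollector):
    """Example plug-in for a storage array (values are static for illustration;
    a real module would call the array vendor's management API)."""

    def source_name(self) -> str:
        return "storage-array"

    def collect(self) -> dict[str, float]:
        return {"latency_ms": 4.2, "iops": 1850.0}


class AnalyticsEngine:
    """Core engine that knows nothing about vendors, only the contract."""

    def __init__(self) -> None:
        self._collectors: list[MetricCollector] = []

    def register(self, collector: MetricCollector) -> None:
        self._collectors.append(collector)

    def poll(self) -> dict[str, dict[str, float]]:
        # One unified snapshot across every registered plug-in.
        return {c.source_name(): c.collect() for c in self._collectors}


engine = AnalyticsEngine()
engine.register(StorageArrayCollector())
print(engine.poll())
```

The point of the pattern is that adding coverage for a new storage array, hypervisor, or database means writing one new collector class, never touching the engine itself.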
Extending vRealize Operations with Blue Medora Management Packs
While VMware’s software is mostly limited to monitoring and analyzing VMware-specific infrastructure components, Blue Medora has put considerable time and effort into developing software called Management Packs. These plugins provide the interface for vRealize Operations to examine other critical infrastructure components such as storage arrays, servers, network equipment, non-VMware hypervisors and management suites, databases, and applications.
Context Decreases Mean Time to Innocence
As you can see in Figure 1, pulling additional information into a tool like vRealize Operations can empower a single tool to create a full-stack view of the operations landscape. In the case of Figure 1, the operations team would be able to see right away whether a storage issue is impacting a single virtual machine, a particular datastore, or perhaps an entire storage array. This context immediately allows a troubleshooter to eliminate healthy items from the troubleshooting flow. Decreasing Mean Time to Innocence in this way gives support teams a substantial head start.
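The triage logic described above can be made concrete with a small sketch. The thresholds, VM names, and topology here are invented for illustration; the idea is simply that once latency readings from every layer land in one place, a few lines of correlation can tell you whether the blast radius is one VM, one datastore, or the whole array.

```python
# Illustrative only: cross-layer context narrows a storage latency issue to
# one VM, one datastore, or the whole array. All names/values are made up.
LATENCY_THRESHOLD_MS = 20.0

# Observed per-VM latency plus which datastore each VM lives on.
observations = {
    "vm-web-01": {"datastore": "ds-gold", "latency_ms": 55.0},
    "vm-web-02": {"datastore": "ds-gold", "latency_ms": 4.0},
    "vm-db-01":  {"datastore": "ds-silver", "latency_ms": 3.5},
}


def scope_of_issue(obs: dict) -> str:
    """Classify the blast radius of a latency problem from one unified view."""
    slow = [vm for vm, o in obs.items() if o["latency_ms"] > LATENCY_THRESHOLD_MS]
    if not slow:
        return "no issue"
    affected_ds = {obs[vm]["datastore"] for vm in slow}
    all_ds = {o["datastore"] for o in obs.values()}
    if affected_ds == all_ds:
        # Every datastore shows slow VMs: suspect the shared array.
        return "array-wide issue"
    vms_on_affected = sum(1 for o in obs.values() if o["datastore"] in affected_ds)
    if len(slow) == vms_on_affected:
        # Every VM on those datastores is slow: suspect the datastore(s).
        return "datastore issue: " + ", ".join(sorted(affected_ds))
    # Only some VMs on a healthy datastore are slow: suspect the VM(s).
    return "vm-level issue: " + ", ".join(sorted(slow))


print(scope_of_issue(observations))
```

In this sample only `vm-web-01` is slow while its datastore neighbor is healthy, so the datastore and array are ruled out immediately; that elimination of healthy layers is exactly the Mean Time to Innocence win.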
Interestingly, although Blue Medora has been introducing new plugins in the virtualization, infrastructure, and application space for quite some time, a similar problem has persisted in the arena of databases. As application architectures evolve and database technology matures and grows more complex, database administrators and their operations counterparts have struggled to get a handle on their data.
Database Technology is Increasingly Diverse
Databases in the enterprise used to come in fewer flavors than you can count on one hand. But in recent history, IT professionals have had to learn to deploy and support not just the relational databases of the past, but non-relational databases with columnar storage as well as massive, distributed, eventually consistent databases (see Figure 2).
Not only that, but IT architects have traditionally confined databases to a physical server in the data center – no longer! Databases now reside in the data center on bare metal, in VMs, in containers, and in the public cloud through any number of database-as-a-service offerings. Keeping track of all these different databases in such a broad array of locations has been next to impossible for many organizations.
SaaS-based Analytics to the Rescue
SelectStar, powered by Blue Medora, provides centralized monitoring of databases for organizations struggling to manage their increasingly diverse database estates. Delivered as SaaS, it monitors, manages, and optimizes database performance for every database type, wherever it resides.
SelectStar normalizes traditional relational, distributed, and cloud databases as well as their underlying cloud and virtualized infrastructure into a single view so that you can maximize the performance and availability of database-driven applications and services. You already know how important context is; this normalization process helps create powerful meaning and context out of heterogeneous data.
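The normalization step can be sketched generically. This is not SelectStar's actual schema or pipeline; the native field names below loosely mimic real database engines, but the mapping tables are assumptions invented for illustration. The gist is that each engine's native counters get translated into one shared vocabulary so dissimilar databases become directly comparable.

```python
# A minimal sketch of metric normalization across database types.
# The mappings here are illustrative assumptions, not SelectStar's schema.
RAW_SAMPLES = [
    {"engine": "postgres", "xact_commit": 1200, "blks_hit_ratio": 0.97},
    {"engine": "mysql",    "questions": 950,    "buffer_pool_hit": 0.91},
]

# Per-engine mapping of native metric names onto one normalized vocabulary.
FIELD_MAP = {
    "postgres": {"xact_commit": "ops_per_interval",
                 "blks_hit_ratio": "cache_hit_ratio"},
    "mysql":    {"questions": "ops_per_interval",
                 "buffer_pool_hit": "cache_hit_ratio"},
}


def normalize(sample: dict) -> dict:
    """Translate one engine-specific sample into the shared vocabulary."""
    mapping = FIELD_MAP[sample["engine"]]
    out = {"engine": sample["engine"]}
    for native_name, value in sample.items():
        if native_name in mapping:
            out[mapping[native_name]] = value
    return out


unified = [normalize(s) for s in RAW_SAMPLES]
print(unified)
```

Once every sample carries the same keys, a single dashboard (or alert rule) can rank a PostgreSQL instance next to a MySQL instance by `cache_hit_ratio` without caring which engine produced the number.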
IT professionals everywhere – from the C-suite to the boots on the ground in the data center – are looking for ways to better understand and analyze their infrastructure stacks and applications. A mountain of data is already at their fingertips, provided by the single-purpose tools that monitor each different component of the data center. Despite the fact that most IT organizations are rich with data, many of them lack a way to connect significant bits of it to tell a useful story.
Blue Medora provides the tools to do more than just make the connection possible. The enhanced capabilities that Blue Medora’s software enables also mean that it’s easy and intuitive to take advantage of powerful existing platforms like VMware vRealize Operations and New Relic Software Analytics – tools with which operations teams are already familiar and perhaps already licensed to use.