Google Stackdriver Logging Expanded to 38 New Sources via Blue Medora BindPlane

by bluemedora_admin on July 17, 2019

Extending observability for Stackdriver customers across on-prem and multi-cloud Kubernetes, databases and applications

GRAND RAPIDS, MICH. – July 17, 2019 – IT monitoring integration innovator Blue Medora today announced open access availability of an initial 38 log source integrations through its BindPlane log streaming platform, for all Stackdriver customers.

This follows the April announcement with Google Cloud to bring to market its Monitoring-Integration-as-a-Service at no additional licensing cost to Stackdriver customers. The managed log streaming capability simplifies extending Stackdriver’s observability to enterprise customer data centers and other public clouds. The addition of log source ingestion complements BindPlane’s existing metrics data pipelining capabilities.

The new log source integrations include Kubernetes, Amazon EKS and Azure AKS, along with support for key workloads including Windows applications, Microsoft SQL Server, Oracle, Elasticsearch, Kafka, NGINX and more.

They allow Stackdriver customers to unify event analysis, even when running multiple Kubernetes orchestrated services across Google Cloud, Amazon Web Services, Microsoft Azure and private data centers. These capabilities also enable diagnosing production issues for application stacks running on Google Cloud VMs.

BindPlane automates the collection and enhancement of diverse IT operations data, and the metadata that exposes IT relationships. Designed in response to the complexity of managing operations data in hybrid and multi-cloud environments, this real-time data stream improves upon the analytics of popular monitoring platforms including Google Stackdriver, Azure Monitor, New Relic, VMware, Datadog and others.

“The Blue Medora and Stackdriver solution has allowed me to gather signals from a really diverse environment and see them in a single pane of glass,” said Timothy Wright, founder at Eastbound Technology. “This has saved us significant money and effort otherwise spent on managing additional tools and open source solutions.”

“The key to BindPlane’s greatest value is having it offer the widest range of integrations possible to customers, for their preferred monitoring tools,” said Mike Kelly, chief technology officer at Blue Medora. “We’ll continue to build on these integrations to make sure we offer the most inclusive monitoring and observability capabilities available.”

Stackdriver brings together application performance tracing, logs, and infrastructure monitoring functions into a single tool. The addition of BindPlane unlocks a real-time, dimensional data stream from as many as 150 operations data sources, supporting monitoring of:

  • Non-GCP public cloud resources including Amazon AWS, Microsoft Azure, Alibaba Cloud, and IBM Cloud.
  • Critical workloads and databases on GCP (or data center) virtual machines and on-premises infrastructure.

Details on how to deploy Stackdriver with BindPlane can be found on Google’s website.

About Blue Medora

Blue Medora’s pioneering IT monitoring integration as a service addresses today’s IT challenges by easily connecting system health and performance data–no matter its source–with the world’s leading monitoring and analytics platforms. Blue Medora helps customers unlock dimensional data across their IT stack, otherwise hidden by traditional approaches to metrics collection.

Ann O’Leary
Blue Medora
P: +1 650 996 0778

View this press release on TalkCMO.

Migrating From On-Prem to GCP: Storage

by bluemedora_editor on July 16, 2019

An essential aspect of every migration to the cloud is storage. In some instances, migrating storage to the cloud is as simple as copying and pasting. But problems can arise when the amount of data you need to move is too big or too sensitive, when the transfer needs to be secure, or when you need business continuity while migrating. In many enterprises, all of these use cases apply. But there are solutions to the problems enterprises face when migrating to Google Cloud Platform.

In today’s post, I’ll share the list of services that will help you to migrate your on-premises storage layer to GCP.

Storage Services in GCP

In GCP, there are many services you can use for storage, depending on the storage type your applications need. GCP has a service for every common need. For unstructured data like log files, database backups, images, or any other files, you have Cloud Storage. There are also fully managed services for MySQL and PostgreSQL with Cloud SQL, where Google takes care of patching, high availability, and read replicas. And if you want a database fully managed by Google with extra features like horizontal scaling, strong consistency, and global availability, you have Cloud Spanner. In case you can’t decide which storage service to use (I’m with you), there’s a decision tree in the official docs, including detailed information on when to choose each storage service.


So let me introduce you to a few services that you might use when migrating to GCP.

Online Transfer

The first option you have to store data is Cloud Storage, and you can upload data via the drag-and-drop tool in the console, with the command-line tool, or through the JSON API. Consider this approach when the data you need to upload isn’t too big. You can learn more about the limits in Cloud Storage—for example, individual files (or objects) can’t be larger than 5 TB. Also, try to avoid uploading too many objects at the same time, because GCP will throttle the requests you send to Cloud Storage. I’d advise you to take a look at their best practices and recommendations when working with Cloud Storage.

Besides using the GCP console to upload data, you can also make use of the CLI, called gsutil. Because you’ll be running the commands from on-premises, you need to ensure that you have good internet connectivity, or else uploads will take more time. You could also use direct peering to access Google’s network through its edge points of presence. Or, ultimately, configure a direct connection using Cloud Interconnect through GCP’s service providers. Lastly, what’s good about gsutil is that it can upload data in a multi-threaded way, which is useful when you need to upload several files at the same time.
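For instance, a minimal gsutil session might look like the following sketch; the bucket name, paths, and file names are placeholders, and the `-m` flag enables the multi-threaded mode mentioned above:

```shell
# Create a bucket (bucket names are globally unique; this one is a placeholder).
gsutil mb gs://my-migration-bucket

# Upload a single file.
gsutil cp ./backups/db-dump.sql gs://my-migration-bucket/backups/

# Upload a whole directory tree in parallel with -m (multi-threaded).
gsutil -m cp -r ./logs gs://my-migration-bucket/logs/

# Verify what landed in the bucket.
gsutil ls -r gs://my-migration-bucket
```

Note that these commands require an authenticated gcloud/gsutil setup and an existing GCP project, so treat them as an illustration of the workflow rather than a copy-paste script.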

Transfer Appliance

Another option is the Transfer Appliance service, which works like a giant USB drive used to store terabytes of data. This is useful if you need to upload more than 100 TB and up to 480 TB of data and want to avoid network connectivity problems; if you need to upload more data than that, you order more appliances. The Transfer Appliance is a physical server that you request from Google. You connect the appliance to your data center, upload the data to it, and ship it back to Google. All the data you upload to the appliance is encrypted. Google then uploads the data to your Cloud Storage bucket for you.


Once the data is in GCP, you can access it, decrypt it, and use it. You don’t need to reserve bandwidth or configure a direct connection, and the data will be migrated faster.
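Putting the two sections together, the rough sizing guidance can be sketched as a small helper; the thresholds below are illustrative figures taken from this post, not official GCP limits:

```shell
# Pick a transfer method from an approximate dataset size in terabytes.
# Thresholds are illustrative, based on the guidance in this post.
choose_transfer_method() {
  size_tb="$1"
  if [ "$size_tb" -le 100 ]; then
    # Small enough to push over the network.
    echo "online transfer (console, gsutil, or JSON API)"
  elif [ "$size_tb" -le 480 ]; then
    # Within the capacity of a single appliance.
    echo "Transfer Appliance"
  else
    # Beyond one appliance: order more of them.
    echo "multiple Transfer Appliances"
  fi
}

choose_transfer_method 5     # small dataset: upload over the network
choose_transfer_method 200   # too big for the network: request an appliance
```

In practice, of course, the decision also depends on your bandwidth and deadline, not just raw size.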


Komprise

Komprise is a partner solution where you deploy agents in your data center to upload data to GCP. This solution is applicable in many cases. But regarding migration, you might find Komprise useful if you don’t urgently need to upload data and the migration can happen transparently. As Komprise puts it, it’s like extending your on-premises network-attached storage to GCP.

Once you’ve installed and configured the virtual appliances on-premises, Komprise analyzes the data you have in your data center. After reviewing the insights the tool provides, you can decide how to manage data and plan the capacity you’ll need in GCP. This will help you determine how much it’ll cost you too. Komprise can automatically upload—or replicate—the data in the background to GCP. Or you can also configure it to retrieve only the data your users need. Keep the hot data on-premises and everything else in the cloud, for example.


One important thing to stress, as you can read in their blog post: “Komprise does not store data, and simply moves data through SSL to Cloud Storage, which is HIPAA-compliant.”

Database Migration

To migrate databases from on-premises to GCP, Google has acquired a company called Alooma. Alooma is an enterprise data pipeline tool that can integrate different data sources and transform the data before it’s stored in a data warehouse. With this acquisition, you’ll be able to migrate data in an ETL fashion. Alooma will enable you to maintain business continuity while migrating your database workloads.

Moreover, Google has created a series of blog posts using existing tools to migrate data from different data engines to GCP. You may also request assistance with assessing the migration with the help of partners by contacting Google’s sales team.

From the list of migration assessment guides, you have the following documents:

As you can see, Google has so far published a set of guides on how to migrate databases using existing tools. But this might change in the future: Google might decide to create a dedicated database migration service like the one AWS currently has.


Velostrata

I talked about Velostrata in a previous post, where I discussed the migration options to GCP in general. Velostrata is a new offering from GCP that you can install and configure to migrate on-premises VMs to GCP. Velostrata is a good solution because it helps you migrate workloads by streaming the data. Once the migration finishes, you can configure Velostrata to replicate the data from the cloud back to on-premises. This feature is useful in case you need to run a rollback without losing data.

With Velostrata, you can also migrate the VM storage first, ahead of the VM itself. As a consequence, you might be able to create multiple VMs to increase redundancy or performance. Having the storage in GCP before the VM will help you know precisely how much it’ll cost to migrate specific workloads to GCP.

Many Options to Migrate Data to GCP

Although this post focused specifically on migrating on-premises data to GCP, bear in mind that there are services that you can use to migrate data from AWS to GCP or from a SaaS solution like Google Ads to BigQuery. In this post, I included a lot of tools from GCP and partners. Which tool and storage service you choose will depend on your needs and how much data you need to migrate.

Hopefully, you’ve found this post useful. But remember that GCP updates their services very frequently, so I’d advise you to check their official docs on data transfer.

This post was written by Christian Meléndez. Christian is a technologist who started as a software developer and has more recently become a cloud architect focused on implementing continuous delivery pipelines with applications in several flavors, including .NET, Node.js, and Java, often using Docker containers.

This Month in BindPlane: More Logs on the Fire

by Nate Coppinger on July 11, 2019

Nature’s long-awaited summer update is finally here. While you throw bundles of logs on your summer bonfires, our team has thrown some new log bundles on to stoke the BindPlane for Stackdriver fire. Since it’s summer, we thought we would cook you up some delicious new features for our summer BindPlane barbecue. Tired of the summer analogies yet? Well, better get used to them; they’re here to stay for the foreseeable future, just like summer!

Log source bundles

June was a busy month for the Logs team, as the list of supported log source bundles continues to grow at a fast rate!

  • Aerospike
  • Apache Cassandra
  • Apache HBase
  • Azure Log Monitoring
  • CouchBase
  • CouchDB
  • Elasticsearch
  • JBoss
  • Memcached
  • NGINX
  • PGBouncer
  • RabbitMQ
  • Redis
June bundles

A couple of log bundles we would like to highlight that you may find useful are Elasticsearch and Apache Cassandra.

Elasticsearch is a very powerful and important tool that you will want to keep healthy and running smoothly. It’s a great database search tool, and the sheer expanse of data it must parse means a potential for many events that you’ll want to track with logs. You can use Elasticsearch logs with BindPlane for Stackdriver to track important events occurring within your environment. For example, you could set up logs to track the number of queries run, failed queries, how often Elasticsearch goes down, and index creation. The logs feature will also give you the ability to sort by severity level, event type, time frame, node ID, and other important information.
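As an illustration, once Elasticsearch logs reach Stackdriver, an advanced logs filter along these lines could narrow the view to severe recent events; the project ID and logName below are hypothetical placeholders, since the actual log name depends on how your BindPlane log source is configured:

```
resource.type="gce_instance"
logName="projects/my-project/logs/elasticsearch"
severity>=ERROR
timestamp>="2019-07-01T00:00:00Z"
```

The same filter fields (severity, timestamp, resource type) also drive the sorting options mentioned above.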

Elasticsearch Logs

As a scalable, distributed NoSQL database, Apache Cassandra handles a large amount of data across multiple servers, making it an extremely important and powerful tool, and one whose health and performance users will want visibility into. Since Cassandra is a distributed database that stores information in multiple nodes, logs will be invaluable when it comes to tracking the multitude of events that occur on Cassandra. For example, you could use logs to track node failures, replication instances, and other similar events.

Cassandra Logs

We have also launched monitoring support for the Cassandra garbage collector. Our logs and metrics will allow you to keep an eye on its performance and its key performance indicators, such as how long each garbage collection run takes.

Garbage collector graph

For the full list of Log sources for Stackdriver that we currently support, check out our BindPlane Logs page.

Log agent features

During the last development cycle, we were able to enhance the lifecycle management capabilities of the Log Agent.

Great news!

In a recent blog post, we announced the launch of our BindPlane CLI tool (BPCLI). The BPCLI will help you automate the creation and configuration of sources and credentials, and it will allow you to modify your existing infrastructure for monitoring purposes. To find and implement the BPCLI, visit our public GitHub repository and follow the instructions to get started.


Stay Tuned

That’s it for the June BindPlane update. Visit our site for more details on everything we offer here at Blue Medora for your monitoring and integration needs, and keep an eye out for our weekly blog posts and monthly updates!

Creating an Application Health Dashboard

by Rick Pocklington on July 10, 2019

As data and information systems continue to grow as essential assets in nearly every facet of business, the need for dashboards that provide visibility and insight into the performance of your systems is greater than ever. Dashboards are an invaluable tool when it comes to ensuring the health and performance of your information systems and data, and in most situations, an Application Health Dashboard will help you achieve your goal of maintaining a healthy IT environment.

An Application Health Dashboard shows each of the components of your infrastructure at a glance. If a component is unhealthy, you can expand it to list all of the individual objects that make up the component, each with their own health score. Selecting an object lets you see all of the health alerts on that object and a history of its alert volume.

Follow along as we take you through the steps to build your Application Health Dashboard in vRealize Operations. Click here for a video tutorial of the process.

Part 0: Prerequisites

  • vROps 6.7 or newer, licensed for Advanced or Enterprise
  • Blue Medora TVS (True Visibility Suite)
  • Infrastructure monitoring configured
  • A way to identify the VMs supporting your application
    We support identifying VMs by a naming convention or by tags in vCenter

Part 1: Create the VMs group

  • Create a new group type.
    1. Select the Administration tab
    2. Expand Configuration
    3. Select Group Types
    4. Click the green plus sign
    5. Enter the Group Type name and select OK
      We recommend “[Application name] Group Type”
Add group Type
  • Create the group of VMs used by the application.
    1. Select the Environment tab
    2. Click the green plus sign
    3. Enter the group name
      We recommend “[Application name] VMs”
    4. Select the group type
    5. Leave the policy blank
    6. Check “Keep group membership up to date”
New Group
  • Define the membership criteria
    1. Select Virtual Machine in the Object Type for the membership criteria
    2. Select Properties
    3. Select either “Summary|vSphere Tag” or “Summary|Name”, depending on which property you’ll use to identify the relevant VMs
    4. Select contains
    5. Enter the string value that will identify the VMs
New group Gif
  • Preview the results
    1. Click on Preview Group to confirm that the appropriate VMs are found
    2. Click OK
Preview Group

Part 2: Create the component groups

  1. Create the Storage group.
    1. Select the Environment tab
    2. Click the green plus sign
    3. Enter the group name
      We recommend “[Application name] Storage”
    4. Select the group type
    5. Leave the policy blank
    6. Check “Keep group membership up to date”

  2. Define the membership criteria
    1. Select the appropriate Volume or LUN type as the Object Type – Blue Medora has management packs for monitoring Dell EMC, NetApp, 3Par, Nimble, and Pure storage systems
    2. Set the membership criteria to:
      Relationship, Descendant of, is, [Application name] VMs
New group 2
  3. Repeat steps 1 and 2 for the related compute hardware and databases
  4. Create a new “parent” group type.
    1. Select the Administration tab
    2. Expand Configuration
    3. Select Group Types
    4. Click the green plus sign
    5. Enter “Application Parent Group Type”
  5. Create the application parent group
    1. Select the Environment tab
    2. Click the green plus sign
    3. Enter the group name
      We recommend “[Application name]”
    4. Select “Application Parent Group Type” as the group type
    5. Leave the policy blank
    6. Check “Keep group membership up to date”
    7. Select “[Application name] Group Type” as the Object Type
    8. Leave the membership blank
Parent Group

Part 3: Create the dashboard

  1. Create a blank dashboard
    1. Select the Dashboards tab
    2. Select Actions -> Create Dashboard
    3. Change the dashboard name
      We recommend “[Application Name] Health”

  2. Add the Topology widget
    1. Click and drag the Topology Graph widget onto the dashboard
    2. Click the Edit Widget button
    3. Change the name to “[Application Name] Topology”
    4. Set Self Provider to On
    5. Enter the [Application Name] in the object Filter and select the parent group for the application
    6. Click Save
Topology Graph
  3. Add the alert widgets
    1. Click and drag the Alert Volume widget onto the dashboard
    2. Click and drag the Alert List widget onto the dashboard
    3. Click Show Interactions
    4. Drag the arrow icon from the [Application Name] Topology widget to the circle icon on the Alert Volume widget
    5. Drag the arrow icon from the [Application Name] Topology widget to the circle icon on the Alert List widget
    6. Click Hide Interactions
  4. Add object metrics
    1. Click and drag the Metric Picker widget onto the dashboard
    2. Click and drag the Metric Chart widget onto the dashboard
    3. Click Show Interactions
    4. Drag the arrow icon from the [Application Name] Topology widget to the circle icon on the Metric Picker widget
    5. Drag the arrow icon from the [Application Name] Topology widget to the blue circle icon on the Metric Chart widget
    6. Drag the orange arrow icon from the Metric Picker widget to the orange circle icon on the Metric Chart widget
    7. Click Hide Interactions
    8. Click Save
Metrics gif

Thank you for following along with building your new Application Health Dashboard with VMware vRealize Operations Manager. To get started, be sure to check out the True Visibility Suite page on the Blue Medora website and find a package that is right for you!

Making Easy Easier

by Nate Coppinger on July 3, 2019

Blue Medorathon 14 brought many cool projects that add a lot of value to our business, and one project we wanted to highlight is some great automation work done by none other than our very talented DevOps engineering team, specifically Joe Sirianni. The title says it all: Joe took it upon himself to figure out how to make deploying BindPlane even easier than it already is, and he achieved this with great success! This may seem impossible, since BindPlane can already be deployed with a few clicks of your mouse. But now, through the creation of our BindPlane CLI tool (BPCLI) and the integration of a Terraform provider (coming soon), nearly the whole process can be done for you automatically and intelligently.

BPCLI: What is it?

Our BindPlane CLI (BPCLI) is a lightweight open source utility that automates the deployment and configuration of all of your BindPlane sources and collectors. On the front end, the tool is a very simple command line interface (CLI). On the back end, BPCLI manages several public BindPlane APIs that control credentials, jobs, collectors, and sources, simplifying interaction with these APIs based on common use cases. Joe has pretty much done all the heavy lifting to programmatically configure and control BindPlane, which is super useful when you have large on-premises environments or deploy frequently in cloud environments (or you just live for the command line).

BPCLI: Getting Started

To get started with automating your deployment, all you need to do is visit our GitHub repository, where you will find and can download the BPCLI executable.

Click to enlarge

To do this, find the README file in the repo and follow the instructions to get everything you need to start. After you download what you need, you can use the BPCLI to connect your BindPlane account and export your BindPlane API key using the commands documented there. Once your API connections have been set up, you will be able to create and edit your source credentials, as well as create templates to help you with future configurations. The BPCLI also lets you easily run commands to grab a collector ID, apply the necessary credentials, and configure your sources. Just like with credentials, it can create source templates so that you can easily recreate similar source deployments. Once everything is set up, you can watch the job status from the BPCLI to make sure everything is running smoothly. You can then repeat this simple step-by-step process for any new sources you would like to configure.
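As a sketch, a session following those steps might look like this; the subcommand names and flags below are hypothetical placeholders, and the real ones are documented in the repo's README:

```shell
# Hypothetical session; check the README in the GitHub repo for the real commands.
export BINDPLANE_API_KEY="<your-api-key>"   # connect your BindPlane account

bpcli collector list                 # grab the collector ID
bpcli credential create --from-template my-postgres-creds
bpcli source create --collector <collector-id> --credential my-postgres-creds
bpcli job status                     # watch the job to make sure it ran cleanly
```

The shape of the workflow (credentials, then sources, then watching the job) is what matters here, not the exact spelling of the commands.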

This is great for managing sources in BindPlane. But if you want to automate the deployment of BindPlane collectors in dynamic infrastructure environments, or simplify a large initial setup, you’ll need something like Terraform. Below you will see the BPCLI being run to set up a source configuration.


Terraform: Intelligent Automation

Now, if you’re not familiar with it, Terraform is an open source infrastructure-as-code automation tool created by HashiCorp. Terraform enables easy collaboration, as configurations can be stored in version control and shared with a team of operators. Terraform providers are responsible for understanding API interactions and exposing resources. The Blue Medora Terraform provider for BindPlane really helps extend the automation capabilities of the BPCLI, allowing you to easily deploy, destroy, and change BindPlane collectors in your environment—safely and securely. When the Terraform provider for BindPlane is live, you will also be able to find it in our public repository. When Terraform runs with the CLI, it will immediately create a Postgres server, its credentials, and a BindPlane collector. The animated GIF below shows Terraform being applied to the system.
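Since the provider hadn't shipped yet at the time of writing, its actual schema isn't public; but a configuration would presumably follow the usual Terraform shape, sketched below with hypothetical resource and attribute names (the `var.*` values are assumed to be declared elsewhere):

```
provider "bindplane" {
  api_key = var.bindplane_api_key   # hypothetical attribute name
}

# Hypothetical resources: a credential and a Postgres source tied to a collector.
resource "bindplane_credential" "postgres" {
  name     = "postgres-creds"
  username = var.pg_user
  password = var.pg_password
}

resource "bindplane_source" "postgres" {
  name          = "postgres-server"
  collector_id  = var.collector_id
  credential_id = bindplane_credential.postgres.id
}
```

Running `terraform apply` against a configuration like this is the step that would create the Postgres source, its credentials, and the collector wiring in one go.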

Terraform Apply
Terraform Run

The BPCLI allows you to create, view, and list resources like your sources, collectors, and templates by executing commands. Combined with the Terraform provider, the tool can also create and destroy as many sources and collectors as you want (to an extent), declaratively. This means that along with being able to create everything for you, it can intelligently manage the relationships between the different sources, collectors, and their credentials. The animated GIF below shows an example of applying a change with Terraform.

Terraform Apply change
Postgres change

This is a really cool feature: when you change or destroy a source or collector, Terraform automatically determines the dependencies between everything else in your BindPlane configuration, allowing it to prevent destroying anything that has multiple dependent processes. That means you won’t have to dig around to see if anything was left floating or orphaned. When you run a change, it will prompt you with a check to make sure you want to make the change. Below is the prompt you receive when trying to destroy resources with Terraform.

Terraform destroy

What Else can it do?

Along with integration, monitoring, and easy deployment, we also take security very seriously here at Blue Medora. Staying true to our values, the Terraform provider adds Vault support to the BPCLI right out of the box. Using Vault helps you protect secrets like passwords, usernames, and other sensitive data that you don’t want to show up when you run commands, and it ensures your secrets don’t show up if your code is made public. Another great feature of the Terraform provider interfaced with the BPCLI is that it allows you to version and share your work across the team. This helps stop multiple people working on the same project from deploying at the same time, and it lets you revert to your previous work in case something goes wrong.

Get it all from our Repo

Well, what are you waiting for? The BPCLI is live today, so head over to our public GitHub repository and get started on making your easy BindPlane deployment even easier! The Terraform provider is still a work in progress and will be coming soon, so keep an eye out for that announcement. Until then, you can still use the BPCLI to automate most of your deployment; you will just need to be careful about destroying resources so you don’t break any important relationships.
