For the past 6 years, we’ve built a strong partnership with VMware to deliver dozens of our metric monitoring integrations to VMware vRealize® Operations customers. Over that time, the True Visibility Suite of products has become a critical piece of the vRealize Operations landscape. It unlocks the power of vRealize Operations for customers looking to manage their entire enterprise stack from a single tool. Due to the success of the relationship and a desire to provide a seamless process for our shared customers, VMware acquired the True Visibility Suite from Blue Medora and will move forward with True Visibility Suite as a VMware solution. We at Blue Medora couldn’t be happier to see this powerful collection of tools reach more VMware customers.
So what’s next for Blue Medora?
Over the past several years, apart from our VMware business, Blue Medora has launched a number of market-defining products, most recently BindPlane. BindPlane is a first-of-its-kind monitoring-integration-as-a-service platform, bringing hundreds of metric and monitoring integrations to half a dozen monitoring platforms.
In addition to our expertise in metrics monitoring, we have entered the log management space with BindPlane for Logs, a fully managed solution for gathering and shipping logs to your preferred destination.
What we’ve found after years of laser focus on the monitoring space, and after talking to hundreds of customers, is that there’s a gap in the log management space. Logs are a struggle for most organizations: log data has grown exponentially, and the tools to monitor logs haven’t kept up with that expansion. Some of the largest organizations in the world are struggling to ship terabytes of log data every day, or even every hour, and the bottleneck is often at the agent level. We’ve been working hand in hand with some of our largest customers to develop a new high-performance, highly configurable, open source log agent capable of meeting their needs. We’re pleased to announce that today it’s available for download.
We’re excited to share this with you, and as a way of marking this transition, Blue Medora is launching observIQ (www.observIQLabs.com). At observIQ, we’re focused on solving log data acquisition challenges through innovative open source products designed with performance and configurability first.
The open source log agent is the first product we’re releasing. Architected from the ground up for high performance, the agent is written in Go, optimized for low resource utilization, and designed to have even greater configurability and customizability than legacy agents. We’ve released an initial set of input and output plugins to support the most common workloads and will be releasing new plugins weekly.
Over the next few months, we’ll be launching observIQ Cloud, which provides remote configuration, simplified deployment, and best-in-class visualization for a full log solution. Click here to join the beta.
At observIQ, we’re producing solutions by engineers, for engineers, and we love hearing from our customers. Connect with us, give our products a try, or, if you share our passion for observability, consider working here by emailing careers!
New Strategy Provides Disruptive Open Source Observability Solutions for DevOps and ITOps
GRAND RAPIDS, MI. – July 21, 2020 – observIQ, a global leader in open source observability solutions for DevOps and ITOps, is being unveiled today. Blue Medora is launching this new brand after VMware’s acquisition of its True Visibility Suite team and products. With a specific focus on developing next-generation agent technologies, observIQ leverages over a decade of expertise in logs and metrics monitoring to emerge as a disruptor in the observability space. A key component of that strategy is observIQ’s high-performance open source log agent, also being announced today.
“With log data growing exponentially, we set out to deliver a solution that radically changes how enterprise customers deal with massive amounts of machine data at cloud scale. Commercial solutions such as Splunk are too costly and inflexible, and today’s existing open source solutions don’t scale at the log agent level,” explained Bekim Protopapa, observIQ CEO. “The observIQ open source log agent extends the Elastic stack with unmatched technical performance – up to 10x faster than other leading log agents – effectively disrupting the open source logging landscape.”
observIQ’s agent is based on deep subject matter expertise established while developing and supporting Blue Medora’s BindPlane product, an IT operations data management platform that delivers a relationship-aware stream of metrics and logs in real time. “We continue to develop the BindPlane solution and are bringing our advanced log agent technology to BindPlane customers,” confirmed Mike Kelly, observIQ CTO. “Customers rely on BindPlane for mission critical infrastructure and application monitoring, and we remain committed to their success. We’re also excited to show our Blue Medora customers the incredible advancements we’ve made in log monitoring and management under the observIQ brand.”
observIQ also announced a powerful full stack SaaS log management platform built upon its high-performance log agent. Now in beta, observIQ Cloud allows customers to seamlessly monitor multi and hybrid cloud environments at enterprise scale.
To learn more about observIQ, please read Mike Kelly’s blog. Visit GitHub to download the open source log agent. To arrange a private demonstration of observIQ’s new solutions, please contact us at sales@observIQLabs.com.
observIQ’s mission is to build the best open source observability solutions for DevOps and ITOps. Built by engineers for engineers, observIQ has a specific focus on developing next-generation agent technologies as part of its modern observability platform.
observIQ delivers scalable observability and intelligent control.
observIQ is a privately held, venture backed company, funded by Edison Partners, First Analysis, Lewis & Clark Ventures, eLab Ventures and others.
Blue Medora is thrilled to announce that we have entered into a definitive agreement for the True Visibility Suite team and products to be acquired by VMware. Blue Medora and VMware have partnered together for years, expanding vRealize Operations’ self-driving management scope and support for packaged applications, middleware, data center infrastructure and public clouds via management packs. We are excited that our customers will benefit from tighter integration and a more seamless customer experience. Learn More >
This is part 3 of our three-part Google Cloud (GCP) and SAP blog series. This entry focuses on monitoring Google Cloud hosting SAP, keeping it healthy, and reducing downtime while troubleshooting. Read parts 1 and 2 for more background before jumping into monitoring Google Cloud hosting SAP.
Migrating SAP to Google Cloud: What’s Next?
So you have successfully migrated your SAP environment from on-prem to Google Cloud. Great! But your job’s not done yet, and unfortunately, it never really will be. We all know that implementing a new system architecture is never a one-and-done deal. To get the most out of Google Cloud hosting SAP, and to ensure that it is all running smoothly, you will want to strictly monitor the system’s performance and how it affects your business processes. Since SAP is home to many crucial business processes that keep your organization running, you will want to establish a team that handles the monitoring and maintenance of the systems themselves. Hopefully, your migration plan will curb the number of issues you run into with Google Cloud hosting SAP; however, if you find yourself running into bottlenecks consistently, there is a litany of possible causes to investigate.
Metrics to Watch with Google Cloud Hosting SAP
When monitoring GCP itself, there are a few basic metrics you should watch closely. For network issues, check your VPC flow logs to understand how traffic moves through your network, allowing you to maintain and scale it accordingly. Enabling logging and versioning on your Cloud Storage buckets is a huge help: it captures important data for incident investigations and keeps different versions of an object in a single bucket. It is also best to track general uptime and performance to alert you to issues that need your attention.
Now that you are using Google Cloud to host your SAP environment, there are some great monitoring tools at your disposal. Google Cloud’s Operations Platform offers comprehensive metrics monitoring. Google Cloud Monitoring allows you to monitor countless metrics related to your ABAP instances, your database and its components, dual-stack instances, Java instances, Master Data Server instances, and much more, all made possible by BindPlane. Through comprehensive dashboards in Google Cloud Monitoring, you can closely monitor the SAP key performance indicators (KPIs) that are critical to the success of your IT environment and your business processes in general. With Google Cloud hosting SAP, you also gain the ability to create system alerts that notify you whenever a certain metric or KPI exceeds your set threshold.
Intelligent Alerts with Google Cloud Monitoring
Alerts can be created to notify you when the amount of usable memory remaining drops below your set minimum, or when usage exceeds your maximum threshold. Memory bottleneck indicators can be established and monitored, helping you stay on top of troubleshooting and get to the bottom of an issue before it becomes a real problem. You can also track read and write operations and how often SAP is locked when users access it concurrently, which should help you understand workflows and how often your environment locks up, slowing down operations. If you are using a hybrid-cloud environment to host SAP, you can monitor not only your cloud infrastructure but also your on-prem hardware using BindPlane and Google Cloud Monitoring. You can create dashboards to monitor fan blade speeds, drive RPM, average power consumption, and more. You can check out what other SAP performance metrics can be monitored with Google Cloud Monitoring over on the BindPlane documentation page.
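As an illustration, a threshold alert like the memory example above can be expressed as a Cloud Monitoring alert policy. The sketch below is a minimal, illustrative policy (the metric name, threshold, and display names are assumptions, not SAP-specific values from this series); it fires when average memory usage on a Compute Engine instance stays above 90% for five minutes:

```json
{
  "displayName": "SAP host memory usage high (illustrative)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Memory percent used above threshold",
      "conditionThreshold": {
        "filter": "metric.type=\"agent.googleapis.com/memory/percent_used\" AND resource.type=\"gce_instance\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 90,
        "duration": "300s",
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "perSeriesAligner": "ALIGN_MEAN"
          }
        ]
      }
    }
  ]
}
```

A policy file in this shape can be created with `gcloud alpha monitoring policies create --policy-from-file=policy.json`; in practice you would also attach notification channels so the alert actually reaches your team.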
Your Job is Never Done when Monitoring Google Cloud hosting SAP
No matter how well your migration is planned out or how powerful your infrastructure is, it is inevitable that you will still run into hiccups and bottlenecks. BindPlane can help you mitigate the damage done by these issues and let you jump on them immediately to begin solving your system’s problems. While this blog series mainly focuses on general aspects of Google Cloud hosting the SAP application server and using SAP for ERP purposes, Google Cloud supports hosting most of SAP’s suite, including SAP HANA. BindPlane also supports metrics monitoring for SAP HANA in Google Cloud. You can learn more by viewing our integration page.
BindPlane is free to use for all Google Cloud Monitoring customers. Visit the Blue Medora website to learn more about how to get started monitoring your SAP environment within Google Cloud.
This is part 2 of our three-part Google Cloud and SAP blog series. This entry focuses on creating an SAP on Google Cloud migration plan.
Whether this is your first time using SAP or you are migrating your existing ERP infrastructure from on-premises to Google Cloud (GCP), you will want to take the time to create a comprehensive migration plan. The biggest key to your success is to be adequately prepared for implementation and to have contingency plans in place for any problems that may arise. By consulting every resource at your disposal, you can prepare a successful deployment plan for migration.
So You’re Migrating to the Cloud (Architecture Review)
When creating your SAP on Google Cloud migration plan, the first step is to review the IT infrastructure that is currently in place. In part 1 of our SAP blog series, we discuss reviewing your existing IT environment to give you a baseline for what you need from a cloud solution. For your migration plan, however, you need a more in-depth review to help you determine what parts of your architecture you can continue to operate and what will need to be added or replaced. A thorough review of your network, databases, VMs, CPUs, and all of your other systems may uncover variables that will affect your transition to the cloud and influence whether or not a hybrid-cloud environment is the right move for you. These could be bottlenecks that will require you to scale your bandwidth, upgrade your physical machines, or increase the amount of storage and/or memory that you currently have in place.
Determining this will help you avoid complications in your environment during and after migration, which will be harder to resolve once everything is in place. A common mistake when the review is rushed or skipped is unknowingly altering or removing a piece of your infrastructure that multiple key processes rely on. Some best practices to focus on when migrating to GCP are optimizing persistent disks to improve performance and ensuring your firewall rules allow GCP to run securely and at full capacity, with continuous delivery of data and little to no packet loss.
The Dream Team
Reviewing and re-learning the intricacies of your environment is step one of preparing for your migration. Step two is to have a good team of subject matter experts (SMEs) and decision-makers in place for implementation. This is crucial for success. It is highly recommended to have representation from each aspect of the business that these decisions will impact, to ensure everyone’s voice is heard and that you can cover every potential problem. Only having your executive decision-makers and the “IT guy” in the room is a recipe for disaster when choosing how to implement SAP on GCP.
Your executives might be experts in the industry and very involved in many day-to-day processes, but most of the time they cannot be engrossed in every facet of the business that this transition will affect. This can cause important information to be overlooked in one place or another. Executives should focus on the big picture of the migration and how it will affect overall operations and business functions. Getting bogged down in every little detail will bring everything to a standstill; instead, turn to your team, trust in their expertise, and divide and conquer.
Creating your team will obviously depend on the size of your organization. If you are a small start-up, you might only have your executives and a couple of employees to handle everything, and that’s not a problem. In a larger organization, this just won’t cut it. Your CIO or CTO may have helped build everything from the ground up, or they may have just been hired; either way, when was the last time they were involved in day-to-day security or networking operations? Your COO will be focused on how migration will affect overall operations and production, or how SKUs will be processed. They might look at how the migration will change how teams interact with each other but overlook how it affects the workflows of individual team members and how the new system could speed up or slow down their individual productivity. We could go on and cover the smaller but important benefits and drawbacks that your CFO, CMO, or VP of sales might overlook and that their subordinates would notice, but we think you get it.
Even if you completely ignore how your business processes are affected, migrating SAP itself will require team members who are familiar with the underlying aspects of SAP. Google Cloud offers tools that make migrating as easy as possible, but that doesn’t mean you can relax. Migrating SAP will require input from experts including ABAP and Java coders, database managers, and network and security admins. You also need to consider the differences in experience and needs between GUI and web app users.
Use All Available Resources
Once you know everything there is to know about your architecture and have your migration team in place, you can sit down and begin to create your full plan. Since it’s probably your first time migrating an environment to the cloud, you should start by looking for examples of migration plans from organizations similar to yours. Blogs like this one are a great kick-off point but will only get you so far. There are plenty of free, in-depth resources on the internet; you can find comprehensive migration and implementation guides on Google’s SAP page, along with other technical resources that can assist you with building your migration plan, including reference architectures, best-practices videos, and other support sources. However, you should not rely solely on these documents for implementation. You will also want to consult your SMEs and even bring in outside consultants to help with your plan and implementation. You might be hesitant to allocate funds for expert help on top of all the other costs related to your migration, but unless you have an in-house expert, these outside SMEs will help minimize mistakes and save you valuable time in the long run. Look at it this way: if you don’t hire them in the beginning, you will end up hiring them later to fix the mistakes.
Well, this is as far as we can take you without turning this blog into a solution brief. The rest will be up to you and your team. Continue to use the resources you have at hand and take it slow. It may seem like a long and tedious process, but rushing will only make things worse, costing you more resources in the long run. Once you have implemented Google Cloud and migrated SAP, come back for part 3 of our SAP blog series, where we will discuss monitoring the health and performance of SAP and your cloud environment and minimizing the errors and bottlenecks that could arise.