Visibility, at its simplest, is the ability to instrument or observe an environment, giving us access to data that describes its state and performance characteristics. Monitoring is the use of that data to identify situations of interest and to drive the operational and business processes that address them. In essence, visibility and monitoring answer the question of how an environment is performing.
Traditionally, the focus of visibility and monitoring has been on the individual elements that make up the environment. The assumption is that these collections of data will somehow synthesize into an integrated view. This is rarely, if ever, achieved. As the services provided by IT move from being necessary evils to the primary interface point between organizations and their customers, this focus is changing. What is the quality of service delivery? What is the customer experience? These are the questions being asked. The business is demanding visibility that is relevant to them. They are no longer willing to accept the opaque, disjointed, and technology-focused monitoring of the past.
The New Service Delivery Challenge
Successfully providing this visibility to the business can be challenging. Without question, technological advances such as cloud computing, the “Internet of Things” (IoT), and “Software Defined Networking” (SDN) are enabling innovation at a blazing pace. But these same technologies also make the environment vastly more complex to monitor.
Elastic computing, virtualization, and cloud computing have an obvious impact on scale. More important, they make the components that services are built from far more transient. Components may be spun up for days, hours, or even minutes in response to capacity demands, then spun down just as quickly as the demand passes. This transience invalidates many of the fundamental assumptions made by traditional monitoring systems. The IoT presents scaling challenges that are similar or greater. More important, it introduces a huge population of devices that do not support monitoring in the style of legacy servers and network equipment. In many cases, they do not support monitoring at all.
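To make the transience problem concrete, consider one of those invalidated assumptions: fixed-interval polling. The sketch below (all numbers hypothetical) estimates the chance that a poller ever observes a component whose lifetime is shorter than the poll interval, assuming the component starts at a random point in the polling cycle.

```python
# Illustrative sketch (hypothetical numbers): chance that a
# fixed-interval poller observes a short-lived component at all.

def observation_probability(lifetime_s: float, poll_interval_s: float) -> float:
    """Probability that at least one poll lands inside the component's
    lifetime, assuming the component starts at a uniformly random
    point in the polling cycle."""
    return min(1.0, lifetime_s / poll_interval_s)

# A container that lives 2 minutes against a classic 5-minute poll cycle:
p = observation_probability(120, 300)
print(f"{p:.0%}")  # 40% — the poller misses most such instances entirely
```

A traditional, long-lived server would sit at 100% in this model; the shorter the component's life relative to the poll cycle, the more likely it comes and goes without the monitoring system ever knowing it existed.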
The moves to agile development and continuous delivery introduce more challenges. Because both are associated with an accelerated rate of change, they contribute to the increasingly dynamic nature of these environments. Add to this the fact that in the continuous delivery paradigm, visibility becomes a core capability that can make or break the success of the delivery pipeline.
Watching Who’s Talking
Addressing these challenges requires fundamental changes in our approach. One of the more promising strategies comes from the realization that for all the complexity, there remain sources of consistent data that can be leveraged. One of these is the information traversing the network, also referred to as wire data. The network carries all communication between the customer and a service, as well as between all the components that participate in providing that service. With appropriate technologies such as taps and packet brokers, this data can be accessed through completely non-intrusive mechanisms that place no load on network devices. Better yet, they require no changes at all to the applications and other components participating in service delivery. Furthermore, the data itself is intrinsic to the delivery of the service, not additional overhead added to enable visibility.
Wire Data Has Challenges
The value of wire data is not a new discovery. Network engineers have leveraged it for years to gain insight into the behavior of the network itself. But the amount of data is vast. As network capacity has expanded, the traditional focus on capturing all of the information in the form of raw packets has imposed a greater and greater “storage tax”. With the limited tooling and post-processing approach of traditional packet capture solutions, the signal-to-noise ratio is minuscule. The difficulty of extracting usable knowledge from that volume of data has limited its successful use to a small number of highly technical network engineers.
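A bit of back-of-the-envelope arithmetic (link speed and utilization here are hypothetical) shows how quickly the storage tax compounds when every raw packet is retained:

```python
# Illustrative arithmetic (hypothetical numbers): the "storage tax"
# of capturing every raw packet on a sustained network link.

def capture_bytes_per_day(link_gbps: float, utilization: float) -> float:
    """Bytes of raw packet data produced per day by one link at the
    given average utilization."""
    bits_per_day = link_gbps * 1e9 * utilization * 86_400  # seconds/day
    return bits_per_day / 8

# A single 10 Gbps link at 50% average utilization:
tb_per_day = capture_bytes_per_day(10, 0.5) / 1e12
print(f"{tb_per_day:.0f} TB/day")  # 54 TB/day, for just one link
```

Multiply that by the number of monitored links and the retention window, and full-fidelity packet capture becomes an enormous store of mostly noise, which is exactly why its use stayed confined to specialists.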
Big Data Spawns Opportunities
It is only recently that advances in big data theory and tooling have begun to allow real-time normalization and analytics of this wire data in motion. Consequently, this allows us to answer questions about the services as a whole. More important, these capabilities do not require in-house teams of data scientists or custom code development to realize. New products from companies like NetScout and ExtraHop provide technical capability but still stress ease of implementation and use. It is now possible for any organization to transform the data flowing through their network into knowledge that can be visualized and interacted with. This access is available to IT as well as business domain experts throughout the entire organization. It has even spawned a new area of practice in the form of “IT Operational Intelligence”.
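The core idea behind normalizing wire data in motion is reduction: instead of storing every packet, observations are collapsed into compact per-service metrics as they arrive. The sketch below is a deliberately simplified illustration of that pattern (the record shape and service names are invented for the example), not a depiction of any particular vendor's implementation.

```python
# Hypothetical sketch: reducing raw wire-data observations into
# rolling per-service metrics in memory, instead of storing packets.
from collections import defaultdict
from statistics import mean

class ServiceMetrics:
    """Aggregates (service, latency, bytes) observations per service."""

    def __init__(self):
        self._latencies = defaultdict(list)  # service -> latency samples
        self._bytes = defaultdict(int)       # service -> total bytes seen

    def observe(self, service: str, latency_ms: float, nbytes: int) -> None:
        """Fold one wire-data observation into the running aggregates."""
        self._latencies[service].append(latency_ms)
        self._bytes[service] += nbytes

    def summary(self, service: str) -> dict:
        """Service-level view: request count, mean latency, bytes."""
        lats = self._latencies[service]
        return {
            "requests": len(lats),
            "avg_latency_ms": mean(lats) if lats else 0.0,
            "bytes": self._bytes[service],
        }

m = ServiceMetrics()
for svc, lat, nb in [("checkout", 12.0, 512), ("checkout", 20.0, 2048),
                     ("search", 5.0, 128)]:
    m.observe(svc, lat, nb)
print(m.summary("checkout"))
# {'requests': 2, 'avg_latency_ms': 16.0, 'bytes': 2560}
```

The point is the shape of the computation: each raw observation is consumed once and folded into a small summary, so the answer to “how is the checkout service performing?” is available immediately, without a post-hoc trawl through captured packets.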
Is wire data a panacea for visibility and monitoring in the modern age? Does it obviate the need for all other monitoring? Of course not. But it does provide one of the more promising visibility strategies for dealing with the scale, transience, and ambiguity of today’s services. This makes it a solid contributor to any modern monitoring strategy.