Written by Roark Pollock and Presented by Ziften CEO Charles Leaver
According to Gartner, the public cloud services market surpassed $208 billion in 2016, roughly a 17% increase year over year. Not bad, considering the ongoing concerns most cloud customers still have about data security. Another especially interesting Gartner finding is that cloud customers typically contract services from multiple public cloud providers.
According to Gartner, “most companies are already utilizing a mix of cloud services from various cloud companies”. While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice creates additional complexity in tracking activity across a company’s increasingly fragmented IT landscape.
While some service providers support better visibility than others (for example, AWS CloudTrail can log API calls across the AWS infrastructure), companies need to understand and address the visibility issues that come with moving to the cloud, regardless of which provider or providers they work with.
Unfortunately, the ability to track application and user activity, and network interactions, from each VM or endpoint in the cloud is limited.
Regardless of where computing resources reside, companies must be able to answer the question “Which users, devices, and applications are communicating with each other?” Organizations need visibility across the infrastructure so that they can:
- Rapidly identify and prioritize issues
- Speed root-cause analysis and identification
- Reduce the mean time to resolve problems for end users
- Quickly detect and eliminate security threats, lowering overall dwell time
Conversely, poor visibility, or poor access to visibility data, reduces the effectiveness of existing security and management tools.
Businesses accustomed to the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.
What has been lacking is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.
NetFlow, of course, has had roughly 20 years to become a de facto standard for network visibility. A typical deployment involves monitoring traffic and aggregating flows at network chokepoints, collecting and storing flow data from multiple collection points, and analyzing that flow data.
Flows consist of a basic set of source and destination IP addresses plus port and protocol information, generally collected from a switch or router. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
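To make the idea concrete, here is a minimal sketch of a NetFlow-style flow record: the classic 5-tuple plus byte and packet counters. The field names are illustrative and do not correspond exactly to any particular NetFlow version's export format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int   # IANA protocol number, e.g. 6 = TCP, 17 = UDP
    packets: int = 0
    bytes: int = 0

    def key(self):
        """The 5-tuple that identifies a flow for aggregation."""
        return (self.src_ip, self.dst_ip, self.src_port,
                self.dst_port, self.protocol)

# An HTTPS connection from an internal host, summarized as one flow
flow = FlowRecord("10.0.0.5", "93.184.216.34", 49152, 443, 6,
                  packets=12, bytes=8840)
print(flow.key())
```

Because the record is just a small, fixed set of fields, flows are cheap to collect, store, and aggregate at scale, which is exactly why NetFlow became ubiquitous.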
Most IT staffs, particularly networking and some security teams, are very comfortable with the technology.
However, NetFlow was designed to solve what has become a rather limited problem: it collects only network information, and only at a limited number of potential locations.
To make better use of NetFlow, two essential modifications are required.
NetFlow at the Edge: First, we need to broaden the practical deployment scenarios for NetFlow. Instead of gathering NetFlow only at network chokepoints, let’s extend flow collection to the edge of the network (cloud instances, servers, and clients). This would considerably expand the big picture that any NetFlow analytics provide.
It would also let companies leverage their existing NetFlow analytics tools while eliminating the ever-growing blind spot around public cloud activity.
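A hypothetical sketch of what edge-side collection involves: an agent on a host (or inside a VM or container) observes its own connections, groups them by 5-tuple, and accumulates counters before exporting the results to a standard flow collector. This is an illustration of the aggregation step only, not a description of any specific vendor's implementation.

```python
from collections import defaultdict

def aggregate(events):
    """Group per-packet events into flows keyed by the 5-tuple.

    events: iterable of (src_ip, dst_ip, src_port, dst_port, proto, nbytes)
    Returns a dict mapping each 5-tuple to its packet/byte totals.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, nbytes in events:
        key = (src, dst, sport, dport, proto)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += nbytes
    return dict(flows)

# Three packets observed on an endpoint: two DNS, one HTTPS
events = [
    ("10.0.0.5", "8.8.8.8", 5353, 53, 17, 74),
    ("10.0.0.5", "8.8.8.8", 5353, 53, 17, 120),
    ("10.0.0.5", "93.184.216.34", 49152, 443, 6, 1500),
]
flows = aggregate(events)
print(len(flows))  # 2 distinct flows
```

Running the same aggregation on every endpoint, rather than only at routers and switches, is what extends flow visibility to cloud workloads that never cross a traditional network chokepoint.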
Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility. Instead, let’s use an extended version of NetFlow that includes data on the device, application, user, and binary responsible for each tracked network connection. That would allow us to rapidly attribute every network connection back to its source.
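An illustrative sketch of such a context-enriched record: the standard 5-tuple plus the endpoint attributes that attribute the connection to its source. The field names and the sample values are hypothetical; in practice, IPFIX allows vendor-specific enterprise elements to carry this kind of additional data alongside the standard fields.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextFlowRecord:
    # Standard 5-tuple
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int
    # Endpoint context tying the connection back to its source
    device: str         # hostname or device ID
    application: str    # owning process name
    user: str           # logged-in user who owns the process
    binary_sha256: str  # hash of the executable, useful for threat lookups

# Hypothetical enriched record for an outbound HTTPS connection
rec = ContextFlowRecord(
    "10.0.0.5", "93.184.216.34", 49152, 443, 6,
    device="laptop-042", application="chrome.exe",
    user="alice", binary_sha256="ab12cd34...",  # placeholder hash
)
print(rec.user, rec.application)
```

With these fields present, an analyst can pivot directly from a suspicious flow to the device, user, and binary responsible, instead of stopping at an IP address.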
In fact, these two modifications to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow provides an extended version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. Over and above conventional NetFlow/IPFIX (Internet Protocol Flow Information eXport) network visibility, ZFlow offers greater visibility by including details on the device, application, user, and binary behind every network connection.
Ultimately, this enables Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots such as east-west traffic in data centers and enterprise cloud deployments.