A Poorly Managed Environment Will Not Be Secure And The Reverse Is Also True – Charles Leaver

Written by Charles Leaver Ziften CEO


If your enterprise computing environment is not properly managed, there is no chance it can be fully secured. And you can't effectively manage those complex business systems unless you are confident they are secure.

Some might call this a chicken-and-egg situation, where you don't know where to start. Should you begin with security? Or should you begin with system management? That is the wrong way to frame it. Consider this instead like Reese's Peanut Butter Cups: it's not chocolate first. It's not peanut butter first. Rather, both are blended together – and treated as a single delicious treat.

Many companies, I would argue most organizations, are structured with an IT management department reporting to a CIO, and a security management group reporting to a CISO. The CIO team and the CISO team don't know each other, talk with each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one group flies completely under the other group's radar.

That's not good, because both the IT and security groups must make assumptions. The IT team assumes that everything is secure unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, operating systems and applications are up to date, patches have been applied, etc.

Since the CIO and CISO teams aren't speaking to each other, don't understand each other's roles and concerns, and aren't using the same tools, those assumptions may not be valid.

And again: you cannot have a secure environment unless that environment is properly managed – and you can't manage that environment unless it's secure. Put another way: an environment that is not secure makes anything you do in the IT group suspect and irrelevant, and means you cannot know whether the information you are seeing is correct or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple but it can be hard: ensure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same individual or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let's say it's the CFO.

If the business does not have a secure environment, and there's a breach, the value of the brand and the company may be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren't well managed, the company cannot work effectively, and the value drops. As we have discussed, if it's not well managed, it cannot be secure, and if it's not secure, it cannot be well managed.

The fiduciary obligation of senior executives (like the CFO) is to protect the value of business assets, and that means making certain IT and security talk to each other, understand each other's priorities, and, if possible, can see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams exactly what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our business's IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can't run at peak efficiency, and with full fiduciary responsibility, otherwise.

Continuous Visibility Of The Endpoint Vital In This Work From Home Climate – Charles Leaver

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO


A study recently completed by Gallup found that 43% of employed US workers did at least some of their work remotely in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more employees working outside conventional offices, and an increasing number of them doing so for more days of the week. And of course the number of connected devices the average employee uses has jumped as well, which reinforces both the convenience of and the desire for working away from the office.

This mobility certainly makes for happier, and it is hoped more productive, employees, but the problems these trends pose for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into device, application, network connection, and user activity. And to be genuinely effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (not online). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even more difficult for IT and security teams to limit what used to be deemed higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to thoroughly track user, device, application, and network activity, spot anomalies and inappropriate actions, and implement appropriate responses or fixes no matter whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that numerous employees now routinely access cloud-based applications and assets, and have backup network-attached storage (NAS) or USB-connected drives at their homes, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity that no longer necessarily terminates on the organization's network. Offline activity presents the starkest example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of negligible use when a device is operating offline. The installation of a suitable endpoint agent is vital to ensure the capture of all important system and security data.
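Offline visibility ultimately depends on the agent buffering what it observes locally and forwarding it once connectivity returns. As a rough illustration of that store-and-forward idea (this is not Ziften's actual agent design – the event fields and the JSON-lines format are invented for this sketch):

```python
import json
import os

class OfflineEventBuffer:
    """Append events to a local JSON-lines file while offline; flush on reconnect."""
    def __init__(self, path):
        self.path = path

    def record(self, event):
        # One JSON object per line, so a crash mid-write loses at most one event.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def flush(self, send):
        # Replay every buffered event through `send`, then clear the buffer.
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            events = [json.loads(line) for line in f]
        for event in events:
            send(event)
        os.remove(self.path)
        return len(events)
```

An agent built along these lines would call record() for every observation while disconnected and flush() once the server is reachable again, which is what makes offline activity reviewable after the fact.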

As an example of the kinds of offline activity that may be detected, a customer was recently able to track, flag, and report unusual behavior on a company laptop. A high-level executive transferred substantial amounts of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent was able to gather this behavioral data during the offline period, the customer was able to see the unusual action and follow up appropriately. Continuous monitoring of the device, applications, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are offline? If so, how do you achieve it?

Machine Learning Technology Has Promise But Be Aware Of The Likely Consequences – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


If you are a student of history you will find numerous examples of serious unintended consequences following the introduction of new technology. It frequently surprises people that new technologies may be put to dubious purposes as well as the positive purposes for which they are brought to market, but it happens all the time.

For example, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common simply because the legitimate use of SSL has made the technique more practical.

Because new technology is so often appropriated by bad actors, we have no reason to think this will not be true of the new generation of machine-learning tools that have reached the marketplace.

To what extent will these tools be misused? There are probably a few ways in which attackers can use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers be more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of "poisoning" the machine-learning model being built from that traffic. The goal of the attacker would be to trick the defender's machine learning tool into misclassifying traffic, or into generating such a high rate of false positives that the defenders would dial back the fidelity of the alerts.
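To make the poisoning idea concrete, here is a toy Python sketch (not any vendor's model): a trivial nearest-centroid traffic classifier whose verdict flips after an attacker floods the "benign" training feed with samples resembling the coming attack. All features and data points are invented for illustration.

```python
# Toy nearest-centroid "traffic classifier" over two invented features.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, benign_c, malicious_c):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "malicious" if dist(sample, malicious_c) < dist(sample, benign_c) else "benign"

benign = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]     # normal traffic cluster
malicious = [(5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]  # known-bad cluster

attack = (4.8, 5.0)
print(classify(attack, centroid(benign), centroid(malicious)))  # "malicious"

# Poisoning: the attacker floods the benign training feed with traffic that
# resembles the planned attack, dragging the benign centroid toward it.
poisoned = benign + [attack] * 100
print(classify(attack, centroid(poisoned), centroid(malicious)))  # now "benign"
```

Real models are far more robust than a bare centroid, but the failure mode is the same: training data an attacker can influence shifts the decision boundary in the attacker's favor.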

Machine learning will likely also be used as an offensive tool by attackers. For instance, some researchers predict that attackers will use machine learning techniques to sharpen their social engineering attacks (e.g., spear phishing). The automation of the effort required to tailor a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further exacerbating the need for automated incident response capabilities.

Use Of Certain Commands Can Mean Threats – Charles Leaver

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO


Repeating a theme in computer security is never a bad thing. As advanced as some attacks may be, you really need to look for and understand the use of common, freely available tools in your environment. These tools are usually used by your IT staff, would more than likely be whitelisted for use, and can be missed by security teams mining through all the legitimate applications that 'could' be executed on an endpoint.

Once someone has breached your network, which can be done in a variety of ways (another blog for another day), signs of these programs/tools running in your environment must be examined to ensure proper usage.

A few commands/tools and their functions:

Netstat – Details the current connections on the network. May be used to identify other systems within the network.

PowerShell – Built-in Windows command-line shell that can perform a host of actions, such as gathering important information about the system, killing processes, adding or removing files, etc.

WMI – Another powerful built-in Windows utility. Can move files around and gather key system information.

Route Print – Command to view the local routing table.

Net – Manages users/domains/accounts/groups.

RDP (Remote Desktop Protocol) – Used to access systems from a remote location.

AT – Schedules tasks.

Looking for activity from these tools can take a long time and can often be overwhelming, but it is necessary to identify who might be moving around in your environment. And not just what is happening in real time, but historically as well, to see the path someone may have taken through the environment. It's often not 'patient zero' that is the target; once attackers get a foothold, they can use these tools and commands to begin their reconnaissance and finally shift to a high-value asset. It's that lateral movement you want to find.

You need the ability to collect the details discussed above and the means to sift through the data to discover, alert on, and investigate it. You can use Windows Events to monitor various changes on a device and then filter that down.
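As a hedged illustration of that filtering step, the sketch below scans hypothetical process-creation records (loosely shaped like Windows Security event 4688 – the field names and sample data are invented, not the real schema) for launches of the tools above by accounts outside an approved admin set:

```python
# Executable names for the admin tools discussed above.
WATCHLIST = {"netstat.exe", "powershell.exe", "wmic.exe", "route.exe",
             "net.exe", "mstsc.exe", "at.exe"}

def flag_tool_launches(events, approved_users):
    """Return watchlisted tool launches by accounts outside the approved admin set."""
    hits = []
    for event in events:
        # Take the bare executable name from the full path, case-insensitively.
        exe = event["new_process_name"].rsplit("\\", 1)[-1].lower()
        if exe in WATCHLIST and event["subject_user"].lower() not in approved_users:
            hits.append(event)
    return hits

events = [
    {"subject_user": "it-admin",
     "new_process_name": "C:\\Windows\\System32\\netstat.exe"},
    {"subject_user": "jsmith",
     "new_process_name": "C:\\Windows\\System32\\wbem\\wmic.exe"},
    {"subject_user": "jsmith",
     "new_process_name": "C:\\Program Files\\Mozilla Firefox\\firefox.exe"},
]

print(flag_tool_launches(events, {"it-admin"}))  # only jsmith's wmic.exe launch
```

In practice the approved set, the watchlist, and the event source would all come from your own environment; the point is simply that whitelisted tools still deserve scrutiny when the account running them is unexpected.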

Looking at the screenshots below from our Ziften console, you can see a quick distinction between our IT group pushing out changes in the network and someone running a very similar command themselves. This could look much like what you would find when someone did it from a remote location, say via an RDP session.





An interesting side note in these screenshots is that in all of the cases the Process Status is 'Terminated'. You would not observe this detail during a live investigation or if you were not continuously collecting the data. But since we are gathering all of the data continuously, you have this historical record to examine. If you were observing the status as 'Running', it could indicate that someone is on that system right now.

This only scratches the surface of what you should be collecting and how to evaluate what is right for your network, which of course will be unique from those of others. But it's a good place to start. Malicious actors intent on doing you harm will usually look for the path of least resistance. Why try to create brand-new and interesting tools when much of what they need is already there and ready to go?

Understanding The Distinction Between Incident Response And Forensic Analysis – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


There might be a joke somewhere about the forensic analyst who was late to the incident response party. There is the seed of a joke in the idea, at least, but of course you have to understand the differences between incident response and forensic analysis to appreciate the potential for humor.

Forensic analysis and incident response are related disciplines that can use similar tools and related data sets but also have some crucial differences. There are four particularly important differences between forensic analysis and incident response:

– Goals.
– Data requirements.
– Team skills.
– Benefits.

The difference in the goals of forensic analysis and incident response is perhaps the most crucial. Incident response is focused on determining a quick (i.e., near real-time) response to an immediate threat or issue. For instance, a house is on fire, and the firefighters who show up to put that fire out are doing incident response. Forensic analysis is typically performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For example, a fire investigator may examine the remains of that house fire to determine the overall damage to the house, the cause of the fire, and whether the source was such that other houses face the same risk. In other words, incident response is focused on containment of a threat or issue, while forensic analysis is focused on full understanding and thorough remediation of a breach.

A second major difference between the disciplines is the data resources required to achieve their goals. Incident response teams typically need only short-term data sources, usually no more than a month or so, while forensic analysis teams usually need much longer-lived logs and files. Remember that the typical dwell time of a successful attack is somewhere between 150 and 300 days.

While there is commonality in the personnel skills of incident response and forensic analysis teams, and in fact incident response is often considered a subset of the broader forensic discipline, there are important differences in job requirements. Both kinds of work require strong log analysis and malware analysis capabilities. Incident response requires the ability to quickly isolate an infected device and to develop the means to remediate or quarantine it. Communications tend to be with other operations and security staff members. Forensic analysis generally requires interaction with a much broader set of departments, including compliance, HR, legal, and operations.

Not surprisingly, the perceived benefits of these activities also differ.

The ability to remove a threat on one device in near real time is a significant factor in keeping breaches isolated and limited in impact. Incident response, along with proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response's less glamorous relative. However, the benefits of this work are indisputable. A thorough forensic investigation enables the removal of all threats through careful analysis of an entire attack chain of events. Which is no laughing matter.

Do your endpoint security processes allow both immediate incident response and long-term historical forensic analysis?

Part 1 Of Using Edit Difference For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften


Why are the same techniques being used by attackers over and over? The simple answer is that they still work. For instance, Cisco's 2017 Cybersecurity Report tells us that after years of decline, spam email with malicious attachments is once again on the rise. In that conventional attack vector, malware authors typically conceal their activity by using a filename similar to a common system process.

There is not always a connection between a file's name and its contents: anyone who has tried to conceal sensitive information by giving it a dull name like "taxes", or changed the extension of a file attachment to circumvent email rules, knows this principle. Malware authors know it too, and will often name malware to resemble common system processes. For example, "iexplore.exe" is Internet Explorer, but "iexplorer.exe" with an extra "r" may be anything. It's easy even for professionals to overlook this small difference.

The opposite problem, known .exe files running in uncommon places, is simple to address using SQL sets and string functions.
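The text describes that check in SQL; here is an equivalent sketch in Python, with an invented expected-locations table, to show the logic of flagging a known executable name running from an unexpected directory (in a real deployment this would be a SQL anti-join on name and directory against inventory data):

```python
# Invented expected-locations table for two well-known Windows executables.
EXPECTED_DIRS = {
    "svchost.exe": {"c:\\windows\\system32", "c:\\windows\\syswow64"},
    "explorer.exe": {"c:\\windows"},
}

def unexpected_locations(paths):
    """Flag full paths whose executable name is known but whose directory is not."""
    hits = []
    for p in paths:
        directory, _, name = p.lower().rpartition("\\")
        if name in EXPECTED_DIRS and directory not in EXPECTED_DIRS[name]:
            hits.append(p)
    return hits

paths = ["C:\\Windows\\System32\\svchost.exe",
         "C:\\Users\\jsmith\\AppData\\Local\\Temp\\svchost.exe"]
print(unexpected_locations(paths))  # only the copy running out of Temp
```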


What about the other case, finding near matches to the executable name? Most people begin their search for near string matches by sorting data and visually scanning for discrepancies. This typically works well for a small data set, maybe even a single system. Finding these patterns at scale, however, requires an algorithmic approach. One established technique for "fuzzy matching" is edit distance.
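Edit distance (the Levenshtein variant) counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal pure-Python version for illustration (in production this would run inside the database, as discussed next):

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]                  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[-1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("explorer.exe", "explore.exe"))  # 1: one dropped character
print(edit_distance("svchost.exe", "cvshost.exe"))   # 2: two swapped characters
```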

What's the best way to compute edit distance? For Ziften, our technology stack includes HP Vertica, which makes this task easy. The internet is full of data scientists and data engineers singing Vertica's praises, so it will suffice to note that Vertica makes it easy to develop custom functions that take full advantage of its power – from C++ power tools to statistical modeling scalpels in R and Java.

This Git repo is maintained by Vertica enthusiasts working in industry. It's not a supported offering, but the Vertica team is certainly aware of it, and moreover is thinking every day about ways to make Vertica better for data scientists – a great space to watch. Most importantly, it contains a function to compute edit distance! There are also other natural language processing tools here, like word stemmers and tokenizers.

By using edit distance on the top executable paths, we can quickly find the closest match to each of our top hits. This is an interesting dataset: we can sort by distance to find the closest matches over the entire data set, or we can sort by frequency of the top path to see the nearest match to our most commonly used processes. This data can also surface on contextual "report card" pages, to show, e.g., the top five closest strings for a given path. Below is a toy example to give a sense of usage, based on real data ZiftenLabs observed in a customer environment.


Setting a threshold of 0.2 seems to yield good results in our experience, but the point is that it can be adapted to fit individual use cases. Did we find any malware? We notice that "teamviewer_.exe" (should be just "teamviewer.exe"), "iexplorer.exe" (should be "iexplore.exe"), and "cvshost.exe" (should be "svchost.exe", unless perhaps you work for CVS pharmacy…) all look odd. Since we're already in our database, it's also trivial to pull the associated MD5 hashes, Ziften suspicion scores, and other attributes for a deeper dive.
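As a self-contained sketch of that near-match search, the code below flags observed names whose normalized edit distance to a known-good name is positive but at most 0.2; the known-good list and observed names are illustrative, and the distance function is inlined so the example stands alone:

```python
def edit_distance(a, b):
    # Compact Levenshtein dynamic program.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def flag_lookalikes(observed, known_good, threshold=0.2):
    """Flag names close to, but not identical to, a known-good executable name."""
    flags = []
    for name in observed:
        for good in known_good:
            d = edit_distance(name, good)
            # Normalize by the longer name so the threshold is length-independent.
            if 0 < d / max(len(name), len(good)) <= threshold:
                flags.append((name, good, d))
    return flags

known = ["svchost.exe", "iexplore.exe", "teamviewer.exe"]
seen = ["svchost.exe", "cvshost.exe", "iexplorer.exe", "notepad.exe"]
print(flag_lookalikes(seen, known))
```

Exact matches (distance 0) are deliberately excluded: a file that is exactly "svchost.exe" is handled by the wrong-directory check, while this pass hunts for near-miss impostors.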


In this particular real-life environment, it turned out that teamviewer_.exe and iexplorer.exe were portable applications, not known malware. We helped the customer investigate further on the user and system where we observed the portable applications, because the use of portable apps from a USB drive could be evidence of mischievous activity. The more troubling find was cvshost.exe. Ziften's intelligence feeds indicated that this was a suspect file, and searching VirusTotal for the file's MD5 hash confirmed the Ziften data, indicating a potentially serious Trojan that might be part of a botnet or doing something even more harmful. Once the malware was discovered, however, it was simple to resolve the problem – and make sure it stays resolved – using Ziften's ability to kill and persistently block processes by MD5 hash.

Even as we develop sophisticated predictive analytics to identify malicious patterns, it is important that we continue to improve our capability to hunt for known patterns and old tricks. Just because new threats emerge does not mean the old ones go away!

If you enjoyed this post, keep watching this space for part 2 of this series, where we will apply this approach to hostnames to detect malware droppers and other malicious sites.

Increasing Numbers Of Connected Devices Will Present A Number Of Endpoint Challenges – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


It wasn't long ago that everyone knew exactly what you meant if you raised the issue of an endpoint. If somebody wanted to sell you an endpoint security solution, you knew exactly what devices that software was going to protect. But when I hear someone casually mention endpoints today, The Princess Bride's Inigo Montoya comes to mind: "You keep using that word. I do not think it means what you think it means." Today an endpoint could be practically any type of device.

In fact, endpoints are so varied today that people have resorted to calling them "things." According to Gartner, at the end of 2016 there were over six billion "things" connected to the internet. The consulting firm forecasts that this number will grow to twenty-one billion by the year 2020. Enterprise uses of these things will be both generic (e.g., connected light bulbs and HVAC systems) and industry specific (e.g., oil rig security monitoring). For IT and security teams responsible for connecting and protecting endpoints, however, this is only half of the new challenge. The adoption of virtualization technology has redefined what an endpoint is, even in environments where these teams have traditionally operated.

The last decade has seen a massive change in the way end users access information. Physical devices continue to become more mobile, with many information workers now doing most of their computing and communication on laptops and mobile phones. More significantly, everyone is becoming an information worker. Today, better instrumentation and monitoring has enabled levels of data collection and analysis that can make the insertion of information technology into practically any job productive.

At the same time, more traditional IT assets, especially servers, are being virtualized to remove some of the traditional constraints of having those assets tied to physical devices.

These two trends together will affect security teams in important ways. The totality of "endpoints" will include billions of long-lived and insecure IoT endpoints, along with billions of virtual endpoint instances that will be scaled up and down on demand and migrated to different physical locations as needed.

Organizations will have very different concerns with these two general kinds of endpoints. Over their lifetimes, IoT devices will need to be protected from a host of threats, some of which have yet to be dreamed up. Monitoring and safeguarding these devices will require advanced detection capabilities. On the plus side, it will be possible to maintain distinct log data to enable forensic investigation.

Virtual endpoints, on the other hand, present their own important concerns. Their ability to change physical location makes it far more difficult to guarantee that the right security policies are always attached to the endpoint. The practice of re-imaging virtual endpoints can make forensic investigation difficult, as essential data is usually lost when a new image is applied.

So no matter what words or phrases are used to describe your endpoints – endpoint, system, client device, user device, mobile device, server, virtual machine, container, cloud workload, IoT device, and so on – it is essential to understand precisely what someone means when they use the term endpoint.

Detection Is Crucial Post Compromise – Charles Leaver

Written By Dr Al Hartmann And Presented By Charles Leaver CEO Ziften


If Prevention Has Failed, Then Detection Is Vital

The last scene in the well-known Vietnam War film Platoon depicts a North Vietnamese Army regiment in a surprise night attack breaching the concertina wire perimeter of an American Army battalion, overrunning it, and slaughtering the shocked defenders. The desperate company commander, understanding their dire defensive dilemma, orders his air support to strike his own position: "For the record, it's my call – dump everything you've got left on my position!" Moments later the battlefield is immolated in a napalm hellscape.

Although a physical conflict, this highlights two aspects of cybersecurity: (1) you need to deal with inevitable perimeter breaches, and (2) it can be bloody hell if you do not detect early and respond forcefully. MITRE Corporation has been leading the call to rebalance cybersecurity priorities and place due focus on breach detection in the network interior, instead of merely focusing on penetration prevention at the network perimeter. Instead of defense in depth, the latter produces a flawed "tootsie pop" defense – hard, crunchy shell, soft chewy center. Writing in a MITRE blog, "We could see that it would not be a question of if your network will be breached but when it will be breached," explains Gary Gagnon, MITRE's senior vice president, director of cybersecurity, and chief security officer. "Today, organizations are asking 'How long have the intruders been inside? How far have they gone?'"

Some call this the "assumed breach" approach to cybersecurity, or as posted to Twitter by F-Secure's Chief Research Officer:

Q: How many of the Fortune 500 are compromised? A: 500.

This is based on the likelihood that any sufficiently complex cyber environment has an existing compromise, and that Fortune 500 enterprises are of magnificently complex scale.

Shift the Burden of Perfect Execution from the Defenders to the Attackers

The standard cybersecurity viewpoint, derived from the legacy perimeter defense model, has been that the attacker only has to be right once, while the defender has to be right every time. A sufficiently resourced and persistent attacker will eventually achieve penetration. And time to successful penetration decreases with the increasing size and complexity of the target enterprise.

A perimeter- or prevention-dependent cyber defense model essentially demands perfect execution by the defender, while ceding success to any sufficiently sustained attack – a recipe for certain cyber catastrophe. For example, a leading cybersecurity red team reports successful enterprise penetration in under three hours in more than 90% of their client engagements – and these white hats are restricted to ethical means. Your enterprise's black hat adversaries are not so constrained.

To be viable, a cyber defense strategy must turn the tables on the attackers, shifting to them the unattainable burden of perfect execution. That is the rationale for a strong detection capability that continuously monitors endpoint and network behavior for any unusual indicators or observed attacker footprints inside the perimeter. The more sensitive the detection capability, the more care and stealth the attackers must exercise in executing their kill chain sequence, and the more time, labor, and skill they must invest. The defenders need only observe a single attacker misstep to uncover their tracks and unravel the attack kill chain. Now the defenders become the hunters, the attackers the hunted.

The MITRE ATT&CK Model

MITRE provides a detailed taxonomy of attacker footprints, covering the post-compromise portion of the kill chain, known by the acronym ATT&CK, for Adversarial Tactics, Techniques, and Common Knowledge. ATT&CK project team leader Blake Strom says, "We chose to focus on the post attack period [portion of kill chain lined in orange below], not just because of the strong likelihood of a breach and the dearth of actionable information, but also because of the many opportunities and intervention points offered for efficient protective action that do not always rely on anticipation of adversary tools."




As shown in the MITRE figure above, the ATT&CK model offers additional granularity on the post-compromise phases of the attack kill chain, breaking them out into ten tactic categories as shown. Each tactic category is further detailed into a list of techniques an attacker might use in carrying out that tactic. The January 2017 update of the ATT&CK matrix lists 127 techniques across its ten tactic categories. For instance, Registry Run Keys / Start Folder is a technique in the Persistence category, Brute Force is a technique in the Credentials category, and Command-Line Interface is a technique in the Execution category.

Leveraging Endpoint Detection and Response (EDR) in the ATT&CK Model

Endpoint Detection and Response (EDR) solutions, such as Ziften provides, offer vital visibility into attacker use of the techniques listed in the ATT&CK model. For example, Registry Run Keys / Start Folder technique use is reported, as is Command-Line Interface use, since both involve readily observable endpoint behavior. Brute Force use in the Credential Access category should be blocked by design in any sound authentication architecture and be observable from the resulting account lockout. But even here an EDR product can report events such as failed login attempts, where an attacker may risk a few guesses while staying under the account lockout threshold.
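As a sketch of that last point, the snippet below flags accounts accumulating failed logins that stay just under a lockout limit. The window size, alert threshold, and account name are illustrative assumptions, not any vendor's actual detection logic.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600    # illustrative observation window
ALERT_THRESHOLD = 4      # alert before a typical 5-attempt lockout fires

class FailedLoginMonitor:
    """Flag low-and-slow brute force that stays under the lockout limit."""

    def __init__(self):
        self.attempts = defaultdict(deque)  # account -> failure timestamps

    def record_failure(self, account, timestamp):
        q = self.attempts[account]
        q.append(timestamp)
        # Drop failures that have aged out of the observation window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) >= ALERT_THRESHOLD  # True -> raise an alert

monitor = FailedLoginMonitor()
# Four failures spread 15 minutes apart: each stays under lockout,
# but the pattern inside one window is worth an alert.
alerts = [monitor.record_failure("svc-backup", t) for t in (0, 900, 1800, 2700)]
print(alerts)  # [False, False, False, True]
```

The point is not the specific thresholds but the principle: sub-lockout failure patterns are observable endpoint telemetry an EDR pipeline can report.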

For vigilant defenders, any technique use may be the attack giveaway that unravels the entire kill chain. EDR solutions compete on their technique observation, reporting, and alerting capabilities, as well as on their analytics capacity to perform more of the attack pattern detection and kill chain reconstruction in support of the security analysts staffing the enterprise SOC. Here at Ziften we will detail more EDR solution capabilities supporting the ATT&CK post-compromise detection model in future blog posts in this series.

The Buzz From RSA 2017 Is That Enterprises Demand Tailored Security Solutions – Charles Leaver

Written By Michael Vaughan And Presented By Charles Leaver Ziften CEO


Security, network, and operations teams need more tailored products in 2017

Many of us have attended security conventions over the years, but none brings the same level of excitement as RSA, where security is discussed by the world. Of all the conventions I have attended and worked, nothing comes close to the passion for new technology people displayed this past week in downtown San Francisco.

After taking a couple of days to digest the many discussions about the needs of and limitations in current security tech, I have been able to synthesize a particular theme among attendees: people want tailored solutions that fit their environment and work well across multiple internal teams.

When I say “people,” I mean everyone in attendance regardless of technological segment. Operational professionals, security pros, network veterans, and even user behavior analysts frequented the Ziften booth and shared their stories with us.

Everyone seemed more prepared than ever to discuss the wants and needs for their environment. These attendees had their own set of objectives to achieve within their departments, and they were hungry for answers. Since the Ziften Zenith service offers such broad visibility into enterprise devices, it's not surprising that our booth stayed crowded with people eager to learn more about a new, refreshingly simple endpoint security technology.

Attendees arrived with complaints about myriad enterprise-centric security concerns and sought deeper insight into what's really happening on their network and on devices traveling in and out of the office.

End users of old-school security solutions are on the lookout for newer, more essential software.

If I could pick just one of the frequent questions I received at RSA to share, it's this one:

“What exactly is endpoint discovery?”

1) Endpoint discovery: Ziften exposes a historical view of unmanaged devices which have connected to other enterprise endpoints at some time. Ziften lets users find known and unknown entities which are active or have interacted with known endpoints.

a. Unmanaged Asset Discovery: Ziften uses our extension platform to expose these unknown entities operating on the network.

b. Extensions: These are custom-fit solutions tailored to the user's specific wants and needs. The Ziften Zenith agent can execute the designated extension one time, on a schedule, or on a continuous basis.
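The core idea behind unmanaged asset discovery can be sketched as a set difference: devices observed talking to managed endpoints, minus the known inventory. The data sources here (agent-reported neighbor addresses, a managed-asset list) are assumptions for illustration, not Ziften's actual implementation.

```python
# Known, managed devices (e.g. exported from an asset inventory).
managed_inventory = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

# Addresses each agent-managed endpoint observed on its local segment
# (e.g. harvested from ARP tables or connection logs).
observed_neighbors = {
    "endpoint-01": {"00:1a:2b:3c:4d:5f", "de:ad:be:ef:00:01"},
    "endpoint-02": {"de:ad:be:ef:00:01", "de:ad:be:ef:00:02"},
}

def unmanaged_assets(inventory, neighbors):
    """Return observed devices that are absent from the managed inventory."""
    seen = set().union(*neighbors.values())
    return sorted(seen - inventory)

print(unmanaged_assets(managed_inventory, observed_neighbors))
# ['de:ad:be:ef:00:01', 'de:ad:be:ef:00:02']
```

Running this continuously, rather than once, is what turns a snapshot into the historical view of unmanaged devices described above.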

Almost always, after the above explanation came the real reason they were visiting:

People are looking for a wide range of solutions for different departments, including executives. This is where working at Ziften makes answering this question a real treat.

Only a portion of the RSA attendees are security experts. I spoke with dozens of network, operations, and endpoint management professionals, vice presidents, general managers, and channel partners.

They all clearly use and understand the need for quality security software, but apparently find the translation to business value missing among security vendors.

NetworkWorld’s Charles Araujo phrased the problem quite well in a post last week:

Businesses must also rationalize security data in a business context and manage it holistically as part of the overall IT and business operating model. A group of vendors is likewise trying to tackle this challenge…

Ziften was among only three companies mentioned.

After listening to the wants and needs of people from different business-critical backgrounds and explaining the capabilities of Ziften's extension platform, I typically described how Ziften would tailor an extension to meet their need, or gave them a short demo of an extension that would let them overcome a challenge.

2) Extension Platform: Customized, actionable solutions.

a. SKO Silos: Extensions based on fit and need (operations, network, endpoint, etc.).

b. Customized Requests: Require something you can’t see? We can fix that for you.

3) Enhanced Forensics:

a. Security: Risk management, threat assessment, vulnerabilities, suspicious metadata.

b. Operations: Compliance, License Justification, Unmanaged Assets.

c. Network: Ingress/egress IP movement, domains, volume metadata.

4) Visibility within the network – not just what comes in and goes out.

a. ZFlow: Finally see the network traffic inside your enterprise.

Needless to say, everyone I talked to at our booth quickly grasped the critical benefit of having a tool such as Ziften Zenith running in and across their enterprise.

Forbes writer Jason Bloomberg said it best when he recently described the future of enterprise security software and how all signs point toward Ziften leading the way:

Perhaps the broadest disruption: vendors are improving their ability to understand how bad actors behave, and can thus take steps to prevent, detect, or mitigate their malicious activities. In particular, today's vendors understand the ‘Cyber Kill Chain’ – the steps a skilled, patient hacker (known in the biz as an advanced persistent threat, or APT) will take to achieve his or her nefarious objectives.

The product of U.S. defense contractor Lockheed Martin, the Cyber Kill Chain contains seven links: reconnaissance, weaponization, delivery, exploitation, installation, establishing command and control, and actions on objectives.
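The seven links above form an ordered sequence, which is what lets models like ATT&CK focus on everything after the exploitation step. A minimal sketch of that ordering:

```python
# The seven Lockheed Martin Cyber Kill Chain links, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command and control", "actions on objectives",
]

def is_post_compromise(link):
    """True if the link occurs after the exploitation step, i.e. in the
    post-compromise region that detection-focused models emphasize."""
    return KILL_CHAIN.index(link) > KILL_CHAIN.index("exploitation")

print(is_post_compromise("installation"))   # True
print(is_post_compromise("weaponization"))  # False
```

Vendors that "target one or more of these links" are, in this framing, choosing where along this list their prevention or detection effort sits.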

Today’s more innovative vendors target one or more of these links, with the goal of preventing, detecting, or mitigating the attack. Five vendors at RSA stood out in this category.

Ziften offers an agent-based approach to tracking the behavior of users, devices, applications, and network elements, both in real time and across historical data.

In real time, analysts use Ziften for threat identification and prevention, while they use the historical data to uncover steps in the kill chain for mitigation and forensic purposes.

Read This To Ensure That Operational Problems Do Not Become Security Issues – Charles Leaver

Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver


Return to Basics With Hygiene and Avoid Serious Problems

As a kid you were taught that properly brushing and flossing your teeth will prevent the need for pricey crowns and root canal procedures. Basic hygiene is far simpler and far cheaper than neglect and disease. The same lesson applies in the realm of enterprise IT: we can run a sound operation with proper endpoint and network hygiene, or we can face mounting security problems and disastrous data breaches as lax hygiene extracts its harsh toll.

Operational and Security Issues Overlap

Endpoint Detection and Response (EDR) tools like those we have built here at Ziften provide analytic insight into system operation across the enterprise endpoint population. They also provide endpoint-derived network operation insights that significantly expand on wire visibility alone and extend into virtual and cloud environments. These insights benefit both security and operations teams in significant ways, given the considerable overlap between operational and security issues:

On the security side, EDR tools offer critical situational awareness for incident response. On the operational side, EDR tools provide vital endpoint visibility for operational control. Critical situational awareness demands a baseline understanding of endpoint population operating norms, and that understanding facilitates proper operational control.

Another way to express these interdependencies:

You can’t secure what you don’t manage.
You can’t manage what you don’t measure.
You can’t measure what you don’t monitor.

Managing, measuring, and monitoring have as much to do with the security role as with the operational role; don’t try to split the baby. Management means adherence to policy, that adherence must be measured, and operational measurements constitute a time series that must be tracked. A few sporadic measurements of a critical dynamic time series lack interpretive context.
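To make the time-series point concrete, here is a minimal sketch of tracking one recurring operational measurement (the percentage of endpoints fully patched is an assumed example metric) and flagging readings that drop well below the trailing baseline. The window and tolerance values are illustrative.

```python
from statistics import mean

def flag_drift(series, window=4, tolerance=5.0):
    """Return indices where a reading falls more than `tolerance`
    percentage points below the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(series)):
        baseline = mean(series[i - window:i])
        if baseline - series[i] > tolerance:
            flagged.append(i)
    return flagged

# Weekly patched-endpoint percentages; the fifth reading is a sudden drop.
patched_pct = [97.0, 96.5, 97.2, 96.8, 90.0, 96.9]
print(flag_drift(patched_pct))  # [4]
```

A single 90% reading in isolation is ambiguous; against the tracked baseline it is clearly an anomaly, which is exactly the interpretive context sporadic measurements lack.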

Tight security does not compensate for lax management, nor does tight management compensate for lax security. [Read that again for emphasis.] Mission execution imbalances here lead to unsustainable inefficiencies and scale obstacles that inevitably cause major security breaches and operational shortfalls.

Areas Of Overlap

Significant overlaps between operational and security concerns include:

Configuration hardening and standard images
Group policy
Application control and cloud management
Network segmentation and management
Data protection and encryption
Asset management and device restoration
Mobile device management
Log management
Backups and data restoration
Vulnerability and patch management
Identity management
Access management
Continual employee cyber awareness training

For instance, asset management and device restore, as well as backup and data restore, are probably operational team responsibilities, but they become major security concerns when ransomware sweeps the enterprise, bricking all devices (not just the typical endpoints, but any network-connected devices such as printers, badge readers, security cameras, network routers, medical imaging devices, industrial control systems, etc.). What would your enterprise response time be to reflash and refresh all device images from scratch and restore their data? Or is your contingency plan to immediately stuff the attackers’ Bitcoin wallets and hope they haven’t exfiltrated your data for further extortion and monetization? And why would you offload your data restoration responsibility to a criminal syndicate, blindly trusting their perfect data restoration integrity? That makes absolutely zero sense. Operational control responsibility rests with the enterprise, not with the attackers, and may not be shirked – shoulder your duty!
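Answering the "what would your response time be?" question starts with knowing which devices even have a recent, verified backup. The sketch below is hypothetical: the device records, field names, and the seven-day recovery-point objective are assumptions for illustration, not a Ziften feature.

```python
from datetime import datetime, timedelta

RPO = timedelta(days=7)  # assumed recovery-point objective

def backup_gaps(devices, now):
    """Return device names whose last verified backup is missing
    or older than the recovery-point objective."""
    return [d["name"] for d in devices
            if d["last_verified_backup"] is None
            or now - d["last_verified_backup"] > RPO]

now = datetime(2017, 3, 1)
devices = [
    {"name": "db-01", "last_verified_backup": datetime(2017, 2, 27)},
    {"name": "hr-laptop-12", "last_verified_backup": datetime(2017, 1, 15)},
    {"name": "badge-reader-3", "last_verified_backup": None},  # never backed up
]
print(backup_gaps(devices, now))  # ['hr-laptop-12', 'badge-reader-3']
```

Run before an incident, a report like this is routine operational hygiene; discovered after ransomware hits, each gap on the list is a security crisis.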

For another example, standard image construction using best-practice configuration hardening is clearly a joint responsibility of operations and security staff. In contrast to ineffective signature-based endpoint protection platforms (EPP), which all large enterprise breach victims have long had in place, configuration hardening works, so bake it in and continuously refresh it. Likewise, consider the needs of business personnel whose job function requires opening unsolicited email attachments, such as resumes, invoices, legal notices, or other required documents. This should be done in a cloistered virtual sandbox environment, not on your production endpoints. Security staff will make these decisions, but operations staff will be imaging the endpoints and supporting the employees. These are shared responsibilities.

Example Of Overlap:

Use a safe environment to detonate. Do not use production endpoints for opening unsolicited but needed email files, like resumes, invoices, legal notices, and so on.

Focus Limited Security Resources on the Tasks Only They Can Perform

Most large enterprises are challenged to fully staff all their security functions. Left unaddressed, deficiencies in operational efficiency will burn out security staff so quickly that security functions will be perpetually understaffed. There won’t be enough fingers on your security team to plug the widening holes in the security dike that lax or inattentive endpoint, network, or database management produces. And it will always be easier to staff operational roles than to staff security roles with talented analysts.

Transfer routine, formulaic activities to operations staff. Focus limited security resources on the jobs only they can perform:

Security Operations Center (SOC) staffing
Preventive penetration testing and red teaming
Reactive incident response and forensics
Proactive attack hunting (both insider and external)
Security oversight of overlapping operational functions (ensure a current security mindset)
Security policy development and stakeholder buy-in
Security architecture/tools/methodology design, selection, and development

Impose disciplined operations management and focus limited security resources on critical security roles. Then your enterprise can avoid letting operational concerns fester into security issues.