If You Want The Best End To End Protection For Your Organization Choose Ziften – Charles Leaver

Written By Ziften CEO Charles Leaver


Do you want to manage and protect your endpoints, your data center, the cloud, and your network? Ziften can provide the ideal solution. We gather data, and let you correlate and use that data to make decisions – and stay in control of your enterprise.

The information we gather from everything on the network can make a real-world difference. Consider the claim that the 2016 U.S. elections were influenced by hackers in another country. If that is true, hackers can do practically anything – and the idea that we should accept that as the status quo is simply ridiculous.

At Ziften, we believe the way to combat those threats is with greater visibility than you have ever had. That visibility spans the entire enterprise and connects all the major players together. On the back end, that means real and virtual servers in the data center and the cloud, along with the infrastructure, applications, and containers they run. On the front end, it means laptops and desktops, no matter where and how they are connected.

End-to-end – that's the thinking behind everything we do at Ziften. From endpoint to cloud, from a web browser to a DNS server, we tie it all together, with everything in between, to give your business a complete solution.

We also capture and store real-time data for up to one year, letting you know what's happening on the network right now, while providing historical trend analysis and alerts when something changes.

That lets you discover IT faults and security issues as they happen, and track down the root cause by looking back in time to find where a fault or breach first occurred. Active forensics are an absolute requirement in this business: after all, the place where a breach or fault trips an alarm may not be where the problem began – or where a hacker is operating.
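
For example, with up to a year of endpoint data retained, tracing a breach back to its first appearance becomes a simple historical query. Here is a minimal sketch of the idea in Python, using hypothetical event records (the field names are illustrative, not Ziften's actual schema):

```python
from datetime import datetime

# Hypothetical event records captured by an endpoint agent over time.
events = [
    {"host": "laptop-17", "md5": "d41d8cd9...", "path": "C:\\temp\\cvshost.exe",
     "seen": datetime(2017, 1, 4, 9, 12)},
    {"host": "dc-02", "md5": "d41d8cd9...", "path": "C:\\temp\\cvshost.exe",
     "seen": datetime(2016, 11, 20, 22, 47)},
]

def first_sighting(events, md5):
    """Return the earliest record of a given binary across all endpoints."""
    hits = [e for e in events if e["md5"] == md5]
    return min(hits, key=lambda e: e["seen"]) if hits else None

origin = first_sighting(events, "d41d8cd9...")
if origin:
    print(f"First seen on {origin['host']} at {origin['seen']}")
```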

Ziften gives your security and IT teams the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Discovered. Off-network penetration? Detected. Out-of-date firmware? Unpatched applications? All found. We'll not only help you find the problem, we'll help you fix it – and make sure it stays fixed.

End-to-end security and IT management. Real-time and historical active forensics. In the cloud, offline, and on-site. Incident detection, containment, and response. We've got it all covered. That's what makes Ziften better.

Our Enhancement Of NetFlow Will Provide You With Close Monitoring Of Cloud Activity – Charles Leaver

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver


According to Gartner, the public cloud services market surpassed $208 billion in 2016, a rise of about 17% year over year. Not bad, considering the ongoing concerns most cloud customers still have about data security. Another especially interesting Gartner finding is that cloud customers typically contract services from multiple public cloud providers.

According to Gartner, "most companies are already utilizing a mix of cloud services from various cloud companies". While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create extra complexity in tracking activity across an organization's increasingly fragmented IT landscape.

While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations need to understand and address the visibility gaps that come with moving to the cloud, regardless of which cloud provider or providers they work with.

Unfortunately, the ability to track application activity, user activity, and network interactions from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, organizations must be able to answer the question "Which users, devices, and applications are communicating with each other?" Organizations need visibility across the infrastructure so that they can:

  • Rapidly identify and prioritize issues
  • Speed root cause analysis and identification
  • Reduce the mean time to repair problems for end users
  • Quickly identify and eliminate security threats, reducing overall dwell times.

Conversely, poor visibility, or poor access to visibility data, lowers the effectiveness of existing security and management tools.

Businesses that are comfortable with the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.

What has been lacking is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had roughly 20 years to become a de facto standard for network visibility. A typical deployment involves monitoring traffic and aggregating flows at network chokepoints, collecting and storing flow data from multiple collection points, and analyzing that flow data.

Flows consist of a basic set of source and destination IP addresses, ports, and protocol information, generally collected from a switch or router. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
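
Conceptually, a flow is just traffic rolled up by that basic tuple. A minimal sketch of flow aggregation in Python (the record layout is illustrative, not an actual NetFlow schema):

```python
from collections import defaultdict

# Packets observed at a chokepoint: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
packets = [
    ("10.0.0.5", "93.184.216.34", 52100, 443, "TCP", 1500),
    ("10.0.0.5", "93.184.216.34", 52100, 443, "TCP", 900),
    ("10.0.0.7", "8.8.8.8", 40312, 53, "UDP", 64),
]

# Aggregate packets into flows keyed by the classic five-tuple.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, sport, dport, proto, size in packets:
    key = (src, dst, sport, dport, proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

for key, stats in flows.items():
    print(key, stats)
```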

Most IT staffs, especially networking and some security teams, are very comfortable with the technology.

However, NetFlow was designed to solve what has become a rather limited problem: it only collects network information, and only at a limited number of potential locations.

To make better use of NetFlow, two essential changes are required.

NetFlow at the edge: First, we need to broaden the practical deployment scenarios for NetFlow. Instead of only gathering NetFlow at network chokepoints, let's extend flow collection to the edge of the network (cloud, servers, and clients). This would considerably expand the picture that NetFlow analytics can provide.

It would also allow organizations to leverage their existing NetFlow analytics tools to eliminate the ever-growing blind spot around public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility.

Instead, let's use an extended version of NetFlow that includes data on the device, application, user, and binary responsible for each tracked network connection. That would let us rapidly attribute every network connection back to its source.
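
As a sketch of the idea, here is what such an enriched flow record might look like; the field names are hypothetical, chosen for illustration rather than taken from any actual flow schema:

```python
from dataclasses import dataclass

@dataclass
class EnrichedFlow:
    # The classic NetFlow five-tuple...
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str
    # ...plus endpoint context attributing the connection to its source.
    device: str        # hostname of the endpoint that opened the connection
    user: str          # logged-in user responsible for the process
    application: str   # process name, e.g. "chrome.exe"
    binary_md5: str    # hash of the binary, for reputation lookups

flow = EnrichedFlow("10.0.0.5", "93.184.216.34", 52100, 443, "TCP",
                    "laptop-17", "jsmith", "chrome.exe", "6f5902ac23...")
print(f"{flow.user}@{flow.device} -> {flow.dst_ip}:{flow.dst_port} via {flow.application}")
```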

In fact, these two changes to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. Beyond conventional NetFlow / Internet Protocol Flow Information eXport (IPFIX) network visibility, ZFlow adds details on the device, application, user, and binary for every network connection.

Ultimately, this enables Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots such as east-west traffic in data centers and enterprise cloud deployments.

Part 2 Of Using Edit Distance For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften


In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character edits it takes to turn one text string into another). Now let's look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.

Here is the Background

What are bad actors doing with malicious domains? They might simply use a close misspelling of a common domain name to fool careless users into viewing ads or picking up adware. Legitimate sites are slowly catching on to this technique, often called typosquatting.

Other malicious domains are the product of domain generation algorithms, which can be used for all kinds of dubious things, like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service attack. Older variants use randomly generated strings, while more advanced ones add techniques like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here's how. First, we'll exclude common domains, since these are usually safe. A list of popular domains also provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and ignore subdomains (e.g. ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, but now nearly anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one edit away from their nearest neighbor, we can easily identify typo-ed domains. By finding domains far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also spot anomalous domains in the edit distance space.
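
Here is a minimal sketch of that nearest-neighbor search in Python; the baseline list and candidate domains are toy stand-ins for the Quantcast list and real observed traffic:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Toy stand-ins for the popular-domain baseline and observed candidates.
popular = ["google", "wikipedia", "amazon"]
candidates = ["gogle", "wikipedal", "xkqzvbnrt"]

for cand in candidates:
    dist, neighbor = min((edit_distance(cand, p), p) for p in popular)
    norm = dist / max(len(cand), len(neighbor))  # normalized edit distance
    flag = "possible typo" if dist == 1 else ("anomalous" if norm > 0.5 else "")
    print(f"{cand:>10} -> {neighbor:<10} dist={dist} norm={norm:.2f} {flag}")
```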

What were the Outcomes?

Let's look at how these results appear in practice. Use caution when browsing to these domains, since they may contain malicious content!

Here are a few potential typos. Typosquatters target well-known domains, since there is a greater chance somebody will visit. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like "wikipedal".

[Figure: potential typosquatting domains and their nearest neighbors]

Here are some strange-looking domains that are far from their nearest neighbors.

[Figure: anomalous domains far from their nearest neighbors]

So now we have produced two useful edit distance metrics for hunting. Better still, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the neighbor, and a flag for edit distance 1 from the neighbor, indicating a risk of typo shenanigans. Other features that pair well with these include lexical features like word and n-gram distributions, entropy, and string length – and network features like the total count of failed DNS requests.
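
As a sketch, two of those lexical features – character-level entropy and n-gram distribution – are easy to compute (the example domains are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy; DGA-style strings tend to score high."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def bigrams(s: str) -> Counter:
    """Character bigram distribution, a simple n-gram feature."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

for domain in ["wikipedia", "xkqzvbnrt"]:
    print(domain, round(shannon_entropy(domain), 2), len(domain),
          bigrams(domain).most_common(2))
```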

Streamlined Code that you can Play Around with

Here is a simplified version of the code to play with! It was written for HP Vertica, but the SQL should run on most modern databases. Note that Vertica's editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

[Figure: example SQL for nearest-neighbor edit distance hunting]

A Poorly Managed Environment Will Not Be Secure – And The Reverse Is Also True – Charles Leaver

Written by Charles Leaver Ziften CEO


If your enterprise computing environment is not properly managed, there is no way it can be fully secured. And you can't effectively manage those complex business systems unless you are confident they are secure.

Some might call this a chicken-and-egg situation where you don't know where to start. Should you begin with security? Or with systems management? That is the wrong way to frame it. Think of it instead like a Reese's Peanut Butter Cup: it's not chocolate first, and it's not peanut butter first. Both are blended together – and treated as a single delicious treat.

Many organizations – I would argue most organizations – are structured with an IT management department reporting to a CIO and a security management team reporting to a CISO. The CIO team and the CISO team don't know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team's radar.

That's not good, because it forces both the IT and security teams to make assumptions. The IT team assumes that everything is secure unless somebody tells them otherwise: that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that servers, desktops, and mobile devices are working properly, that operating systems and applications are up to date, that patches have been applied, etc.

Because the CIO and CISO teams aren't talking to each other, don't understand each other's roles and priorities, and aren't using the same tools, those assumptions may not be valid.

And again: you cannot have a secure environment unless that environment is properly managed – and you can't manage that environment unless it's secure. Put another way: an insecure environment makes anything the IT team does suspect and unreliable, because you cannot know whether the information you are seeing is accurate or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple, but it can be hard: make sure there is an umbrella covering both the IT and security teams. Somewhere, both IT and security report to the same person or organization. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let's say it's the CFO.

If the business does not have a secure environment and there's a breach, the value of the brand and the company may be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren't well managed, the company cannot work effectively and its value drops. As we have discussed, if it's not well managed, it cannot be secured, and if it's not secure, it cannot be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of business assets, and that means making sure IT and security talk to each other, understand each other's priorities, and, where possible, see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our enterprise IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can't operate at peak efficiency, or with full fiduciary responsibility, otherwise.

Continuous Endpoint Visibility Is Vital In This Work From Home Climate – Charles Leaver

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO


A recent Gallup study found that 43% of employed US citizens worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more employees working outside conventional offices, and an increasing number of them doing so for more days of the week. And of course, the number of connected devices the average employee uses has jumped as well, which feeds both the convenience of and the desire for working away from the office.

This mobility certainly makes for happier, and hopefully more productive, employees, but the problems these trends pose for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into device, application, network connection, and user activity. And to be genuinely effective, endpoint visibility and monitoring must work regardless of where the user and device are operating: on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even harder for IT and security teams to restrict what used to be considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to thoroughly track user, device, application, and network activity, spot anomalies and inappropriate actions, and apply the appropriate response or fix, whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based applications and assets, and keep backup network-attached storage (NAS) or USB drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity that no longer necessarily terminates on the organization's network. Offline activity presents the starkest example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of negligible use when a device is operating offline. Installing a suitable endpoint agent is vital to ensure the capture of all important system and security data.
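
As a sketch of the underlying idea, an agent can spool events to local storage and flush them whenever connectivity returns; everything here (file path, server name, field names) is hypothetical:

```python
import json, os, socket

BUFFER_PATH = "event_buffer.jsonl"  # hypothetical local spool file

def is_online(host="collector.example.com", port=443, timeout=2) -> bool:
    """Crude connectivity check against a hypothetical collection server."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def record_event(event: dict) -> None:
    """Spool locally first, so offline activity is never lost."""
    with open(BUFFER_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")
    if is_online():
        flush_buffer()

def flush_buffer() -> None:
    """Upload spooled events, then clear the spool (upload left as a stub)."""
    with open(BUFFER_PATH) as f:
        events = [json.loads(line) for line in f]
    # send_to_collector(events)  # hypothetical upload call
    os.remove(BUFFER_PATH)

# Example: a USB write captured while the laptop is off the network.
record_event({"type": "usb_write", "device": "laptop-17", "bytes": 52428800})
```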

As an example of the kinds of offline activity that can be detected, a customer was recently able to track, flag, and report unusual behavior on a company laptop: a high-level executive transferred substantial amounts of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent gathered this behavioral data during the offline period, the customer was able to see the unusual action and follow up appropriately. Continuous monitoring of device, application, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are offline? If so, how do you achieve it?

Machine Learning Technology Has Promise But Be Aware Of The Likely Consequences – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


If you are a student of history, you will notice many examples of serious unintended consequences accompanying new technology. It often surprises people that new technologies can be put to dubious purposes as well as the positive purposes for which they were brought to market, but it happens all the time.

For example, train robbers using dynamite ("Think ya used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common precisely because the legitimate use of SSL has made the technique more effective.

Because new technology is so often appropriated by bad actors, we have no reason to believe the same won't be true of the new generation of machine learning tools that have reached the market.

How might these tools be misused? There are probably a few ways in which attackers could use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning.

An understanding of machine learning defenses will also help attackers be more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of "poisoning" the machine learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to generate such a high rate of false positives that the defenders dial back the fidelity of the alerts.

Machine learning will likely also be used as an offensive tool by attackers. For instance, some researchers predict that attackers will use machine learning techniques to sharpen their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt such techniques.

Expect breaches of this type that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a magic bullet. It must be understood that attackers are actively working on evasion techniques against machine learning based detection products, while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further sharpening the need for automated incident response capabilities.

Use Of Certain Commands Can Indicate Threats – Charles Leaver

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO


Repeating a theme is never a bad thing when it comes to computer security. As advanced as some attacks may be, you really need to watch for, and understand the use of, common and easily available tools in your environment. These tools are typically used by your IT staff, are more than likely whitelisted, and can be missed by security teams mining through all the legitimate applications that 'could' be executed on an endpoint.

Once someone has breached your network – which can be done in a variety of ways, and is another blog for another day – any sign of these programs/tools running in your environment should be examined to verify proper usage.

A few commands/tools and their functions (a simple hunting sketch follows the list):

Netstat – Details of current network connections. This can be used to identify other systems within the network.

PowerShell – Built-in Windows command line shell that can perform a host of actions, such as gathering important system information, killing processes, and adding or removing files.

WMI – Another powerful built-in Windows utility. It can move files around and gather important system information.

Route Print – Command to view the local routing table.

Net – Adds users/domains/accounts/groups.

RDP (Remote Desktop Protocol) – Used to access systems from a remote location.

AT – Schedules tasks.
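
Here is the hunting sketch promised above: a minimal Python pass over hypothetical process events, flagging watchlist tools run outside approved IT accounts (the tool list and account names are illustrative):

```python
# Dual-use admin tools worth reviewing when seen outside normal IT activity.
WATCHLIST = {"netstat.exe", "powershell.exe", "wmic.exe", "route.exe",
             "net.exe", "mstsc.exe", "at.exe"}
IT_SERVICE_ACCOUNTS = {"svc_sccm", "svc_patch"}  # hypothetical approved accounts

# Hypothetical process events from continuous endpoint collection.
events = [
    {"host": "dc-02", "user": "svc_sccm", "process": "powershell.exe",
     "status": "Terminated"},
    {"host": "laptop-17", "user": "jsmith", "process": "net.exe",
     "status": "Running"},
]

for e in events:
    if e["process"] in WATCHLIST and e["user"] not in IT_SERVICE_ACCOUNTS:
        # 'Running' may mean someone is active on that system right now.
        urgency = "ACTIVE" if e["status"] == "Running" else "historical"
        print(f"[{urgency}] {e['user']} ran {e['process']} on {e['host']}")
```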

Looking for activity from these tools can take a long time and can be overwhelming, but it is necessary work if you want to know who might be moving around your environment. And not just in real time: look historically too, to see the path somebody may have taken through the environment. 'Patient zero' is often not the real target; once attackers get a foothold, they can use these tools and commands to begin reconnaissance and finally move to a high-value asset. It's that lateral movement you want to find.

You need the ability to collect the details discussed above, and the means to sift through them to discover, alert on, and investigate this data. You can use Windows Events to monitor various changes on a device and then filter them down.

In the screenshots below from our Ziften console, you can quickly see the difference between what our IT team used to push out changes across the network, and somebody running a very similar command themselves. The latter could look much like what you'd find when someone does this from a remote location, say via an RDP session.

[Screenshots: Ziften console views contrasting IT-pushed commands with similar commands run manually]

An interesting side note in these screenshots: in every case, the Process Status is 'Terminated'. You would not catch this detail during a live investigation, or if you were not continuously collecting the data. But since we gather all the information continuously, you have this historical data to examine. If you instead saw the status as 'Running', it could indicate that someone is on that system right now.

This only scratches the surface of what you should be collecting and how to evaluate what is right for your network, which of course will differ from everyone else's. But it's a good place to start. Malicious actors intent on doing you harm will usually look for the path of least resistance. Why try to create new and interesting tools when a great deal of what they need is already there and ready to go?

Understanding The Distinction Between Incident Response And Forensic Analysis – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


There's probably a joke somewhere about the forensic analyst who was late to the incident response party. There's the seed of a joke in the idea, at least – but of course, you have to understand the differences between incident response and forensic analysis to appreciate the potential for humor.

Forensic analysis and incident response are related disciplines that can use similar tools and related data sets, but they also have some crucial differences. There are four particularly important differences between forensic analysis and incident response:

– Goals.
– Data requirements.
– Team skills.
– Benefits.

The difference in the goals of forensic analysis and incident response is perhaps the most crucial. Incident response is focused on determining a fast (i.e., near real-time) response to an immediate threat or issue. For instance, when a house is on fire, the firefighters who show up to put it out are doing incident response. Forensic analysis is typically performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For example, a fire investigator might examine the remains of that house fire to determine the overall damage, the cause of the fire, and whether the source puts other houses at the same risk. In other words, incident response is focused on containment of a threat or issue, while forensic analysis is focused on full understanding and comprehensive remediation of a breach.

A second major difference between the disciplines is the data required to achieve their goals. Incident response teams normally need only short-term data, typically no more than a month or so, while forensic analysis teams usually need much longer-lived logs and files. Remember that the typical dwell time of a successful attack is somewhere between 150 and 300 days.

While there is overlap in the skills of incident response and forensic analysis teams – in fact, incident response is often considered a subset of the broader forensic discipline – there are essential differences in job requirements. Both kinds of investigation require strong log analysis and malware analysis capabilities. Incident response requires the ability to quickly isolate an infected device and to work out how to remediate or quarantine it. Interactions tend to be with other operations and security staff. Forensic analysis generally requires interactions with a much broader set of departments, including compliance, HR, legal, and operations.

Not surprisingly, the perceived benefits of these activities also differ.

The ability to eliminate a threat on one device in near real-time is a major factor in keeping breaches isolated and limited in impact. Incident response, along with proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response's less glamorous relative, but the benefits of the work are indisputable. A thorough forensic investigation enables the removal of all threats through careful analysis of an entire attack chain of events. And that is no laughing matter.

Do your endpoint security processes enable both immediate incident response and long-term historical forensic analysis?

Part 1 Of Using Edit Distance For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften


Why do attackers use the same techniques over and over? The simple answer is that they still work. For instance, Cisco's 2017 Cybersecurity Report tells us that after years of decline, spam email with malicious attachments is once again on the rise. In that conventional attack vector, malware authors usually conceal their activities by using a filename similar to a common system process.

There is not necessarily any connection between a file's name and its contents: anyone who has tried to conceal sensitive information by giving it a dull name like "taxes", or changed the extension of a file attachment to circumvent email rules, understands this principle. Malware authors understand it too, and will often name malware to resemble common system processes. For instance, "iexplore.exe" is Internet Explorer, but "iexplorer.exe", with an extra "r", could be anything. It's easy even for professionals to overlook this small difference.

The opposite issue – known .exe files running in unusual places – is simple to address using SQL set and string functions.

[Figure: example SQL for finding known executables running in unusual locations]
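
The same check is easy to express outside the database too. A rough Python sketch, with an illustrative whitelist of expected install locations:

```python
# Expected install locations for well-known executables (illustrative).
EXPECTED = {
    "svchost.exe":  [r"c:\windows\system32"],
    "iexplore.exe": [r"c:\program files\internet explorer",
                     r"c:\program files (x86)\internet explorer"],
}

observed = [
    r"c:\windows\system32\svchost.exe",
    r"c:\users\jsmith\appdata\local\temp\svchost.exe",  # suspicious location
]

for path in observed:
    directory, _, name = path.lower().rpartition("\\")
    if name in EXPECTED and directory not in EXPECTED[name]:
        print(f"ALERT: {name} running from unexpected location: {directory}")
```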

What about the other case: finding near matches to an executable name? Most people begin their search for near string matches by sorting the data and visually scanning for discrepancies. That works well enough for a small data set, maybe even a single system. Discovering these patterns at scale, however, requires an algorithmic approach. One established technique for "fuzzy matching" is edit distance.
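
For readers who would rather see the idea in code than in SQL, here is a generic textbook implementation of (Levenshtein) edit distance in Python, along with the length-normalized variant behind the 0.2 threshold discussed below – a sketch, not Ziften's Vertica function:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    # prev holds distances between a[:i-1] and every prefix of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def normalized_edit_distance(a: str, b: str) -> float:
    """Scale by the longer string so scores are comparable across lengths."""
    return edit_distance(a, b) / max(len(a), len(b), 1)

print(edit_distance("iexplorer.exe", "iexplore.exe"))                   # 1
print(round(normalized_edit_distance("cvshost.exe", "svchost.exe"), 2))  # ~0.18
```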

What's the best way to compute edit distance? For Ziften, our technology stack includes HP Vertica, which makes this task easy. The internet is full of data scientists and data engineers singing Vertica's praises, so it will suffice to point out that Vertica makes it easy to build custom functions that take full advantage of its power – from C++ power tools to statistical modeling scalpels in R and Java.

This Git repo is maintained by Vertica enthusiasts working in industry. It's not an official offering, but the Vertica team is certainly aware of it, and moreover is thinking every day about ways to make Vertica better for data scientists – a great space to watch. Most importantly, it contains a function to compute edit distance! There are also some other natural language processing tools here, like word stemmers and tokenizers.

Using edit distance on the top executable paths, we can rapidly find the closest match to each of our top hits. This is an interesting dataset: we can sort by distance to find the closest matches across the entire data set, or sort by frequency of the top path to see the nearest match to our most commonly used processes. This data can also surface on contextual "report card" pages to show, e.g., the top five closest strings for a given path. Below is a toy example to give a sense of the use, based on real data ZiftenLabs observed in a customer environment.

[Figure: toy example of nearest-match results for top executable paths]

Setting a threshold of 0.2 seems to find good results in our experience, but the point is that these can be adapted to individual use cases. Did we find any malware? We notice that "teamviewer_.exe" (should be simply "teamviewer.exe"), "iexplorer.exe" (should be "iexplore.exe"), and "cvshost.exe" (should be "svchost.exe", unless perhaps you work for CVS pharmacy…) all look odd. Since we're already in our database, it's also trivial to pull the associated MD5 hashes, Ziften suspicion scores, and other attributes for a deeper dive.

[Figure: suspicious near-match executables with associated attributes]

In this particular real-life environment, it turned out that teamviewer_.exe and iexplorer.exe were portable applications, not known malware. We helped the customer investigate the user and system where we observed the portable applications, since the use of portable apps on a USB drive could be evidence of mischievous activity. The more troubling find was cvshost.exe. Ziften's intelligence feeds indicate that this is a suspect file, and searching for the MD5 hash of this file on VirusTotal confirms the Ziften data: it is a potentially serious Trojan that may be a component of a botnet, or doing something even more harmful. Once the malware was found, however, it was simple to resolve the problem – and make sure it stays resolved – using Ziften's ability to kill and persistently block processes by MD5 hash.

Even as we develop sophisticated predictive analytics to detect malicious patterns, it is important that we continue to improve our ability to hunt for known patterns and old tricks. Just because new threats emerge does not mean the old ones go away!

If you enjoyed this post, watch this space for Part 2 of the series, where we apply this approach to hostnames to detect malware droppers and other malicious sites.

Increasing Numbers Of Connected Devices Will Present A Number Of Endpoint Challenges – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


It wasn't long ago that everyone knew exactly what you meant when you talked about an endpoint. If somebody wanted to sell you an endpoint security solution, you knew exactly what devices that software was going to protect. But when I hear someone casually mention endpoints today, The Princess Bride's Inigo Montoya comes to mind: "You keep using that word. I do not think it means what you think it means." Today an endpoint could be practically any type of device.

In fact, endpoints are so varied today that people have resorted to calling them "things." According to Gartner, at the end of 2016 there were over 6 billion "things" connected to the internet. The consulting firm forecasts that this number will grow to 21 billion by the year 2020. Business uses of these things will be both generic (e.g. connected light bulbs and HVAC systems) and industry-specific (e.g. oil rig safety monitoring). For the IT and security teams responsible for connecting and protecting endpoints, this is only half of the new challenge, however. The adoption of virtualization technology has redefined what an endpoint is, even in environments where these teams have traditionally operated.

The last decade has seen a massive change in the way end users access information. Physical devices continue to become more mobile, with many information workers now doing most of their computing and communication on laptops and mobile phones. More significantly, everyone is becoming an information worker. Today, better instrumentation and monitoring have enabled levels of data collection and analysis that can make the insertion of information technology into practically any job worthwhile.

At the same time, more traditional IT assets, especially servers, are being virtualized to remove some of the restrictions of having those assets tied to physical devices.

Together, these two trends will impact security teams in essential ways. The totality of "endpoints" will consist of billions of long-lived and insecure IoT endpoints, along with billions of virtual endpoint instances that will be scaled up and down, and migrated to different physical locations, as needed.

Organizations will have very different concerns with these two general kinds of endpoints. Over their lifetimes, IoT devices will need to be protected from a host of threats, some of which have yet to be dreamed up. Monitoring and protecting these devices will require advanced detection capabilities. On the plus side, it will be possible to retain distinct log data to enable forensic investigation.

Virtual endpoints, on the other hand, present their own important concerns. Their ability to move physical location makes it far harder to guarantee that the right security policies are always attached to the endpoint. And the practice of re-imaging virtual endpoints can make forensic investigation difficult, as essential data is usually lost when a new image is applied.

So no matter what words or phrases are used to describe your endpoints – endpoint, system, client device, user device, mobile device, server, virtual machine, container, cloud workload, IoT device, and so on – it is essential to understand precisely what someone means when they use the term endpoint.