Part 2 Of Using Edit Distance For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver, CEO, Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of single-character edits needed to turn one text string into another). Now let’s take a look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.
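To make the metric concrete, here is the sort of one-liner you can try, using the Vertica editDistance function named at the end of this post; the domain pairs are invented examples, not observed data:

SELECT editDistance('ziften.com', 'zifden.com');  -- 1: a single substitution
SELECT editDistance('example.com', 'zqxwv.com');  -- much larger: unrelated names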

Background

What are bad actors doing with malicious domains? It might be as simple as using a close misspelling of a common domain name to trick careless users into viewing ads or picking up adware. Legitimate websites are gradually catching on to this technique, often called typosquatting.

Other malicious domain names are the product of domain generation algorithms, which can be used for all sorts of dubious purposes, like evading countermeasures that block known compromised sites, or overwhelming domain servers in a distributed denial-of-service attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here is how. First, we’ll exclude common domains, since these are usually safe. Just as important, a list of popular domains provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and ignore subdomains (e.g. ziften.com, not www.ziften.com).
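A minimal cleaning sketch along these lines, with invented table and column names (domains_seen holds the raw observations, quantcast_top the popular-domain list); the two-label regex is a simplification that mishandles multi-part TLDs such as .co.uk:

-- Reduce each observed name to its registered domain and drop
-- anything already on the popular list.
SELECT DISTINCT LOWER(REGEXP_SUBSTR(domain, '[^.]+\.[^.]+$')) AS base_domain
FROM domains_seen
WHERE LOWER(REGEXP_SUBSTR(domain, '[^.]+\.[^.]+$'))
      NOT IN (SELECT domain FROM quantcast_top);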

After data cleaning, we compare each candidate domain (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, but now nearly anything). The basic task is to find the nearest neighbor in terms of edit distance. Domains that are one edit away from their nearest neighbor are easy to flag as likely typos. Domains that are far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here) are anomalies in edit distance space, and worth a look as well.

The Results

Let’s take a look at how these results appear in the real world. Use caution when browsing to these domains, since they may contain malicious content!

Here are a few potential typos. Typosquatters target well-known domains, since those give more chances that somebody will visit. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like “wikipedal”.

[Table: candidate typo domains, each one edit away from a popular domain]

Here are some strange-looking domains that are far from their nearest neighbors.

[Table: anomalous domains far from their nearest neighbors in edit distance]

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the neighbor, and an edit distance of 1 from the neighbor, signaling a risk of typo shenanigans. Other features that would pair well with these include lexical features like word and n-gram distributions, entropy, and the length of the string – and network features like the total count of failed DNS requests.
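As a sketch of how those three features could be derived, assuming the nearest-neighbor results from the query in the next section have been saved to a table nn(domain, neighbor, neighbor_rank, dist) – all invented names:

SELECT domain,
       neighbor_rank,                        -- feature 1: rank of the nearest neighbor
       dist::FLOAT / GREATEST(LENGTH(domain), LENGTH(neighbor))
           AS norm_dist,                     -- feature 2: normalized edit distance (see Part 1)
       CASE WHEN dist = 1 THEN 1 ELSE 0 END
           AS is_typo_candidate,             -- feature 3: one edit from a popular name
       LENGTH(domain) AS name_length         -- bonus lexical feature
FROM nn;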

Simplified Code to Play With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should run on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

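The original query was lost with the screenshot, so what follows is a reconstruction sketch of the nearest-neighbor search described above, rather than Ziften’s exact code; candidates (observed domains, with tld extracted) and popular (the baseline list, with site_rank) are invented table names:

-- For each observed domain, find the closest popular domain in the
-- same top-level domain, measured by edit distance.
SELECT domain, neighbor, neighbor_rank, dist
FROM (
    SELECT c.domain,
           p.domain    AS neighbor,
           p.site_rank AS neighbor_rank,
           editDistance(c.domain, p.domain) AS dist,
           ROW_NUMBER() OVER (PARTITION BY c.domain
                              ORDER BY editDistance(c.domain, p.domain)) AS rn
    FROM candidates c
    JOIN popular p
      ON c.tld = p.tld               -- compare only within the same TLD
     AND c.domain <> p.domain
) nearest
WHERE rn = 1;                        -- keep each candidate's nearest neighbor
-- Rows with dist = 1 are typo-squatting candidates; unusually large
-- (normalized) dist values flag anomalous names.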

A Poorly Managed Environment Cannot Be Secure, And The Reverse Is True As Well – Charles Leaver

Written By Charles Leaver, Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way it can be fully secure. And you can’t effectively manage those complex business systems unless you are confident that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to start. Should you begin with security? Or should you begin with systems management? That is the wrong way to frame it. Think of it instead like Reese’s Peanut Butter Cups: it’s not chocolate first, and it’s not peanut butter first. Rather, the two are blended together – and treated as a single delicious treat.

Many companies – I would argue most organizations – are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO’s team and the CISO’s team don’t know each other, talk with each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team’s radar.

That’s not good, because both the IT and security teams are forced to make assumptions. The IT team assumes that everything is secure unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and applications are up to date, that patches have been applied, and so on.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be valid.

And again: you can’t have a secure environment unless that environment is properly managed – and you can’t manage that environment unless it’s secure. Put another way: an environment that is not secure makes anything you do on the IT side suspect and unreliable, and means you can’t know whether the information you are seeing is accurate or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple, but it can be hard: make sure there is an umbrella covering both the IT and security teams. Both IT and security report to the same person or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let’s say it’s the CFO.

If the business does not have a secure environment, and there’s a breach, the value of the brand and the company may be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren’t well managed, the company can’t work effectively, and its value drops. As we’ve discussed: if it’s not well managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of business assets, and that means making sure IT and security talk to each other, understand each other’s priorities, and, wherever possible, see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams exactly what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our business’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can’t operate at peak efficiency, and with full fiduciary responsibility, otherwise.

Continuous Endpoint Visibility Is Vital In This Work From Home Climate – Charles Leaver

Written By Roark Pollock And Presented By Charles Leaver, Ziften CEO

 

A study recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more employees working outside conventional offices, and an increasing number of them doing so for more days of the week. And, of course, the number of connected devices the average employee uses has jumped as well, which reinforces both the convenience of and the desire for working away from the office.

This mobility certainly makes for happier employees – and, it is hoped, more productive ones – but the challenges these trends pose for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into device, application, network connection, and user activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating: on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even harder for IT and security teams to restrict what used to be considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to comprehensively track user, device, application, and network activity, spot anomalies and inappropriate actions, and apply the appropriate response or fix, no matter whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based applications and assets, and keep backup network-attached storage (NAS) or USB drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity, which no longer necessarily terminates on the organization’s network. Offline activity is the starkest example of the need for continuous endpoint monitoring: network controls and network monitoring are of negligible use when a device is operating offline. Deploying a suitable endpoint agent is vital to ensure the capture of all important system and security data.

As an example of the kinds of offline activity that can be detected, a customer was recently able to track, flag, and report unusual behavior on a corporate laptop: a high-level executive transferred substantial amounts of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent gathered this behavioral data during the offline period, the customer was able to see the unusual action and follow up appropriately. Continuous monitoring of the device, applications, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are offline? If so, how do you achieve it?

Machine Learning Technology Has Promise, But Be Aware Of The Likely Consequences – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

If you are a student of history, you will have seen numerous examples of serious unintended consequences when new technologies are introduced. It often surprises people that new technologies can serve dubious purposes in addition to the positive purposes for which they were brought to market, but it happens all the time.

For example, train robbers using dynamite (“Think ya used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common simply because the legitimate use of SSL has made the technique more effective.

Because new technology is so often appropriated by bad actors, we have no reason to believe this won’t be true of the new generation of machine learning tools that have reached the market.

To what extent will these tools be misused? There are likely a couple of ways in which attackers could use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products, in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers be more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of “poisoning” the machine-learning model being built from that traffic. The attacker’s goal would be to trick the defender’s machine learning tool into misclassifying traffic, or to generate such a high rate of false positives that the defenders dial back the fidelity of the alerts.

Machine learning will also likely be used as an offensive tool by attackers. For instance, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack is particularly troubling given how effective spear phishing already is. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a silver bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection products, while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further intensifying the need for automated incident response capabilities.