Detection Of And Response To WannaCry Through Ziften And Splunk – Charles Leaver

Written by Joel Ebrahami and presented by Charles Leaver

 

WannaCry has attracted a great deal of media attention. It may not have the massive infection rates of older worms, but in today's security world the number of systems it was able to infect in a single day was still remarkable. The objective of this blog is NOT to provide an in-depth analysis of the threat, but rather to look at how the exploit behaves on a technical level with Ziften's Zenith platform and the integration we have with our technology partner Splunk.

WannaCry Visibility in Ziften Zenith

My first step was to reach out to the Ziften Labs threat research team to see what details they could provide about WannaCry. Josh Harriman, VP of Cyber Security Intelligence, heads up our research team and informed me that they had samples of WannaCry running in our 'Red Lab' to study the behavior of the threat and perform further analysis. Josh sent me the details of what he found when analyzing the WannaCry samples in the Ziften Zenith console, and I present those details here.

The Red Lab has systems covering all the most popular operating systems, with various services and configurations. There were already systems in the lab that were deliberately vulnerable to the WannaCry exploit. The global threat intelligence feeds used in the Zenith platform are updated in real time, and had no trouble detecting the virus in our lab environment (see Figure 1).

wannasplunk-figure1

Two lab systems were identified running the malicious WannaCry sample. While it is great to see our global threat intelligence feeds updated so quickly and identifying the ransomware samples, we also spotted other behaviors that would have identified the ransomware threat even if there had not been a threat signature.

Zenith agents collect a vast amount of data about what's happening on each host. From this visibility data, we build non-signature-based detection methods that look for typically malicious or anomalous behaviors. In Figure 2 below, we show the behavioral detection of the WannaCry infection.

wannasplunk-figure2

Investigating the Breadth of WannaCry Infections

Once it is detected, whether through signature or behavioral methods, it is very simple to see which other systems have also been infected or are exhibiting similar behaviors.

wannasplunk-figure3

Detecting WannaCry with Ziften and Splunk

After reviewing this information, I decided to run the WannaCry sample in my own environment on a vulnerable system. I had one vulnerable system running the Zenith agent, and in this example my Zenith server was already configured to integrate with Splunk. This allowed me to examine the same data inside Splunk. Let me be clear about the integration we have with Splunk.

We have two Splunk apps for Zenith. The first is our technology add-on (TA): its function is to ingest and index ALL the raw data from the Zenith server that the Ziften agents generate. As this data comes in, it is mapped into Splunk's Common Information Model (CIM) so that it can be normalized and easily searched, as well as used by other apps such as the Splunk App for Enterprise Security (Splunk ES). The Ziften TA also includes Adaptive Response capabilities for taking actions from events that are rendered in Splunk ES. The second app is a dashboard for displaying our data with all the graphs and charts available in Splunk, which makes the data much easier to digest.

Since I already had the details of how the WannaCry exploit behaved in our research lab, I had the advantage of knowing exactly what to look for in Splunk using the Zenith data. In this case I was able to see a signature alert by using the VirusTotal integration with our Splunk app (see Figure 4).

wannasplunk-figure4

Threat Hunting for WannaCry Ransomware in Ziften and Splunk

But I wanted to put on my "incident responder hat" and investigate this in Splunk using the Zenith agent data. My first idea was to search my lab for systems running SMB, since that was the initial infection vector for the WannaCry attack. Zenith data is encapsulated in various message types, and I knew that I would most likely find SMB data in the running-process message type; however, I used Splunk's * wildcard with the Zenith sourcetype so I could search across all Zenith data. The resulting search looked like 'sourcetype=ziften:zenith:* smb'. As I expected, I got one result back for the system that was running SMB (see Figure 5).

wannasplunk-figure5
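
For readers who would rather script this kind of hunt than use the search panel, here is a minimal sketch using Splunk's Python SDK (splunklib). The host, credentials, and result limit below are illustrative assumptions, not details from our lab setup; adjust them to your own Zenith/Splunk deployment.

```python
# A minimal sketch: running the same SMB hunt via Splunk's Python SDK.
# Host and credentials below are illustrative assumptions.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.local",  # hypothetical Splunk server
    port=8089,
    username="admin",
    password="changeme",
)

# Oneshot search across all Zenith data for the SMB keyword
rr = service.jobs.oneshot('search sourcetype=ziften:zenith:* smb | head 100')

for event in results.ResultsReader(rr):
    if isinstance(event, dict):  # skip informational messages
        print(event.get("host"), event.get("_raw"))
```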

My next step was to take the same behavioral search we have in Zenith that looks for typical CryptoWare and see if I could get results back. Again, this was very simple to do from the Splunk search panel. I used the same wildcard sourcetype as before so I could search across all Zenith data, and this time I added the 'delete shadows' string to see if this behavior had ever been issued at the command line. My search looked like 'sourcetype=ziften:zenith:* "delete shadows"'. This search returned results, shown in Figure 6, that showed me in detail the process that was created and the full command line that was executed.

wannasplunk-figure6
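
The 'delete shadows' string is worth a brief aside: WannaCry, like much ransomware, deletes Volume Shadow Copies (e.g. via vssadmin) so victims cannot restore their files. A rough sketch of that behavioral check, applied to a list of observed command lines, might look like the following; the sample data is made up for illustration.

```python
# Rough sketch: flag processes whose command line suggests shadow-copy
# deletion, a common ransomware behavior
# (e.g. "vssadmin delete shadows /all /quiet").
import re

# Made-up sample of observed process command lines
observed = [
    r"C:\Windows\system32\svchost.exe -k netsvcs",
    r"cmd.exe /c vssadmin delete shadows /all /quiet & wmic shadowcopy delete",
    r"C:\Program Files\backup\backup.exe --full",
]

SHADOW_DELETE = re.compile(
    r"(vssadmin(\.exe)?\s+delete\s+shadows|wmic\s+shadowcopy\s+delete)",
    re.IGNORECASE,
)

for cmdline in observed:
    if SHADOW_DELETE.search(cmdline):
        print("ALERT: possible shadow-copy deletion:", cmdline)
```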

Having all this detail within Splunk made it extremely simple to identify which systems were vulnerable and which systems had already been compromised.

WannaCry Remediation Using Splunk and Ziften

One of the next steps in any breach is to remediate the compromise as fast as possible to prevent further damage, and to take action to keep other systems from being compromised. Ziften is one of Splunk's founding Adaptive Response members, and there are a variety of actions (see Figure 7) that can be taken through Splunk's Adaptive Response to mitigate these threats through extensions on Zenith.

wannasplunk-figure7

In the case of WannaCry we could have used almost any of the Adaptive Response actions currently available through Zenith. When trying to reduce the impact and prevent WannaCry in the first place, one action that can be taken is to shut down SMB on any systems running the Zenith agent where the version of SMB running is known to be vulnerable. With a single action, Splunk can pass to Zenith the agent IDs or IP addresses of all the vulnerable systems on which we want to stop the SMB service, preventing the exploit from ever taking place and allowing the IT operations team to get those systems patched before starting the SMB service again.
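
As a rough illustration of what such an action boils down to on a single Windows host (Zenith's actual implementation is not shown here), stopping and disabling the SMB server service can be scripted with the built-in sc.exe utility. This sketch assumes it runs on Windows with administrative rights:

```python
# Illustrative sketch only: stop and disable the Windows SMB server
# service ("LanmanServer") on one host, roughly what an automated
# response action might do until the system is patched.
# Windows only; requires administrative rights.
import subprocess

def stop_smb_service() -> None:
    # Stop the running SMB server service
    subprocess.run(["sc", "stop", "LanmanServer"], check=False)
    # Keep it from restarting on reboot until patching is complete
    subprocess.run(["sc", "config", "LanmanServer", "start=", "disabled"],
                   check=False)

if __name__ == "__main__":
    stop_smb_service()
```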

Preventing Ransomware from Spreading or Exfiltrating Data

Now, in the case where we have already been compromised, it is critical to prevent further exploitation and stop the possible exfiltration of sensitive data or company intellectual property. There are really three actions we could take. The first two are similar: we could kill the malicious process by either its PID (process ID) or its hash. This works, but since malware will often just respawn under a new process, or be polymorphic and have a different hash, we can use an action that is guaranteed to prevent any incoming or outgoing traffic from the infected systems: network quarantine. This is another example of an Adaptive Response action available through Ziften's integration with Splunk ES.
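
To make the kill-by-hash idea concrete, here is a hedged sketch using the third-party psutil library to terminate any process whose executable matches a known-bad SHA-256. The hash value is a placeholder for illustration, not a real WannaCry indicator:

```python
# Sketch: kill any process whose executable matches a known-bad SHA-256.
# Uses the third-party psutil library; the hash below is a placeholder,
# not a real WannaCry indicator.
import hashlib
import psutil

KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder indicator

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for proc in psutil.process_iter(["pid", "exe"]):
    exe = proc.info.get("exe")
    if not exe:
        continue
    try:
        if sha256_of(exe) in KNOWN_BAD_SHA256:
            print(f"Killing PID {proc.info['pid']} ({exe})")
            proc.kill()
    except (OSError, psutil.Error):
        continue  # process exited, or file unreadable
```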

WannaCry is already subsiding, but hopefully this technical blog post shows the value of the Ziften and Splunk integration in dealing with ransomware threats against the endpoint.

Learn From This HVAC Breach And Become Security Paranoid – Charles Leaver

Written By Charles Leaver Ziften CEO

 

Whatever you do, do not underestimate cybercriminals. Even the most paranoid "normal" person wouldn't worry about the source of a data breach being stolen credentials from a company's heating, ventilation and air conditioning (HVAC) contractor. Yet that's what happened at Target in November 2013. Hackers broke into Target's network using credentials given to the contractor, most likely so the contractor could monitor the heating, ventilation and air conditioning systems. (For a good analysis, see Krebs on Security.) The hackers were then able to leverage the breach to spread malware into point-of-sale (POS) systems, and then offload payment card information.

A number of ludicrous errors were made here. Why was the HVAC contractor given access to the enterprise network? Why wasn't the HVAC system on a separate, totally isolated network? Why wasn't the POS system on a separate network? And so on.

The point here is that in a truly complex network, there are uncounted potential vulnerabilities that could be exploited through negligence, unpatched software, default passwords, social engineering, spear phishing, or insider actions. You get the idea.

Whose job is it to find and fix those vulnerabilities? The security team. The CISO's team. Security specialists aren't "normal" people. They are paid to be paranoid. Make no mistake: no matter the particular technical vulnerability that was exploited, this was a CISO failure to anticipate the worst and prepare accordingly.

I can't speak to the Target HVAC breach specifically, but there is one overwhelming reason why breaches like this happen: a lack of financial priority for cybersecurity. I'm not sure how often businesses fail to fund security simply because they're cheap and would rather do a share buyback. Or maybe the CISO is too timid to ask for what's needed, or has been told that she gets a 5% increase, no matter the requirement. Maybe the CEO is worried that disclosures of big allocations for security will alarm investors. Maybe the CEO is simply naïve enough to believe that the business won't be targeted by hackers. The problem: every company is targeted by cybercriminals.

There are big battles over budgets. The IT department wants to fund upgrades and improvements, and attack the backlog of demand for new and improved applications. On their side, you have operational managers who see IT projects as directly helping the bottom line. They are optimists, and they have plenty of CEO attention.

By contrast, the security department frequently has to fight for crumbs. It is viewed as a cost center. Security reduces business risk in a way that matters to the CFO, the CRO (chief risk officer, if there is one), the general counsel, and other pessimists who care about compliance and reputations. These green-eyeshade individuals consider the worst-case scenarios. That doesn't win friends, and budget dollars are allocated reluctantly at many companies (until the company gets burned).

Call it naivety, call it entrenched hostility, but it's a genuine challenge. You can't have IT given fantastic tools to drive the enterprise forward while security is starved and making do with second best.

Worse, you don’t want to end up in situations where the rightfully paranoid security groups are working with tools that don’t fit together well with their IT equivalent’s tools.

If IT and security tools don't mesh well, IT may not be able to act quickly in response to the dangerous situations that the security teams are monitoring or worried about – things like reports from threat intelligence, discoveries of unpatched vulnerabilities, nasty zero-day exploits, or user behavior that suggests risky or suspicious activity.

One idea: find tools for both departments that are designed with both IT and security in mind, right from the start, instead of IT tools that are patched to provide some minimal security capability. One budget item (take it out of IT, they have more money), but two workflows: one designed for the IT professional, one for the CISO team. Everybody wins – and the next time somebody wants to give the HVAC contractor access to the network, maybe security will notice what IT is doing, and head that disaster off at the pass.

10 Tips For Evaluating Next Generation Endpoint Security Products – Charles Leaver

Written By Roark Pollock And Presented By Chuck Leaver CEO Ziften

 

The Endpoint Security Buyer's Guide

The most common entry point for an advanced persistent attack or a breach is the endpoint. And endpoints are certainly the entry point for most ransomware and social engineering attacks. Using endpoint security products has long been considered a best practice for securing endpoints. Unfortunately, those tools aren't keeping up with today's threat environment. Advanced threats, and truth be told, even less sophisticated threats, are often more than adequate for fooling the average employee into clicking something they shouldn't. So organizations are looking at and evaluating a plethora of next-gen endpoint security (NGES) solutions.

With this in mind, here are ten tips to consider if you're looking at NGES solutions.

Tip 1: Begin with the end in mind

Don’t let the tail wag the dog. A danger reduction technique should always begin by examining issues and then trying to find possible fixes for those problems. But all too often we get captivated with a “shiny” new innovation (e.g., the latest silver bullet) and we end up trying to squeeze that technology into our environments without totally examining if it resolves an understood and recognized issue. So exactly what problems are you trying to solve?

– Is your existing endpoint protection tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements mandating continuous endpoint monitoring?
– Are you trying to reduce the time and cost of incident response?

Define the problems to address, and then you'll have a measuring stick for success.

Tip 2: Understand your audience. Who will be using the tool?

Understanding the problem that needs to be solved is an essential first step toward understanding who owns the problem and who would (operationally) own the solution. Every functional team has its strengths, weaknesses, preferences and prejudices. Define who will need to use the solution, and who else could benefit from its use. It could be:

– Security operations,
– The IT team,
– The governance, risk & compliance (GRC) team,
– The help desk or end-user support team,
– Or even the server team or a cloud operations team.

Tip 3: Know what you mean by endpoint

Another often neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than before.

Sure, we want to protect desktops and laptops, but what about mobile devices (e.g. smartphones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, of course, come in multiple flavors, so platform support needs to be addressed as well (e.g. Windows only, Mac OSX, Linux, etc.). Also consider support for endpoints even when they are working remotely, or are working offline. What are your requirements, and what are "nice to haves"?

Tip 4: Start with a foundation of continuous visibility

Continuous visibility is a foundational capability for addressing a host of security and operational management concerns on the endpoint. The old adage holds true: you can't manage what you can't see or measure. Further, you can't secure what you can't effectively manage. So it should begin with continuous, all-the-time visibility.

Visibility is foundational to Security and Management

And consider what visibility means. Enterprises need a single source of truth that, at a minimum, monitors, stores, and analyzes the following (a rough sketch of such a record follows the list):

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and usage patterns
– Binary data – attributes of installed binaries
– Process data – monitoring information and statistics
– Network connectivity data – statistics and internal behavior of network activity on the host
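
As a rough sketch of what a normalized record in such a single source of truth might look like, the snippet below models one event covering these six categories. The field names are our own illustrative assumptions, not Ziften's actual schema.

```python
# Illustrative only: one way to model a normalized endpoint telemetry
# record covering the six data categories above. Field names are
# assumptions, not Ziften's actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EndpointEvent:
    timestamp: float                      # epoch seconds
    host_id: str                          # system data: which device
    event_type: str                       # "process", "network", "user", ...
    user: Optional[str] = None            # user data: who was logged in
    application: Optional[str] = None     # application data: app name
    binary_sha256: Optional[str] = None   # binary data: file hash
    process_id: Optional[int] = None      # process data
    remote_addr: Optional[str] = None     # network connectivity data
    details: dict = field(default_factory=dict)  # everything else

event = EndpointEvent(
    timestamp=1494950400.0,
    host_id="LAPTOP-042",
    event_type="network",
    user="jdoe",
    application="chrome.exe",
    remote_addr="203.0.113.10:443",
)
print(event)
```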

Tip 5: Know where to store your visibility data

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or in some combination of both. There are advantages to each. The appropriate approach varies, and is typically driven by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.

Know if your company requires on-premises data retention

Know whether your company allows cloud-based data retention and analysis, or if you are constrained to on-premises solutions only. At Ziften, 20-30% of our customers store data on premises only, for regulatory reasons. However, if the cloud is legally an option, it can provide cost advantages (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires understanding the assets on the network. We find that as much as 30% of the endpoints we initially discover on clients' networks are unmanaged or unknown devices. This obviously creates a big blind spot. Reducing this blind spot is a critical best practice; in fact, SANS Critical Security Controls 1 and 2 are to carry out an inventory of authorized and unauthorized devices and software attached to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and usage, and perform ongoing continuous discovery.

Tip 7: Know where you are exposed

After determining which devices you need to watch, you need to make certain they are running up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends enabling continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it's even better if they can help enforce that posture.

Also look for solutions that deliver continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a huge number of security problems and eliminates a lot of back-end work for the IT and security operations teams.

Tip 8: Cultivate continuous detection and response

A crucial end goal for many NGES solutions is supporting continuous device state monitoring, to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that offer all-the-time, continuous threat detection that leverages a network of global threat intelligence and multiple detection techniques (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize identified threats and/or problems and provide workflow with contextual system, application, user, and network data. This can help automate the proper response or next steps. Finally, understand all the response actions each solution supports – and look for a solution that provides remote access that is as close as possible to "sitting at the endpoint keyboard".

Tip 9: Consider forensic data gathering

In addition to incident response, companies need to be prepared to address the requirement for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be crucial to any investigation. So look for solutions that maintain historical data that allows:

– Tracing lateral threat movement through the network over time,
– Identifying data exfiltration attempts,
– Identifying the source of breaches, and
– Identifying appropriate remediation actions.

Tip 10: Tear down the walls

IBM’s security team, which supports an outstanding community of security partners, approximates that the typical enterprise has 135 security tools in place and is dealing with 40 security suppliers. IBM customers definitely tend to be large enterprise however it’s a common refrain (problem) from organizations of all sizes that security services don’t integrate well enough.

And the complaint is not simply that security solutions don't play well with other security products, but also that they don't always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations have to consider these (as well as other) integration points, along with the vendor's willingness to share raw data, not just metadata, through an API.

Bonus Tip 11: Plan for change

Here’s a bonus pointer. Presume that you’ll want to tailor that shiny brand-new NGES service shortly after you get it. No service will fulfill all your requirements right out of the box, in default setups. Discover how the solution supports:

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this, then that) functionality.

You understand you’ll want new paint or new wheels on that NGES solution quickly – so ensure it will support your future customization tasks easy enough.

Look for support for simple customizations in your NGES solution

Follow most of these tips and you'll undoubtedly avoid many of the common pitfalls that plague others in their evaluations of NGES solutions.

If You Want The Best End-To-End Protection For Your Organization, Choose Ziften – Charles Leaver

Written By Ziften CEO Charles Leaver

 

Do you want to manage and protect your endpoints, your data center, the cloud and your network? If so, Ziften can provide the ideal solution for you. We gather data, and let you correlate and use that data to make decisions – and remain in control of your enterprise.

The information that we obtain from everything on the network can make a real-world difference. Consider the inference that the 2016 U.S. elections were influenced by hackers in another country. If that's the case, hackers can do practically anything – and the idea that we'll accept that as the status quo is simply ridiculous.

At Ziften, we believe the way to combat those threats is with greater visibility than you've ever had. That visibility spans the entire enterprise, and connects all the major players together. On the back end, that's real and virtual servers in the data center and the cloud. That's infrastructure and applications and containers. On the other side, it's laptops and desktops, no matter where and how they are connected.

End-to-end – that’s the believing behind all that we do at Ziften. From endpoint to cloud, all the way from an internet browser to a DNS server. We connect all that together, with all the other parts to offer your business a complete service.

We also capture and store real-time data for up to one year, to let you know what's happening on the network right now and to provide historical trend analysis and warnings if something changes.

That lets you discover IT faults and security issues immediately, and also lets you hunt down the root cause by looking back in time to uncover where a fault or breach may have first occurred. Active forensics are an absolute necessity in this business: after all, the place where a breach or fault tripped an alarm may not be where the problem began – or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Discovered. Off-network penetration? Detected. Out-of-date firmware? Unpatched applications? All found. We'll not only help you find the problem, we'll help you fix it, and make sure it stays fixed.

End-to-end security and IT management. Real-time and historical active forensics. In the cloud, offline and on-site. Incident detection, containment and response. We've got it all covered. That's what makes Ziften better.

Our Enhanced NetFlow Gives You Close Monitoring Of Cloud Activity – Charles Leaver

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver

 

According to Gartner, the public cloud services market exceeded $208 billion in 2016, representing roughly 17% growth year over year. Not bad, considering the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice among cloud customers of contracting services with multiple public cloud providers.

According to Gartner, "most organizations are already using a combination of cloud services from different cloud providers". While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in tracking activity across an organization's increasingly fragmented IT landscape.

While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations need to understand and address the visibility problems involved in moving to the cloud regardless of which cloud provider or providers they work with.

Unfortunately, the ability to track application and user activity, and networking communications, from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, organizations must answer the question: "Which users, devices, and applications are communicating with each other?" Organizations need visibility across the infrastructure so that they can:

  • Rapidly identify and prioritize problems
  • Speed root cause analysis and identification
  • Lower the mean time to repair problems for end users
  • Quickly identify and eliminate security threats, reducing overall dwell times.

Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing security and management tools.

Businesses that are accustomed to the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.

What has been lacking is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had 20 years or so to become a de facto standard for network visibility. A typical deployment involves the monitoring of traffic and aggregation of flows at network chokepoints, the collection and storage of flow data from multiple collection points, and the analysis of this flow data.

Flows consist of a basic set of source and destination IP addresses plus port and protocol information, generally collected from a switch or router. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
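
To make the idea of a "flow" concrete, here is a small sketch that aggregates packet records into NetFlow-style flow records keyed on the classic 5-tuple; the packet data is made up for illustration:

```python
# Sketch: aggregate packets into NetFlow-style flow records keyed on the
# classic 5-tuple (src IP, dst IP, src port, dst port, protocol).
from collections import defaultdict

# Made-up packet records: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
packets = [
    ("10.0.0.5", "93.184.216.34", 52100, 443, "TCP", 1500),
    ("10.0.0.5", "93.184.216.34", 52100, 443, "TCP", 600),
    ("10.0.0.7", "8.8.8.8", 41000, 53, "UDP", 80),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, sport, dport, proto, nbytes in packets:
    key = (src, dst, sport, dport, proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += nbytes

for key, stats in flows.items():
    print(key, stats)
```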

Most IT staffs, especially networking and some security teams, are very comfortable with the technology.

However, NetFlow was designed to solve what has become a rather limited problem, in the sense that it only collects network information, and does so at a limited number of possible locations.

To make better use of NetFlow, two essential changes are needed.

NetFlow at the Edge: First, we need to broaden the useful deployment scenarios for NetFlow. Instead of only gathering NetFlow at network chokepoints, let's expand flow collection to the edge of the network (cloud, servers and clients). This would considerably expand the big picture that any NetFlow analytics provide.

It would also allow organizations to enhance and take advantage of existing NetFlow analytics tools to eliminate the ever-increasing blind spot of visibility into public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility.

Instead, let’s utilize an extended version of NetFlow and include data on the device, application, user, and binary responsible for each tracked network connection. That would permit us to rapidly associate every network connection back to its source.

In fact, these two changes to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data collection can be consumed and analyzed with existing NetFlow analysis tools. Over and above conventional NetFlow / Internet Protocol Flow Information eXport (IPFIX) networking visibility, ZFlow offers greater visibility through the inclusion of details on the device, application, user and binary for every network connection.
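
To illustrate what "rich, contextual" flow data adds over a plain 5-tuple, here is a sketch of a flow record extended with the device, user, application, and binary context described above. The field names are our own illustration, not ZFlow's actual IPFIX export format.

```python
# Sketch: a 5-tuple flow record extended with endpoint context
# (device, user, application, binary). Field names are illustrative,
# not ZFlow's actual export format.
from dataclasses import dataclass

@dataclass
class ContextFlow:
    # Conventional NetFlow/IPFIX fields
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes_sent: int
    # Endpoint context that ties the connection back to its source
    device_id: str
    user: str
    application: str
    binary_sha256: str

flow = ContextFlow(
    src_ip="10.0.0.5", dst_ip="203.0.113.10",
    src_port=52100, dst_port=443, protocol="TCP", bytes_sent=2100,
    device_id="LAPTOP-042", user="jdoe",
    application="chrome.exe",
    binary_sha256="placeholder-hash",
)
print(f"{flow.user}@{flow.device_id} {flow.application} -> "
      f"{flow.dst_ip}:{flow.dst_port}")
```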

Ultimately, this enables Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.

Part 2 Of Using Edit Distance For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character changes it takes to make two text strings match). Now let's look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.

Background

What are bad actors doing with malicious domains? They might simply be using a close spelling of a common domain name to fool careless users into viewing ads or picking up adware. Operators of legitimate websites are slowly catching on to this technique, often called typosquatting.

Other malicious domains are the result of domain generation algorithms, which can be used to do all kinds of dubious things, like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here we will find out how. First, we'll exclude common domains, since these are usually safe. And a list of common domains provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and avoid subdomains (e.g. ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, but now it can be almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step removed from their nearest neighbor, we can easily identify typo-ed domains. By finding domains far from their neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also spot anomalous domains in the edit distance space.
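
Here is a small sketch of this nearest-neighbor hunt in Python, using a plain dynamic-programming Levenshtein implementation. The candidate and "popular domain" lists are toy data, and the normalization shown (dividing by the longer string's length) is one common choice; Part 1's exact formula may differ.

```python
# Sketch: find each candidate domain's nearest neighbor among popular
# domains by Levenshtein (edit) distance. Toy data for illustration.

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

popular = ["wikipedia", "google", "ziften"]          # baseline list
candidates = ["wikipedal", "gooogle", "xkqzv7h3a"]   # observed in the wild

for cand in candidates:
    dist, nearest = min((edit_distance(cand, p), p) for p in popular)
    norm = dist / max(len(cand), len(nearest))  # normalized distance
    flag = "possible typo" if dist == 1 else ("anomalous" if norm > 0.5 else "")
    print(f"{cand}: nearest={nearest}, distance={dist}, norm={norm:.2f} {flag}")
```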

What were the Results?

Let’s take a look at how these outcomes appear in reality. Use caution when browsing to these domain names considering that they might contain harmful content!

Here are a few potential typos. Typosquatters target well-known domains, since there is a better chance somebody will visit them. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like "wikipedal".

ed2-1

Here are some strange-looking domains that are far from their neighbors.

ed2-2

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine-learning model: rank of nearest neighbor, distance from neighbor, and edit distance 1 from neighbor, indicating a risk of typo shenanigans. Other features that could pair well with these include additional lexical features, like word and n-gram distributions, entropy, and the length of the string – and network features, like the total count of failed DNS requests.
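
A couple of those extra lexical features take only a few lines each. This sketch computes string length and Shannon (character) entropy for a domain name; high entropy is typical of randomly generated strings:

```python
# Sketch: two of the extra lexical features mentioned above,
# string length and Shannon (character) entropy of a domain name.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for domain in ["google", "xkqzv7h3a"]:
    print(domain, "length:", len(domain),
          "entropy:", round(shannon_entropy(domain), 3))
```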

Simplified Code that you can Play Around With

Here is a simplified version of the code to play with! It was built on HP Vertica, but this SQL should run on most modern databases. Note that Vertica's editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

ed2-3

A Poorly Managed Environment Will Not Be Secure, And The Reverse Is Also True – Charles Leaver

Written by Charles Leaver Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way that it can be completely secure. And you can't effectively manage those complex business systems unless there's a good sense that they are secure.

Some might call this a chicken-and-egg situation, where you don't know where to start. Should you begin with security? Or should you begin with system management? That is the wrong approach. Think of this instead like Reese's Peanut Butter Cups: it's not chocolate first. It's not peanut butter first. Rather, both are blended together – and treated as a single delicious treat.

Many organizations, I would argue most organizations, are structured with an IT management department reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don't know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue or an alert for one team flies completely under the other team's radar.

That’s not good, since both the IT and security groups must make assumptions. The IT team thinks that everything is secure, unless somebody tells them otherwise. For example, they presume that devices and applications have actually not been compromised, users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobiles are working properly, operating systems and applications are up to date, patches have actually been applied, etc

Since the CIO and CISO teams aren't talking to each other, don't understand each other's roles and concerns, and aren't using the same tools, those assumptions may not be correct.

And again, you can't have a secure environment unless that environment is properly managed – and you can't manage that environment unless it's secure. To put it another way: an environment that is not secure makes anything you do in the IT team suspect and irrelevant, and means that you can't know whether the information you are seeing is correct or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How do you bridge that gap? It sounds simple, but it can be hard: ensure that there is an umbrella covering both the IT and security teams. Both IT and security should report to the same person or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let's say it's the CFO.

If the business does not have a secure environment, and there's a breach, the value of the brand and the company may be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren't well managed, the company can't work effectively, and the value drops. As we have discussed, if it's not well managed, it can't be secure, and if it's not secure, it can't be well managed.

The fiduciary obligation of senior executives (like the CFO) is to protect the value of business assets, and that means making certain IT and security talk to each other, understand each other's priorities, and, if possible, can see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That’s the thought process that we adopted with the design of our Zenith platform. It’s not a security management tool with IT abilities, and it’s not an IT management tool with security abilities. No, it’s a Peanut Butter Cup, developed similarly around chocolate and peanut butter. To be less confectionery, Zenith is an umbrella that offers IT groups exactly what they require to do their tasks, and provides security groups what they need also – without coverage spaces that might weaken assumptions about the state of business security and IT management.

We need to ensure that our enterprise's IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software and users. We can't run at peak efficiency, and with full fiduciary responsibility, otherwise.

Continuous Visibility Of The Endpoint Is Vital In This Work From Home Climate – Charles Leaver

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO

 

A survey recently completed by Gallup found that 43% of employed US citizens worked remotely for at least some of their work time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more employees working outside conventional offices, and an increasing number of them doing so for more days of the week. And, of course, the number of connected devices that the average employee uses has jumped as well, which helps drive the convenience of, and desire for, working away from the office.

This mobility certainly makes for happier, and it is hoped more productive, employees, but the problems that these trends pose for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into device, application, network connection and user activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential problems and threats.

The mainstreaming of these trends makes it even more difficult for IT and security teams to limit what used to be deemed higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams have to be able to thoroughly track user, device, application, and network activity, spot anomalies and inappropriate actions, and implement the appropriate response or fixes no matter whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based applications and assets, and have backup network-attached storage (NAS) or USB-connected drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the one and only record of remote activity that no longer necessarily terminates in the organization's network. Offline activity presents the most severe example of the need for continuous endpoint monitoring. Clearly, network controls or network monitoring are of negligible use when a device is operating offline. The installation of a suitable endpoint agent is critical to ensure the capture of all important system and security data.

As an example of the kinds of offline activities that can be detected, a customer was recently able to track, flag, and report unusual behavior on a company laptop. A high-level executive transferred substantial amounts of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuous monitoring of the device, applications, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are offline? If so, how do you do it?

Machine Learning Technology Has Promise But Be Aware Of The Likely Consequences – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

If you are a student of history, you will see numerous examples of serious unintended consequences when new technology has been introduced. It often surprises people that new technologies can be put to dubious purposes in addition to the positive purposes for which they are brought to market, but it happens all the time.

For example, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common, simply because the legitimate use of SSL has made this technique more effective.

Because new technology is often appropriated by bad actors, we have no reason to believe this will not be true of the new generation of machine-learning tools that have reached the market.

How will these tools be misused? There are likely a few ways in which attackers could use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers be more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of "poisoning" the machine-learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to create such a high rate of false positives that the defenders would dial back the fidelity of the alerts.

Machine learning will likely also be used as an offensive tool by attackers. For instance, some researchers predict that attackers will use machine learning techniques to sharpen their social engineering attacks (e.g., spear phishing). The automation of the effort required to tailor a social engineering attack is particularly troubling, given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the technique.

Expect breaches of this type that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection products while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further exacerbating the need for automated incident response capabilities.

The Use Of Certain Commands Can Signal Threats – Charles Leaver

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO

 

Repeating a theme when it comes to computer security is never a bad thing. As advanced as some attacks may be, you really need to look for and understand the use of common, readily available tools in your environment. These tools are usually used by your IT staff, would more than likely be whitelisted, and can be missed by security teams mining through all the legitimate applications that "could" be executed on an endpoint.

Once someone has breached your network, which can be done in a variety of ways (another blog for another day), signs of these programs/tools running in your environment should be examined to ensure proper usage.

A few commands/tools and their functions:

Netstat – Details of the current network connections. This can be used to identify other systems within the network.

PowerShell – Built-in Windows command line utility that can perform a host of actions, for example gathering important information about the system, killing processes, and adding or removing files.

WMI – Another powerful built-in Windows utility. It can move files around and gather important system information.

Route Print – Command to view the local routing table.

Net – Adding users/domains/accounts/groups.

RDP (Remote Desktop Protocol) – Used to access systems from a remote location.

AT – Scheduling tasks.

Looking for activity from these tools can take a long time and can often be overwhelming, but it is necessary for working out who might be moving around in your environment. And not just what is happening in real time, but historically too, so you can see the path somebody may have taken through the environment. It's often not "patient zero" that is the target; once attackers get a foothold, they can use these tools and commands to begin their reconnaissance and finally move laterally to a high-value asset. It's that lateral movement that you want to find.

You need to be able to collect the information discussed above, and to have the means to sift through it to discover, alert on, and investigate this data. You can use Windows Events to monitor various changes on a device and then filter that down; a rough sketch of that kind of filtering follows below.
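
As a starting point, here is a rough sketch of matching a watchlist of the tools above against process event records. The sample events are made up; in practice they would come from Windows Event Logs or an endpoint agent.

```python
# Rough sketch: flag process events that match a watchlist of commonly
# abused admin tools. Sample events are made up; in practice they would
# come from Windows Event Logs or an endpoint agent.
WATCHLIST = {"netstat.exe", "powershell.exe", "wmic.exe",
             "route.exe", "net.exe", "mstsc.exe", "at.exe"}

events = [
    {"host": "HR-LAPTOP-07", "user": "jdoe",
     "process": "powershell.exe", "cmdline": "powershell -enc JAB..."},
    {"host": "DEV-PC-12", "user": "build",
     "process": "msbuild.exe", "cmdline": "msbuild app.sln"},
]

for ev in events:
    if ev["process"].lower() in WATCHLIST:
        print(f"REVIEW: {ev['process']} run by {ev['user']} on "
              f"{ev['host']}: {ev['cmdline']}")
```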

Looking at the screenshots below from our Ziften console, you can see a quick difference between what our IT team used to push out changes in the network, versus somebody running a very similar command themselves. This could be much like what you would find when someone did it from a remote location, say via an RDP session.

commands-to-watch01

commands-to-watch02

commands-to-watch03

commands-to-watch04

An interesting side note on these screenshots: in all of the cases, the Process Status is "Terminated". You would not observe this detail during a live investigation, or if you were not continuously collecting the data. But since we are gathering all of the data continuously, you have this historical data to look at. If you were observing the Status as "Running", it might suggest that someone is actually on that system right now.

This only scratches the surface of what you should be collecting and how to evaluate what is right for your network, which of course will be unique from that of others. But it's a good place to start. Malicious actors intent on doing you harm will usually look for the path of least resistance. Why try to create brand new and interesting tools when much of what they need is already there and ready to go?