Petya Variant Flaw Is Real Trouble Unless You Are A Ziften Customer – Charles Leaver

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO

 

Another outbreak, another nightmare for those who were not prepared. While this newest attack is similar to the earlier WannaCry threat, there are some distinctions in this latest malware, which is a variant or new strain of Petya. Called NotPetya by some, this strain causes a great deal of trouble for anyone who encounters it. It may encrypt your data, or render the system completely inoperable. And now the email address you would need to contact to 'possibly' decrypt your files has been taken down, so you are out of luck getting your files back.

A lot of information on the behavior of this threat is publicly available, but I wanted to point out that Ziften customers are protected both from the EternalBlue exploit, which is one mechanism used for its propagation, and, better still, by a vaccine based on a possible flaw, or its own kind of debug check, that prevents the threat from ever executing on your system. It could still spread through the environment, but our protection is already deployed to all existing systems to stop the damage.

Our Ziften extension platform allows our clients to put protection in place against specific vulnerabilities and malicious actions for this threat and others like Petya. Beyond the specific actions taken against this particular variant, we have taken a holistic approach to stopping strains of malware that run various 'checks' against the system before executing.

We can also use our Search capability to look for remnants of the other propagation techniques used by this threat. Reports show WMIC and PsExec being used. We can look for those programs, their command lines, and their usage. Even though they are legitimate processes, their use is often unusual and can be alerted on.
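The kind of hunt described above can be sketched in a few lines. The record shape and field names here are illustrative, not Ziften's actual schema; a real hunt would query the endpoint platform's data store:

```python
# Hunt for lateral-movement tooling (WMIC, PsExec) in process event records.
# Field names are illustrative examples, not a real product schema.
SUSPECT_TOOLS = {"wmic.exe", "psexec.exe", "psexesvc.exe"}

def hunt_lateral_movement(events):
    """Return events whose process name matches known propagation tooling."""
    hits = []
    for ev in events:
        if ev["process"].lower() in SUSPECT_TOOLS:
            hits.append(ev)
    return hits

events = [
    {"host": "ws-014", "process": "WMIC.exe",
     "cmdline": 'wmic /node:"10.0.0.7" process call create "rundll32 ..."'},
    {"host": "ws-022", "process": "chrome.exe", "cmdline": "chrome.exe"},
]
for hit in hunt_lateral_movement(events):
    print(hit["host"], hit["cmdline"])
```

Because WMIC and PsExec are legitimate admin tools, a hit like this is a lead to triage, not an automatic verdict.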

With WannaCry, and now NotPetya, we expect to see a continued surge of these types of attacks. The release of the recent NSA exploits has given enterprising cyber criminals the tools they need to push out their wares. And though ransomware can be a high-value vehicle, more destructive threats could be launched. It has always been 'how' to get the threats to spread (worm-like, or via social engineering) that is most challenging for them.

UK Email Security Breach Highlights Design Insecurities – Charles Leaver

Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver

 

In the online world the sheep get shorn, chumps get chewed, dupes get deceived, and pawns get pwned. We’ve seen another great example of this in the recent attack on the UK Parliament email system.

Rather than admit to an email system that was insecure by design, the official statement read:

Parliament has robust measures in place to protect all of our accounts and systems.

Tell us another one. The one protective measure we did see at work was blame deflection – the Russians did it, that always works – while blaming the victims for their policy violations. While details of the attack are limited, combing multiple sources does help to assemble at least the gross outlines. If these descriptions are reasonably accurate, the UK Parliament email system's failings are atrocious.

What went wrong in this case?

Rely on single factor authentication

“Password security” is an oxymoron – anything protected by a password alone is insecure, period, regardless of the strength of the password. Please, no 2FA here – it might hinder attacks.

Do not enforce any limit on failed login attempts

Aided by single factor authentication, this permits easy brute force attacks – no skill needed. But when breached, blame elite foreign hackers; nobody can verify.

Do not perform brute force attack detection

Allow attackers to run (otherwise trivially detectable) brute force attacks for prolonged periods (12 hours against the UK Parliament system) to maximize the scope of account compromise.
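Dropping the sarcasm for a moment: the detection that was missing here is not hard. A minimal sliding-window sketch, with illustrative thresholds and an invented account name:

```python
# Minimal brute-force detector: flag accounts with too many failed logins
# inside a sliding time window. Thresholds are illustrative, not prescriptive.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_FAILURES = 10

def detect_brute_force(failed_logins):
    """failed_logins: iterable of (timestamp_seconds, account) tuples.
    Returns the set of accounts that exceeded the failure threshold."""
    recent = defaultdict(deque)
    flagged = set()
    for ts, account in sorted(failed_logins):
        q = recent[account]
        q.append(ts)
        # Drop failures that fell out of the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_FAILURES:
            flagged.add(account)
    return flagged

# A sustained spray at one attempt per second trips this within a minute;
# a 12-hour campaign would have been flagged thousands of times over.
attempts = [(t, "mp-office@parliament.example") for t in range(60)]
print(detect_brute_force(attempts))
```

Pair a detector like this with lockout or step-up authentication and the 12-hour attack window described above disappears.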

Do not enforce policy; treat it as merely advisory

Combined with single factor authentication, no limit on failed logins, and no brute force detection, do not impose any password strength validation. Provide attackers with really low hanging fruit.

Rely on unsigned, unencrypted email for sensitive communications

If attackers succeed in compromising email accounts or sniffing your network traffic, give them plenty of opportunity to score high-value message content entirely in the clear. This also conditions constituents to trust easily spoofable email from Parliament, creating a perfect constituent phishing environment.

Lessons learned

In addition to adding “Common Sense for Dummies” to their summer reading lists, the UK Parliament email system administrators may want to take further action. Strengthening weak authentication practices, enforcing policies, improving network and endpoint visibility with continuous monitoring and anomaly detection, and completely rethinking secure messaging are recommended steps. Penetration testing would have revealed these fundamental weaknesses while staying out of the news headlines.

Even a few bright high-schoolers with a free weekend could have replicated this attack. And finally, stop blaming the Russians for your own security failings. Assume that any weaknesses in your security architecture and policy framework will be found and exploited by cyber criminals somewhere across the global internet. All the more incentive to find and fix those weaknesses before the hackers do, so get started immediately. And if your defenders can't see the attacks in progress, upgrade your monitoring and analytics.

SysSecOps Will Enable IT And Security To Work Closer – Charles Leaver

Written By Charles Leaver Ziften CEO

 

Scott Raynovich nailed it. Having worked with numerous companies, he recognized that one of the most significant obstacles is that security and operations are two different departments – with drastically different goals, different tools, and different management structures.

Scott and his analyst firm, Futuriom, recently completed a study, “Endpoint Security and SysSecOps: The Growing Trend to Develop a More Secure Enterprise”, where one of the key findings was that clashing IT and security goals prevent professionals – on both teams – from achieving their objectives.

That's exactly what we believe at Ziften, and the term Scott coined to describe the convergence of IT and security in this domain – SysSecOps – captures perfectly what we've been discussing. Security teams and IT teams need to get on the same page. That means sharing the same goals, and in some cases, sharing the same tools.

Think about the tools that IT people use. Those tools are designed to make sure the infrastructure and end devices are working properly, and when something fails, to help repair it. On the endpoint side, those tools ensure that devices allowed onto the network are configured correctly, run software that is authorized and properly updated/patched, and haven't logged any faults.

Consider the tools that security folks use. They work to enforce security policies on devices, infrastructure, and security appliances (like firewalls). This might involve actively monitoring events, scanning for anomalous behavior, analyzing files to ensure they don't contain malware, ingesting the latest threat intelligence, matching against newly discovered zero-days, and performing analysis on log files.

Spotting fires, fighting fires

Those are two different worlds. The security teams are fire spotters: they can see that something bad is happening, work quickly to isolate the problem, and determine whether harm occurred (like data exfiltration). The IT teams are on-the-ground firefighters: they jump into action when an incident occurs to ensure that the systems are made safe and restored to operation.

Sounds good, doesn't it? Sadly, all too often, they don't talk to each other – it's like having the fire spotters and firefighters using incompatible radios, different jargon, and different city maps. Worse, the teams can't share the same data directly.

Our approach to SysSecOps is to give both the IT and security teams the same resources – which means the same reports, presented in ways appropriate to each specialist. It's not a dumbing down; it's working smarter.

It's ridiculous to operate any other way. Take the WannaCry ransomware, for example. Microsoft released a patch back in March 2017 that addressed the underlying SMB flaw. IT operations teams didn't install the patch, because they didn't believe it was a big deal and didn't talk with security. Security teams didn't know whether the patch was installed, because they don't speak to operations. SysSecOps would have had everybody on the same page – and could potentially have avoided this issue.
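As a toy illustration of the shared view that was missing, here is a sketch that cross-checks a patch inventory both teams could read. The KB number is illustrative (KB4012212 was one of the March 2017 MS17-010 rollups, but the correct identifier varies by OS build), and the inventory is invented:

```python
# Sketch: cross-check a shared software inventory for the March 2017 SMB fix.
# KB4012212 is used purely as an example; real checks must map KBs per OS build.
REQUIRED_KB = "KB4012212"

inventory = {
    "fileserver-01": {"KB4012212", "KB3212646"},
    "workstation-17": {"KB3212646"},
}

unpatched = [host for host, kbs in inventory.items() if REQUIRED_KB not in kbs]
print("Hosts missing the SMB patch:", unpatched)
```

The point is not the three lines of code; it is that IT and security would be looking at the same answer instead of each assuming the other had it covered.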

Missing data means waste and risk

The dysfunctional gap between IT operations and security exposes companies to threats. Preventable threats. Unnecessary threats. It's simply unacceptable!

If your organization's IT and security teams aren't on the same page, you are incurring risks and costs that you shouldn't have to. It's waste. Organizational waste. It's wasteful because you have numerous tools providing partial data with gaps, and each of your teams sees only part of the picture.

As Scott concluded in his report, “Coordinated SysSecOps visibility has already proven its worth in helping companies assess, analyze, and prevent significant threats to IT systems and endpoints. If these goals are pursued, the security and management risks to an IT system can be greatly decreased.”

If your teams are collaborating in a SysSecOps kind of way – if they can see the same data at the same time – you not only get better security and more efficient operations, but also lower risk and lower costs. Our Zenith software can help you achieve that efficiency, not only working with your existing IT and security tools, but also filling in the gaps to make sure everybody has the right data at the right time.

Detecting And Responding To WannaCry With Ziften And Splunk – Charles Leaver

Written by Joel Ebrahami and presented by Charles Leaver

 

WannaCry has attracted a lot of media attention. It may not have the huge infection rates that we saw with many of the older worms, but in the current security world the number of systems it was able to infect in a single day was still rather remarkable. The goal of this blog is NOT to provide an in-depth analysis of the threat, but rather to look at how the exploit behaves on a technical level with Ziften's Zenith platform and the integration we have with our technology partner Splunk.

WannaCry Visibility in Ziften Zenith

My first step was to reach out to the Ziften Labs threat research team to see what details they could provide about WannaCry. Josh Harriman, VP of Cyber Security Intelligence, heads up our research team and informed me that they had samples of WannaCry running in our 'Red Laboratory' to examine the behavior of the threat and perform further analysis. Josh sent me the details of what he had discovered when analyzing the WannaCry samples in the Ziften Zenith console, which I present here.

The Red Laboratory has systems covering all the most common operating systems with various services and configurations. There were already systems in the laboratory that were deliberately vulnerable to the WannaCry exploit. Our global threat intelligence feeds used in the Zenith platform are updated in real time, and had no trouble detecting the ransomware in our lab environment (see Figure 1).

[Figure 1]

Two lab systems were identified running the malicious WannaCry sample. While it is great to see our global threat intelligence feeds updated so quickly and recognizing the ransomware samples, there were other behaviors we identified that would have flagged the ransomware threat even if there had not been a threat signature.

Zenith agents collect a vast quantity of data on what's happening on each host. From this visibility data, we build non-signature-based detection techniques that look for commonly malicious or anomalous behaviors. In Figure 2 below, we show the behavioral detection of the WannaCry infection.

[Figure 2]

Investigating the Breadth of WannaCry Infections

Once detected, whether through signature or behavioral methods, it is very simple to see which other systems have also been infected or are exhibiting similar behaviors (see Figure 3).

[Figure 3]

Detecting WannaCry with Ziften and Splunk

After reviewing this information, I decided to run the WannaCry sample in my own environment on a vulnerable system. I had one vulnerable system running the Zenith agent, and in this example my Zenith server was already configured to integrate with Splunk. This allowed me to examine the same data inside Splunk. Let me clarify the integration we have with Splunk.

We have two Splunk apps for Zenith. The first is our technology add-on (TA): its function is to ingest and index ALL the raw data from the Zenith server that the Ziften agents generate. As this data comes in, it is mapped to Splunk's Common Information Model (CIM) so that it can be normalized and easily searched, as well as used by other apps such as the Splunk App for Enterprise Security (Splunk ES). The Ziften TA also includes Adaptive Response capabilities for taking actions from events rendered in Splunk ES. The second app is a dashboard for displaying our data, with all the graphs and charts available in Splunk, to make the data much easier to digest.

Since I already had the details of how the WannaCry exploit behaved in our research laboratory, I had the advantage of knowing exactly what to look for in Splunk using the Zenith data. In this case I was able to see a signature alert by using the VirusTotal integration with our Splunk app (see Figure 4).

[Figure 4]

Threat Hunting for WannaCry Ransomware in Ziften and Splunk

But I wanted to put on my “incident responder hat” and investigate this in Splunk using the Zenith agent data. My first thought was to search the systems in my lab for ones running SMB, since that was the initial vector for the WannaCry attack. The Zenith data is encapsulated in various message types, and I knew I would most likely find SMB data in the running process message type; however, I used Splunk's * wildcard with the Zenith sourcetype so I could search all Zenith data. The resulting search looked like 'sourcetype=ziften:zenith:* smb'. As expected, I got one result back for the system that was running SMB (see Figure 5).

[Figure 5]

My next step was to use the same behavioral search we have in Zenith that looks for typical CryptoWare and see if I could get results back. Again this was very simple to do from the Splunk search panel. I used the same wildcard sourcetype as before so I could search across all Zenith data, and this time I added the 'delete shadows' string to see if this behavior was ever issued at the command line. My search looked like 'sourcetype=ziften:zenith:* delete shadows'. This search returned results, shown in Figure 6, that gave me the details of the process that was created and the full command line that was executed.
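Outside of Splunk, the same behavioral check can be sketched in a few lines against any collection of process events. The event shape below is illustrative, not a real agent schema:

```python
# Behavioral check mirroring the search above: flag any process whose command
# line deletes Volume Shadow Copies, a classic ransomware tell.
def flag_shadow_deletion(process_events):
    """Return events whose command line contains a shadow-copy deletion."""
    return [ev for ev in process_events
            if "delete shadows" in ev["cmdline"].lower()]

process_events = [
    {"host": "lab-03", "process": "cmd.exe",
     "cmdline": "vssadmin Delete Shadows /all /quiet"},
    {"host": "lab-04", "process": "backup.exe",
     "cmdline": "backup.exe --incremental"},
]
for ev in flag_shadow_deletion(process_events):
    print(ev["host"], ev["cmdline"])
```

Legitimate use of `vssadmin delete shadows` is rare enough on most fleets that this single string is a high-signal hunt.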

[Figure 6]

Having all this detail within Splunk made it extremely simple to identify which systems were vulnerable and which systems had already been compromised.

WannaCry Remediation Using Splunk and Ziften

Among the next steps in any type of breach is to remediate the compromise as fast as possible to prevent further damage, and to take action to prevent other systems from being compromised. Ziften is one of the founding Splunk Adaptive Response members, and there are a variety of actions (see Figure 7) that can be taken through Splunk's Adaptive Response to mitigate these threats through extensions on Zenith.

[Figure 7]

In the case of WannaCry we could have used almost any of the Adaptive Response actions currently available through Zenith. To reduce the impact and prevent WannaCry in the first place, one action is to shut down SMB on any systems running the Zenith agent where the version of SMB running is known to be vulnerable. With a single action, Splunk can pass to Zenith the agent IDs or IP addresses of all the vulnerable systems where we want to stop the SMB service, thereby preventing the exploit from ever occurring and allowing the IT operations team to get those systems patched before starting the SMB service again.
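The selection half of that response flow can be sketched as follows. The posture fields, version strings, and queued-action tuples are all illustrative; a real deployment would dispatch through the platform's Adaptive Response API rather than return a list:

```python
# Sketch: given posture data, pick hosts running a vulnerable SMB version and
# queue a "stop SMB service" action for each. Fields are illustrative.
VULNERABLE_SMB_VERSIONS = {"1.0"}

def queue_smb_shutdown(hosts):
    """Return (action, agent_id, service) tuples for vulnerable hosts."""
    actions = []
    for host in hosts:
        if host["smb_version"] in VULNERABLE_SMB_VERSIONS:
            # "LanmanServer" is the Windows SMB server service name.
            actions.append(("stop_service", host["agent_id"], "LanmanServer"))
    return actions

hosts = [
    {"agent_id": "a-101", "smb_version": "1.0"},
    {"agent_id": "a-102", "smb_version": "3.1.1"},
]
print(queue_smb_shutdown(hosts))
```

Stopping the service buys time; the queued hosts still need the patch before SMB is re-enabled.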

Preventing Ransomware from Spreading or Exfiltrating Data

Now, in the case that we have already been compromised, it is critical to prevent further exploitation and stop the possible exfiltration of sensitive data or company intellectual property. There are really three actions we could take. The first two are similar: we could kill the malicious process by either PID (process ID) or by its hash. This works, but since malware will often just respawn under a new process, or be polymorphic and have a different hash, we can use an action that is guaranteed to prevent any incoming or outgoing traffic from those infected systems: network quarantine. This is another example of an Adaptive Response action available through Ziften's integration with Splunk ES.
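Why quarantine beats kill-by-PID or kill-by-hash can be shown in miniature: a respawned or polymorphic sample changes both the PID and the hash, but the infected host stays the same. The alert records below are invented for illustration:

```python
# Collapse process-level alerts to the set of hosts to network-quarantine.
# PIDs and hashes churn as malware respawns or mutates; the host does not.
def hosts_to_quarantine(alerts):
    return {alert["host"] for alert in alerts}

alerts = [
    {"host": "ws-05", "pid": 4312, "sha256": "aa11..."},
    {"host": "ws-05", "pid": 7780, "sha256": "bc22..."},  # same infection, new PID and hash
    {"host": "ws-09", "pid": 1203, "sha256": "aa11..."},
]
print(sorted(hosts_to_quarantine(alerts)))
```

Three process alerts reduce to two isolation actions, and neither action depends on catching the right PID at the right moment.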

WannaCry is now subsiding, but hopefully this technical blog post shows the value of the Ziften and Splunk integration in handling ransomware threats against the endpoint.

Learn From This HVAC Breach And Become Security Paranoid – Charles Leaver

Written By Charles Leaver Ziften CEO

 

Whatever you do, do not underestimate cyber criminals. Even the most paranoid “normal” person wouldn't worry about a data breach originating with credentials stolen from the company's heating, ventilation and air conditioning (HVAC) contractor. Yet that's what happened at Target in November 2013. Hackers broke into Target's network using credentials issued to the contractor, most likely so it could monitor the heating, ventilation and air conditioning systems. (For a good analysis, see Krebs on Security.) The hackers were then able to leverage the breach to spread malware into point-of-sale (POS) systems, and then offload payment card information.

A variety of ludicrous errors were made here. Why was the HVAC contractor given access to the enterprise network? Why wasn't the HVAC system on a separate, totally isolated network? Why wasn't the POS system on a separate network? And so on.

The point here is that in a truly complex network, there are uncounted potential vulnerabilities that could be exploited through negligence, unpatched software, default passwords, social engineering, spear phishing, or insider actions. You get the idea.

Whose job is it to find and fix those vulnerabilities? The security team. The CISO's team. Security specialists aren't “normal” people. They are paid to be paranoid. Make no mistake: whatever the particular technical vulnerability that was exploited, this was a CISO failure to anticipate the worst and prepare accordingly.

I can't speak to the Target HVAC breach specifically, but there is one overwhelming reason why breaches like this happen: a lack of financial priority for cyber security. I'm not sure how often businesses fail to fund security simply because they're cheap and would rather do a share buyback. Or perhaps the CISO is too timid to ask for what's needed, or has been told that she gets a 5% increase, regardless of the requirement. Maybe the CEO is worried that disclosures of big allocations for security will alarm investors. Maybe the CEO is simply naïve enough to believe that the business won't be targeted by hackers. The problem: every company is targeted by cyber criminals.

There are big battles over budgets. The IT department wants to fund upgrades and improvements, and attack the backlog of demand for new and improved applications. On its side, you have operational managers who see IT projects as directly helping the bottom line. They are optimists, and have plenty of CEO attention.

By contrast, the security department frequently has to fight for crumbs. It is viewed as a cost center. Security reduces business risk in a way that matters to the CFO, the CRO (chief risk officer, if there is one), the general counsel, and other pessimists who care about compliance and reputation. These green-eyeshade people consider the worst-case scenarios. That doesn't make friends, and budget dollars are allocated reluctantly at most companies (until the company gets burned).

Call it naivety, call it entrenched hostility, but it's a real challenge. You can't have IT given fantastic tools to drive the enterprise forward while security is starved and making do with second best.

Worse, you don't want to end up in situations where the rightfully paranoid security teams are working with tools that don't mesh well with their IT counterparts' tools.

If IT and security tools don't mesh well, IT may not be able to act quickly in response to dangerous situations that the security teams are monitoring or worried about – things like reports from threat intelligence, discoveries of unpatched vulnerabilities, nasty zero-day exploits, or user behavior that suggests risky or suspicious activity.

One idea: find tools for both departments that are designed with both IT and security in mind, right from the start, instead of IT tools that are patched to provide some minimal security capability. One budget item (take it out of IT, they have more money), but two workflows: one designed for the IT professional, one for the CISO team. Everybody wins – and next time somebody wants to give the HVAC contractor access to the network, maybe security will notice what IT is doing, and head that disaster off at the pass.

Next Generation Endpoint Security Products 10 Tips For Evaluation – Charles Leaver

Written By Roark Pollock And Presented By Chuck Leaver CEO Ziften

 

The Endpoint Security Buyer's Guide

The most common entry point for an advanced persistent threat or a breach is the endpoint. And endpoints are certainly the entry point for most ransomware and social engineering attacks. Using endpoint security products has long been considered a best practice for securing endpoints. Unfortunately, those tools aren't keeping up with today's threat environment. Advanced threats – and, truth be told, even less sophisticated threats – are often more than adequate for fooling the typical employee into clicking something they shouldn't. So organizations are looking at and evaluating a plethora of next-gen endpoint security (NGES) solutions.

With this in mind, here are ten tips to consider if you're looking at NGES solutions.

Tip 1: Begin with the end in mind

Don't let the tail wag the dog. A threat mitigation strategy should always begin by assessing problems and then looking for possible fixes for those problems. But all too often we become captivated by a “shiny” new technology (e.g., the latest silver bullet) and end up trying to squeeze that technology into our environments without fully assessing whether it solves an understood and recognized problem. So what problems are you trying to solve?

– Is your existing endpoint protection tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements mandating continuous endpoint monitoring?
– Are you trying to reduce the time and cost of incident response?

Define the problems to address, and then you'll have a measuring stick for success.

Tip 2: Understand your audience. Who will be using the tool?

Understanding the problem that needs to be solved is an essential first step in understanding who owns the problem and who would (operationally) own the solution. Every functional team has its strengths, weaknesses, preferences and biases. Define who will need to use the solution, and who else could benefit from its use. It could be:

– Security operations,
– The IT team,
– The governance, risk & compliance (GRC) team,
– The help desk or end user support team,
– Or even the server team or a cloud operations team.

Tip 3: Know what you mean by endpoint

Another often neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than in the past.

Sure, we want to protect desktops and laptops, but what about mobile devices (e.g. smartphones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, of course, come in multiple flavors, so platform support needs to be addressed as well (e.g. Windows only, Mac OSX, Linux, etc.). Also, consider support for endpoints even when they are working remotely or offline. What are your requirements and what are “nice to haves”?

Tip 4: Start with a foundation of continuous visibility

Continuous visibility is a foundational capability for addressing a host of security and operational management concerns on the endpoint. The old adage holds true: you can't manage what you can't see or measure. Further, you can't secure what you can't effectively manage. So it should begin with continuous, all-the-time visibility.

Visibility is foundational to Security and Management

And consider what visibility means. Enterprises need a single source of truth that at a minimum monitors, stores, and analyzes the following:

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and usage patterns
– Binary data – attributes of installed binaries
– Process data – tracking information and statistics
– Network connectivity data – statistics and internal behavior of network activity on the host
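One way to picture that "single source of truth" is a per-host record with the six categories above as fields. This is purely an illustrative schema, not any vendor's data model:

```python
# Illustrative per-endpoint telemetry record with the six visibility
# categories as fields. Real platforms use richer, typed event streams.
from dataclasses import dataclass, field

@dataclass
class EndpointSnapshot:
    host: str
    system: dict = field(default_factory=dict)        # events, logs, hardware state
    user: dict = field(default_factory=dict)          # activity logs, behavior patterns
    applications: dict = field(default_factory=dict)  # installed apps, usage patterns
    binaries: dict = field(default_factory=dict)      # attributes of installed binaries
    processes: dict = field(default_factory=dict)     # tracking info and statistics
    network: dict = field(default_factory=dict)       # per-host connection statistics

snap = EndpointSnapshot(host="ws-01", processes={"pid_count": 143})
print(snap.host, snap.processes["pid_count"])
```

The value of a schema like this is that every downstream question – security or operational – queries the same record rather than a different tool's partial copy.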

Tip 5: Know where to store your visibility data

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or some combination of both. There are advantages to each. The appropriate approach varies, but is typically driven by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.

Know if your company requires on-premise data retention

Know whether your company allows cloud-based data retention and analysis, or whether you are constrained to on-premise solutions only. At Ziften, 20-30% of our customers store data on premise purely for regulatory reasons. However, if legally an option, the cloud can provide cost advantages (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires understanding the assets on the network. We find that as much as 30% of the endpoints we initially discover on clients' networks are unmanaged or unknown devices. This obviously creates a big blind spot. Reducing this blind spot is a critical best practice. In fact, SANS Critical Security Controls 1 and 2 are to perform an inventory of authorized and unauthorized devices and software attached to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and utilization, and perform ongoing continuous discovery.
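The blind-spot measurement itself is simple set arithmetic: compare what discovery sees against the managed-asset inventory. The device identifiers below are invented placeholders:

```python
# Sketch: measure the unmanaged-device blind spot by diffing discovered
# devices against the managed inventory. Identifiers are illustrative.
discovered = {"00:1a:2b:01", "00:1a:2b:02", "00:1a:2b:03", "00:1a:2b:04"}
managed = {"00:1a:2b:01", "00:1a:2b:02", "00:1a:2b:03"}

unmanaged = discovered - managed
blind_spot_pct = 100 * len(unmanaged) / len(discovered)
print(f"{len(unmanaged)} unmanaged device(s), {blind_spot_pct:.0f}% blind spot")
```

The hard part in practice is the `discovered` set – reliable fingerprinting of everything that connects – which is exactly what the tip asks an NGES solution to provide.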

Tip 7: Know where you are exposed

After determining which devices you need to watch, you need to make certain they are running up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends enabling continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it's even better if they can help enforce that posture.

Also look for solutions that deliver continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a huge number of security issues and eliminates a lot of back-end work for the IT and security operations teams.

Tip 8: Cultivate continuous detection and response

A crucial end goal for many NGES solutions is supporting continuous device state monitoring, to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that offer all-the-time, continuous threat detection, leveraging a network of worldwide threat intelligence and multiple detection techniques (e.g., signature, behavioral, machine learning, etc.). And look for incident response features that help prioritize detected threats and/or issues and provide workflow with contextual system, application, user, and network data. This can help automate the appropriate response or next steps. Finally, understand all the response actions that each solution supports – and look for a solution that provides remote access that is as close as possible to “sitting at the endpoint keyboard”.

Tip 9: Consider forensics data gathering

In addition to incident response, companies need to be prepared to address the need for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be crucial to any investigation. So look for solutions that maintain historical data that allows:

– Tracing lateral threat movement through the network over time,
– Identifying data exfiltration efforts,
– Identifying the source of breaches, and
– Identifying appropriate remediation actions.

Tip 10: Tear down the walls

IBM's security team, which supports an outstanding community of security partners, estimates that the typical enterprise has 135 security tools in place and is dealing with 40 security vendors. IBM customers admittedly tend to be large enterprises, but it's a common refrain (complaint) from organizations of all sizes that security products don't integrate well enough.

And the complaint is not simply that security products don't play well with other security products, but also that they don't always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations need to consider these (as well as other) integration points, along with the vendor's willingness to share raw data – not just metadata – through an API.

Bonus Tip 11: Plan for change

Here's a bonus tip. Assume that you'll want to customize that shiny new NGES solution shortly after you get it. No solution will meet all your requirements right out of the box, in default configurations. Find out how the solution supports:

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this, then that) functionality.

You know you’ll want new paint or new wheels on that NGES solution soon – so make sure it will support your future customization projects easily enough.
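To make the IFTTT idea concrete, here is a tiny hypothetical sketch of “if this, then that” rule evaluation. Every name in it is invented for illustration and does not reflect any particular NGES product’s API.

```python
# Hypothetical sketch of IFTTT-style ("if this, then that") rules an NGES
# platform might let you define. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str          # e.g. "process_start", "new_device"
    details: dict      # arbitrary contextual data

@dataclass
class Rule:
    condition: Callable[[Event], bool]   # the "if this" part
    action: Callable[[Event], str]       # the "then that" part

def run_rules(event: Event, rules: list[Rule]) -> list[str]:
    """Fire the action of every rule whose condition matches the event."""
    return [rule.action(event) for rule in rules if rule.condition(event)]

# Example rule: alert when an unsigned binary starts.
rules = [
    Rule(
        condition=lambda e: e.kind == "process_start"
        and not e.details.get("signed", True),
        action=lambda e: f"ALERT: unsigned binary {e.details['path']}",
    )
]

alerts = run_rules(
    Event(kind="process_start", details={"path": "C:\\tmp\\x.exe", "signed": False}),
    rules,
)
print(alerts)
```

The point of the sketch is simply that conditions and actions are pluggable, which is what makes this kind of customization useful after deployment.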

Look for support for simple customizations in your NGES solution

Follow most of these tips and you’ll surely avoid many of the common pitfalls that plague others in their evaluations of NGES solutions.

If You Want The Best End To End Protection For Your Organization Choose Ziften – Charles Leaver

Written By Ziften CEO Charles Leaver

 

Do you want to manage and secure your endpoints, your data center, the cloud, and your network? If so, Ziften can provide the right solution for you. We gather data, and let you correlate and use that data to make decisions – and stay in control of your enterprise.

The information we obtain from everything on the network can make a real-world difference. Consider the suggestion that the 2016 U.S. elections were influenced by hackers in another country. If that’s the case, hackers can do almost anything – and the idea that we’ll accept that as the status quo is simply ridiculous.

At Ziften, we believe the way to fight those threats is with greater visibility than you’ve ever had. That visibility spans the entire enterprise and connects all the major players together. On the back end, that’s real and virtual servers in the data center and the cloud. That’s infrastructure, applications, and containers. On the other side, it’s laptops and desktops, no matter where and how they are connected.

End-to-end – that’s the thinking behind everything we do at Ziften. From endpoint to cloud, all the way from a web browser to a DNS server. We tie all of that together, with all the other pieces, to give your enterprise a complete solution.

We also capture and store real-time data for up to one year, to let you know what’s happening on the network now, and to provide historical trend analysis and warnings if something changes.

That lets you find IT faults and security issues immediately, and also track down the root cause by looking back in time to discover where a fault or breach may have first occurred. Active forensics are an absolute necessity in this business: after all, where a breach or fault tripped an alarm may not be where the problem began – or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Discovered. Off-network penetration? Detected. Out-of-date firmware? Unpatched applications? All found. We’ll not only help you find the problem, we’ll help you fix it, and make sure it stays fixed.

End-to-end security and IT management. Real-time and historical active forensics. In the cloud, offline, and on-site. Incident detection, containment, and response. We’ve got it all covered. That’s what makes Ziften better.

Our Enhanced NetFlow Will Provide You With Close Monitoring Of Cloud Activities – Charles Leaver

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver

 

According to Gartner, the public cloud services market surpassed $208 billion in 2016. This represented about a 17% rise year over year. Not bad considering the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice by cloud customers of contracting services with multiple public cloud providers.

According to Gartner, “most organizations are already using a combination of cloud services from different cloud providers”. While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in tracking activity across an organization’s increasingly fragmented IT landscape.

While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations need to understand and address the visibility issues involved in moving to the cloud, regardless of the cloud provider or providers they work with.

Unfortunately, the ability to monitor application and user activity, and networking communications, from each VM or endpoint in the cloud is limited.

Irrespective of where computing resources reside, organizations must answer the question: “Which users, devices, and applications are communicating with each other?” Organizations need visibility across the infrastructure so that they can:

  • Quickly identify and prioritize issues
  • Speed root cause analysis and identification
  • Reduce the mean time to resolve problems for end users
  • Quickly identify and eliminate security threats, reducing overall dwell times.

Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing security and management tools.

Businesses that are comfortable with the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.

What has been lacking is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had 20 years or so to become a de facto standard for network visibility. A typical deployment involves the monitoring of traffic and aggregation of flows at network chokepoints, the collection and storage of flow information from multiple collection points, and the analysis of this flow information.

Flows consist of a basic set of source and destination IP addresses, plus port and protocol information, typically collected from a switch or router. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
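As a rough illustration of the flow records just described, here is a minimal Python sketch of a 5-tuple flow key and the per-flow byte aggregation a collector performs. The field names and aggregation logic are illustrative assumptions, not any vendor’s actual export format.

```python
# Minimal sketch of the classic NetFlow 5-tuple flow record described above,
# plus the kind of per-flow aggregation an exporter performs before export.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # e.g. "TCP", "UDP"

def aggregate(packets: list[tuple[FlowKey, int]]) -> dict[FlowKey, int]:
    """Sum bytes per flow, as a router or switch exporter would."""
    flows: dict[FlowKey, int] = defaultdict(int)
    for key, nbytes in packets:
        flows[key] += nbytes
    return dict(flows)

# Two packets belonging to the same HTTPS connection collapse into one flow.
key = FlowKey("10.0.0.5", "93.184.216.34", 52311, 443, "TCP")
flows = aggregate([(key, 1500), (key, 640)])
print(flows[key])  # 2140
```

Because the key is just this small tuple, flow data stays cheap to collect and store, which is exactly why it became ubiquitous.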

Most IT staffs, especially networking and some security teams, are very comfortable with the technology.

However, NetFlow was designed to solve what has become a rather limited problem, in the sense that it only collects network information, and does so at a limited number of possible locations.

To make better use of NetFlow, two essential modifications are required.

NetFlow at the Edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of only collecting NetFlow at network chokepoints, let’s expand flow collection to the edge of the network (cloud, servers, and clients). This would considerably expand the big picture that any NetFlow analytics provide.

This would allow organizations to enhance and take advantage of existing NetFlow analytics tools to eliminate the ever-increasing blind spot of visibility into public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility.

Instead, let’s use an extended version of NetFlow that includes data on the device, application, user, and binary responsible for each tracked network connection. That would allow us to quickly attribute every network connection back to its source.

In fact, these two changes to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow provides an extended version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. Beyond traditional NetFlow / Internet Protocol Flow Information eXport (IPFIX) networking visibility, ZFlow offers greater visibility with the inclusion of details on the device, application, user, and binary for every network connection.

Ultimately, this enables Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.
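The extended record described above can be pictured with a short sketch. ZFlow’s actual schema is not shown in this post, so every field name below is a hypothetical illustration of adding device, application, user, and binary context to a traditional flow tuple.

```python
# Hypothetical sketch of a context-enriched flow record. Field names are
# invented for illustration; they do not reflect ZFlow's real schema.
from dataclasses import dataclass, asdict

@dataclass
class ContextualFlow:
    # Traditional NetFlow/IPFIX fields
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    # Extended endpoint context, enabling attribution of each connection
    device: str       # hostname of the originating endpoint
    application: str  # process that opened the connection
    user: str         # logged-in user responsible for the process
    binary_hash: str  # hash of the responsible executable

flow = ContextualFlow(
    "10.0.0.5", "93.184.216.34", 52311, 443, "TCP",
    device="laptop-042", application="chrome.exe",
    user="jdoe", binary_hash="ab12...",
)
# Every connection now maps back to who and what created it.
print(asdict(flow)["user"])  # jdoe
```

The design point is that the five traditional fields answer “what talked to what”, while the four added fields answer “who, and with which software”.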

Part 2 Of Using Edit Distance For Detection – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character changes it takes to turn one text string into another). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.

Here’s the Background

What are bad actors doing with malicious domains? They might simply be using a close misspelling of a common domain name to trick careless users into viewing ads or picking up adware. Legitimate sites are gradually catching on to this technique, often called typosquatting.

Other malicious domains are the result of domain generation algorithms, which can be used to do all sorts of dubious things, like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases: here we will find out how. First, we’ll exclude common domains, since these are usually safe. Moreover, a list of common domains provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and avoid subdomains (e.g., ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, but now almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step removed from their nearest neighbor, we can easily identify typo-ed domains. By finding domains far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domains in the edit distance space.
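As a toy illustration of this nearest-neighbor search, here is a short Python sketch using plain Levenshtein edit distance. The “common domain” list is invented for the example and stands in for a real baseline such as the Quantcast list.

```python
# Toy sketch of the nearest-neighbor edit distance search described above.
# The common-domain list is illustrative, not real baseline data.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def nearest_neighbor(candidate: str, common: list[str]) -> tuple[str, int]:
    """Closest common domain; same-TLD filtering is assumed done upstream."""
    return min(((d, edit_distance(candidate, d)) for d in common),
               key=lambda t: t[1])

common = ["wikipedia", "google", "ziften"]
print(nearest_neighbor("wikipedal", common))  # ('wikipedia', 2)
```

A result of exactly 1 flags a likely typo-squat; a large (normalized) distance flags a possible algorithmically generated name.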

What were the Results?

Let’s take a look at how these results appear in practice. Use caution when browsing to these domains, since they may contain malicious content!

Here are a few potential typos. Typosquatters target well-known domains, since there are more chances somebody will visit. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like “wikipedal”.

[ed2-1: table of potential typo-squatted domains]

Here are some strange-looking domains that are far from their nearest neighbors.

[ed2-2: table of anomalous domains far from their nearest neighbors]

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of nearest neighbor, distance from nearest neighbor, and edit distance of 1 from a neighbor (indicating a risk of typo shenanigans). Other features that could pair well with these include other lexical features, like word and n-gram distributions, entropy, and string length – and network features, like the total count of failed DNS requests.
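A few of these lexical features are easy to sketch. The snippet below assembles string length, character-level Shannon entropy, and the two neighbor-based metrics into one feature dictionary; the feature names and example values are illustrative only.

```python
# Illustrative sketch of assembling lexical + neighbor-based features for a
# candidate domain. Feature names are invented for the example.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy in bits; high values suggest
    randomly generated strings."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def features(domain: str, neighbor_distance: int) -> dict[str, float]:
    return {
        "length": float(len(domain)),
        "entropy": shannon_entropy(domain),
        "nn_distance": float(neighbor_distance),
        # Distance exactly 1 from a common domain suggests typo-squatting.
        "is_typo_distance": 1.0 if neighbor_distance == 1 else 0.0,
    }

feats = features("xkqzjwv", neighbor_distance=5)
print(feats["length"])  # 7.0
```

In a real pipeline, vectors like this would be fed to a classifier alongside network features such as failed DNS request counts.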

Simplified Code that you can Play Around with

Here is a simplified version of the code to play with! It was built on HP Vertica, but this SQL should run on most advanced databases. Note that Vertica’s editDistance function may differ in other implementations (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

[ed2-3: simplified SQL query]

A Poorly Managed Environment Will Not Be Secure, And The Reverse Is True As Well – Charles Leaver

Written by Charles Leaver Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way that it can be completely secure. And you can’t effectively manage those complex enterprise systems unless there’s a good sense that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to start. Should you start with security? Or should you start with systems management? That is the wrong approach. Think of it instead like Reese’s Peanut Butter Cups: It’s not chocolate first. It’s not peanut butter first. Rather, both are blended together – and treated as a single delicious treat.

Many organizations, I would argue most organizations, are structured with an IT management department reporting to a CIO, and with a security management team reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team’s radar.

That’s not good, since both the IT and security teams must make assumptions. The IT team assumes that everything is secure, unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, operating systems and applications are up to date, patches have been applied, etc.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and concerns, and aren’t using the same tools, those assumptions may not be correct.

And again, you can’t have a secure environment unless that environment is properly managed – and you can’t manage that environment unless it’s secure. Or to put it another way: an environment that is not secure makes anything you do in the IT organization suspect and unreliable, and means that you can’t know whether the information you are seeing is correct or manipulated. It might all be fake news.

How to Bridge the IT / Security Gap

How do you bridge that gap? It sounds simple, but it can be hard: make sure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same person or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let’s say it’s the CFO.

If the business does not have a secure environment, and there’s a breach, the value of the brand and the company may be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren’t well managed, the company can’t work effectively, and the value drops. As we have discussed, if it’s not well managed, it can’t be secure, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of the company’s assets, and that means making sure IT and security talk to each other, understand each other’s priorities, and, if possible, can see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it, Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our company’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can’t operate at peak efficiency, and with full fiduciary responsibility, otherwise.