White Papers Search Results
UD Pocket Saves the Day After Malware Cripples Hospital’s Mission-Critical PCs
IGEL Platinum Partner A2U had endpoints within the healthcare organization’s finance department up and running within a few hours following the potentially crippling cyberattack, thanks to the innovative micro thin client.

A2U, an IGEL Platinum Partner, recently experienced a situation where one of its large, regional healthcare clients was hit by a cyberattack. “Essentially, malware entered the client’s network via a computer and began replicating like wildfire,” recalls A2U Vice President of Sales, Robert Hammond.

During the cyberattack, a few hundred of the hospital’s PCs were affected. Among those were 30 endpoints within the finance department that the healthcare organization deemed mission critical due to the volume of daily transactions between patients, insurance companies, and state and county agencies for services rendered. “It was very painful from a business standpoint not to be able to conduct billing and receiving, not to mention payroll,” said Hammond.

Prior to this particular incident, A2U had received demo units of the IGEL UD Pocket, a revolutionary micro thin client that can transform x86-compatible PCs and laptops into IGEL OS-powered desktops.

“We had been having a discussion with this client about re-imaging their PCs, but their primary concern was maintaining the integrity of the data that was already on the hardware,” continued Hammond. “HIPAA and other regulations meant that they needed to preserve the data and keep it secure, and we thought that the IGEL UD Pocket could be the answer to this problem. We didn’t see why it wouldn’t work, but we needed to test our theory.”

When the malware attack hit, that opportunity came sooner rather than later for A2U. “We plugged the UD Pocket into one of the affected machines and were able to bypass the local hard drive, installing the Linux-based IGEL OS on the system without impacting existing data,” said Hammond. “It was like we had created a ‘Linux bubble’ that protected the machine, yet created an environment that allowed end users to quickly return to productivity.”

Working with the hospital’s IT team, A2U took only a few hours to get the entire finance department back online. “They were able to start billing the very next day,” added Hammond.

Defending Against the Siege of Ransomware
The threat of ransomware is only just beginning. In fact, nearly 50% of organizations have suffered at least one ransomware attack in the past 12 months and estimates predict this will continue to increase at an exponential rate. While healthcare and financial services are the most targeted industries, no organization is immune. And the cost? Nothing short of exorbitant.
How to Develop a Multi-cloud Management Strategy
Increasingly, organizations are looking to move workloads into the cloud. The goal may be to leverage cloud resources for Dev/Test, or they may want to “lift and shift” an application to the cloud and run it natively. In order to enable these various cloud options, it is critical that organizations develop a multi-cloud data management strategy.

The primary goal of a multi-cloud data management strategy is to supply data, by copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and the move functions. A key consideration is how the multi-cloud data management experience is unified: in most cases, data protection applications ignore the native user experience of each cloud and impose their own proprietary interface as the unifying entity, which increases complexity.

There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically store data off-site. It also lays the groundwork for many of the more advanced use cases.
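
To make the backup-mirror use case concrete, here is a minimal sketch in Python, assuming AWS S3 as the public cloud target and the boto3 SDK; the bucket name and backup directory are hypothetical, and a real data management product would add scheduling, encryption and retention on top of this.

import os

import boto3  # AWS SDK for Python; assumes credentials are already configured

BACKUP_DIR = "/var/backups/nightly"       # hypothetical on-premises backup target
BUCKET = "example-offsite-backup-mirror"  # hypothetical bucket name

s3 = boto3.client("s3")

def mirror_backups():
    """Copy any local backup file not yet present in the cloud bucket."""
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
    existing = {obj["Key"] for page in pages for obj in page.get("Contents", [])}
    for name in os.listdir(BACKUP_DIR):
        if name not in existing:
            s3.upload_file(os.path.join(BACKUP_DIR, name), BUCKET, name)
            print(f"mirrored {name}")

if __name__ == "__main__":
    mirror_backups()

Run after each backup window, a loop like this keeps the off-site copy current automatically, which is the behavior that enables the more advanced use cases that follow.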

Another use case is using the cloud for disaster recovery, spinning workloads up in the cloud when the on-premises data center is unavailable.

Another use case is “Lift and Shift,” which means the organization wants to run the application in the cloud natively. Initial steps in the “lift and shift” use case are similar to Dev/Test, but now the workload is storing unique data in the cloud.

Multi-cloud is now a reality for most organizations, and managing the movement of data between these clouds is critical.

Confronting Modern Stealth
How did we go from train robberies to complex, multi-billion-dollar cybercrimes? The escalation in the sophistication of cybercriminal techniques, which overcome traditional cybersecurity and wreak havoc without leaving a trace, is dizzying. Explore the methods of defense developed to counter these evasive attacks, then find out how Kaspersky’s sandboxing, endpoint detection and response, and endpoint protection technologies can keep you secure, even if you lack the resources or talent.
Explore the dizzying escalation in the sophistication of cybercriminal techniques, which overcome traditional cybersecurity and wreak havoc without leaving a trace. Then discover the methods of defense created to stop these evasive attacks.

Problem:
Fileless threats challenge businesses that rely on traditional endpoint solutions because they present no specific file to target. They might be stored in WMI subscriptions or the registry, or execute directly in memory without ever being saved to disk. Attacks of this type are ten times more likely to succeed than file-based attacks.

Solution:
Kaspersky Endpoint Security for Business goes beyond file analysis to analyze behavior in your environment. While its behavioral detection technology runs continuous proactive machine learning processes, its exploit prevention technology blocks attempts by malware to exploit software vulnerabilities.
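
As a vendor-neutral illustration of one hiding place named above, the hedged sketch below enumerates WMI permanent event subscriptions, a persistence mechanism that leaves no conventional file to scan. It is Windows-only, relies on the third-party wmi package, and only lists entries for review rather than removing anything.

# Windows-only: enumerate WMI permanent event subscriptions, a common
# fileless persistence spot. Requires: pip install wmi
import wmi

conn = wmi.WMI(namespace="root/subscription")

# Event filters define *when* a payload fires...
for flt in conn.query("SELECT * FROM __EventFilter"):
    print("Filter:", flt.Name, "|", flt.Query)

# ...and command-line consumers define *what* runs.
# Unexpected entries here deserve investigation.
for consumer in conn.query("SELECT * FROM CommandLineEventConsumer"):
    print("Consumer:", consumer.Name, "|", consumer.CommandLineTemplate)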

Problem:
The talent shortage is real. While cybercriminals are continuously adding to their skillsets, businesses either can’t afford cybersecurity experts or struggle to recruit and retain them.

Solution:
Kaspersky Sandbox acts as a bridge between overwhelmed IT teams and industry-leading security analysis. It relieves IT pressure by automatically blocking complex threats at the workstation level so they can be analyzed and dealt with properly in time.


Problem:
Advanced Persistent Threats (APTs) expand laterally from device to device and can put an organization in a constant state of attack.

Solution:
Endpoint Detection and Response (EDR) stops APTs in their tracks with a range of very specific capabilities, which can be grouped into two categories: visibility (visualizing all endpoints, context and intel) and analysis (analyzing multiple verdicts as a single incident).
    
Attack the latest threats with a holistic approach including tightly integrated solutions like Kaspersky Endpoint Detection and Response and Kaspersky Sandbox, which integrate seamlessly with Kaspersky Endpoint Protection for Business.
DevOps – an unsuspecting target for the world's most sophisticated cybercriminals
DevOps focuses on automated pipelines that help organizations improve time-to-market, product development speed, agility and more. Unfortunately, automated building of software that’s distributed by vendors straight into corporations worldwide leaves cybercriminals salivating over costly supply chain attacks. It takes a multi-layered approach to protect such a dynamic environment without straining resources or affecting timelines.

DevOps: An unsuspecting target for the world’s most sophisticated cybercriminals

DevOps focuses on automated pipelines that help organizations improve business-impacting KPIs like time-to-market, product development speed, agility and more. In a world where less time means more money, putting code into production the same day it’s written is, well, a game changer. But with new opportunities come new challenges. Automated building of software that’s distributed by vendors straight into corporations worldwide leaves cybercriminals salivating over costly supply chain attacks.

So how does one combat supply chain attacks?

Many can be prevented through the deployment of security to development infrastructure servers, the routine vetting of containers and anti-malware testing of the production artifacts. The problem is that a lack of integration solutions in traditional security products wastes time due to fragmented automation, overcomplicated processes and limited visibility—all taboo in DevOps environments.
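
As a hedged sketch of the artifact-scanning step, a minimal pipeline gate might look like the following in Python, using ClamAV’s clamscan command line as a stand-in for whichever scanner your pipeline actually integrates; the artifact directory is hypothetical.

import subprocess
import sys

ARTIFACT_DIR = "dist/"  # hypothetical build-output directory

def scan_artifacts() -> int:
    """Scan build output with ClamAV; clamscan exits 0 if clean, 1 if infected."""
    result = subprocess.run(
        ["clamscan", "--recursive", "--infected", ARTIFACT_DIR],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    # Any non-zero exit code fails the CI stage before the artifact ships.
    sys.exit(scan_artifacts())

Because the whole check is one script-friendly command, it can be dropped into an existing pipeline stage without the fragmented automation and overcomplicated processes the paragraph above warns against.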

Cybercriminals exploit fundamental differences between the operational goals of those who maintain the development environment and those who work within it. That’s why it’s important to show unity and focus on a single strategic goal: delivering a safe product to partners and customers on time.

The protection-performance balance

A strong security foundation is crucial to stopping threats, but it won’t come from a single silver bullet. It takes the right multi-layered combination to deliver the right DevOps security-performance balance, bringing you closer to where you want to be.

Protect your automated pipeline using endpoint protection that’s fully effective in pre-filtering incidents before EDR comes into play. After all, the earlier threats can be countered automatically, the less impact on resources. It’s important to focus on protection that’s powerful, accessible through an intuitive and well-documented interface, and easily integrated through scripts.

Greater Ransomware Protection Using Data Isolation and Air Gap Technologies
The prevalence of ransomware and the sharp increase in users working from home add further complexity and broaden the attack surfaces available to bad actors. While preventing attacks is important, you also need to prepare for the inevitable fallout of a ransomware incident. To prepare, you must be recovery ready with a layered approach to securing data. This white paper addresses the approaches of data isolation and air gapping, and the protection provided by Hitachi and Commvault through Hitachi Data Protection Suite (HDPS) and Hitachi Content Platform (HCP).

Protecting your data and ensuring its availability is one of your top priorities. Like a castle in medieval times, you must always defend it and have built-in defense mechanisms. It is under attack from external and internal sources, and you do not know when or where the next strike will come from. The prevalence of ransomware and the sharp increase in users working from home and on any device add further complexity and broaden the attack surfaces available to bad actors. So much so that your organization being hit with ransomware is almost unavoidable. While preventing attacks is important, you also need to prepare for the inevitable fallout of a ransomware incident.

Here are just a few datapoints from recent research around ransomware:
•    Global ransomware damage costs are predicted to reach $20 billion (USD) by 2021
•    Ransomware is expected to attack a business every 11 seconds by the end of 2021
•    75% of the world’s population (6 billion people) will be online by 2022
•    Phishing scams account for 90% of attacks
•    55% of small businesses pay hackers the ransom
•    Ransomware damage costs in 2021 are predicted to be 57 times higher than they were six years earlier
•    New ransomware strains destroy backups, steal credentials, publicly expose victims, leak stolen data, and some even threaten the victim's customers

So how do you prepare? By making sure you’re recovery ready with a layered approach to securing your data. Two proven techniques for reducing the attack surface on your data are data isolation and air gapping. Hitachi Vantara and Commvault deliver this kind of protection with the combination of Hitachi Data Protection Suite (HDPS) and Hitachi Content Platform (HCP), which together provide several layers of tools to protect and restore your data and applications, from the edge of your business to the core data centers.
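
The Hitachi and Commvault products implement isolation in their own ways, but the underlying building block can be sketched generically. The hypothetical Python example below writes a backup copy to S3-compatible object storage under a compliance-mode Object Lock, so that even stolen administrator credentials cannot delete or rewrite the copy before the retention date passes; the bucket, key and local path are invented, and the bucket must have been created with Object Lock enabled.

from datetime import datetime, timedelta, timezone

import boto3  # assumes credentials for the isolated storage account are configured

BUCKET = "example-isolated-backups"  # hypothetical; Object Lock must be enabled
KEY = "finance-db/weekly.bak"        # hypothetical backup object name

s3 = boto3.client("s3")

with open("/var/backups/finance-db.bak", "rb") as f:  # hypothetical local backup
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=f,
        # COMPLIANCE mode: nobody, not even the account root, can shorten
        # the retention period or delete the object until it expires.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )

Immutability of this kind is one layer; a true air gap adds network separation on top, so the isolated copy is reachable only during controlled replication windows.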

Process Optimization with Stratusphere UX
This whitepaper explores the developments of the past decade that have prompted the need for Stratusphere UX Process Optimization. We also cover how this feature works and the advantages it provides, including specific capital and operating cost benefits.

Managing the performance of Windows-based workloads can be a challenge. Whether on physical PCs or virtual desktops, the effort required to maintain, tune and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems all supply a continual flow of change that can disrupt expected performance. Add in the complexities introduced by virtual desktops and cloud architectures, and you have yet another endless source of performance instability. Keeping up with this churn, as well as meeting users’ zero tolerance for failures, are chief worries for administrators.

To help address the need for uniform performance and optimization in light of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. This feature can be set to automatically optimize CPU and memory, even as system demands fluctuate. Process Optimization can keep “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for applications in active use over idle or background processes.

The Process Optimization feature requires no additional infrastructure. It is a simple, zero-impact feature that is included with Stratusphere UX, and it can be turned on for single machines, for groups, or globally. Launched with the check of a box, it offers pre-built profiles that operate automatically; alternatively, administrators can manually specify the processes they need to raise, lower or terminate should that become necessary. This is a major benefit in hybrid multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are much more complex than single-delivery systems.

The Process Optimization feature was also designed with security and reliability in mind. By default, it employs a “do no harm” provision that affects only normal and lower process priorities, together with a relaxed policy: no processes are forced when access is denied by the system, ensuring that the system remains stable and in line with requirements.
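
Liquidware has not published the internals of Process Optimization, but the general “do no harm” mechanism described above, deprioritizing background work rather than killing it, can be sketched generically. The Python example below uses the cross-platform psutil package, and the process-name list is purely hypothetical.

import psutil  # pip install psutil

# Hypothetical list of known background or "bad actor" process names.
BACKGROUND = {"indexer.exe", "updater.exe", "telemetry.exe"}

def deprioritize_background():
    """Lower the priority of matching processes; never raise or kill anything."""
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] in BACKGROUND:
                # On Windows this maps to BELOW_NORMAL_PRIORITY_CLASS; on
                # Unix it is a nice value. Only ever lower, never force.
                proc.nice(10 if psutil.POSIX else psutil.BELOW_NORMAL_PRIORITY_CLASS)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # skip processes the OS denies access to, a relaxed policy

if __name__ == "__main__":
    deprioritize_background()

Note that the sketch only lowers priorities and silently skips anything it is denied access to, mirroring the stability-first provisions described above.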