By Vinod Mohan of DataCore Software
The rapid growth of organizational data and the expanding scale of the modern data center - particularly given the distributed architecture of today's IT systems - call for a more centralized approach to data storage management, one that breaks down the barriers of siloed storage capacity and rigid dependency structures.
Today's users and applications need anytime, anywhere access to data stored in any location. This is possible with a distributed file system (DFS), which is based on a client/server architecture: data storage across distributed storage systems is managed on a centralized server and made accessible to clients via file-sharing protocols such as NFS and SMB. File virtualization technology incorporated into a DFS creates an abstraction layer between file servers and clients, allowing IT teams to present clients with one logical file mount point for all servers while the file servers continue to host all the file data. Especially in the world of unstructured data, a DFS plays a cardinal role in providing a unified, logical view of data scattered across local and remote locations, including the cloud.

Architecture of a distributed file system (DFS)
In this blog, we will be discussing five key characteristics of a distributed file system that will help you understand what to expect as you set out to design and implement one for your organization.
#1 Location Transparency

The very nature of distributed computing is that devices are spread across sites and locations. One of the most essential things to look for in a DFS is how it aggregates different namespaces from distinct file servers and NAS devices into a single global namespace (see figure above). Instead of requiring your users to remember which file is stored on which device and search for it within a local namespace, a DFS provides a global catalog/directory that simplifies searching, finding, and accessing files across your distributed IT environment - all from a single interface.
Even when you move files from one device to another, they can be easily searched and found because they remain accessible from a central control plane: the DFS. Regardless of which location a file is moved to, the application or user accessing the file on the front end sees the file transparently and need not be exposed to its physical location. To them, the file is being retrieved from a central (virtual) storage repository, and there is no change to the file's access path or the folder hierarchy they typically use.
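To illustrate the idea, here is a minimal sketch (in Python, purely for illustration and not DataCore's implementation) of a global namespace catalog that maps stable logical paths to whichever back-end device currently holds the file. The class and method names are hypothetical assumptions for this example.

```python
# Hypothetical sketch: a global namespace catalog that keeps client-facing
# paths stable while files move between back-end devices.

class NamespaceCatalog:
    def __init__(self):
        # logical path -> (physical server, physical path)
        self._entries = {}

    def register(self, logical_path, server, physical_path):
        self._entries[logical_path] = (server, physical_path)

    def resolve(self, logical_path):
        """Clients only ever see the logical path; the catalog returns
        where the file actually lives right now."""
        return self._entries[logical_path]

    def migrate(self, logical_path, new_server, new_physical_path):
        """Move the file's backing location without changing the path
        that users and applications use to access it."""
        self._entries[logical_path] = (new_server, new_physical_path)


catalog = NamespaceCatalog()
catalog.register("/global/finance/q3-report.xlsx", "nas-01", "/vol1/reports/q3.xlsx")

# The file is later moved to a cheaper storage tier...
catalog.migrate("/global/finance/q3-report.xlsx", "object-tier", "archive/q3.xlsx")

# ...but the client-facing path is unchanged.
print(catalog.resolve("/global/finance/q3-report.xlsx"))
```

The point of the sketch is that only the catalog entry changes on a move; the logical path clients use stays the same.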
#2 High Availability and Data Redundancy

Since a distributed file system is used to manage access to multiple storage systems, its resiliency is critical to assuring data availability. A failure would result in data inaccessibility and possible disruption to business operations. So, it is important to set it up with a high availability (HA) architecture so that when one of the DFS nodes fails, another takes over and ensures continuous data accessibility. The ideal DFS for your environment must support deployment in an HA cluster.
Synchronous mirroring and asynchronous replication are some of the techniques available in a DFS to ensure data redundancy and recovery. While mirroring copies all the data to a second storage system in real time, asynchronous replication lets you make copies of specific files, volumes, or shares and automatically move them to a remote location for disaster recovery.
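The difference between the two approaches can be sketched roughly as follows; the StorageTarget class and function names are illustrative assumptions for this example, not any vendor's API.

```python
# Hypothetical sketch contrasting synchronous mirroring (acknowledge only
# after both copies persist) with asynchronous replication (copy later).

import queue
import threading

class StorageTarget:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, key, data):
        self.blocks[key] = data

def synchronous_mirror_write(primary, mirror, key, data):
    """Synchronous mirroring: the write is acknowledged only after
    both the primary and the mirror have persisted the data."""
    primary.write(key, data)
    mirror.write(key, data)
    return "ack"

replication_queue = queue.Queue()

def asynchronous_replicate(key, data):
    """Asynchronous replication: the primary write returns immediately;
    a background worker later copies the data to the remote DR site."""
    replication_queue.put((key, data))

def replication_worker(remote):
    while True:
        key, data = replication_queue.get()
        remote.write(key, data)
        replication_queue.task_done()

primary, mirror, dr_site = StorageTarget("primary"), StorageTarget("mirror"), StorageTarget("dr")
threading.Thread(target=replication_worker, args=(dr_site,), daemon=True).start()

synchronous_mirror_write(primary, mirror, "file-001", b"payroll data")  # real-time copy
asynchronous_replicate("file-001", b"payroll data")                     # eventual copy for DR
replication_queue.join()
```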
#3 Policy-Driven Data Mobility

A crucial aspect of a DFS is the ability to transparently move data between different storage classes, varying by speed and performance level, cost, deployment, type of storage, etc. Because the DFS acts as the central command center on top of the file storage infrastructure, IT teams can use it to govern data placement on storage media.
- A typical use case is to place the hottest data on the fastest NAS and move infrequently accessed (inactive) data to lower-cost commodity storage on-premises or to cold/archival storage in the cloud.
- IT administrators can also set custom policies - based on file age, type, or size, or on performance, durability, protection, or compliance objectives - to move data to a specific storage location.
A DFS must allow you to define the storage classes and set the rules for data movement. Once set, it should automate data placement without any manual intervention to determine which data goes where. Make sure the DFS you want to implement incorporates machine learning to learn patterns based on custom criteria and move data to the appropriate storage destination accordingly.
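As a rough illustration of policy-driven placement, the sketch below classifies files by how recently they were accessed. The tier names and the days_since_access() helper are assumptions made for the example; a real DFS would apply equivalent rules automatically in the background.

```python
# Hypothetical sketch: map a file's access recency to a storage class.

from datetime import datetime, timezone

def days_since_access(last_accessed):
    return (datetime.now(timezone.utc) - last_accessed).days

def choose_tier(file_meta):
    """Pick a storage class based on how hot or cold the file is."""
    age = days_since_access(file_meta["last_accessed"])
    if age <= 7:
        return "fast-nas"        # hottest data on the fastest NAS
    if age <= 180:
        return "commodity"       # lower-cost on-premises storage
    return "cloud-archive"       # cold/archival storage in the cloud

file_meta = {
    "path": "/global/projects/design-v2.cad",
    "last_accessed": datetime(2023, 1, 15, tzinfo=timezone.utc),
}
print(choose_tier(file_meta))    # -> "cloud-archive"
```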
#4 Management with File-Level Granularity

Because files distributed across diverse devices and storage systems are being centrally governed for placement, storage capacity pooling, optimization, and load balancing, it is important to also gain a granular level of management at the file, volume, and share level from the DFS control plane.
Some granular file management features to consider in a DFS include:
- Taking snapshots of a specific file, volume, or share
- Maintaining multiple copies of specific datasets for recovery in the event of a disruption
- Deciding, based on custom business objectives, which file gets stored where - for example (see the sketch after this list):
  - move snapshot copies to storage in specific remote locations
  - move video files (*.mp4) larger than 500 MB to a Windows file server
  - move any data from users in the Executive team to a fast NetApp NAS
  - move files of a specific type to a secure server for legal and compliance reasons
  - move files not accessed in the last 6 months to S3 storage in the cloud
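The example rules above could be expressed as declarative policies evaluated per file. The sketch below is a simplified illustration; the rule format and destination names are hypothetical and not vFilO syntax.

```python
# Hypothetical sketch: per-file placement rules, first match wins.

RULES = [
    # (description, predicate, destination)
    ("large video files",
     lambda f: f["path"].endswith(".mp4") and f["size_mb"] > 500,
     "windows-file-server"),
    ("executive team data",
     lambda f: f["owner_group"] == "Executive",
     "fast-netapp-nas"),
    ("files not accessed in 6 months",
     lambda f: f["days_since_access"] > 180,
     "s3-cloud-storage"),
]

def place(file_meta, default="general-pool"):
    """Return the destination of the first rule that matches the file."""
    for description, predicate, destination in RULES:
        if predicate(file_meta):
            return destination
    return default

print(place({"path": "/media/keynote.mp4", "size_mb": 820,
             "owner_group": "Marketing", "days_since_access": 3}))  # -> windows-file-server
```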
#5 Flexibility to Scale On-Demand

As with any IT infrastructure component, scalability is a vital requirement for a DFS. Given the rate at which files and storage tiers grow in an organization, a DFS must be able to support diverse storage systems with multiple petabytes of capacity across multiple locations and sites. To support growing capacity and throughput demands, the DFS must be able to scale horizontally by adding independent nodes to the global namespace.
Flexibility is also important from the standpoint of upgrading or refreshing the underlying storage equipment. Look for a DFS that enables you to scale up and scale out seamlessly and non-disruptively, with minimal impact on the applications and users accessing files. Such flexibility will also allow you to break free from supplier lock-in and swap out pricey hardware for lower-cost alternatives.
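Conceptually, horizontal scale-out means new nodes join the same global namespace and add capacity without changing client-facing paths. The sketch below illustrates that idea only; the node names and the simple round-robin placement are assumptions made for the example.

```python
# Hypothetical sketch: growing capacity by adding nodes to one namespace.

import itertools

class GlobalNamespace:
    def __init__(self):
        self.nodes = []
        self._rotation = None

    def add_node(self, node_name, capacity_tb):
        """New nodes join the pool; existing logical paths are unaffected."""
        self.nodes.append({"name": node_name, "capacity_tb": capacity_tb})
        self._rotation = itertools.cycle(self.nodes)

    def total_capacity_tb(self):
        return sum(n["capacity_tb"] for n in self.nodes)

    def place_new_file(self):
        """Naive placement: spread new files across available nodes."""
        return next(self._rotation)["name"]

ns = GlobalNamespace()
ns.add_node("node-a", 500)
ns.add_node("node-b", 500)
print(ns.total_capacity_tb())   # 1000 TB

ns.add_node("node-c", 1000)     # scale out on demand, no change for clients
print(ns.total_capacity_tb())   # 2000 TB
```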
A Unique Approach to Distributed File System from DataCore
Keep these five traits in mind as you shop for the right DFS solution for your environment. DataCore vFilO is one such distributed file system that is software-defined in design and powered by file virtualization technology to give you maximum flexibility and efficiency to control how you store, protect, and access data. With vFilO, you can eliminate hierarchical dependencies of file structures to physical locations, consolidate fractured namespaces into a global virtual mount point, and bring distant archives into the field of unified visibility for quick search and access. You can take charge of managing your data according to your business requirements. Contact DataCore to learn how to solve your distributed file and object storage challenges.
About the Author
Vinod Mohan is a Senior Product Marketing Manager at DataCore Software. He has over a decade of experience in product, technology, and solution marketing of IT software and services spanning application performance management, network, systems, virtualization, storage, IT security, and IT service management (ITSM). In his current capacity at DataCore, Vinod focuses on communicating the value proposition of software-defined storage to IT teams, helping them benefit from infrastructure cost savings, storage efficiency, performance acceleration, and ultimate flexibility for storing and managing data.

Prior to DataCore, Vinod held product marketing positions at eG Innovations and SolarWinds, focusing on IT performance monitoring solutions. An avid technology enthusiast, he is a contributing author to many popular sites including APMdigest, VMblog, Cyber Defense Magazine, Citrix Blog, The Hacker News, NetworkDataPedia, IT Briefcase, IT Pro Portal, and more.