Beating the Data Deluge with Scale-Out NAS for Hybrid Cloud

Organizations are using hybrid cloud to gain the maximum business flexibility from cloud architectures.

IDC estimates that the digital universe is doubling every two years, which means the need for data storage is growing at the same rate. Traditional vertical-scaling storage architectures are no match for this exponential demand: the resulting bottlenecks significantly reduce performance, and storing that much data on scaled-up servers would be prohibitively expensive.

The reality, though, is that the majority of data centers around the world still use vertical-scaling storage, so organizations are seeking alternatives that let them scale cheaply and efficiently in order to remain competitive. Now, as software-defined storage matures, scale-out storage solutions are appearing in more data centers.

To meet budget-efficiency and performance goals at the same time, organizations are using hybrid cloud to gain the maximum business flexibility from cloud architectures. Essentially, a hybrid cloud is a cloud computing environment that mixes on-premises private cloud with public cloud services, with orchestration between the two platforms.

These are desirable outcomes, to be sure, but because hybrid cloud architectures are so new, many organizations are still learning the challenges of deploying them. This article covers design elements you can use to ensure your hybrid cloud delivers the performance, flexibility and scalability you need.

Why Scale-Out NAS Is Vital
Because hybrid cloud architectures are relatively new to the market (and even newer in full-scale deployment), many organizations are unaware of how important consistency is in a scale-out NAS. Yet it is the cornerstone of this type of hybrid-cloud storage solution. Many environments are only eventually consistent: a file written to one node is not immediately accessible from the other nodes, typically because the protocols are not properly implemented or the integration with the virtual file system is not tight enough. A strictly consistent system is the opposite: files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
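The difference can be made concrete with a toy sketch (purely illustrative, not any vendor's implementation): in a strictly consistent cluster, a write is acknowledged only once every node can serve the file.

```python
class StrictCluster:
    """All nodes see a write before the writer gets an ack."""

    def __init__(self, node_names):
        # Each node holds its own view of the file system: path -> data.
        self.nodes = {name: {} for name in node_names}

    def write(self, path, data):
        # Synchronously propagate to every node, then acknowledge.
        for files in self.nodes.values():
            files[path] = data
        return "ack"

    def read(self, node_name, path):
        # Any node can serve the file immediately after the ack.
        return self.nodes[node_name].get(path)


cluster = StrictCluster(["node-a", "node-b", "node-c"])
cluster.write("/projects/report.txt", b"q3 numbers")
# In an eventually consistent system, a read from another node right
# after the ack could still return nothing; strict consistency
# guarantees it never does.
print(cluster.read("node-c", "/projects/report.txt"))  # b'q3 numbers'
```

An eventually consistent system would return the ack after updating only the local node and propagate the write in the background, opening the window in which other nodes serve stale or missing data.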

The best-case scenario for a hybrid cloud architecture that incorporates a scale-out NAS approach is one based on three layers. Each server in the cluster will run a software stack based on these layers.

  • Layer one is the persistent storage layer. This layer is based on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
  • Layer two is the virtual file system, the heart of any scale-out NAS. It is in this second layer that features like caching, locking, tiering, quota and snapshots are handled.
  • Layer three contains the protocols, such as SMB and NFS, as well as integration points for hypervisors, for example.
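A minimal sketch of that three-layer stack follows; the class and method names are illustrative assumptions, not a real product's API.

```python
class ObjectStore:
    """Layer one: strictly consistent persistent storage of objects."""
    def __init__(self):
        self._objects = {}
    def put(self, key, blob):
        self._objects[key] = blob
    def get(self, key):
        return self._objects[key]

class VirtualFileSystem:
    """Layer two: maps file paths onto objects; in a real system this
    layer would also handle caching, locking, tiering, quotas and
    snapshots."""
    def __init__(self, store):
        self.store = store
    def write_file(self, path, data):
        self.store.put(f"file:{path}", data)
    def read_file(self, path):
        return self.store.get(f"file:{path}")

class NFSFrontend:
    """Layer three: a protocol endpoint (standing in here for NFS or
    SMB) that translates protocol requests into file-system calls."""
    def __init__(self, vfs):
        self.vfs = vfs
    def handle_read(self, path):
        return self.vfs.read_file(path)

# Each server in the cluster runs the same stack, top to bottom.
stack = NFSFrontend(VirtualFileSystem(ObjectStore()))
stack.vfs.write_file("/home/alice/notes.txt", b"hello")
print(stack.handle_read("/home/alice/notes.txt"))  # b'hello'
```

The point of the layering is that each layer only talks to the one directly below it, which is what keeps the architecture symmetrical and clean.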

It is very important to keep the architecture symmetrical and clean. If you manage to do that, many future architectural challenges will be much easier to solve.

Layer one is worth another look. Since the storage layer is based on an object store, we can now easily scale our storage solution. With a clean and symmetrical architecture, we can reach exabytes of data and trillions of files.

Layer one is responsible for ensuring redundancy, so a fast and effective self-healing mechanism is needed. To keep the data footprint low in the data center, the storage layer also needs to support different file encodings: some favor performance, others a smaller footprint.
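The footprint trade-off between encodings is easy to quantify. As a back-of-the-envelope sketch (the specific schemes are assumed examples; the article names none), compare plain 3-way replication with an 8+3 erasure code:

```python
def replication_footprint(data_tb, copies=3):
    """Raw capacity needed when every file is stored in full N times."""
    return data_tb * copies

def erasure_footprint(data_tb, data_shards=8, parity_shards=3):
    """Raw capacity for an (8+3) erasure code: each file is split into
    8 data shards plus 3 parity shards, and any 8 shards rebuild it."""
    return data_tb * (data_shards + parity_shards) / data_shards

data = 100  # TB of user data
print(replication_footprint(data))  # 300
print(erasure_footprint(data))      # 137.5
```

Replication reads and heals fast because whole copies exist, while erasure coding cuts raw capacity by more than half at the cost of compute during rebuilds, which is why a storage layer benefits from supporting both.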

The Importance of Metadata
Metadata is the information in a virtual file system that describes the structure of that file system, which makes it a very important component. For example, one metadata file can list the files and folders contained in a single folder, which means there is one metadata file for each folder in the virtual file system. As the virtual file system grows, so does the number of metadata files.

Though centralized storage of metadata has its merits, here we are talking about scale-out, so let's first look at where not to store it. Keeping all metadata on a single server leads to poor scalability, poor performance and poor availability. Since our storage layer is based on an object store, a better place for the metadata is in the object store itself, particularly when there are large quantities of it. This ensures good scalability, good performance and good availability.
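One way this can work (an illustrative sketch; the placement rule here is an assumption, not the article's design): store each folder's metadata as its own object, addressed by a hash of the folder path, so metadata spreads evenly across the cluster instead of piling up on one server.

```python
import hashlib

def metadata_key(folder_path):
    # Hashing the path gives a uniformly distributed object key.
    return "meta:" + hashlib.sha256(folder_path.encode()).hexdigest()

def node_for(key, num_nodes):
    # Toy placement rule: map the key's hash onto one of the nodes.
    return int(key.split(":")[1], 16) % num_nodes

folders = ["/", "/home", "/home/alice", "/home/bob", "/projects"]
for f in folders:
    print(f, "-> node", node_for(metadata_key(f), 4))
```

Because every metadata object is just another object in the strictly consistent store, it inherits the same redundancy and self-healing as the data itself.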

Caching Devices
To increase performance, software-defined storage solutions need caching devices. From a storage-solution perspective, both speed and size matter, as does price; finding the sweet spot is important. For an SDS solution, it is also important to protect the data at a higher level by replicating it to another node before destaging it to the storage layer.
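That write path can be sketched as follows (class and method names are assumptions for illustration): a write lands in the local cache, is replicated to a peer node for safety, and only then is acknowledged; destaging to the storage layer happens later, in the background.

```python
class CachingNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # fast device: SSD/NVMe in practice
        self.storage_layer = {}  # slower, durable object store

    def replicate(self, path, data):
        # Called by a peer: hold a copy so the write survives one failure.
        self.cache[path] = data

    def write(self, path, data, peer):
        self.cache[path] = data
        peer.replicate(path, data)  # protect the data before the ack
        return "ack"

    def destage(self):
        # Background task: flush cached writes down to the storage layer.
        for path, data in self.cache.items():
            self.storage_layer[path] = data
        self.cache.clear()


a, b = CachingNode("a"), CachingNode("b")
a.write("/vm/disk1.img", b"...", peer=b)
# The data now lives on two nodes even though neither has destaged yet.
assert "/vm/disk1.img" in a.cache and "/vm/disk1.img" in b.cache
a.destage()
```

The design choice being illustrated: the ack waits for the peer replica but not for the slow storage layer, which is how the cache buys write latency without sacrificing durability against a single node failure.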

Especially in virtual or cloud environments, supporting multiple file systems and domains becomes more important as the storage solution grows in both capacity and features. Supporting multiple protocols is equally important: different applications and use cases prefer different protocols, and sometimes the same data must be accessible across several of them.

A Virtual File System
Because we're talking about the hybrid cloud, we of course need support for hypervisors, so the scale-out NAS must also be able to run hyper-converged. Being software-defined makes sense here.

Where the architecture is flat and there are no external storage systems, the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host's physical resources. The guest virtual machines' (VMs') own images and data are stored in the virtual file system that the scale-out NAS provides, and the guest VMs can use this file system to share files among themselves, making it a good fit for VDI environments as well.

It is vital to support many protocols because a virtual environment runs many different applications, each with its own protocol needs. Broad protocol support keeps the architecture flat and, to some extent, lets applications that speak different protocols share data.

These are the ingredients that make a very flexible and useful storage solution: being software-defined, supporting both fast and energy-efficient hardware, having an architecture that allows us to start small and scale up, supporting bare-metal as well as virtual environments, and having support for all major protocols.

Cloud Flexibility
Each office site has its own independent file system, and different offices typically need both a private area and an area they share with other branches, so only parts of the file system will be shared with others.

To have the flexibility to scale the file system beyond the four walls of the office, it makes sense to select a section of a file system and let other sites mount it at any given point in their own file systems. Synchronization must happen at the file-system level so that every site sees a consistent view. Being able to specify different file encodings at different sites is also useful, for example when one site serves as a backup target.
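A hypothetical illustration of this selective sharing (all site names, paths and the mapping scheme are invented for the example): each site exports only a chosen section of its namespace, which other sites mount wherever they like in their own.

```python
# Each site's mount table: where remote sections appear locally.
MOUNTS = {
    "london": {
        "/remote/stockholm": ("stockholm", "/projects/shared"),
    },
    "stockholm": {
        # London's archive mounted as a backup target; this site could
        # use a different, footprint-optimized file encoding for it.
        "/backup/london": ("london", "/archive"),
    },
}

def resolve(site, path):
    """Map a local path to (owning_site, remote_path) if it crosses a
    mount point; otherwise the path is served locally."""
    for mount_point, (remote_site, remote_root) in MOUNTS[site].items():
        if path.startswith(mount_point):
            return remote_site, path.replace(mount_point, remote_root, 1)
    return site, path

print(resolve("london", "/remote/stockholm/report.txt"))
# ('stockholm', '/projects/shared/report.txt')
```

Everything outside the exported sections stays private to its site, which is the point of sharing a section rather than the whole file system.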

Putting It All Together
All these elements taken together make for a best-in-class hybrid cloud system: a storage system built for the tsunami of data that today's digital transformation is creating. It scales cleanly, efficiently and linearly up to exabytes of data. A single file system spanning all servers, with multiple entry points, removes potential performance bottlenecks. Multiple protocols are supported, as is flexible scale-out by adding nodes, enabling more rapid expansion and greater control over infrastructure investments.

More Stories By Stefan Bernbo

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, he has designed and built numerous enterprise-scale data storage solutions for storing huge data sets cost-effectively. From 2004 to 2010, Stefan worked in this field at Storegate, a wide-reaching Internet-based storage provider for the consumer and business markets with the highest availability and scalability requirements. Before that, Stefan worked on system and software architecture on several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.

