We are seeing a tremendous explosion of media content on the Internet. Today it's not just YouTube distributing video; there are millions of mobile apps that distribute media - audio and video content - over the Internet.
Users today expect on-demand audio/video access anywhere, anytime, from any device. This multiplies the number of transcoded copies needed to accommodate devices with various screen sizes.
Companies are now using video and audio as a major means of distributing information on their websites. This media content is cataloged online and is always available to users.
Content creation itself is adding new challenges to data storage. The advent of new audio and video technologies is making raw content capture much larger: 3D, 4K/8K, High Dynamic Range, high frame rates (120 fps, 240 fps), Virtual and Augmented Reality, and so on.
Content creation has moved from file-based workflows to cloud-based workflows for production, post-production processing (digital effects, rendering, transcoding), distribution, and archiving. This has created a need for real-time collaboration in distributed environments, with teams scattered all over the globe across many locations and time zones.
All these changes in how media is created and consumed have resulted in dataset sizes so massive that traditional storage architectures simply can't keep up any longer in terms of scalability.
Traditional storage array technologies such as RAID are no longer capable of serving these new data demands. For instance, a routine RAID rebuild after a failure can take far too long, heightening the risk of data loss should additional failures occur during that dangerously long window. Furthermore, even if current storage architectures could technically keep up, they are cost-prohibitive, especially considering the impending tsunami of data growth about to hit. To top it off, they simply can't offer the agility, efficiency, and flexibility new business models have come to expect in terms of instant and unfettered access, rock-solid availability, capacity elasticity, deployment time, and so on.
In the face of such daunting challenges, the good news is that a solution does exist and is here today: Object Storage.
Object Storage is based on sophisticated storage software algorithms running on a distributed, interconnected cluster of standard, high-performance commodity hardware nodes, delivering an architecture suited to the stringent performance, scalability, and cost-savings requirements of massive data footprints. The technology has been around for some time but is now coming of age.
The Media and Entertainment industry is well aware of the benefits Object Storage provides, which is why many players are moving toward object storage and away from traditional file system storage. These benefits include:
- Virtually unlimited scalability: scale out by adding new server nodes
- Low cost, leveraging commodity hardware
- Flat and global namespace, with no locking or volume semantics
- Powerful embedded metadata capabilities (native as well as user-defined)
- Simple and low-overhead RESTful API for ubiquitous, straightforward access over HTTP from any client anywhere (see the first sketch after this list)
- Self-healing capabilities with sophisticated and efficient data protection through erasure coding, local or geo-dispersed (see the second sketch after this list)
- Multi-tenant management and data access capabilities (ideal for service providers)
- Reduced complexity (of initial deployment/staging as well as ongoing data management)
- No forklift upgrades, and no need for labor-intensive data migration projects
- Software-defined storage flexibility and management
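To make the RESTful access point concrete, here is a minimal sketch of storing and retrieving an object over plain HTTP. The endpoint, bucket, and object key below are hypothetical, and real deployments add authentication headers (S3-compatible or otherwise), but the request shapes really are this simple:

```python
import requests

# Hypothetical object storage endpoint and object key - purely illustrative.
ENDPOINT = "http://objectstore.example.com"
OBJECT_URL = f"{ENDPOINT}/media-bucket/trailers/launch-4k.mp4"

# PUT uploads the object; the body is just the raw bytes of the media file.
with open("launch-4k.mp4", "rb") as f:
    resp = requests.put(OBJECT_URL, data=f)
resp.raise_for_status()

# GET retrieves it back - any HTTP client, anywhere, can do this.
resp = requests.get(OBJECT_URL)
resp.raise_for_status()
video_bytes = resp.content
```

Because access is just HTTP, there is no volume to mount and no file locking to coordinate, which is what makes the flat, global namespace possible.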
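And to illustrate the self-healing idea, below is a deliberately simplified single-parity sketch: the XOR special case of erasure coding, with k data fragments plus one parity fragment. Production object stores use stronger codes (such as Reed-Solomon) that tolerate multiple simultaneous losses across nodes or sites, but the reconstruction principle is the same:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal-size fragments plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")    # pad so fragments match
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    return frags + [reduce(xor, frags)]           # k data + 1 parity fragment

def reconstruct(frags: list) -> list:
    """Rebuild the single missing fragment by XOR-ing all survivors."""
    missing = frags.index(None)
    frags[missing] = reduce(xor, [f for f in frags if f is not None])
    return frags

shards = encode(b"massive media payload", k=4)
shards[2] = None                                  # simulate a failed node
restored = reconstruct(shards)
assert b"".join(restored[:4]).rstrip(b"\x00") == b"massive media payload"
```

The object store runs this kind of reconstruction automatically in the background when a disk or node fails, which is what "self-healing" means in practice.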
HPE, a leading seller of servers and hyperconverged systems, offers several low-cost, high-performance storage solutions on its servers using Software-Defined Storage (SDS):
1. Object Store with Scality Ring
2. Lustre File System
Scality Ring Object Store is a paid SDS offering from Scality Inc. that is ideal for enterprise customers.
The Lustre file system is an open-source, parallel file system that supports many requirements of leadership-class HPC simulation environments. Born from a research project at Carnegie Mellon University, Lustre has grown into a file system supporting some of the Earth's most powerful supercomputers. It provides a POSIX-compliant file system interface and can scale to thousands of clients, petabytes of storage, and hundreds of gigabytes per second of I/O bandwidth. The key components of the Lustre file system are the Metadata Servers (MDS), Metadata Targets (MDT), Object Storage Servers (OSS), Object Storage Targets (OST), and the Lustre clients.
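Because Lustre presents a POSIX-compliant interface, applications need no special SDK: once the file system is mounted (the /mnt/lustre path below is a hypothetical mount point), ordinary file I/O goes through it, with the client striping data across OSTs behind the scenes. A minimal sketch:

```python
import os

# Hypothetical Lustre mount point; clients see it as an ordinary directory.
LUSTRE_DIR = "/mnt/lustre/projects/render"
os.makedirs(LUSTRE_DIR, exist_ok=True)

# Plain POSIX writes and reads - Lustre stripes the file across OSTs
# transparently, while the MDS/MDT handle the namespace metadata.
path = os.path.join(LUSTRE_DIR, "frame_0001.exr")
with open(path, "wb") as f:
    f.write(b"\x00" * 64 * 1024 * 1024)   # 64 MiB of sample frame data

with open(path, "rb") as f:
    frame = f.read()
print(len(frame), "bytes read back through the POSIX interface")
```

This is the key contrast with object storage: existing file-based applications run unmodified, at the cost of the volume and mount semantics that object storage does away with.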
In short, Lustre is ideal for the large-scale storage needs of service providers and large enterprises.