Ensuring content is protected and available is imperative.
The term “disaster recovery” is closely associated with the IT industry, where it means the continuity of business processes following a catastrophic failure caused by natural disaster or human error. It has long been vital in broadcast, too. The digitization of assets is a major commitment for broadcasters, both financially and logistically, and disaster recovery is an essential part of that commitment. The complexity of the disaster recovery techniques and technologies deployed varies with the size of the failure, or potential failure, being guarded against. When the decision is made to move to a digital archive, how can broadcasters be sure that stored material will still be available in the event of a technological meltdown?
Ensuring that material is always protected in the event of a failure of any size is imperative. SGL FlashNet's scalable architecture means that, regardless of the scope of the disaster recovery requirements, a customer's valuable assets are protected automatically. Rules-based implementations provide fully automated data duplication across multiple storage layers and locations, and the disaster recovery system enables multisite operations to mirror and synchronize data across the globe. If one site becomes inoperative, it can be rebuilt entirely from data replicated at other sites, making automated site redundancy a reality.
How does a broadcaster take the first step into disaster recovery from an archive perspective? Carrying out a failure mode effect analysis (FMEA) is imperative: it examines potential failures at every scale, from the loss of a single disk drive up to complete denial of access to a site, in effect a site meltdown.
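In a conventional FMEA, each failure mode is scored for severity, likelihood of occurrence and difficulty of detection, and the product of the three gives a risk priority number (RPN) used to rank mitigation work. The sketch below illustrates that ranking; the specific failure modes and scores are illustrative examples, not real assessments.

```python
# Minimal FMEA sketch: rank failure modes by risk priority number (RPN).
# Scores are illustrative examples on a 1-10 scale, not real assessments.

def rpn(severity, occurrence, detection):
    """RPN = severity x occurrence x detection (each scored 1-10)."""
    return severity * occurrence * detection

failure_modes = [
    {"mode": "single disk drive failure", "severity": 3, "occurrence": 7, "detection": 2},
    {"mode": "archive server failure",    "severity": 6, "occurrence": 4, "detection": 3},
    {"mode": "tape library failure",      "severity": 7, "occurrence": 3, "detection": 3},
    {"mode": "complete loss of site",     "severity": 10, "occurrence": 1, "detection": 1},
]

for fm in failure_modes:
    fm["rpn"] = rpn(fm["severity"], fm["occurrence"], fm["detection"])

# Highest RPN first: these failure modes get mitigation priority.
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(f'{fm["mode"]}: RPN={fm["rpn"]}')
```

Note that a rare but catastrophic event such as total site loss can score a low RPN; that is why the countermeasures below are structured by failure scale rather than by RPN alone.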
Using the results of the FMEA, a structure can be put in place to counter failures, taking each component in turn. Against small failures, a broadcaster can increase protection by adding more drives and RAID-protected servers. Against the failure of a large component, protection means more than one tape library, redundant disk storage and an infrastructure in which applications are not all hosted on a single server, giving a distributed approach to controlling the software environment.
A broadcaster can also take a multisite distributed approach to its archive. Instead of putting all of its content in one place, it can distribute content to multiple locations and link those sites together. In this case, it's imperative to keep the metadata linked with the content. It's all very well sending content to a remote site, but if the building that houses the core database is lost, the assets stored elsewhere are as good as useless.
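One common way to keep metadata travelling with the content is to write a "sidecar" file alongside each asset before replication, so a remote copy remains identifiable even if the central database is lost. A minimal sketch, with hypothetical file layout and field names:

```python
# Sketch: store a JSON "sidecar" metadata file next to each asset so that
# replicated copies stay usable without the central database.
# The naming convention and metadata fields are hypothetical.
import json
from pathlib import Path

def write_sidecar(asset_path: Path, metadata: dict) -> Path:
    """Write metadata next to the asset as <name>.meta.json."""
    sidecar = asset_path.parent / (asset_path.name + ".meta.json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

def read_sidecar(asset_path: Path) -> dict:
    """Recover an asset's metadata at a remote site from its sidecar."""
    sidecar = asset_path.parent / (asset_path.name + ".meta.json")
    return json.loads(sidecar.read_text())
```

Because the sidecar is replicated with the asset, any site holding the content can rebuild at least a basic catalog without the core database.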
Even a distributed approach to maintaining content across multiple sites, however, has a fundamental flaw: there is still one database at the center. So how do you protect the database? First, build a clustered architecture around it, so that multiple host servers are attached to the storage that holds the database. Second, provide RAID protection for that storage, with the database engines hosted on the clustered servers. By taking this approach, the broadcaster can also create regular backups of its database as part of standard archive schedules.
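Folding database backups into the standard archive schedule can be as simple as copying a timestamped snapshot to storage that is itself archived, and pruning old copies. A sketch under assumed paths and retention policy (nothing here is specific to any product):

```python
# Sketch: timestamped database backups as part of an archive schedule,
# keeping only the most recent N copies. Paths and retention are assumptions.
import shutil
from datetime import datetime
from pathlib import Path

def backup_database(db_file: Path, backup_dir: Path, keep: int = 7) -> Path:
    """Copy the database file to a timestamped backup and prune old copies."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{db_file.stem}-{stamp}{db_file.suffix}"
    shutil.copy2(db_file, dest)
    # Prune: timestamped names sort chronologically, so keep the newest `keep`.
    backups = sorted(backup_dir.glob(f"{db_file.stem}-*{db_file.suffix}"))
    for old in backups[:-keep]:
        old.unlink()
    return dest
```

In practice the backup destination would be a path the archive system itself migrates to tape, so the database snapshot gets the same offsite protection as the content.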
The ultimate disaster recovery scenario is to have two completely separate content management systems, each with its own archive, at independent sites. This becomes intelligent disaster recovery: the broadcaster can set up rules engines at both sites that define which content is transferred between them. Transferring content for disaster recovery purposes can, however, raise rights management issues, so it's important to define how and when material is moved. A model is evolving in which content is pushed in both directions, with site A acting as the redundant site for site B and vice versa.
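Such a rules engine can be pictured as a per-asset filter deciding what each site pushes to its partner, for example excluding material whose rights do not permit offsite copies. A hypothetical sketch (the rule fields and catalog shape are assumptions, not any vendor's API):

```python
# Sketch: a rights-aware replication rule for bidirectional mirroring,
# where site A backs up site B and vice versa. Fields are hypothetical.

def eligible_for_mirror(asset: dict) -> bool:
    """Only replicate assets whose rights allow offsite copies."""
    return asset.get("rights_allow_offsite", False)

def plan_transfers(site_a: list, site_b: list) -> dict:
    """Return the assets each site should push to its partner site."""
    ids_a = {a["id"] for a in site_a}
    ids_b = {b["id"] for b in site_b}
    return {
        "a_to_b": [a for a in site_a
                   if a["id"] not in ids_b and eligible_for_mirror(a)],
        "b_to_a": [b for b in site_b
                   if b["id"] not in ids_a and eligible_for_mirror(b)],
    }
```

Running the plan on both catalogs yields the two transfer lists, so each site converges toward holding a rights-cleared copy of the other's content.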
Digitizing material and creating a digital workflow is just the start; that material must remain protected in the event of a failure of any size. SGL FlashNet is a scalable content storage management system for the broadcast industry, providing resilience, flexibility and adaptability in tailored systems. Regardless of installation size or environment, its clustered architecture and open-system approach deliver secure, future-proof systems that integrate seamlessly. This multilevel protection is unique to SGL FlashNet, providing a completely secure storage environment.
Howard Twine is a product manager at SGL.