Figure 1. SAN and NAS use different protocols and transports.
Storage-area networks (SANs) are composed of computers and remote storage devices. The computers are typically connected to the remote storage devices using SCSI over Fibre Channel (see Figure 1). Other implementations of SAN exist, but this is the most common. In a SAN, all the storage appears local, just as if the remote disk were directly connected to the computer and physically located inside the computer chassis.
Network-attached storage (NAS) devices appear to the user as a remote drive letter or a named remote storage device. Typically, the operating system employs a protocol such as Network File System (NFS) or Common Internet File System (CIFS) to discover, log in to, and transfer content to and from a storage device. NFS and CIFS both communicate over Ethernet. The user typically enters a username and password and is then granted access to a particular device.
The SAN and NAS storage schemes evolved to meet different needs. Possible benefits of SAN include access to large amounts of data; sharing data among different applications on different computers; real-time or near-real-time access to data updates; legacy support for SCSI devices; high transfer speeds; and avoidance of the network congestion common with Ethernet.
Possible benefits of NAS include relatively simple user configuration; compatibility with existing username/password access systems; and compatibility with legacy networking and server-sharing systems.
In many cases, either scheme can now meet all these needs. But, in earlier implementations, the distinction between SAN and NAS was useful.
For initial installation and configuration, SAN usually requires some specialized knowledge of network hardware, such as how to install the appropriate SCSI drivers and Fibre Channel card. You should also know how to configure your Fibre Channel network properly. Once the installation and configuration are complete, access, administration and authentication are all handled in the background. Access to the knowledge needed to build a SAN system is usually not a problem given that most SAN installations are part of a larger system involving a vendor that can assist in the initial setup.
NAS typically does not require specialized hardware knowledge, although familiarity with Ethernet is a plus. However, for a system administrator, getting all the users' computers to recognize a NAS across different operating systems and different versions of the same operating system can be a real challenge. Installing a NAS can be as simple as unpacking the device, plugging it in and attaching a network cable. Vendors have done an excellent job of programming these devices so that when they first power up, they recognize their operating environment and do much of the configuration themselves. Ninety-five percent of the time these devices work straight out of the box. That said, with a moderately complex network you should expect some challenges. Networks that could cause problems include those that use manually assigned IP addresses, have internal firewalls, or implement complex routing based upon different protocols. For a more complex network, you might be better off purchasing a higher-end NAS system from a well-known manufacturer. It will probably provide a “smarter” NAS box that is more likely to work in your environment. In addition, such systems typically come with better product support. With a complicated network, you may need it.
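When a NAS fails to show up on a complex network, a quick first step is to check whether its file-sharing services are reachable at all. The sketch below is a minimal, hedged example using only Python's standard library; the hostname "nas.local" is a placeholder for your device's actual address, and the port list covers the common NFS and CIFS/SMB services mentioned above.

```python
import socket

# Common NAS file-sharing service ports: NFS (2049), SMB/CIFS (445),
# and the older NetBIOS session service (139).
NAS_PORTS = {"NFS": 2049, "SMB/CIFS": 445, "NetBIOS": 139}

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe_nas(host):
    """Report which common file-sharing services answer on the given host."""
    return {name: reachable(host, port) for name, port in NAS_PORTS.items()}

if __name__ == "__main__":
    # "nas.local" is a hypothetical hostname; substitute your NAS's address.
    for service, ok in probe_nas("nas.local").items():
        print(f"{service}: {'open' if ok else 'unreachable'}")
```

If a port that should be open shows as unreachable, an internal firewall or routing rule between the client and the NAS is a likely suspect.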
Table 1. Connecting a NAS server to an inexpensive Ethernet switch may severely limit its performance.
Be particularly aware of where you plug the NAS system into the network. While you can plug a NAS box into any Ethernet connection, it is not wise to do so. The NAS should be connected at a point in the network you are sure will have sufficient bandwidth to support the traffic the NAS will generate. Example: If you connect the NAS box to a $78, 10-port Ethernet switch, it may not work very well (see Table 1). Low-cost Ethernet switches do not have sufficient backbone capacity to provide full bandwidth to all ports at the same time. A 100Base-T Ethernet switch might have a throughput of only 200Mb/s. Once you subtract the overhead, the actual available throughput is somewhere around 130Mb/s. If the load is shared among the 10 ports, each port has only about 13Mb/s available. NAS performance will suffer if it is limited to 13Mb/s. On the other hand, if you connect the NAS device to a nonblocking Ethernet switch, which can deliver 100Mb/s (70Mbit/s after subtracting overhead) to all ports simultaneously, then the NAS will be able to deliver data at its maximum performance limit, and the switch will not limit the speed.
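The arithmetic above can be sketched in a few lines. The overhead fractions below (roughly 30-35 percent) are assumptions chosen to match the article's figures, not measured values for any particular switch.

```python
def per_port_bandwidth(backplane_mbps, overhead_fraction, active_ports):
    """Usable bandwidth per port when the switch backplane is shared."""
    usable = backplane_mbps * (1 - overhead_fraction)
    return usable / active_ports

# Low-cost 10-port 100Base-T switch: ~200 Mb/s backplane, ~35% overhead.
# 200 * 0.65 = 130 Mb/s usable, shared across 10 ports = 13 Mb/s each.
low_cost = per_port_bandwidth(200, 0.35, 10)

# Nonblocking switch: each port gets its own full 100 Mb/s;
# only protocol overhead (~30% assumed here) is subtracted.
nonblocking = per_port_bandwidth(100, 0.30, 1)

print(f"Low-cost switch:    {low_cost:.0f} Mb/s per port")
print(f"Nonblocking switch: {nonblocking:.0f} Mb/s per port")
```

The difference between 13 Mb/s and 70 Mb/s per port is why the choice of switch, not the NAS itself, can become the performance bottleneck.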
Maintainability is a key factor in selecting any shared storage device. The choice between SAN and NAS is a matter of preference. As a general rule, you maintain SAN systems through the SAN device's operating system. Maintenance tools tend to be powerful and reasonably well documented, but may be command-line-based. Some SAN vendors have developed nice GUI-based maintenance tools. If you are comfortable with the command line of your operating system and don't mind getting under the hood, then you will probably find that SAN systems are straightforward and easy to maintain.
Typically, you maintain NAS systems through a Web interface. NAS systems tend to be relatively simple to maintain unless you are using them on a complex network. While NAS maintenance interfaces are generally complete, occasionally I have found that there were things I could do on a SAN that I could not do on a NAS. So the ultimate choice is yours. SAN and NAS have both matured significantly over the past few years. Both provide users with a way to access and share content, but the two use fundamentally different approaches to facilitating access to shared storage.
Many excellent tutorials are available on SAN and NAS. Just do an Internet search on “SAN or NAS tutorial.” If you need information on their implementation, search for “SAN or NAS how to.” These articles give practical information on installing and configuring these devices.
Brad Gilmer is executive director of the AAF Association and the Video Services Forum. He is also editor in chief of the “File Interchange Handbook.”
To order Brad Gilmer's book, “File Interchange Handbook for Images, Audio and Metadata,” from Focal Press, visit www.focalpress.com or call 800-545-2522. The book is also available from most major booksellers.
Send questions and comments to: email@example.com