Seagate Barracuda S-ATA V
The test machine’s primary specifications are as follows:
After both drives were partitioned and formatted, SiSoftware’s Sandra File System benchmark (2003) was used. Again, the benchmark was run three times on each drive, and the graphs represent the average speeds from the trials. The Seagate SATA drive was moderately faster than the Maxtor EIDE in all tests except buffered write. Overall, the reported speeds from the Sandra benchmark are much higher than the HDTach reported speeds. This is most likely due to differences in the methods of testing. The scores in the Sandra benchmarks demonstrate the burst speeds, while HDTach’s scores are closer to the actual average read/write speeds. Even so, the Seagate SATA drive showed improved performance compared to the Maxtor EIDE.
Lastly, MadOnion.com’s PCMark 2002 Professional benchmark was conducted on both drives for three trials each. Generally speaking, the Seagate SATA drive performed as well as or slightly better than the Maxtor EIDE drive in all but the un-cached write test. In this test, the Seagate SATA drive outperformed the Maxtor EIDE drive substantially. With PCMark’s benchmark, we see speeds closer to those obtained by HDTach, which are most likely closer to actual speeds.
Addict Score: 8/10
We first forayed into Serial ATA technology as the storage medium of the next decade almost five months ago. By now, the first Serial ATA drives, that is, different flavors of the Seagate Barracuda V, are commercially available. Still, there is quite a bit of confusion surrounding Serial ATA technology, particularly with respect to performance benefits over Parallel ATA. To address some of the more obvious issues, we have taken two of the new Barracuda SATA-V drives, in single-drive as well as RAID-0 configuration, and pitted them against the same drive using a parallel interface, as well as against some well-established drives in the field, namely the IBM 120GXP and the Maxtor D740X-6L. As testbeds we used both the ASUS P4G8X (Granite Bay) and the ASUS A7N8X (nForce2) with either the on-board Silicon Image 3112-R controller or a Silicon Image SATA-Link 3112-A PCI controller card.
Some of our results were predictable; others were strongly influenced by the driver version used. Needless to say, the latter point applies to the nVidia IDE controller as well as to both Silicon Image interfaces. In the end, we were not prepared for some of the benchmark results we got.
The last year has stressed the need for a major changeover in hard disk drive technology. The changes go far beyond the implementation of new technologies, since they also involve a redefinition of the market as well as a repositioning of its major players. From a market-share point of view, the main event in the last few months has been the capitulation and consequent sellout of IBM, once, for many good reasons, the holy grail of storage technology. Keep in mind, though, that the technical know-how and the features of IBM drives will live on under the umbrella of Hitachi.
Serial versus parallel cable and connector design. For most users, it is still counterintuitive that the narrow red cable can provide more data throughput than the wide black cable. The same goes for the tiny black SATA connector compared to the large grey IDE connector. The power connectors are similar in size but the traditional white connector appears more solid. Or not?
From a technical standpoint, the main event, not only in the storage industry but ramifying into all branches of the PC and server industry, has been the transition from the classical parallel IDE or ATA interface to the new serial interface. At present, SATA is mostly associated with hard disk drives, but both Oak Technologies and Philips have serial optical drive prototypes, and suffice it to say that we are already looking at some of the new multi-lens DVDs that will enjoy the benefits of SATA technology.
The Narrow Bus Acceptance Challenge
SATA is still somewhat controversial, and the introduction of drives into the market is behind schedule. Aside from technical issues, one of the major challenges with respect to public acceptance of SATA technology has been the somewhat counterintuitive notion that a narrow interface using a serial protocol can be faster than a wide parallel bus with a multiplicity of data lines operating in unison. Unfortunately, this misconception is not just held by the average consumer but is also still widespread among sales personnel in computer retail stores. In other words: if a big cable can't do the job, why should a thin cable suffice?
In short, parallel interfaces are very limited with respect to frequency because of timing skew between the different signals. These skews can be caused by differences in trace length, but also by impurities in the conducting traces, and further by factors as trivial as nicks in a cable. A different description for skew would be signal delays or phase shifts between the different lines. Any serial interface, on the other hand, enjoys the benefit of only a single data line plus reference (or two data lines, in case differential signaling is used). In a full-duplex configuration, that is, separate I/O channels for reads and writes, a total of four differential data lines is used; however, that is still only two lines per channel. Neither scenario is subject to the signal-skew problems that occur on a wide bus. Moreover, the paucity of lines allows a generous amount of conductor to be allocated to each channel, and thus warrants optimal transmission characteristics of the connecting media. The electrical characteristics of the cable, together with the low-voltage-swing differential signaling protocol hitherto reserved for SCSI, are the key factors for understanding the benefits of the Serial ATA interface.

The Performance Confusion
Benchmarking performance and adequately documenting the results has developed into one of the major challenges and controversies in the computer business. Theoretically, any technology that is faster and better should be able to show its benefits across the entire plethora of applications. However, the complexity of today's systems, along with the different performance-enhancing tricks, makes it extremely difficult to figure out what happens at what level within the chain of events. Within the general subject of HDD technology, those tricks encompass prediction algorithms and pre-caching of data as, for example, implemented in the Intel Application Accelerator. The bottom line is that not all that glitters is gold, or rather vice versa: oftentimes, the benefits of new technologies are not as overt as they should be because the older technology has already been tweaked to the max using every trick in the book. This, however, does not imply that there won't be any new headroom, even if it is not possible to demonstrate the improvements immediately with the current tools. Alternatively, it could mean that the current tools are probing the wrong parameters, since they were written specifically to address issues of a different technology.
It will get a bit more complex but we'll go step by step through the different issues involved.
New Interface vs. New Internals
Before going into specifics of HDD benchmarking, there are a few issues that should be cleared up. First, it is necessary to distinguish between internal and interface performance of a drive.
The rugged, corrugated clamshell packaging of the Seagate drives, also called "Seashield", is translucent and reveals the drive within while providing protection against non-operational shock and ESD damage.
Internal performance, or media performance, is exclusively a function of the spindle rotational speed and the data density, with average seek times contributing to the overall media read/write performance. This is best reflected in the well-established differences between outer and inner tracks in terms of linear read/write speed. What it really means, however, is that if the same platters are being used in two drives with either parallel or serial interface, then, by definition, there cannot be any significant difference in sustained read or write performance. Any claims to the contrary would be similar to claiming a square, er, spindle.
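The dependence of sustained rate on track position follows directly from the geometry: outer zones pack more sectors per track, and at a fixed 7200 rpm the media rate is just bytes per revolution times revolutions per second. A minimal sketch (the zone sizes are hypothetical, chosen only to resemble the Barracuda V's outer/inner profile):

```python
# Made-up zone sizes illustrating why outer tracks read faster than
# inner ones at constant spindle speed.
RPM = 7200
SECTOR_BYTES = 512

def media_rate_mb_s(sectors_per_track: int, rpm: int = RPM) -> float:
    """Sustained media rate in MB/s for one recording zone."""
    revs_per_sec = rpm / 60.0
    return sectors_per_track * SECTOR_BYTES * revs_per_sec / 1e6

# Hypothetical outer and inner zone sizes (real drives use many zones):
print(f"outer: {media_rate_mb_s(950):.1f} MB/s")  # ~58 MB/s
print(f"inner: {media_rate_mb_s(480):.1f} MB/s")  # ~29 MB/s
```

Note that the interface never appears in this calculation, which is exactly the point: two drives sharing the same platters and spindle speed must share the same sustained media rate.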
The connectors for the power and data cables leave no doubt about the true identity of the drive. The staggered contacts are on the underbelly of the L-shaped connectors.
A case in point would be comparing, e.g., a Seagate Barracuda SATA V with a Seagate Barracuda PATA V. Both models use the same 8 MB cache, the same internal data pipelines, the same platters, and the same actuators and heads. Moreover, the spindle speed is identical, that is, both drives run at 7200 rpm. Therefore, the only factor that could cause performance differences is the command overhead. This command overhead will be different for PATA and SATA interfaces; however, similar to what we know from the SiSoft Sandra memory benchmark, linear reads that are comparable to streaming data transfers pay relatively little attention to command overheads and mostly show the overall media bandwidth.

System Disclosure
* The nVidia 2.03 drivers feature the new SW IDE drivers 5.10.2600.307 that increase the performance of IDE devices by up to 40% but are also plagued by several compatibility issues. For example, IDE drives show up as SCSI drives, and CD burners do not work with these drivers. We use these drivers as a reference for how much performance can theoretically be achieved with Parallel ATA drives, but we also give the performance using the standard IDE drivers where applicable, since the SW drivers cannot be used in all system configurations.
** The detailed specifications of each drive are posted on the manufacturers' websites. All drives were partitioned into a 3 GB primary partition and an extended partition with a 1.5 GB logical drive for exclusive use as a swap file. The remaining disk space was partitioned into logical drives of approximately 30 GB and formatted using the FAT32 file system.
The most commonly used benchmark measuring the internal speed of HDDs is TCDLab's HDTach "sustained transfer rate". Identical data can be obtained from WinBench99 2.0 "Drive inspection test". Data from both benchmarks are almost indistinguishable from each other except that we found results obtained with WB99 more reproducible and less susceptible to jitter.
HDTach results for the Parallel ATA version of the Barracuda V. Note that the drive used was a 120 GB specimen as opposed to the 80 GB drives used for the SATA interface (see below). The 80 GB version uses three heads on two platters, whereas the 120 GB version uses four heads on two platters. This translates into what is commonly referred to as shortened platters for the 80 GB version, and even though the difference is minimal (we are talking about 26.67 vs. 30 GB per head), it will affect the innermost tracks and, consequently, the average read speed across the platter.
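The per-head figures quoted above are simple division of capacity by the number of active heads:

```python
# Capacity per head (recording surface) for the two Barracuda V models:
gb_per_head_80 = 80 / 3    # 80 GB model, three heads
gb_per_head_120 = 120 / 4  # 120 GB model, four heads
print(round(gb_per_head_80, 2), gb_per_head_120)  # -> 26.67 30.0
```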
HDTach results for the Serial ATA version of the Barracuda V. In this case, the drive had a capacity of 80 GB, using three heads on two platters, resulting in 26.67 GB per head. As a result, the innermost tracks, which are the slowest, are not used, which increases the average read performance. Writes are apparently not affected in the same manner, at least according to the results shown here. On the other hand, it appears as if HDTach bypasses the write cache, since the write performance of all drives is only about half of what it should be. We will return to this issue towards the end of this review.
HDTach results for the IBM 120 GXP (123 GB total capacity) for comparison. Both sustained reads and writes are higher, as is the burst speed, while the random access time is lower. For the record, this particular screenshot was obtained on a different system using the Maxtor (Promise PDC20269) ATA133 PCI controller card. The drive characteristics are the same (except for the CPU usage, which is driver dependent).
One major difference between the WB99 and HDTach benchmarks is that HDTach measures across the entire platter from the outer to the innermost tracks and, further, that it is very sensitive to any data already present on the drive. That is, in most cases, a freshly "Low Level"-formatted drive will show a smooth arc of the sustained read / write performance from the outer to the innermost tracks whereas any written LBAs will cause a sharp performance valley. This is especially true after defragmenting the drive. WB99 on the other hand, appears to only measure across the primary partition.
HDTach results for a SATA RAID0 configuration using the onboard Silicon Image controller with a default chunk size of 16K. We tried all different chunk sizes from 2K to 128K with 16K as the default giving the best results. Chunk sizes of 32K and above caused a dramatic drop in overall sustained transfers. Note that the first 20 GB are reported way below their theoretical cumulative performance. Likewise, the burst speed could be higher. There are different possibilities, starting from a bottleneck in the controller to simply misreporting of the transfer rates and all other possibilities in between.
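For readers unfamiliar with how the chunk (stripe) size maps requests onto the two drives, here is a small sketch; the member numbering and 512-byte sector size are assumptions for illustration:

```python
def raid0_member(lba: int, chunk_sectors: int, n_drives: int = 2) -> int:
    """Which member drive of a RAID-0 set holds a given LBA."""
    return (lba // chunk_sectors) % n_drives

CHUNK = 16 * 1024 // 512   # 16K chunks = 32 sectors of 512 bytes

# A sequential 64 KB read (128 sectors) alternates between both members,
# which is what lets two drives add up their outer-track bandwidth:
print([raid0_member(lba, CHUNK) for lba in range(0, 128, CHUNK)])  # -> [0, 1, 0, 1]
```

With larger chunks, short sequential reads land entirely on one member and only one drive's bandwidth is available, which is consistent with the drop we saw at chunk sizes of 32K and above.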
WinBench99 Ver. 2.0 disk inspection test
The same configuration as above measured in WinBench99 2.0 shows an average transfer rate of 88.6 MB/sec across the primary partition (6 GB) which is almost exactly the sum of two single drives at the outermost tracks. This result rather clearly demonstrates that the HDTach results are not accurately reporting RAID-0 performance here. On the other hand, WB99 2.0 had some problems with SATA, too, in that the cyclic redundancy checking (CRC data verification) often caused time-out of the benchmark.
PCMark2002 Disk Test
A commonly (ab)used benchmark is FutureMark's PCMark2002. After looking at the detailed results, we feel that this benchmark does not qualify for assessing the performance of any drive.
The results show the disk performance of the Barracuda SATA V in a single drive configuration. Quite honestly, the results are nothing short of absurd.
To sum this up, the importance of media or platter speed and its contribution to different benchmarks should be clear at this point. Moreover, everybody should be aware of certain caveats that apply to different benchmarks, meaning that if the results cannot be repeated or confirmed with different applications, they may not be valid. Time to move on to what is really on the plate: interface performance.

Interface Performance
Different interfaces specify different transfer rates, the most current specs being UATA/66, 100, and 133, with the last implemented only by Maxtor. At the risk of being redundant: the suffix designates the maximum transfer rate that can be achieved by bursting from the disk's cache. The burst rate depends on the interface frequency, which in the case of UATA-100 is a 25 MHz clock and in the case of UATA-133 a 33 MHz clock, using a DDR protocol for 50 and 66 Mbps (megabit per pin per second), respectively. The bus interface itself is 16 bits (2 bytes) wide, and width multiplied by transfer frequency gives the overall peak bandwidth.
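The arithmetic behind those peak numbers is straightforward; a quick sketch:

```python
def pata_burst_mb_s(clock_mhz: float, bus_bytes: int = 2, ddr: bool = True) -> float:
    """Peak burst rate = bus width in bytes * transfers per second."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)  # DDR: both clock edges
    return bus_bytes * transfers_per_sec / 1e6

print(pata_burst_mb_s(25))    # UATA-100: 2 bytes * 50 MT/s -> 100.0 MB/s
print(pata_burst_mb_s(33.3))  # UATA-133: 2 bytes * 66.6 MT/s -> ~133 MB/s
```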
As outlined above, the Serial ATA interface uses only one pair of data lines each for reads and writes. This means that at a signaling rate of 1.5 Gbit/sec, both lines have to operate at 1.5 GHz. Since there are only two lines, they can be shielded, and furthermore, skew, that is, the phase shift between the two lines, can be controlled easily.
One inherent risk of any serial protocol is that the receiver adapts to prolonged signal strings of 0000 or 1111, which can introduce a bias or baseline shift. SATA avoids this through 8b/10b encoding: instead of the standard 8 bits per byte, every byte is transmitted as a 10-bit symbol chosen to remain DC-balanced and to guarantee frequent signal transitions, which also provides reliable synchronization points for the embedded clock. This, however, means that the 1.5 Gbit/sec translates into 150 MB/sec rather than 187.5 MB/sec. In theory, SATA is still faster than Parallel ATA, but keep in mind that this burst rate only applies to transfers from and to the disk's cache.
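The 10-bits-per-byte overhead is easy to verify:

```python
LINE_RATE = 1.5e9  # SATA signaling rate in bits per second

naive   = LINE_RATE / 8 / 1e6   # if every line bit carried payload: 187.5 MB/s
payload = LINE_RATE / 10 / 1e6  # 8b/10b: 10 line bits per payload byte: 150.0 MB/s
print(naive, payload)  # -> 187.5 150.0
```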
Different Communication Protocols in Parallel ATA and SATA
Third Party Nature of the Parallel ATA DMA Scheme
One issue more important than the actual burst rate is the communication between the drive and the host bus adapter (HBA). Parallel ATA suffers from several severe limitations, mostly caused by the fact that the drive itself is entirely passive when it comes to the initiation of data transfers. In short, there are two possibilities for Parallel ATA drives to execute commands.
The first scheme is the immediate execution of the command, however, this has the severe disadvantage that the drive will occupy the bus during the entire time it takes to read the data off the media (platter) and transfer them to the controller. This interval includes seek times and rotational latencies. In case the data are already in the cache, this problem is less of an issue.
The alternative is that the drive defers the execution of the command to a later point, or in trivial terms, the drive accepts the command, then disconnects from the bus in order to collect all data that are asked for, move them into the cache, and as soon as it is ready to transfer, reconnects to the bus.
The main problem in this case is that the drive itself has no authority to arbitrate for the bus; all it can do is set the SERVICE bit as a flag to the controller that it is ready for data transfer. The controller does not see the SERVICE bit until it next polls the device to check for status changes. After detecting the SERVICE bit, the host sends a SERVICE command to start the data transfer. In technical parlance, this means that the device is non-deterministic, and the transfer mode is referred to as the third-party nature of the Parallel ATA DMA transfer scheme. What is important here are the command overhead (all commands have to be issued twice in different formats) and the latencies involved. That is, the drive may be ready while the controller is busy polling the other devices (since up to four devices are possible). In theory, this could make a difference, especially if master and slave devices are present on the same channel. In practice, the polling latencies appear short enough compared to the mechanical latencies involved to make little difference, regardless of whether one or two devices share a channel.
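A back-of-the-envelope model shows why the polling penalty mostly disappears behind the mechanics; the polling interval here is a made-up number for illustration, not a measured driver property:

```python
# Toy model: if the host checks device status every poll_us microseconds,
# a drive that raises its SERVICE bit between polls waits, on average,
# half a polling interval before the host even notices.
def avg_added_latency_us(poll_us: float) -> float:
    return poll_us / 2.0

# Half a revolution at 7200 rpm is the average rotational latency:
rotational_us = 60 / 7200 / 2 * 1e6   # ~4167 us

# Even a (hypothetical) 100 us polling interval adds only ~1%:
print(avg_added_latency_us(100) / rotational_us)  # ~0.012
```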
If multiple commands are sent to the drive and the drive has to service these multiple outstanding commands, another limitation of Parallel ATA drives surfaces: the SERVICE bit, which is simply a logical true on a single line among the 40 Parallel ATA signal lines, cannot, by definition, contain any information about which command is going to be serviced. As a result, the host does not know which set of data is going to be transferred until the actual data transfer starts. This piece of information, however, is critical to set up the correct DMA channel, but it has to be handled "after the fact" in Parallel ATA. In conventional home-office desktop applications, this is somewhat negligible, since in most cases the host will issue a single command at a time.
Most of the Parallel ATA limitations just mentioned have little if any bearing in a single-host, single-drive configuration when running non-multi-threaded software. However, with more complex applications running in parallel or, for example, multithreaded software like audiovisual rendering programs, where separate streams of sound and video data are handled quasi-simultaneously, the command overhead of Parallel ATA may become a problem.

First Party DMA in SATA
One of the new buzzwords is First Party DMA (FPDMA) in SATA, which is often thrown in and never explained, even though it is rather simple: compared to the non-deterministic role of Parallel ATA devices, Serial ATA drives can decide by themselves which data to transfer and when. That is, any transfer starts with the host sending a Read FPDMAQueued command, after which it disconnects from the bus. A SATA drive is capable of actively reconnecting to the bus, meaning that it does not have to wait for the controller's next poll. Because the SATA drive itself is deterministic, it can issue a DMA Setup Frame Information Structure (FIS). This FIS contains the information about the nature of the data and the memory location the data are supposed to go into, which was included in the FPDMAQueued(Ext) FIS originally issued by the host. To make a long story short, SATA uses a streamlined protocol without the 100% command overhead of queued Parallel ATA or the bus occupancy of the immediate service mode.
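As a rough illustration of that protocol overhead, here is a toy count of bus transactions per queued read; these are simplified tallies for the sake of the argument, not real trace data:

```python
# Queued Parallel ATA: command issued, status polled until the SERVICE
# bit shows up, then a second SERVICE command before data can move.
def pata_queued_messages(polls_until_ready: int) -> int:
    return 1 + polls_until_ready + 1

# SATA FPDMA: one queued-read FIS from the host, one drive-initiated
# DMA Setup FIS when the data are ready -- no polling in between.
def sata_fpdma_messages() -> int:
    return 1 + 1

print(pata_queued_messages(polls_until_ready=3))  # -> 5
print(sata_fpdma_messages())                      # -> 2
```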
How Does it Translate Into Performance?
Theory is theory and practice is practice. In addition, there are driver issues to be taken into consideration that can optimize performance while cutting down on CPU usage, especially with respect to the IDE interface. Likewise, prefetching algorithms can be applied to enhance IDE performance by speculatively loading data into the drive cache. However, it is possible to make certain predictions regarding the outcome of the various benchmarks. That is, any benchmark that either relies on the internal drive performance or measures data transfers in large chunks, like those typical of content creation applications, will not show much of a difference between equivalent drives that differ only in their interface.
Differences will become overt in workloads consisting of random accesses with transfers of small files that can burst from the cache. In this case, the streamlined command overhead contributes to the overall performance. Examples are mostly business applications, database accesses, and the like.
WinBench99 2.0 scores using the onboard Silicon Image RAID controller in single Barracuda SATA V configuration. Scores shown are Business Disk Winmark99 (blue) and High-End Disk Winmark99 (red), depending on which drivers were used and whether HyperThreading was enabled. The drivers in question are the standard "Base" drivers, which are complemented by some new SiI Filter drivers. The data shown here are averages of a minimum of 5 runs and are meant only as a preamble to illustrate the complexity of what we are dealing with. Results with HT enabled and base drivers only were lower than those shown with HT disabled, but there is no sense in beating a dead horse. The bottom line is that the devil is in the drivers, which can be used to show a performance increase of 100% in certain applications; conversely, their omission will have a strong negative impact on the outcome of the benchmarks. There is more to that story, though, as we will show in the following.
Currently, the single most powerful platform for AMD processors is the nForce2 platform. Aside from the independent dual memory controllers, the MCP-T offers what is currently probably the most powerful and efficient IDE controller, featuring support for UATA 133 as a side dish. It may appear somewhat unfair to pit the newcomer Serial ATA against such a formidable opponent, but that is exactly what is needed to establish some of the performance rankings. Suffice it to say that we could have used any other chipset as well, but the nForce2 chipset with its different IDE driver sets provides an excellent basis for performance comparison across what is possible and what is reasonable in Parallel ATA.
Winbench99 Business Disk Winmark99
We already showed the impact of the Silicon Image Filter drivers on Winbench99 Diskmark performance on the last page. We also mentioned earlier that the Intel Application Accelerator might wash out some of the performance differences by using prefetching mechanisms/algorithms. One way around this problem would have been to avoid loading the IAA optimizations, however, in some ways, this would have been similar to driving a car on the spare tire.
Just when we thought we'd seen it all: installing the Filter drivers results in more than a doubling of the SATA performance. These numbers are real. Please also note the scores achieved with the Maxtor D740X and the IBM 120 GXP drives which, looked at in isolation, would stomp anything else out there, courtesy of the latest nForce drivers released only last week. For the record, with the new SW IDE drivers as part of the 2.03 chipset driver package, we saw up to a 40% performance increase over the standard drivers. This, however, only underscores the stellar scores achieved with the SATA drive(s) in either single or RAID 0 configuration. Note how much the new IDE drivers profit from the larger cache of the parallel Barracuda ATA V.
SIBase, SIFilt: single Barracuda SATA V drive using either the base drivers alone or upgraded to the latest filter drivers; S-RAID: SATA RAID 0 with either Base or Filter drivers. The * denotes the use of the original nVidia IDE drivers (6.1.2600.00, turquoise columns) that deliver almost identical performance to the Microsoft drivers, only at lower CPU usage.
Winbench99 High-End Disk Winmark99
As mentioned earlier, business applications running from the cache will be the prime beneficiaries of the SATA protocol as well as of the larger (8MB) cache whereas the differences in content creation applications involving the movement of large files would erode, courtesy of the limitations in internal performance.
In the High-End domain, raw bandwidth is what counts most, therefore, it is no surprise that the RAID 0 configurations take the lead. Still, the single Barracuda SATA V maintains a solid lead over every other drive in the competition as long as the Filter drivers are used.
The filter drivers provide for a healthy performance boost in single or RAID 0 configuration.
These numbers are off the scale of what is actually possible and most likely we are looking at total bus saturation. We only report these scores for reasons of completeness but otherwise we will disregard them as misrepresentation.
Microstation SE is, once again, dominated by the SATA drives.
Photoshop 4.0 has always been a domain of the IBM drives in the past.
Premiere 4.2 is dominated by the cache size or other internal drive performance characteristics.
Same as above, the differences between the base and filter drivers and between the Parallel and Serial Barracuda ATA V are not relevant.
The only category within WinBench99 where the SATA drives fall behind.
WinBench results give some idea of what is going on, especially with respect to the dependence of interface performance on the drivers. Keep in mind that the new nVidia IDE drivers are only supported under Windows XP with Service Pack 1 and will not work with all devices.

Business Winstone 2002
In the past, one of our major criticisms about ZiffDavis or eTesting Labs benchmarks has been that the scores showed more dependence on the disk I/O interface than on CPU performance. On the contrary, we found Bapco Sysmark rather insensitive to HDD performance which makes it a great benchmark for CPU and memory performance but a very poor tool to assess drive performance. ZDLabs ContentCreation Winstone appears to be somewhere in between. By extension, the different iterations of Business Winstone appear very suitable as sanity checks for the other benchmarks we were using.
Business Winstone2002 is clearly dominated by the SATA drives, however, the filter drivers apparently cause a slight performance hit. We observed the same effect in Content Creation Winstone2002 and 2003 but to a smaller degree (0.1 points difference).
One issue we have not touched upon yet, but which is probably important for anybody doing video editing and similar content creation work, is a simple file copy. We used a folder containing the following sub-folders and files:
Total Folder size was 307 MB (322,780,099 Bytes)
The entire folder was copied from one drive to another. After each copy, the folder was deleted from the target drive, the recycle bin was emptied and the system cold rebooted to avoid any impact of resident memory data on the copy process. All numbers shown were manually stopped runtimes in seconds, meaning that there is a fair amount of inaccuracy. We further rounded the numbers to the nearest integer but we feel that, at least within the granularity offered, the benchmarks reflect the performance of the given configuration.
Runtime in seconds for the copy of 307 MB from one physical drive to another. Shorter is better. In all cases where the IBM 120 GXP was the target drive, copy times were between 10 and 11 seconds, which we believe is a consequence of the smaller cache (2 MB vs. 8 MB in all other drives used). If the 120GXP was used as source (master) with the Barracuda ATA V (slave) on the same channel, we noticed that the overall runtime was still slightly longer than if we used SATA-to-PATA transfers or else SATA-to-SATA. This seems somewhat logical: since the same cable is used for both reads and writes, and since Parallel ATA in its current form does not feature any point-to-point topology, the data have to pass through the controller and back. Since it is a total of 2 x 307 MBytes, the amount of data passing through the cable is roughly a sustained 80 MB/sec. That is without counting any command overhead, and it explains the slightly longer time to complete the copy process.
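A quick check of the shared-cable figure; the runtime used here is an assumed value consistent with a copy slightly longer than the fastest 7-second runs:

```python
FOLDER_BYTES = 322_780_099            # the 307 MB test folder
mb = FOLDER_BYTES / (1024 * 1024)     # ~307.8 MB

# Master and slave on one channel share the cable, so both the read and
# the write stream cross it during the copy: twice the folder size.
cable_mb = 2 * mb
seconds = 7.7                         # assumed runtime, slightly over 7 s
print(round(cable_mb), round(cable_mb / seconds))  # -> 616 80
```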
If we do the math, we see that 307 MB can be copied in 7 seconds, which corresponds to approximately 44 MB/sec copy speed. A copy includes reads and writes, and this brings us back to the numbers reported by HDTach for the average write performance across the platter, which was in the low to mid twenties, whereas what we are seeing here are sustained writes at 44 MB/sec. Keep in mind that these numbers are not a benchmark but a workload with a known file size and transfer time, meaning that there is hardly any possibility for error.

The Bitter Pill
All File copy results shown on the last page were obtained with either the nVidia SW IDE drivers or else with the Silicon Image base drivers. When we used the Filter drivers and copied from one SATA drive to any other Parallel ATA or SATA drive, run times increased from 7 seconds to roughly 17 seconds. Increasing system memory to 1GB dropped the copy time back to 13 sec. Interestingly, only reads were affected by the SI Filter drivers. To be fair, though, we have to consider that these filter drivers are the first beta version and only give an idea of what is going to be possible with more mature driver software. Maybe we are a bit optimistic but it is possible that we will see driver updates that will bring us the best of the two worlds meaning the top scores obtained with either base or filter drivers. In any event, we hear that a fix is on the way already.
On a side note, using the original nVidia IDE drivers, the copy time from the IBM 120GXP to the parallel Barracuda ATA V increased to 10 seconds. The other way around, that is with the 120GXP as target, the copy time did not increase over the original 10 sec. At first, this appears counterintuitive but it does make sense in that the two bottlenecks, that is, the write cache and the parallel bus now match each other but do not add on top of each other.
After about two weeks' worth of benchmarking Serial and Parallel ATA drives in every hardware and software configuration thinkable, battling benchmarks and installing/uninstalling drivers and service packs, not to mention the time spent restoring a virgin state of the drives using zero-fills, we have a variety of results and conclusions. Let's go through them step by step.
Even with an external controller chip on the mainboard, that is, without integration of the SATA controller into the south bridge, Serial ATA appears to take off in terms of performance, which in some instances resulted in 338% of the baseline score. Some of these enhancements, particularly in business or database applications dealing with transfers of small files, can be attributed to the streamlined command or "frame information" structure and the associated reduction of latencies; others are driver issues. Still, a doubling of the best Parallel ATA scores sends a rather strong message.
Most current drives do not yet reach the specified interface speed of 1.5 Gbit/sec; however, a large portion of this can also be attributed to either the controller or a bus bottleneck further downstream, such as the PCI bus. Unfortunately, at the time of testing, we had not been able to get our hands on a PCI-X controller card to really challenge the performance of SATA drives in RAID 0 configuration.
Overall, there is no turning back to parallel drives. Bridge adapters are already available, even though it remains to be determined whether the current generation of adapters will work with conventional, existing optical storage media such as CD and DVD-ROM drives. Suffice it to say that the next generation of optical drives using a serial interface is already around the corner; we have held them in our hands, so we know.
Copyright © 2002