..:: Introduction ::..
With the release of Intel’s i875P “Canterwood” and i865PE “Springdale” chipsets earlier this year, Pentium 4 systems reached new performance heights, furthering Intel’s lead over the Taiwanese chipset manufacturers. From that release up until now, Intel has held the top spot for high performance, owing to a lack of competitive products coming out of Taiwan. With the legal disputes between Intel and VIA finally out of the way, VIA has come to market with its own single channel chipset, though its dual channel chipset has yet to arrive. All the while, the only real dual channel alternative has been SiS’ own 655 chipset, which suffered from limited FSB and DRAM frequency support. Today, we will be taking a look at SiS’ latest chipset, the 655FX, featuring full support for 800MHz FSB Pentium 4 processors, dual channel DDR400, native Serial ATA, and more. Before we get to the performance aspects of the SiS 655FX chipset, let us first take a glance at some of the advancements and technologies behind it.
..:: SiS Hyper-Streaming Technology ::..
It wasn’t too long ago that SiS began developing and adding a new feature to its latest high-end chipset lines, dubbed Hyper-Streaming Technology. This new “Hyper-Streaming Engine,” or HSE, has been implemented in the latest chipsets from SiS for both Intel and AMD platforms, such as the 655FX. The Hyper-Streaming Engine itself consists of four separate aspects that attempt to lower overall latencies on the system busses, pipeline execution and data transfers, prioritize the various execution commands, and intelligently sort data stored in RAM. According to SiS, Hyper-Streaming Technology “makes streams of data flow all over the paths efficiently, concurrently, smoothly, and intelligently.” The four core aspects handled within the Northbridge itself are Smart Arbitration, Split Transaction, Pipelining, and Concurrent Execution.
..:: Smart Arbitration ::..
If we take a look at a graphical layout of the Hyper-Streaming Architecture, we can see that it is utilized throughout the core busses of the system, in a diagram reminiscent of the breakdown of a modern microprocessor’s processing stages. The first step in this process is “Smart Arbitration.” Smart Arbitration governs how the chipset assigns priority to the various data streams coming in from system busses that do not operate on the same protocol as the Hyper-Streaming Engine. The Smart Arbitration process identifies the specific type of data stream entering the chipset and assigns it a priority based on what type of data or request it is, how continuous the stream is, and the waiting time its processing will induce. Items such as response requests are assigned a higher priority because they have a direct effect upon the waiting time of the CPU and other system devices. In short, Smart Arbitration schedules the commands and other data entering the Northbridge, thereby improving bus utilization and efficiency.
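The scheduling idea behind Smart Arbitration can be sketched as a simple priority queue. This is a toy Python model, not SiS’ actual hardware logic; the request types and priority values are hypothetical, chosen only to show why a latency-sensitive response request would be granted the bus ahead of earlier, lower-priority traffic.

```python
import heapq

# Hypothetical request classes; lower number = served first.
# A CPU response request gets top priority because the CPU is
# stalled waiting on it, as described in the text.
PRIORITY = {"cpu_response": 0, "isochronous": 1, "dma_read": 2, "bulk_write": 3}

class Arbiter:
    """Toy model of priority-based arbitration at the Northbridge."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves arrival order within a priority

    def submit(self, kind, payload):
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, kind, payload))
        self._seq += 1

    def grant_next(self):
        # Pop the highest-priority (lowest-numbered) pending request.
        _, _, kind, payload = heapq.heappop(self._heap)
        return kind, payload

arb = Arbiter()
arb.submit("bulk_write", "frame buffer")   # arrives first...
arb.submit("cpu_response", "cache line")   # ...but this is granted first
arb.submit("dma_read", "NIC buffer")
print(arb.grant_next())  # ('cpu_response', 'cache line')
```

Even though the bulk write arrived first, the response request jumps the queue, which is exactly the behavior the arbitration scheme is after.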
..:: Split Transaction ::..
Split Transaction is a process that is very easy to understand, and only takes a moment to explain. When a request from some arbitrary device is sent along the bus, the bus ordinarily cannot be utilized by any other device until a response is sent back to the originating device, since the bus is “occupied.” With split transactions, other devices are able to utilize the system bus while the previous request is being processed or the data is being fetched from memory. This lowers latencies throughout the system thanks to less waiting time, and also allows for better overall bus utilization.
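A little arithmetic makes the benefit concrete. The timings below are invented for illustration (they are not SiS figures): assume issuing a request or returning a response each occupies the bus for 1 unit of time, while the memory lookup in between takes 10 units.

```python
# Assumed timings, in arbitrary units (illustrative only).
REQUEST, RESPONSE, MEMORY_LATENCY = 1, 1, 10
N = 4  # number of outstanding requests

# Occupied-bus model: the bus is held from request until the
# response returns, so the memory latency counts as bus time.
occupied = N * (REQUEST + MEMORY_LATENCY + RESPONSE)

# Split-transaction model: the bus is released during the memory
# lookup, so only the request and response phases occupy it.
split = N * (REQUEST + RESPONSE)

print(occupied, split)  # 48 8
```

Under these made-up numbers, four requests tie up the bus for 48 units without splitting but only 8 units with it; the other 40 units are free for other devices, which is the utilization win the text describes.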
..:: Pipelining Transactions / Concurrent Execution ::..
Many of you are likely already rather familiar with the idea of pipelining, as it is a feature used in all of today’s modern superscalar microprocessors. The SiS 655FX chipset adopts this mode of operation for the transactions that pass through it. For example, let’s say that you want to run two transactions, each consisting of 6 stages, through the chipset. In a non-pipelined system, this would take 12 units of time to complete, whether those are clock cycles or some other time measure. If we were to pipeline these two transactions, offsetting one by a single unit of time, we would shrink the processing time for the pair to 7 units. Concurrent execution can cut total processing time down further still. In a pipelined situation, each transaction begins one unit of time after the previous one; with concurrent execution, the transactions can be staggered so that multiple transactions are processed at once, further reducing total processing time. If you aren’t sure what this means, take a look at the image above for a graphical representation.
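The timing figures above follow from a couple of one-line formulas. This is a simplified model we put together for illustration (the `concurrent_time` width parameter is our own assumption about how many transactions might enter processing per unit of time, not a documented 655FX figure):

```python
import math

def nonpipelined_time(n, stages):
    # Each transaction must finish before the next begins.
    return n * stages

def pipelined_time(n, stages, offset=1):
    # Each subsequent transaction starts `offset` units after the previous.
    return stages + (n - 1) * offset

def concurrent_time(n, stages, width):
    # Hypothetical model: `width` transactions enter processing together
    # per unit of time, so the staggering cost shrinks accordingly.
    return stages + math.ceil(n / width) - 1

print(nonpipelined_time(2, 6))   # 12, as in the text
print(pipelined_time(2, 6))      # 7, as in the text
print(concurrent_time(2, 6, 2))  # 6: both transactions start together
```

With the article’s two 6-stage transactions, pipelining drops the total from 12 units to 7, and letting both start concurrently (width 2 in this toy model) trims it to 6.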