The speed difference between processors and memories has become one of the biggest problems in designing memory systems. While it primarily limits fast sequential access to data in memory, it also constrains efficient instruction fetch. In computers using single-threaded processors the latter problem has traditionally been partially solved with instruction caches, but in fast multithreaded processors supporting a large number of threads the problem is more difficult, because each thread can execute the program from a unique address (MIMD-style) or all threads can access the same location synchronously (SIMD-style). In this paper we propose two cacheless instruction fetch mechanisms for multithreaded processors, composed of an interthread pipelined instruction fetch unit and a banked instruction memory module using randomized hashing, combining and partitioning. The proposed mechanisms, along with two reference mechanisms based on direct-mapped and T-way set-associative caching, are evaluated by simulation in a T-threaded case. According to our evaluation, the proposed mechanisms solve the speed difference problem efficiently and provide clearly better performance than the reference solutions.
|Title of host publication||6th WSEAS World multiconference on circuits, systems, communications and computers: proceedings|
|ISBN (Print)||960-8052-63-7, 978-960-8052-63-5|
|Publication status||Published - 2002|
|MoE publication type||A4 Article in a conference publication|
|Event||6th WSEAS International Conference on Computers 2002 - Rethymnon, Greece|
Duration: 7 Jul 2002 → 14 Jul 2002
|Conference||6th WSEAS International Conference on Computers 2002|
|Period||7/07/02 → 14/07/02|
- multithreaded processors
- instruction memory
- instruction fetching mechanisms
Forsell, M. (2002). Cacheless instruction fetch mechanism for multithreaded processors. In 6th WSEAS World multiconference on circuits, systems, communications and computers: proceedings (pp. 150-155). WSEAS Press.