Aca 2

System Attributes to Performance 22/9/2012

Transcript of Aca 2

Page 1: Aca 2

System Attributes to Performance

22/9/2012

Page 2: Aca 2

The CPU/Processor is driven by a clock with a constant cycle time (τ), in nanoseconds.

Clock Rate: f = 1/τ, in megahertz.

Ic (Instruction Count): the size of the program, i.e. the number of machine instructions to be executed in the program.

Different machine instructions need different numbers of clock cycles to execute.

CPI (Cycles per Instruction): the number of clock cycles needed to execute each instruction.

Average CPI: the CPI averaged over a given instruction set.

Page 3: Aca 2

Performance Factors:

CPU Time (T): the time needed to execute a program, measured in seconds/program.

T = CPU Time = Ic * CPI * τ
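As a worked example, here is a minimal sketch in Python; the numbers are illustrative only, not taken from the slides:

```python
# CPU time: T = Ic * CPI * tau  (illustrative values only)
Ic  = 2_000_000   # instruction count: machine instructions executed
CPI = 2.5         # average clock cycles per instruction
tau = 2e-9        # clock cycle time in seconds (2 ns => f = 500 MHz)

T = Ic * CPI * tau          # CPU time in seconds per program
f = 1 / tau                 # clock rate in Hz
print(f"T = {T:.4f} s at f = {f / 1e6:.0f} MHz")
# T = 0.0100 s at f = 500 MHz
```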

Execution of an instruction goes through a cycle of events:

Instruction fetch → Decode → Operand(s) fetch → Execution → Store results

Page 4: Aca 2

Events carried out in the CPU: instruction decode and the execution phases. The remaining three events (instruction fetch, operand fetch, and storing results) require access to the memory.

Memory Cycle:

The time needed to complete one memory reference.

Note: the memory cycle is k times the processor cycle τ.

The value of k depends upon the speed of the memory technology.

Page 5: Aca 2
Page 6: Aca 2

System Attributes' Influence on the Performance Factors (Ic, p, m, k, τ):

1. Instruction-Set Architecture-

Affects the program length (Ic) and the number of processor cycles needed per instruction (p).

2. Compiler Technology-

Affects the values of Ic, p, and m.

3. CPU Implementation & Control-

Determine the total processor time needed (p * τ).

4. Cache & Memory Hierarchy-

Affect the memory access latency (k * τ).
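Taken together, these factors give the refined CPU-time formula T = Ic * (p + m * k) * τ, in which the average CPI splits into processor cycles per instruction (p) plus memory cycles (m references per instruction, each costing k processor cycles). A minimal sketch in Python, with illustrative values only:

```python
# Refined CPU time: T = Ic * (p + m*k) * tau
# CPI = processor cycles (p) plus memory cycles (m references per
# instruction, each taking k processor cycles).
# All values below are illustrative, not from the slides.
Ic  = 2_000_000   # instruction count
p   = 1.5         # processor cycles per instruction
m   = 0.4         # memory references per instruction
k   = 4           # memory cycle = k processor cycles
tau = 2e-9        # processor cycle time in seconds (2 ns)

CPI = p + m * k            # effective cycles per instruction
T   = Ic * CPI * tau       # CPU time in seconds
print(f"CPI = {CPI}, T = {T * 1e3:.2f} ms")   # CPI = 3.1, T = 12.40 ms
```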

Page 7: Aca 2

System Attributes vs. Performance Factors

Performance factors: Instruction Count (Ic); Average Cycles per Instruction (CPI), which comprises the Processor Cycles per Instruction (p), the Memory References per Instruction (m), and the Memory Access Latency (k); and the Processor Cycle Time (τ).

System Attribute                    Ic   p   m   k   τ
Instruction-Set Architecture        X    X
Compiler Technology                 X    X   X
Processor Implementation & Control       X           X
Cache & Memory Hierarchy                         X   X

(An X marks the performance factors each system attribute influences; p, m, and k together determine the average CPI.)

Page 8: Aca 2

MIPS Rate: Million Instructions per Second

C = total number of clock cycles needed to execute a program

T = C * τ = C/f

CPI = C/Ic

T = Ic * CPI * τ = (Ic * CPI)/f
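From these relations, the MIPS rate works out to Ic / (T * 10^6) = f / (CPI * 10^6). A minimal sketch in Python (illustrative values only):

```python
# MIPS rate from the relations above (illustrative values only).
Ic  = 50_000_000   # instructions executed by the program
f   = 200e6        # clock rate in Hz (200 MHz)
CPI = 2.0          # average cycles per instruction

C = Ic * CPI               # total clock cycles for the program
T = C / f                  # CPU time in seconds
MIPS = Ic / (T * 1e6)      # equivalently f / (CPI * 1e6)
print(f"T = {T:.2f} s, MIPS rate = {MIPS:.0f}")
# T = 0.50 s, MIPS rate = 100
```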

Page 9: Aca 2

Throughput Rate (Ws):

The number of programs a system can execute per unit time.

Ws = programs/second

Note: In a multiprogrammed system, the system throughput (Ws) is often lower than the CPU throughput (Wp), where

Wp = f / (Ic * CPI) = 1 / (Ic * CPI * τ) = 1 program/T

Ws = Wp only if the CPU is kept busy in a perfect program-interleaving fashion.
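A minimal sketch of the throughput calculation in Python (illustrative values only):

```python
# CPU throughput Wp = f / (Ic * CPI)  (illustrative values only).
f   = 200e6        # clock rate in Hz
Ic  = 50_000_000   # instructions per program
CPI = 2.0          # average cycles per instruction

Wp = f / (Ic * CPI)   # programs/second if the CPU never idles
print(f"Wp = {Wp:.2f} programs/s")   # Wp = 2.00 programs/s
# System throughput Ws <= Wp; equality holds only under perfect
# program interleaving that keeps the CPU always busy.
```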

Page 10: Aca 2

Two approaches to parallel programming:

1. Implicit parallelism: start from a sequentially coded source program; the compiler detects the parallelism and assigns the target machine resources.

Note: this compiler approach is applied in programming shared-memory multiprocessors.

Page 11: Aca 2

2. Explicit parallelism: use parallel dialects of C (…), with the parallelism specified in the user program.

Note: this approach is applied in programming multicomputers.

Page 12: Aca 2

Parallel Computer Architectural/Physical Models

These models are distinguished by having:

1. Shared Common Memory:

Three Shared-Memory Multiprocessor Models are:

i. UMA (Uniform-Memory Access)

ii. NUMA (Non-Uniform-Memory Access)

iii. COMA (Cache-Only Memory Architecture)

2. Unshared Distributed Memory

i. CC-NUMA (Cache-Coherent NUMA)

Page 13: Aca 2

UMA Multiprocessor Model

Page 14: Aca 2

Physical memory is uniformly shared by all the processors.

All Processors have equal access time to all memory words, so it is called Uniform Memory Access.

Peripherals are also shared in some fashion.

Also called Tightly Coupled Systems, due to the high degree of resource sharing.

Page 15: Aca 2

Symmetric vs. Asymmetric Multiprocessors

Symmetric Multiprocessor: All processors have equal access to all peripheral devices.

Asymmetric Multiprocessor:

Only one processor, or a subset of the processors, is executive-capable.

i. MP (Executive or Master Processor)-

Can execute the O.S. and handle I/O

ii. AP (Attached Processor)-

No I/O capability

APs execute user code under the supervision of the MP.

Page 16: Aca 2

NUMA Multiprocessor Model

A shared-memory system in which the access time varies with the location of the memory word.

Local Memories (LM): the shared memory is physically distributed across all the processors.

Global Address Space: formed by the collection of all the Local Memories (LM), and accessible by all processors.

Access to a local memory by its local processor is faster. Access to a remote memory attached to another processor is slower, due to the added delay through the interconnection network.

Page 17: Aca 2

LM – Local Memory
P – Local Processor

Page 18: Aca 2

P – Processor
CSM – Cluster Shared Memory
CIN – Cluster Interconnection Network
GSM – Global Shared Memory

UMA or NUMA (access of remote memory)

Page 19: Aca 2

Three memory-access patterns arise when Globally Shared Memory (GSM) is added to a multiprocessor system:

i. The fastest is Local Memory (LM) access.
ii. The next is Global Memory (GSM) access.
iii. The slowest is access of Remote Memory (an LM attached to another processor).

Note: All clusters have equal access to the GSM. Access rights among intercluster memories can be specified.

Page 20: Aca 2

COMA Multiprocessor Model

• The distributed main memories are converted into caches.

• All the caches together form a global address space.

• Remote cache access is assisted by the distributed cache directories.

C – Cache
P – Processor
D – Directory

Page 21: Aca 2

Multiprocessor systems are suitable for general-purpose multiuser applications, where programmability is the major concern.

Shortcomings of multiprocessor systems:

Lack of scalability.

Limited latency tolerance for remote memory access.

Page 22: Aca 2

(Figure: multicomputer classes – mini-supercomputers, near-supercomputers, and the MPP class.)

Page 23: Aca 2

Distributed-Memory Multicomputers

The system consists of multiple computers, called nodes, interconnected by a message-passing network. A node is an autonomous computer consisting of:

a processor, local memory, and sometimes attached disks or I/O peripherals.

The message-passing network provides point-to-point static connections among the nodes.

Local memories (LM) are private, accessible only by the local processor.

For this reason, traditional multicomputers are called NORMA (no-remote-memory-access) machines.
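To illustrate the model, here is a minimal sketch using Python's multiprocessing module as a stand-in for the message-passing network (node names and values are invented): two "nodes" keep private local state and share it only by exchanging messages over a point-to-point channel.

```python
# Two node processes with private local memory, communicating
# only via explicit messages over a point-to-point link.
from multiprocessing import Process, Pipe

def node(name, conn, local_value):
    # local_value is in this node's private memory; the peer can
    # only learn it through a message (no remote memory access).
    conn.send((name, local_value))
    peer_name, peer_value = conn.recv()
    print(f"{name}: received {peer_value} from {peer_name}")
    conn.close()

if __name__ == "__main__":
    a_end, b_end = Pipe()  # static point-to-point connection
    pa = Process(target=node, args=("node-A", a_end, 10))
    pb = Process(target=node, args=("node-B", b_end, 20))
    pa.start(); pb.start()
    pa.join(); pb.join()
```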

Page 24: Aca 2

Fig: Generic model of a message-passing multicomputer.

M – Local Memory
P – Processor
(each M–P pair forms a node)

Page 25: Aca 2

Parallel Computers: SIMD or MIMD configuration

SIMD: for special-purpose applications. The CM-2 (Connection Machine 2) is built on an SIMD architecture.

MIMD: the CM-5 is built on an MIMD architecture, having a globally shared virtual address space.

Scalable multiprocessors or multicomputers:

use distributed shared memory

Unscalable multiprocessors:

use centrally shared memory

Page 26: Aca 2

Fig: Gordon Bell's taxonomy of MIMD computers.

Page 27: Aca 2

Supercomputer Classification:

Pipelined Vector machine/ Vector Supercomputers-

*Using a few powerful processors equipped with vector hardware

*Vector Processing

SIMD Computers / Parallel Processors-

*Emphasizing massive data parallelism

Page 28: Aca 2

Vector Supercomputers

(Figure: architecture of a vector supercomputer, with steps 1–6 marked; the steps are explained below.)

Page 29: Aca 2

Steps 1–2: The program and data are first loaded into the main memory through a host computer.

Step 3: All instructions are first decoded by the scalar control unit.

Step 4: If the decoded instruction is a scalar operation or a program-control operation, it is directly executed by the scalar processor using the scalar functional pipelines.

Step 5: If an instruction is decoded as a vector operation, it is sent to the vector control unit.

Step 6: The vector control unit supervises the flow of vector data between the main memory and the vector functional pipelines.

Note: A number of vector functional pipelines may be built into a vector processor.
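The decode-and-dispatch decision of steps 3–5 can be sketched as follows (Python; the opcodes and their categories are invented for illustration, not from any real instruction set):

```python
# Sketch of the scalar/vector dispatch decision (steps 3-5).
# Opcode names and categories are illustrative only.
SCALAR_OPS = {"add", "load", "branch"}   # scalar / program control
VECTOR_OPS = {"vadd", "vmul", "vload"}   # vector operations

def dispatch(opcode):
    # The scalar control unit decodes every instruction first (step 3).
    if opcode in VECTOR_OPS:
        # Step 5: hand vector operations to the vector control unit.
        return "vector control unit -> vector functional pipelines"
    if opcode in SCALAR_OPS:
        # Step 4: execute directly on the scalar processor.
        return "scalar processor -> scalar functional pipelines"
    raise ValueError(f"unknown opcode: {opcode}")

print(dispatch("add"))    # scalar path
print(dispatch("vmul"))   # vector path
```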

Page 30: Aca 2

SIMD Supercomputers

CU – Control Unit
PE – Processing Element
LM – Local Memory
IS – Instruction Stream
DS – Data Stream

(Abstract Model of a SIMD computer)

Page 31: Aca 2

(Operational model of SIMD computer)

Page 32: Aca 2

SIMD Machine Model:

An operational model of an SIMD computer is specified by a 5-tuple:

M = <N, C, I, M, R>

(1) N = the number of Processing Elements (PEs) in the machine.

(2) C = the set of instructions directly executed by the Control Unit (CU), including the scalar and program-flow-control instructions.

(3) I = the set of instructions broadcast by the CU to all PEs for parallel execution, including arithmetic, logic, data-routing, masking, and other local operations executed by each active PE over the data within that PE.

Page 33: Aca 2

(4) M = the set of masking schemes. Each mask partitions the set of PEs into enabled and disabled subsets.

(5) R = the set of data-routing functions, specifying the various patterns to be set up in the interconnection network for inter-PE communications.
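As a concrete rendering, the 5-tuple M = <N, C, I, M, R> can be written down as a small record (a sketch only; every value below is invented for illustration, not a real machine):

```python
# Sketch: the SIMD machine 5-tuple as a Python record.
# All example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class SIMDMachine:
    N: int        # number of processing elements (PEs)
    C: set        # instructions executed directly by the CU
    I: set        # instructions broadcast by the CU to all PEs
    M: set        # masking schemes (enable/disable PE subsets)
    R: set        # data-routing functions for inter-PE communication

machine = SIMDMachine(
    N=4096,
    C={"scalar_add", "branch"},
    I={"vector_add", "vector_and"},
    M={"all_pes", "even_pes_only"},
    R={"shift_left", "hypercube_exchange"},
)
print(f"{machine.N} PEs, {len(machine.I)} broadcast instructions")
```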