8/27/11
CS61C: Great Ideas in Computer Architecture
(a.k.a. Machine Structures): Course Introduction

Instructors:
Mike Franklin
Dan Garcia
http://inst.eecs.berkeley.edu/~cs61c/fa11

Fall 2011 -- Lecture #1
Agenda
• Thinking about Machine Structures
• Great Ideas in Computer Architecture
• What you need to know about this class
CS61C is NOT really about C Programming
• It is about the hardware-software interface
  – What does the programmer need to know to achieve the highest possible performance?
• Languages like C are closer to the underlying hardware, unlike languages like Scheme!
  – Allows us to talk about key hardware features in higher-level terms
  – Allows the programmer to explicitly harness underlying hardware parallelism for high performance
Old School CS61C
[figure]

(kinda) New School CS61C (1)
[figures: Personal Mobile Devices; Warehouse Scale Computer]
New School CS61C (2): Old-School Machine Structures
[Layers-of-abstraction diagram, with a CS61C bracket spanning the middle layers:]
Application (ex: browser)
Operating System (Mac OS X)
Compiler / Assembler  (Software)
Instruction Set Architecture
Processor / Memory / I/O system  (Hardware)
Datapath & Control
Digital Design
Circuit Design
Transistors
New-School Machine Structures (It's a bit more complicated!)
• Parallel Requests: assigned to computer, e.g., Search "Katz"
• Parallel Threads: assigned to core, e.g., Lookup, Ads
• Parallel Instructions: >1 instruction @ one time, e.g., 5 pipelined instructions
• Parallel Data: >1 data item @ one time, e.g., Add of 4 pairs of words
• Hardware descriptions: all gates functioning in parallel at same time

[Diagram: Software / Hardware -- Harness Parallelism & Achieve High Performance.
Warehouse Scale Computer and Smart Phone at the top; Computer (Core ... Core,
Memory (Cache), Input/Output, Main Memory); Core (Instruction Unit(s),
Functional Unit(s): A0+B0, A1+B1, A2+B2, A3+B3); Logic Gates at the bottom.
Projects 1-4 map onto these levels.]
Agenda
• Thinking about Machine Structures
• Great Ideas in Computer Architecture
• What you need to know about this class
6 Great Ideas in Computer Architecture
1. Layers of Representation/Interpretation
2. Moore's Law
3. Principle of Locality/Memory Hierarchy
4. Parallelism
5. Performance Measurement & Improvement
6. Dependability via Redundancy
Great Idea #1: Levels of Representation/Interpretation

High Level Language Program (e.g., C):
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
        | Compiler
        v
Assembly Language Program (e.g., MIPS):
    lw  $t0, 0($2)
    lw  $t1, 4($2)
    sw  $t1, 0($2)
    sw  $t0, 4($2)
        | Assembler
        v
Machine Language Program (MIPS):
    0000 1001 1100 0110 1010 1111 0101 1000
    1010 1111 0101 1000 0000 1001 1100 0110
    1100 0110 1010 1111 0101 1000 0000 1001
    0101 1000 0000 1001 1100 0110 1010 1111
        | Machine Interpretation
        v
Hardware Architecture Description (e.g., block diagrams)
        | Architecture Implementation
        v
Logic Circuit Description (Circuit Schematic Diagrams)

Anything can be represented as a number, i.e., data or instructions
Great Idea #2: Moore's Law
Predicts: 2X transistors/chip every 2 years
[Plot: # of transistors on an integrated circuit (IC) vs. Year]
Gordon Moore, Intel Cofounder, B.S. Cal 1950!

Great Idea #3: Principle of Locality / Memory Hierarchy
Jim Gray's Storage Latency Analogy: How Far Away is the Data?

Storage level        Relative latency   Analogy
Registers            1                  My Head
On Chip Cache        2                  This Room (1 min)
On Board Cache       10                 This Campus (10 min)
Memory               100                Sacramento (1.5 hr)
Disk                 10^6               Pluto (2 Years)
Tape/Optical Robot   10^9               Andromeda (2,000 Years)

Jim Gray, Turing Award, B.S. Cal 1966, Ph.D. Cal 1969!
Great Idea #4: Parallelism
[figure]

Caveat: Amdahl's Law
Gene Amdahl, Computer Pioneer, Ph.D. Wisconsin 1952!
Great Idea #5: Performance Measurement and Improvement
• Matching application to underlying hardware to exploit:
  – Locality
  – Parallelism
  – Special hardware features, like specialized instructions (e.g., matrix manipulation)
• Latency
  – How long to set the problem up
  – How much faster does it execute once it gets going
  – It is all about time to finish
Coping with Failures
• 4 disks/server, 50,000 servers
• Failure rate of disks: 2% to 10% / year
  – Assume 4% annual failure rate
• On average, how often does a disk fail?
  a) 1/month   b) 1/week   c) 1/day   d) 1/hour

Answer:
50,000 x 4 = 200,000 disks
200,000 x 4% = 8,000 disks fail per year
365 days x 24 hours = 8,760 hours per year
8,760 hours / 8,000 failures ≈ 1.1 hours between failures, so d) about 1/hour
Great Idea #6: Dependability via Redundancy
• Redundancy so that a failing piece doesn't make the whole system fail
[Diagram: three units each compute 1+1; two answer 2, one faulty unit answers 1 (FAIL!). The voter outputs 1+1=2 because 2 of 3 agree.]
Increasing transistor density reduces the cost of redundancy
Great Idea #6: Dependability via Redundancy (continued)
• Applies to everything from datacenters to storage to memory
  – Redundant datacenters so that we can lose 1 datacenter but the Internet service stays online
  – Redundant disks so that we can lose 1 disk but not lose data (Redundant Arrays of Independent Disks / RAID)
  – Redundant memory bits so that we can lose 1 bit but no data (Error Correcting Code / ECC memory)
Agenda
• Thinking about Machine Structures
• Great Ideas in Computer Architecture
• What you need to know about this class
Yoda says... "Always in motion is the future..."

Our schedule may change slightly depending on some factors. This includes lectures, assignments & labs...
Hot off the presses
• Due to high student demand, we've added a tenth section!!
• It's the same time as lab 105
• Everyone (not just those on the waitlist), consider moving to this section
Course Information
• Course Web: http://inst.eecs.berkeley.edu/~cs61c/
• Instructors:
  – Dan Garcia, Michael Franklin
• Teaching Assistants:
  – Brian Gawalt (Head TA), Eric Liang, Paul Ruan, Sean Soleyman, Anirudh Todi, and Ian Vonseggern
• Textbooks: Average 15 pages of reading/week (can rent!)
  – Patterson & Hennessy, Computer Organization and Design, 4th Edition (not ≤3rd Edition, not Asian version 4th edition)
  – Kernighan & Ritchie, The C Programming Language, 2nd Edition
  – Barroso & Hölzle, The Datacenter as a Computer, 1st Edition
• Piazza:
  – Every announcement, discussion, clarification happens there
Reminders
• Discussions and labs will be held next week
  – Switching Sections: if you find another 61C student willing to swap discussion (from the Piazza thread) AND lab, talk to your TAs
  – Partners (only project 2, 3 and the performance competition)
Course Organization
• Grading
  – EPA: Effort, Participation and Altruism (5%)
  – Homework (10%)
  – Labs (5%)
  – Projects (20%)
    1. Computer Instruction Set Simulator (C)
    2. Data Parallelism (Map-Reduce on Amazon EC2)
    3. Performance Tuning of a Parallel Application / Matrix Multiply using cache blocking, SIMD, MIMD (OpenMP)
    4. Computer Processor Design (Logisim)
  – Matrix Multiply Competition for honor (and EPA)
  – Midterm (25%): date TBA, can be clobbered!
  – Final (35%): 3-6 PM Thursday December 15th
Tried-and-True Technique: Peer Instruction
• Increase real-time learning in lecture, test understanding of concepts vs. details
• As we complete a "segment" we ask a multiple choice question
  – 1-2 minutes to decide yourself
  – 2 minutes in pairs/triples to reach consensus
  – Teach others!
  – 2 minute discussion of answers, questions, clarifications
• You can get transmitters from the ASUC bookstore OR you can use a web clicker app for $10!
  – We'll start this on Monday
EECS Grading Policy
• http://www.eecs.berkeley.edu/Policies/ugrad.grading.shtml
  "A typical GPA for courses in the lower division is 2.7. This GPA would result, for example, from 17% A's, 50% B's, 20% C's, 10% D's, and 3% F's. A class whose GPA falls outside the range 2.5-2.9 should be considered atypical."
• Fall 2010: GPA 2.81; 26% A's, 47% B's, 17% C's, 3% D's, 6% F's
• Job/Intern Interviews: They grill you with technical questions, so it's what you say, not your GPA
  (New 61C gives good stuff to say)

Year   Fall   Spring
2010   2.81   2.81
2009   2.71   2.81
2008   2.95   2.74
2007   2.67   2.76
Extra Credit: EPA!
• Effort
  – Attending prof and TA office hours, completing all assignments, turning in HW0, doing reading quizzes
• Participation
  – Attending lecture and voting using the clickers
  – Asking great questions in discussion and lecture and making it more interactive
• Altruism
  – Helping others in lab or on Piazza
• EPA! extra credit points have the potential to bump students up to the next grade level! (but actual EPA! scores are internal)
Late Policy... Slip Days!
• Assignments due at 11:59:59 PM
• You have 3 slip-day tokens (NOT hour or minute tokens)
• Every day your project or homework is late (even by a minute) we deduct a token
• After you've used up all tokens, it's 33% deducted per day
  – No credit if more than 3 days late
  – Save your tokens for projects, worth more!!
• No need for sob stories, just use a slip day!
Policy on Assignments and Independent Work
• With the exception of laboratories and assignments that explicitly permit you to work in groups, all homework and projects are to be YOUR work and your work ALONE.
• You are encouraged to discuss your assignments with other students, and extra credit will be assigned to students who help others, particularly by answering questions on Piazza, but we expect that what you hand in is yours.
• It is NOT acceptable to copy solutions from other students.
• It is NOT acceptable to copy (or start your) solutions from the Web.
• We have tools and methods, developed over many years, for detecting this. You WILL be caught, and the penalties WILL be severe.
• At the minimum NEGATIVE POINTS for the assignment, probably an F in the course, and a letter to your university record documenting the incidence of cheating.
• (We've caught people in recent semesters!)
• Both Giver and Receiver are equally culpable
Architecture of a typical Lecture
[Plot: Attention (starting at Full) vs. Time in minutes, with marks at 10, 30, 35, 58, 60; "Administrivia" and "And in conclusion..." label the low-attention segments]
Summary
• CS61C: Learn 6 great ideas in computer architecture to enable high performance programming via parallelism, not just learn C
  1. Layers of Representation/Interpretation
  2. Moore's Law
  3. Principle of Locality/Memory Hierarchy
  4. Parallelism
  5. Performance Measurement and Improvement
  6. Dependability via Redundancy