General-Purpose Many-Core Parallelism: Broken, But Fixable. Uzi Vishkin

Scope: max speedup from on-chip parallelism.

Commodity computer systems
- 1946-2003 General-purpose computing: serial. 5KHz to 4GHz.
- 2004 Clock-frequency growth goes flat. General-purpose computing goes parallel: "If you want your program to run significantly faster, you're going to have to parallelize it."
- 1980-2013 #Transistors/chip: 29K to 10sB. Bandwidth/latency: 300X.
- Intel Platform 2015, March 05: #cores: ~d^(y-2003). ~2011: advance from d1 to d2.
How is many-core parallel computing doing?

- Current-day system architectures allow good speedups on regular dense-matrix-type programs, but are basically unable to do much outside that.
- In view of the prior slide: are desktop computers doomed to be seen as appliances, where owning one is a necessity, but upgrading for running a new application is not? Even the exception of PC gaming is threatened by diversification onto other platforms.

Take home
Parallelism is doing good: it is used at unprecedented scale. But it can still do much better. My focus, what's missing:
- Irregular problems/programs
- Strong scaling, and
- Cost-effective parallel programming for regular problems

The sweat-to-gain ratio is (often too) high, though there is some progress with domain-specific languages. A revolutionary approach is required; and revolutionary means throw out & replace, a high bar.

How else does parallel computing confuse the rest of CS?
- Supercomputer/cloud/big-data types: it is all about (programming for) "communication avoidance".
- Parallel algorithms types: hopeless for general-purpose programming & apps.
- Others: why won't you make up your mind and let us know?
Not so confusing: limited scale vs. largest scale. E.g., intra-node: high communication bandwidth; the problem: inter-node. New result: the former can greatly help the latter. Perhaps

some peace & coherence, and interest from the other camp.

Example: memory. How did serial architectures deal with locality?
1. A gap opened between improvements in latency to memory and in processor speed.
2. Locality observation: serial programs tend to reuse data, or nearby addresses. Hence: (i) an increasing role for caches in the architecture; yet (ii) the same basic programming model.
In summary: the starting point was a successful programming model, and a way was found to hold on to it.

Locality in parallel computing
Early on: processors with local memory. The practice of parallel programming meant:
1. Program for parallelism, and

2. Program for locality.
Consistent with designing for peak performance, but not with cost-effective programming.
In summary: there was never a truly successful parallel programming model; less to hold on to.

Back-up: current systems/revolutionary changes
Multiprocessors [HP-12]: "computers consisting of tightly coupled processors whose coordination and usage are controlled by a single OS and that share memory through a shared address space."
GPUs: HW handles thread management, but leaves open the missing items.
BACKUP:

Goal: fit as many FUs as you can into silicon; then use all of them all the time.
- Architecture, including memory, optimized for peak performance on limited workloads rather than sustained general-purpose performance.
- Each thread is SIMD: a limit on thread divergence (both sides of a branch).
- HW uses parallelism for FUs and for hiding memory latency.
- No shared cache for general data, and no truly all-to-all interconnection network to shared memory.
Works well given plenty of structured parallelism. Minimal parallelism: just to break even with serial. Cannot handle serial & low-parallelism code. Leaves open the missing items: strong scaling, irregular problems, cost-effective regular programming.
Also: DARPA HPCS (High-Productivity Computing Systems). Still: "Only heroic programmers can exploit the vast parallelism in today's machines" [Game Over, CSTB/NAE11].

Hardware-first threads. Build-first, figure-out-how-to-program-later architecture. Parallel programming: MPI, OpenMP.

Past: graphics cards -> GPUs. CUDA. GPGPU. Dense-matrix-type: yes; irregular, cost-effective, strong scaling: X. Future? Placeholder: where to start, so that ...

Heterogeneous system. Heterogeneous = lowering the bar: keep what we have, but augment it. Enabled by: increasing transistor budget, 3D VLSI & power design.
Hardware-first thread: build-first, figure-out-how-to-program-later architecture. Parallel programming: MPI, OpenMP. Graphics cards -> GPUs. CUDA,

GPGPU. Past: dense-matrix-type: yes; irregular, cost-effective, strong scaling: X. Future?
Algorithms-first thread: how to think about parallelism? PRAM & parallel algorithms: concepts, theory; MTA, NYU-Ultra, SB-PRAM, XMT. Many-core. Quantitative validation: XMT. Fine, but more important:

Heterogeneous system. Legend: remainder of this talk.

General thought on research/impact
Human knowledge is dynamic; let's hope it keeps growing. Seek work where (you believe that) returns are highest today. My own story: math education led me to pursue CS/parallelism. Till the ~1990s: algorithms, then architecture. More recently: algorithms AND architecture; IMO, the need has become highest in the combination.

Applicable to any field of research.

Serial Abstraction & A Parallel Counterpart
Serial abstraction: any single instruction available for execution in a serial program executes immediately: Immediate Serial Execution (ISE).
[Figure: serial execution, based on the serial abstraction, vs. parallel execution, based on the parallel abstraction: at each step, do whatever could be done in parallel assuming unlimited hardware. Serial: Time = Work. Parallel: Time << Work, where Work = total #ops.]
Abstraction for making parallel computing simple: indefinitely many instructions, which are available for concurrent execution, execute immediately; dubbed Immediate Concurrent Execution (ICE). A tiny worked example follows.
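A minimal concrete instance of Time << Work under ICE (my illustration, written in the "pardo" pseudocode style used for pointer jumping later in this deck): summing n numbers by a balanced binary tree.

    for 1 <= i <= n pardo
        B(i) := A(i)
    for h := 1 to log n do
        for 1 <= i <= n/2^h pardo
            B(i) := B(2i-1) + B(2i)

Work = total #ops = about 2n, essentially the same as serial; Time = log n + 1 steps, since each level of the tree is one round of concurrent additions.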

ICE is the same as parallel algorithmic thinking (PAT) for PRAM.

Example of a parallel algorithm: Breadth-First Search (BFS)
(i) "Concurrently", as in natural BFS, is the only change to the serial algorithm. (ii) Defies decomposition/partition.
Parallel complexity: W ~ |V| + |E|; T ~ d, the number of layers; average parallelism ~ W/T.
Mental effort: 1. Sometimes easier than serial. 2. Within the common denominator of other parallel approaches; in fact, much easier.

Memory example (contd.): the XMT approach
Rationale: consider the parallel version of a serial algorithm. Premise: similar* locality to serial. Hence:
1. Large shared cache on-chip.
2. High-bandwidth, low-latency interconnection network. [2011 technical introduction: Using Simple Abstraction to Reinvent Computing for Parallelism, CACM 1/2011, 75-85, http://www.umiacs.umd.edu/users/vishkin/XMT/]
3. 3D VLSI: bigger shared cache, lower distance (latency & power for data movement), and more bandwidth with TSVs (through-silicon vias).
* A parallel transition from time t to t+1 is a subset of the serial transitions.

Not just talking
Algorithms & software: ICE/Work-Depth/PAT (creativity ends here), PRAM, PRAM-On-Chip.
HW prototypes: 64-core, 75MHz FPGA prototype of the XMT

(Explicit Multi-Threaded) architecture [SPAA98..CF08].
- 128-core interconnection network, IBM 90nm: 9mmX5mm, 400 MHz [HotI07]. Fundamental work on asynchronous logic [NOCS10].
- FPGA design -> ASIC: IBM 90nm, 10mmX10mm. IP for dynamic thread allocation.
- Programming & workflow: stable compiler. No parallel programming course beyond freshmen.

Intel TBB, 4/13. Scales: 1K+ cores on-chip. Power & tech updates: cycle-accurate simulator. Orders-of-magnitude speedups & complexity. Next slide: ease of programming.

Non-trivial stress tests (speedups vs. best serial):

    Problem                                XMT     GPU/CPU        Factor
    Graph biconnectivity, 2012             33X     4X             >>8 (random graphs; muuuch parallelism)
    Graph triconnectivity, 2012            129X    ?              ?
    Max-flow, 2011                         108X    2.5X           43
    Burrows-Wheeler compression (bzip2)    25X     X/2.8 on GPU   70
    Burrows-Wheeler decompression          13X     ?              ?

- The 3 graph algorithms: no algorithmic creativity.
- 1st truly parallel speedup for lossless data compression [SPAA 2013]. Beats Google Snappy (message passing within warehouse-scale computers).

State of project
- 2012: quantitative validation of the (most advanced) PRAM algorithms: ~65 man-years.
- 2013-: 1. Applications. 2. Update memory & VLSI technologies/opportunities.

Not alone in building new parallel computer prototypes in academia:

At least 3 more US universities did so in the last 2 decades. Unique(?): daring our own course-taking students to program it for performance.
- Graduate students do 6 programming assignments, including biconnectivity, in a theory course.
- Freshmen do parallel programming assignments for a problem load competitive with the serial course.
And we went out to HS students, at magnet and inner-city schools: "XMT is an essential component of our Parallel Computing courses because it is the one place where we are able to strip away industrial accidents from the student's mind, in terms of programming necessity, and actually build creative algorithms to solve problems" (national-award-winning HS teacher; 6th year of teaching XMT; 81 HS students in 2013).
- HS vs. PhD success stories. And:
- Middle School Summer Camp Class,

July 2009 (20 of 22 students). Math HS teacher D. Ellison, U. Indiana.

What about the missing items?
Recap: feasible. Orders of magnitude better with different hardware. Evidence: a broad portfolio; e.g., the most advanced parallel algorithms; high-school students doing PhD-thesis-level work.
Who should care?
- DARPA: an opportunity for competitors to surprise the US military and economy.
- Vendors: the confluence of the mobile & wall-plugged processor markets creates unprecedented competition. Standard: ARM. Quad-cores and architecture techniques have reached a plateau. No other way to get significantly ahead.
- A smart node in the cloud, helped by the large local memories of other nodes. Bring Watson irregular technologies to the personal user.

But what we may be missing:
- Chicken-and-egg effect: few end-user apps use the missing items (since... they are missing).
- My guess: under water, the end-user application iceberg is much larger than today's parallel end-user applications.
- Supporting evidence: irregular problems are many and rising: data compression, computer vision, bio-related, sparse scientific, sparse sensing & recovery, EDA.
- Educated innocents, students in their last non-elective CompEng class: "Nearly all serial programs we learned/wrote do not fit this regular mold"; "Cannot believe that the regular mold is sufficient for more than a small minority of potential applications." For balance, heard from a colleague: "so we teach the wrong things."
- 2013: embedded processor vendors hear it from their customers.

New attitude: what is really missed, IMO
Forget apps; think how problems are posed and programs developed. Irregular programming is all over: computer programs for problems whose understanding does not imply clear input or output definitions, with big variety in program development and in the resulting programs. An appealing instance of individuality & creativity in CS: why robots have not yet replaced us.
Open-ended programming: prevalent, but not in benchmarks. Open-ended programming is a big deal for the standard curriculum, for practice, and for computer science culture as a whole. Students are resentful: having come to appreciate it, they now see it crushed when it comes to parallel programming.
Can such ideas gain traction? Naive answer: sure, since they are good.

So, why not in the past?
- Wall Street companies: risk-averse. Too big for a startup.
- Focus on fighting out GPUs (the only competition).
- 60 years of the same computing stack; the lowest common ancestor of company units for change is the CEO. Who can initiate it? Turf issues.
My conclusion
- A time bomb that will explode sooner or later.
- A huge market: more frequent replacement.
- Will take over domination of a core area of IT. How much more?

Snapshot: XMT high-level language
Cartoon: Spawn creates threads; a thread progresses at its own speed and expires at its Join. Synchronization: only at the Joins. So virtual threads avoid busy-waits by expiring. New: independence of order

semantics (IOS) The array compaction (artificial) problem Input: Array A[1..n] of elements. Map in some order all A(i) not equal 0 to array D. A 1 0 5 0 0 0 4 0 0 D e0

e2 1 4 5 e6 For program below: e$ local to thread $; x is 3 XMT-C Single-program multiple-data (SPMD) extension of standard C. Includes Spawn and PS - a multi-operand instruction. Essence of an XMT-C program int x = 0; Spawn(0, n-1) /* Spawn n threads; $ ranges 0 to n 1 */ { int e = 1;

if (A[$] not-equal 0) { PS(x,e); D[e] = A[$] } } n = x; Notes: (i) PS is defined next (think F&A). See results for e0,e2, e6 and x. (ii) Join instructions are implicit. XMT Assembly Language Standard assembly language, plus 3 new instructions: Spawn, Join, and PS. The PS multi-operand instruction New kind of instruction: Prefix-sum (PS). Individual PS, PS Ri Rj, has an inseparable (atomic) outcome: (i) Store Ri + Rj in Ri, and (ii) Store original value of Ri in Rj. Several successive PS instructions define a multiple-PS instruction. E.g., the sequence of k instructions: PS R1 R2; PS R1 R3; ...; PS R1 R(k + 1) performs the prefix-sum of base R1 elements R2,R3, ...,R(k + 1) to get: R2 = R1; R3 = R1 + R2; ...; R(k + 1) = R1 + ... + Rk; R1 = R1 + ... + R(k + 1).
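For intuition, here is a minimal serial C model of an individual PS, applied to the array compaction example above (my sketch: it ignores the atomicity and the single-cycle multi-operand combining that the hardware provides).

    #include <stdio.h>

    /* Serial model of PS(base, inc): atomically, inc receives the old
       base, and base grows by the old inc. */
    static void PS_model(int *base, int *inc) {
        int old_base = *base;
        *base += *inc;
        *inc = old_base;
    }

    int main(void) {
        int A[] = {1, 0, 5, 0, 0, 0, 4, 0, 0}, D[9], x = 0;
        /* Serial stand-in for Spawn(0, 8); IOS permits any thread order. */
        for (int t = 0; t < 9; t++) {
            int e = 1;
            if (A[t] != 0) { PS_model(&x, &e); D[e] = A[t]; }
        }
        printf("x = %d, D = %d %d %d\n", x, D[0], D[1], D[2]);  /* 3, 1 5 4 */
        return 0;
    }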

Idea: (i) several independent PSs can be combined into one multi-operand instruction, (ii) executed by a new multi-operand PS functional unit. An enhanced fetch-and-add. Story: 1500 cars enter a gas station with 1000 pumps. Main XMT patent: direct, in unit time, a car to EVERY pump. PS patent: then direct, in unit time, a car to EVERY pump becoming available.

Programmer's model as workflow
- Arbitrary CRCW work-depth algorithm. Reason about correctness & complexity in a synchronous PRAM-like model.
- SPMD reduced synchrony. Main construct: the spawn-join block. Can start any number of processes at once; threads advance at their own speed, not in lockstep. Prefix-sum (ps). Independence of order semantics (IOS) matches Arbitrary CW. For locality: assembly-language threads are not-too-short. Establish correctness & complexity by relating to the work-depth analyses.
[Figure: execution alternates serial segments with spawn ... join parallel segments.]

- Circumvents: (i) decomposition-inventive design; (ii) "the problem with threads", e.g., [Lee]. An issue addressed in a PhD thesis: nesting of spawns.
- Tune (compiler or expert programmer): (i) length of sequences of round trips to memory, (ii) QRQW, (iii) work-depth. [VCL07] Correctness & complexity established by relating to the prior analyses.

XMT Architecture Overview
- Best-in-class serial core: the master thread control unit (MTCU).
- Parallel cores (TCUs) grouped in clusters.
- Global memory space evenly partitioned into cache banks using hashing (see the toy sketch below).
- No local caches at the TCUs: avoids expensive cache-coherence hardware.
- HW-supported run-time load-balancing of concurrent threads over processors; low thread-creation overhead. (Extends the classic stored-program + program-counter apparatus; cited by 40 patents; prefix-sum unit.)
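The hashed-partitioning bullet can be made concrete with a toy sketch (entirely illustrative: the bank count, line size, and hash constant are my assumptions, not XMT's actual scheme). The point is that consecutive memory lines are spread pseudo-randomly across the cache banks, so regular access patterns do not pile onto one module.

    #include <stdint.h>

    #define NUM_BANKS 64   /* illustrative bank count */

    /* Toy address-to-bank hash: scatter consecutive cache lines across
       banks so that regular strides avoid module hot spots. */
    static unsigned bank_of(uint64_t addr) {
        uint64_t line = addr >> 6;   /* 64-byte line, illustrative */
        return (unsigned)((line * 0x9E3779B97F4A7C15ULL) >> 32) % NUM_BANKS;
    }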

[Block diagram: MTCU with hardware scheduler/prefix-sum unit; clusters 1..C of TCUs; parallel interconnection network; shared memory (L1 cache) partitioned into memory banks 1..M; DRAM channels 1..D.]
- Enough interconnection network bandwidth.

Backup: holistic design
Lead question: how to build and program general-purpose many-core processors for single-task completion time? Carefully design a highly parallel platform. ~Top-down objectives:
- High, PRAM-like abstraction level. Synchronous.

- Easy coding. Isolate creativity to parallel algorithms.
- Not falling behind on any type & amount of parallelism. Backwards compatibility on serial.
- Have the HW operate near its full intrinsic capacity: reduced synchrony & no busy-waits, to accommodate varied memory response times; low-overhead start & load balancing of fine-grained threads; high all-to-all processors/memory bandwidth; parallel memories.

Backup: how? The contractor's algorithm: 1. Many job sites: place a ladder at every LR. 2. Make progress as your capacity allows. System principle: 1st/2nd-order PoR/LoR. PoR: predictability of reference; LoR: locality of reference.

Presentation challenge: a vertical platform, where each level is a lifetime career. Strategy: snapshots. Limitation: not as satisfactory.

The classic SW-HW bridge, GvN47: program counter & stored program. XMT: an upgrade for the parallel abstraction. Virtual over physical: a distributed solution.

[Figure: Von Neumann (1946--??): one virtual PC mapped onto one hardware PC. XMT: Spawn 1000000 creates virtual PC1..PC1000000, executed on hardware PC1..PC1000. Per TCU:]

    $ := TCU-ID
    while $ <= n:
        execute thread $; Join
        use PS to get a new $
    done

When PC1 hits Spawn, a spawn unit broadcasts 1000000 and the code (Spawn to Join) to PC1, PC2, ..., PC1000 on a designated bus.

H. Goldstine, J. von Neumann. Planning and coding problems for an electronic computing instrument, 1947.

Workflow from parallel algorithms to programming

versus trial-and-error. Legend: creativity; hyper-creativity. [More creativity, less productivity.]
Option 1: domain decomposition, or task decomposition. Option 2: PAT, parallel algorithmic thinking (say, PRAM). Program. Insufficient inter-thread bandwidth?

Sisyphean(?) loop: rethink the algorithm; take better advantage of cache. Compiler. Hardware.
Is Option 1 good enough for the parallel programmer's model? Options 1B and 2 start with a PRAM algorithm, but not Option 1A. Options 1A and 2 represent a workflow, but not Option 1B.
PAT: prove correctness -> program (still correct) -> tune (still correct) ->

hardware. Not possible in the 1990s; possible now. Why settle for less?

Who should produce the parallel code? Choices [a state-of-the-art compiler-research perspective; thanks: Prof. Barua]
- Programmer only: writing parallel code is tedious. Programmers are good at seeing parallelism, esp. irregular parallelism, but bad at seeing locality and granularity considerations, and have poor intuitions about compiler transformations.
- Compiler only: can see regular parallelism, but not irregular parallelism. Great at compiler transformations that improve parallelism, granularity, and locality.
- Hybrid solution: the programmer specifies high-level parallelism, but little else; the compiler does the rest.

(My) broader questions
Goals: Where will the algorithms come from? Ease of programming. Is today's HW good enough? Declarative programming. This course is relevant to all 3 questions.

Denial example: BFS [EduPar2011]
The 2011 NSF/IEEE-TCPP curriculum teaches BFS using OpenMP. Teaching experiment: joint F2010 UIUC/UMD class, 42 students. Good news: easy coding (since no meaningful decomposition). Bad news: none got speedup over serial on an 8-processor SMP machine. The BFS algorithm was easy, but no good: no speedups. Speedups on the 64-processor XMT: 7x to 25x. Hey, unfair! Hold on: <1/4 of the silicon area of the SMP.
Symptom of the bigger denial: "The only problem is that developers lack parallel programming skills. Solution: education." False. Teach, then see that the HW is the problem.
HotPAR10 performance results include BFS: XMT/GPU speedup for the same silicon area, highly parallel input: 5.4X.

With a small HW configuration and a large-diameter input: 109X w.r.t. the same GPU.

Discussion of BFS results
Contrast with the smartest people: PPoPP12, Stanford11 .. BFS on multi-cores, again only if the diameter is small, improving on SC10 IBM/GaTech & 6 recent papers, all at 1st-rate conferences. BFS is bread & butter; call in the Marines each time you need bread? Makes one wonder: is something wrong with the field?
- Decree: "random graphs = reality." In the old days, expander graphs were taught in graph design; planar graphs were "real".
- Lots of parallelism gives more HW design freedom. E.g., GPUs get decent speedups with lots of parallelism, but not enough for general parallel algorithms. BFS (& max-flow): much better speedups on XMT, from the same, easier programs.

Power efficiency
- Heterogeneous design: TCUs are used only when beneficial.
- Extremely lightweight TCUs: avoid complex HW overheads such as coherent caches, branch prediction, superscalar issue, or speculation. Instead, TCUs compensate with much parallelism.
- The distributed design allows easily turning off unused TCUs.

- The compiler and run-time system hide memory latency with computation where possible: less power in idle stall cycles.
- HW-supported thread scheduling is both much faster and less energy-consuming than traditional software-driven scheduling; the same holds for prefix-sum-based thread synchronization.
- The custom high-bandwidth network from XMT's lightweight cores to memory has been highly tuned for power efficiency; we showed that the power efficiency of the network can be further improved using asynchronous logic.

Back-up slide: a possible mindset behind vendors' HW
The hidden cost of low-bandwidth communication [BMM94]:
1. HW vendors see the cost benefit of lowering the performance of interconnects, but grossly underestimate the programming difficulties and the high software development costs implied.
2. Their exclusive focus on runtime benchmarks misses critical costs, including: (i) the time to write the code, and (ii) the time to port the code to a different distribution of data or to different

machines that require a different distribution of data.
Architects ask (e.g., me): what gadget should we add? Sorry, I also don't know. Most components are not new. Still, importing airplane parts into a car does not yield the same benefits. Compatibility of serial code matters more.

More on PRAM-On-Chip programming
A 10th grader* comparing parallel programming approaches: "I was motivated to solve all the XMT programming assignments we got, since I had to cope with solving the algorithmic problems themselves, which I enjoy doing. In contrast, I did not see the point of programming other parallel systems available to us at school, since too much of the programming was effort getting around the way the system was engineered, and this was not fun."
* From Montgomery Blair Magnet, Silver Spring, MD.

Independent validation by a DoD employee
Nathaniel Crowell. Parallel algorithms for graph problems, May 2011. MSc scholarly paper, [email protected]. Not part of the XMT team.

http://www.cs.umd.edu/Grad/scholarlypapers/papers/NCrowell.pdf
- Evaluated XMT for public-domain problems of interest to DoD.
- Developed serial and then XMT programs.
- Solved, with minimal effort (an MSc scholarly paper..), many problems, e.g., 4 SSCA2 kernels, algebraic connectivity and the Fiedler vector (parallel Davidson eigensolver).
- Good speedups. No way one could have done that on other parallel platforms so quickly. Reports that the extra effort for producing parallel code was minimal.

Importance of list ranking for tree and graph algorithms
[Dependence chart, bottom-up: list ranking; tree Euler tour; Euler tours; tree contraction; centroid decomposition; ear decomposition search; lowest common ancestors; biconnectivity; strong orientation; graph connectivity; minimum spanning forest; st-numbering; k-edge/vertex connectivity; triconnectivity; planarity testing; advanced triconnectivity; advanced planarity testing.]

Point of recent study
Root of the order-of-magnitude speedups: speedup on various input sizes for much simpler problems: list ranking, 2-ruling set, prefix-sums,

deterministic coin tossing.

Software release
Allows you to use your own computer for programming in an XMT environment & experimenting with it, including:
a) a cycle-accurate simulator of the XMT machine, and
b) a compiler from XMTC to that machine.
Also provided: extensive material for teaching or self-studying parallelism, including (i) a tutorial + manual for XMTC (150 pages), (ii) class notes on parallel algorithms (100 pages), (iii) a video recording of the 9/15/07 HS tutorial (300 minutes), and (iv) video recordings of the Spring09 graduate Parallel Algorithms lectures (30+ hours). www.umiacs.umd.edu/users/vishkin/XMT/sw-release.html, or just Google "XMT".

Helpful(?) analogy
Grew up on tasty salads: natural ingredients, no dressing/cheese. Now salads require tons of dressing and cheese. Taste?

Reminds (only?) me of:
- Dressing: huge blue-chip & government investment in system & app software to overcome HW limitations; (limited-scope) DSLs.
- Taste: speedups only on limited apps.
Contrasted with:
- Simple ingredients: parallel algorithms theory; a few basic architecture ideas on control & data paths and the memory system. A modest academic project.
- Taste: better speedups by orders of magnitude. HS students vs. PhDs.

Participants
Grad students: James Edwards, Fady Ghanim. Recent PhDs: Aydin Balkan, George Caragea, Mike Horak, Fuat Keceli, Alex Tzannes*, Xingzhi Wen.
Industry design experts (pro bono). Rajeev Barua, compiler; co-advisor x2; NSF grant. Gang Qu, VLSI and power; co-advisor. Steve Nowick, Columbia U., asynchronous logic; co-advisor; NSF team grant. Ron Tzur, U. Colorado, K12 education; co-advisor; NSF seed funding.
K12: Montgomery Blair Magnet HS, MD; Thomas Jefferson HS, VA; Baltimore (inner-city) Ingenuity Project Middle School; 2009 Summer Camp, Montgomery County Public Schools.

Marc Olano, UMBC, computer graphics; co-advisor. Tali Moreshet, Swarthmore College, power; co-advisor. Bernie Brooks, NIH; co-advisor. Marty Peckerar, microelectronics. Igor Smolyaninov, electro-optics.
Funding: NSF, NSA (deployed XMT computer), NIH. Transferred IP for Intel/TBB-customized XMT lazy scheduling, 4/2013.
"Reinvention of Computing for Parallelism": 1st out of 49 for a Maryland Research Center of Excellence (MRCE) by USM. None funded. 17 members, including UMBC, UMBI, UMSOM. Mostly applications.
* 1st place, ACM Student Research Competition, PACT11. Post-doc, UIUC.

Mixed bag of incentives
- Vendor loyalists: in the past decade, diminished competition among vendors. The recent GPU chase/race demonstrates the power of competition. Now back with a vengeance: the 3rd and 4th players in mobile dropped out in 2012.
- What's in it for researchers who are not generalists? (And how many HW/SW/algorithm/app generalists do you know?) Zero-sum with other research interests; e.g., spin(?) of the Game Over report into supporting power over the missing items.

Algorithms dart
Parallel Random-Access Machine/Model. PRAM: n synchronous processors, all having unit-time access to a shared memory. Basis for the PRAM algorithmic theory:

- 2nd in magnitude only to the serial algorithmic theory.
- Simpler than the above; see later.
- Won the battle of ideas in the 1980s. Repeatedly challenged without success: no real alternative!
- Today: a latent, though not widespread, knowledge base.

Drawing a target? State of the art, 1993: the well-cited LogP paper deemed the PRAM unrealistic for implementation. Why was high bandwidth hard for 1993 technology? Low bandwidth plus the PRAM lower bounds [VW85, MNV94]: a real conflict. What else can be done?

The approach I pursued
Start from the abstraction: coming from algorithms, how do

I want to think about parallelism?
- Co-develop the parallel algorithms theory.
- Learn architecture. Understand constraints & compilers.
- Start from a clean slate to build a holistic system, connecting dots developed since the 1970s. Preempt the need for afterthoughts. No shame in learning from others.
- Prototype and validate quantitatively: our explicit multi-threaded (XMT) platform. Contradicting LogP: it can be done! The 1st thought leading to XMT: sufficient on-chip bandwidth is now possible.

More order-of-magnitude denial examples, 1: performance
Example: parallel max-flow, speedups vs. best serial:
- [HeHo, IPDPS10]: <= 2.5x, using the best of CUDA & CPU hybrid.
- [CarageaV, SPAA11]: <= 108.3x, using XMT (ShiloachV & GoldbergTarjan). Big effort beyond the published algorithms, vs. normal theory-to-practice.
Advantage: 43X.

Why the max-flow example? It is as advanced as any irregular fine-grained parallel algorithm dared on any parallel architecture.
- The horizons of a computer architecture cannot be studied using only elementary algorithms. [The performance, efficiency, and effectiveness of a car are not tested only in low gear or limited road conditions.]
- A stress test for important architecture capabilities not often discussed:
1. Strong scaling: increase #processors, not problem size.
2. Rewarding even small amounts of algorithm parallelism with speedups & not falling behind on serial.

More order-of-magnitude denial examples, 2: ease of programming
Ease of learning. Teachability [SIGCSE10]:
- Freshman class, 11 non-CS students. Programming assignments: merge-sort*, integer-sort* & sample-sort.
- TJ Magnet HS: the teacher downloaded the simulator, assignments, and class notes from the XMT page. Self-taught. Recommends: teach XMT first. Easiest to set up (simulator), program, and analyze: predictable performance (as in serial). Not just

embarrassingly parallel. Teaches also OpenMP, MPI, CUDA.**
- HS & MS students (some 10 years old) from underrepresented groups, taught by an HS math teacher.
Benchmark: can any CS major program your many-core? For hard speedups? Avoiding it is denial. Yet, this is the state of the art.
* In an Nvidia + UC Berkeley IPDPS09 research paper!
** Also, a keynote at [email protected] + an interview with the teacher.

Biconnectivity speedups [EdwardsV12]: 9X to 33X, relative to a best prior result of up to 4X [Cong-Bader] on a 12-processor SMP. No GPU results.
Ease of programming: 1. Normal algorithm-to-programming (of [TarjanV]) versus a creative and complex program. 2. The most advanced algorithm in parallel algorithms textbooks. 3. Spring12 class: a programming HW assignment!
Biconnectivity speedups were particularly challenging since the DFS-based serial algorithm is very compact. Indeed, triconnectivity speedups [EdwardsV, SPAA12]: up to 129X! Unaware of prior parallel results.

This completes the work on the advanced PRAM algorithms. Guess what comes next.

Other speedup results
- SPAA09: XMT gets ~10X vs. a state-of-the-art Intel Core 2 in experiments guided by a senior Intel engineer. The silicon area of the 64-processor XMT is the same as 1-2 commodity processor cores.
- Simulation of 1024 processors: 100X on a standard benchmark suite for VHDL gate-level simulation [GuV06].
- HotPar10/ICPP08 comparisons with GPUs: XMT+GPU beats all-in-one.
Power: all results extend to a power envelope not exceeding current GPUs.

The reward game is skewed: it gives (the illusion of) job security
You might wonder: if we have such a great architecture, why don't we have many more single-application papers? It is easier to publish on hard-to-program platforms. Remember STI Cell? Vendor-backed is robust? Remember Itanium? Application papers for easy-to-program architectures are

considered boring, even when they show good results.
Recipe for academic publication and promotions: take a simple application (e.g., breadth-first search in a graph); implement it on the latest difficult-to-program, vendor-backed parallel architecture; discuss challenges and workarounds to establish intellectual merit; stand out from the crowd for industry impact. Job security: the architecture is sure to be replaced (difficult to program ..).

Competition may be back
Steve Jobs, 4/10: HTML5 will win on mobile & PCs.
- Trap: rely on HTML5 to quickly get your app broad compatibility across the mobile landscape.
- To succeed: tackle each platform separately. Idea: build apps that only work on the latest/greatest version of a phone, and intentionally not on previous models. Fewer people will be able to use it, but buyers of the new toy will. If your app makes hot new HW look good,

it gets promoted by the HW/OS vendor, giving your app a presence it could not otherwise get. Once a product succeeds on brand-new HW, you may adapt it to other platforms. --ZDNetAsia
Platform competition: a high-end app exceeding other platforms. It is just a matter of time until mobile starts eating the desktop vendors' lunch.

Summary (1 out of 2)
- Industry & academia are stuck in the exploratory stage. The solutions do not work: critical adoption problems.
- Contrasted the creativity, complexity, and demand for overhauling code for high amounts of parallelism with: cost-effective parallel programming, and even feasibility.
- Reviewed a solution, the XMT platform:
  - Accounts for the opportunities and constraints understood in the exploratory stage & current technology realities.

  - Isolates all creativity to parallel algorithms.
  - Reduces programming to a skill.
  - Order-of-magnitude improvements: ease of programming, performance.

Summary (2 out of 2)
In line with the new LANL emphasis: strong scaling. Belief propagation/inference applications: in part, fast processing of Bayesian networks, typically modeled as irregular directed acyclic graphs (DAGs). Processing such DAGs in parallel requires fine-grained irregular parallelism. Supporting such fast parallel algorithms is the goal of the 1000-processor on-chip XMT system. The stress test met in the max-flow work & the strong speedups over the main many-core commercial platforms are promising indicators for such DAG algorithms. Other results offer similar evidence.

Conclusion 1
Without an easy-to-program many-core architecture, rejection of parallelism by mainstream programmers is all but certain.

More researchers need to work on, and seek publications on, easy-to-program architectures. CS&E people are welcome to study problems/applications on the platform and/or the architecture, and do research:
1. A good research bet. Start with comparisons (performance, ease of programming) to convince yourself.
2. It is crucial for innovation, the economy (Deloitte's report), and the future of the profession.
3. Understand that application people will join only when given a wall-clock-time reward.

Conclusion 2
XMT aims at being an easy-to-program, general-purpose architecture:

- Performance improvement on hard-to-parallelize applications (max-flow).
- Ease of programming, shown by order-of-magnitude improvement in ease of teaching/learning: better speedups achieved at a much earlier developmental stage (10th graders in HS versus graduate students). DARPA HPCS/UCSB/UMD experiment, middle school, 2 magnet HSs, an inner-city HS, a freshman course, the UIUC/UMD experiment.
- Current stage of the XMT project: develop more complex applications beyond benchmarks. Steps in this direction: max-flow; gate-level logic simulations [GuV06]; new results on a variety of high-connectivity problems.

Other XMT objectives
- A workhorse for general-purpose parallel computing. Sustained performance: a car that can reach 300 mph but averages 10 mph is not as good as one with a max of 70 mph and an average of 50 mph.
- Competitive on any type of parallelism: fine/coarse grain, regular/irregular.
- Parallel algorithms and architecture are on a collision course; preempt it.

General-purpose serial computing: robust.

Example: it supported the SW spiral (Andy Grove). Key: the stored-program + program-counter (SPPC) apparatus, which survived 60+ years of general-purpose serial processors. It allowed importing only 1 axiom from math: induction [coupling induction with a simple abstraction]. Algorithms: a clean analytic theory of correctness and efficiency.

General-purpose parallel computing for speeding up a single task
Current commercial systems are not robust; the SW spiral is broken. They look like industrial accidents and did not emerge from a clean-slate design. An order of magnitude behind* on ease of programming: an unnecessary burden on programmers, possibly inventive:
- decomposition
- assignment
- orchestration
- mapping
- reasoning about concurrency (race conditions)

- (for GPUs) making the whole program highly parallel
An order of magnitude behind* on speedups for irregular apps. No compact, rigorous, induction-like way to reason.
Is it feasible to rewrite the serial book for parallel? XMT: Yes! A 1-1 match to the serial stack:
- Clean-slate design.
- A theory of parallel algorithms: couple induction with a simple parallel abstraction.
- A parallel SPPC. Validation: architecture, compiler, run-time.
- An order of magnitude ahead on ease of programming and speedups, including the most advanced algorithmic problems.

Serial Abstraction & A Parallel Counterpart, 2nd visit
Serial abstraction: any single instruction available for execution in a serial program executes immediately: Immediate Serial

Execution (ISE).
[Figure, as before: serial execution, based on the serial abstraction, vs. parallel execution, based on the parallel abstraction: at each step, do whatever could be done in parallel assuming unlimited hardware. Serial: Time = Work. Parallel: Time << Work, where Work = total #ops.]
Abstraction for making parallel computing simple: indefinitely many instructions, which are available for concurrent execution, execute immediately; dubbed Immediate Concurrent Execution (ICE), the same as parallel algorithmic thinking (PAT) for PRAM.
Conjecture [SV82]: getting from there to a full PRAM algorithm is just a matter of skill. Used as the framework in the main PRAM algorithms texts: JaJa92, KKT01.

Back-up slide: objectives for the rest of the presentation

1. Explain the XMT order-of-magnitude advantages in ease of programming and performance.
- Snapshot of XMT programming; present the ease-of-programming strategy. By nature, algorithms require some level of creativity; ideally, programming does not. Our approach: a. isolate creativity to parallel algorithms; b. reduce programming to a skill. Achieved by careful design of the architecture, which makes more sense than extra labor on every program.
- New insight: good speedup on limited parallelism is crucial for an order-of-magnitude speedup advantage on interesting problems.
2. Rising business & science competition opportunities.

Example: the list ranking problem

Parallel pointer jumping, ICE pseudocode:

    for 1 <= i <= n pardo
        while S(i) != S(S(i)) do
            W(i) := W(i) + W(S(i))
            S(i) := S(S(i))
        end while
    end for

Note: tight synchrony; reads before writes. Complexity: O(log n) time, O(n log n) work. Serial: Time = Work = O(n). Unusual: much (~n) parallelism; often there is far less.

XMTC program:

    psBaseReg flag; // number of threads that require another loop iteration

    void pointer_jump(int S[n], int W[n], int n) {
        int W_temp[n]; int S_temp[n];
        do {
            spawn(0, n-1) {   // read into temporaries: reads before writes
                if (S[$] != S[S[$]]) {
                    W_temp[$] = W[$] + W[S[$]];
                    S_temp[$] = S[S[$]];
                } else {
                    W_temp[$] = W[$]; S_temp[$] = S[$];
                }
            }
            flag = 0;
            spawn(0, n-1) {
                if (S_temp[$] != S_temp[S_temp[$]]) {
                    int i = 1;
                    ps(i, flag);   // this thread votes for another iteration
                    W[$] = W_temp[$] + W_temp[S_temp[$]];
                    S[$] = S_temp[S_temp[$]];
                } else {
                    W[$] = W_temp[$]; S[$] = S_temp[$];
                }
            }
        } while (flag != 0);
    }

Speedup for list ranking (vs. best serial)
Focus: the XMT feature of low-overhead initiation of all TCUs.

Speedup (cycles/cycles): HW (with the feature) versus bsplit (without it), both within XMT; 1024-TCU XMT vs. Intel Core i7 920.

    List size (x 1,000)  jump-bsplit  jump-HW  cointoss-bsplit  cointoss-HW  hybrid-bsplit  hybrid-HW
    1                    0.18         6.03     0.23             1.38         0.18           5.97
    3.2                  0.54         7.63     0.27             2.18         0.54           7.61
    10                   0.60         8.18     0.49             3.74         0.61           8.23
    32                   0.75         10.72    1.17             8.83         1.43           14.31
    100                  1.21         17.97    3.32             27.86        3.69           36.50
    320                  1.66         14.70    6.73             31.01        6.86           33.07
    1,000                3.46         8.61     16.57            40.26        16.63          40.72
    3,200                5.63         10.44    29.37            56.53        29.64          56.68
    10,000               6.38         10.86    35.85            61.64        35.95          61.72

Algorithms: jump: pointer jumping; cointoss: coin tossing; hybrid: cointoss until the list size is below some threshold, then jump (accelerating cascades). Spawn types: bsplit: binary splitting; HW: XMT hardware thread initiation.
Note (for later): compare the speedups for inputs in the 1000s.
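The "hybrid" columns above use the accelerating-cascades technique. A minimal sketch of the driver, assuming an illustrative THRESHOLD constant and a cointoss_round() routine (one work-efficient shrinking round, not shown); pointer_jump() is the routine shown earlier:

    /* Accelerating cascades for list ranking: run the work-efficient
       cointoss rounds while the list is large, then switch to the
       faster but work-heavier pointer jumping on the small remainder. */
    void list_rank_hybrid(int S[], int W[], int n) {
        int remaining = n;
        while (remaining > THRESHOLD)
            remaining = cointoss_round(S, W, remaining);
        pointer_jump(S, W, remaining);  /* O(log n) time, O(n log n) work */
    }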

Comparison with prior work on list ranking
Bader et al. 2005: speedups <= 3x on a Sun E4500 SMP (using 8 processors); speedups <= 6x on a Cray MTA-2 (using 8 processors, i.e., 1024 HW streams, as in the 1024-TCU XMT). Speedups are relative to the same parallel algorithm running on a single processor of the same machine (list size = 80 million); no comparison with the best sequential algorithm.
Rehman et al. 2009: speedups <= 34x in cycle count (18x in wall-clock time) on an NVIDIA GTX 280 GPU

(list size 4M) over the best sequential algorithm on an Intel Core 2 Quad Q6600; vs. <= 62X for 10M (57X for 4M) on XMT. Pointer jumping for 4K: no speedup on the GPU, vs. 8X for XMT.
MAIN POINT: with a limited number (e.g., 4K) of very short threads, XMT has an order-of-magnitude advantage over all others.
- D. Bader, G. Cong, and J. Feo. On the architectural requirements for efficient execution of graph algorithms. In Proc. Int'l Conf. on Parallel Processing (ICPP), pp. 547-556, June 2005.
- M. S. Rehman, K. Kothapalli, and P. J. Narayanan. Fast and Scalable List Ranking on the GPU. In Int'l Conf. on Supercomputing (ICS), pp. 235-243, June 2009.

Biconnectivity speedups [EdwardsV]: 9X to 33X, relative to a best prior result of up to 4X [Cong-Bader] on a 12-processor SMP. No GPU results. Int'l Workshop on Programming Models and Applications for Multicores and Manycores, in Proc. PPoPP12, Feb 25-29, 2012.
Biconnectivity speedups were particularly challenging since the DFS-based serial algorithm is very compact; however, stronger speedups for triconnectivity: submitted. Unaware of prior parallel results.
Ease of programming:

1. Normal algorithm-to-programming (of [TarjanV]) versus a creative and complex program.
2. The most advanced algorithm in parallel algorithms textbooks.
3. Spring12 class: a programming HW assignment!

"The U.S. Is Busy Building Supercomputers, but Needs Someone to Run Them"*, 12/2011. Low-end supercomputers: $1-10M/unit. Supercomputing leaders: not enough programmers.
Comments: 1. There are fewer (total) programmers than many-cores. 2. The programming models of many-cores are too similar to expect a difference. 3. IMO, denial; just a symptom: the problem is the HW.
Opportunity space: <~1TB main memory. With 1000-core HW, order-of-magnitude: lower cost (~$10K/unit), easier programming, greater speedups (performance). Could LANL be interested?
* http://www.thedailybeast.com/articles/2011/12/28/the-u-s-is-busy-buildingsupercomputers-but-needs-someone-to-run-them.html

Unchallenged at a Fall11 DC event: "what a wonderful, dynamic field: 10 yrs ago DEC & Sun were great companies, now gone"; Apple, IBM, and Motorola are also out of high-end commodity processors. Oh dear, compare this cheerfulness with:
- "The Trouble with Multicore: Chipmakers are busy designing microprocessors that most programmers can't handle" (D. Patterson, IEEE Spectrum 7/2010).
- "Only heroic programmers can exploit the vast parallelism in current machines" (The Future of Computing: Game Over or Next Level, National Research Council, 2011).
Ask yourself: dynamic, or consolidation and diminished competition? Satisfied with recent years' innovation in high-end commodity apps?
Next: how did we get to this point, and how do we get out?
Is it an industrial disease if so many in academia, industry, and Wall Street see black as white? If yes, consider: "Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of

disinfectants; electric light the most efficient policeman" (Louis D. Brandeis).
BTW, IMO vendors will be happy if shown the way. But: 1. people say what they think vendors want to hear; 2. vendors cannot protest against compliments to their own products.

What if we keep digging (around the HW dart)?
Example: how do we attract students? Start a parallel programming course with a parallel version of basic matrix multiply, or with a tile-based one? The latter delivers the 1st parallel programming trauma ASAP: shock and awe. Okay to teach later, but .. how many tiles does it take to fit 1000X1000 matrices in the cache of a modern PC?
A sociology-of-science angle: Ludwik Fleck, 1935 (the Turing of the sociology of science): research too esoteric to be reliable requires exoteric validation. Exoteric validation is exactly what general programmers could have provided, but they have not!
Next: why the vendors' approach cannot work (and rejection by programmers is all but certain).

What if we keep digging, 2
Programmers' productivity busters:
- Decomposition-inventive design: **creativity**
- Reasoning about concurrency in threads: **complexity**
- For some parallel HW: issues if the whole program is not highly parallel (will highlight). Even if it is: **too much work**
Many-core HW is optimized for things you can truly measure: (old) benchmarks & power. What about productivity? Low priority, at best.

Denial .. in other words

[Cartoon credit: wordpress.com] The application dreamer: between a rock and a hard place.
A working assumption has permeated: programming models for larger-scale & mainstream systems are similar, importing the ills of parallel computing to the mainstream. Casualties of too-costly SW development:
- Cost and time-to-market of applications.
- The business model for innovation (& American ingenuity).
- Advantage to lower-wage CS job markets.
- The mission of research enterprises. PhD theses (bioinformatics) program around the engineering of parallel machines; not robust contributions such as new algorithms or modeling of the application domain.
- The NSF HS plan (attract the best US minds with less programming; 10K CS teachers): only the future of the field & U.S. (and US-like) competitiveness.
Still, an HW vendor, 2011: "Okay, you do have a convenient way to do parallel programming; so what's the big deal?" Or: "this is SW, not HW."

Threats to the validity of current power modeling
System 1: 5 days to correct code + 1 day to performance. System 2: 5 days to correct code + 1/2 year (125 days) to performance. How would you compare total power? The power to develop the app vs. the power to use the app. Would the applications even be the same? If System 1 promotes more use (more apps, more programmers), is it really bad?
But what is the performance penalty for easy programming? Surprise benefit! vs. GPU [HotPar10]: 1024-TCU XMT simulations vs. code by others for a GTX280; < 1 is a slowdown. Sought: similar silicon area & the same clock.
Postscript regarding BFS:
- 59X if the average parallelism is 20.

- 109X if XMT is downscaled to 64 TCUs.
Problem acronyms: BFS: breadth-first search on graphs. Bprop: back-propagation machine-learning algorithm. Conv: image convolution kernel with separable filter. Msort: merge-sort algorithm. NW: Needleman-Wunsch sequence alignment. Reduct: parallel reduction (sum). Spmv: sparse matrix-vector multiplication.

Backup slides
Many forget that the only reason PRAM algorithms did not become standard CS knowledge is that there was no demonstration of an implementable computer architecture that allowed programmers to look at a computer like a PRAM. XMT changed that, and now we should let Mark Twain complete the job: "We should be careful to get out of an experience only the wisdom

that is in it and stop there; lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again, and that is well; but also she will never sit down on a cold one anymore." (Mark Twain)

State of the art: approaches
Seek to add architectural support for parallel programming. For "Next level"?! Holding on to "industrial accidents" (not my term). Contrast with internet/networking: a well-engineered core combined with openness-driven accidental design. Off-the-shelf: what else can I do?

LPS Conclusion 1
XMT aims at being an easy-to-program, general-purpose architecture:
- Performance improvements on hard-to-parallelize applications like max-flow.
- Ease of programming, shown by order-of-magnitude improvement in ease of teaching/learning: better speedups achieved at a much earlier developmental stage (10th graders in HS versus graduate students). DARPA HPCS/UCSB/UMD experiment, middle school, 2 magnet HSs, an inner-city HS, a freshman course, the UIUC/UMD experiment.

- Current stage of the XMT project: develop more complex applications beyond benchmarks. Steps in this direction: max-flow; gate-level logic simulations [GuV06]; new results on a variety of high-connectivity problems.
Without an easy-to-program many-core architecture, rejection of parallelism by mainstream programmers is all but certain. Affirmative action: drive more researchers to work on, and seek publications on, easy-to-program architectures. This work should not be dismissed as "too easy."

LPS Conclusion 2
Other XMT objectives:
- A workhorse for general-purpose parallel computing. Sustained performance: a car that can reach 300 mph but averages 10 mph is not as good as one with a max of 70 mph and an average of 50 mph.
- Competitive on any type of parallelism: fine/coarse grain, regular/irregular.

- Parallel algorithms and architecture are on a collision course; preempt it.
What will it take to get researchers & practitioners to experiment with an easy-to-program architecture (e.g., XMT)? Answer: a 1+GHz, 1000-core XMT prototype. Why: the wall-clock reward. Problem: funding. IMO, enabling such a prototype must be a top priority for funding: it is crucial for innovation, the economy (Deloitte's report), and the future of the profession, and it is a good deal for government.

Lessons from the invention of computing
H. Goldstine, J. von Neumann. Planning and coding problems for an electronic computing instrument, 1947: ".. in comparing codes 4 viewpoints must be kept in

mind, all of them of comparable importance:
- simplicity and reliability of the engineering solutions required by the code;
- simplicity, compactness and completeness of the code;
- ease and speed of the human procedure of translating mathematically conceived methods into the code, and also of finding and correcting errors in coding or of applying to it changes that have been decided upon at a later stage;
- efficiency of the code in operating the machine near its full intrinsic speed."
Take home: the legend features fail the "truly measure" test. In today's language: programmers' productivity. The birth(?) of CS: translation into code of non-specific methods. How do we match that for parallelism?

How does XMT address BSP (bulk-synchronous parallelism) concerns?
XMTC programming incorporates programming for

locality & reduced synchrony as 2nd-order considerations. On-chip interconnection network: high bandwidth. Memory architecture: low latencies.

1st comment on ease of programming
"I was motivated to solve all the XMT programming assignments we got, since I had to cope with solving the algorithmic problems themselves, which I enjoy doing. In contrast, I did not see the point of programming other parallel systems available to us at school, since too much of the programming was effort getting around the way the systems were engineered, and this was not fun." (Jacob Hurwitz, 10th grader, Montgomery Blair High School Magnet Program, Silver Spring, Maryland, December 2007.)

XMT (Explicit Multi-Threading): A PRAM-On-Chip Vision
IF you could program a current many-core: great speedups. XMT: fix the IF.
XMT was designed from the ground up with the following features:

- Allows a programmer's workflow whose first step is algorithm design for work-depth, thereby harnessing the whole PRAM theory.
- No need to program for locality beyond the use of local thread variables, post work-depth.
- Hardware-supported dynamic allocation of virtual threads to processors.
- Sufficient interconnection network bandwidth.
- Gracefully moving between serial & parallel execution (no off-loading).
- Backwards compatibility on serial code.
- Supports irregular, fine-grained algorithms (unique). Some role for hashing.
Tested HW & SW prototypes. Software release of the full XMT environment. SPAA09: ~10X relative to an Intel Core 2 Duo.

Movement of data: a back-of-the-thermal-envelope argument
- 4X: the GPU result over XMT for convolution.
- Say the same total data movement as on the GPU, but spread over 4X the time. Power (Watt) is energy/time, so Power_XMT ~ (1/4) Power_GPU. Later slides: 3.7 Power_XMT ~ Power_GPU.

Finally: no other XMT algorithm moves data at a higher rate. Scope of the comment: single-chip architectures.

How does it work, and what should people know to participate?
- "Work-depth" algorithm methodology (SV82): state all the ops you can do in parallel; repeat. Minimize: total #operations, #rounds. Note: 1. the rest is skill; 2. this sets the algorithm.
- Program in single-program multiple-data (SPMD) style, with short (non-OS) threads and independence of order semantics (IOS). XMTC: C plus 3 commands: Spawn+Join, Prefix-Sum (PS). Unique: 1st parallelism, then decomposition. [Programming methodology: algorithms -> effective programs. Extends the SV82 work-depth framework from PRAM-like to XMTC. Alternative: established APIs (VHDL/Verilog, OpenGL, MATLAB): a win-win proposition.]
- Performance-tuned program: minimize the length of sequences of round-trips to memory + QRQW + depth; take advantage of architecture enhancements (e.g., prefetch). Compiler: [ideally, given an XMTC program, the compiler provides

the decomposition; for now, tune up manually / teach the compiler.]
- Architecture: HW-supported run-time load-balancing of concurrent threads over processors. Low thread-creation overhead. (Extends the classic stored-program program-counter apparatus; cited by 15 Intel patents; prefix-sum to registers & to memory.)
All computer scientists will need to know >1 levels of abstraction (LoA). CS programmer's model: WD+P. CS expert: WD+P+PTP. Systems: +A.

PERFORMANCE PROGRAMMING & ITS PRODUCTIVITY

[Workflow diagram, four numbered paths from a basic algorithm (sometimes informal): add data-structures (for a serial algorithm) -> serial program (C) -> standard computer; add parallel data-structures (for a PRAM-like algorithm) -> parallel program (XMT-C) -> XMT computer (or simulator), with low overheads!; from the serial program, parallel programming a la Culler-Singh: decomposition -> assignment -> orchestration -> mapping -> parallel computer.]
Conclusions: 4 is easier than 2; there are problems with 3; 4 is competitive with 1 on cost-effectiveness, and natural.

APPLICATION PROGRAMMING & ITS PRODUCTIVITY
[Workflow diagram: application programmer's interfaces (APIs: OpenGL, VHDL/Verilog, Matlab) -> compiler -> serial program (C) or parallel program (XMT-C). "Automatic? Yes / Maybe / Yes." Serial program -> standard computer; parallel program -> XMT architecture (simulator); or, via decomposition, assignment, orchestration, mapping (Culler-Singh), -> parallel computer.]

XMT block diagram: back-up slide.

ISA

Any serial ISA (MIPS, X86); here, MIPS R3000. Plus: Spawn (cannot be nested), Join, SSpawn (can be nested), PS, PSM, and instructions for (compiler) optimizations.

The Memory Wall
Concerns: 1) latency to main memory; 2) bandwidth to main memory. Position papers: "the memory wall" (Wulf), "it's the memory, stupid!" (Sites). Note: (i) larger on-chip caches are possible, but for serial computing the return on using them is diminishing; (ii) few cache misses can overlap (in time) in serial computing, so even the limited bandwidth to memory is underused. XMT does better on both accounts: it makes more use of the high bandwidth to cache;

it hides latency by overlapping cache misses; and it uses more bandwidth to main memory by generating concurrent memory requests, while use of the cache alleviates the penalty from overuse. Conclusion: using PRAM parallelism coupled with IOS, XMT reduces the effect of cache stalls.

State of the art
Unsatisfactory products. Diminished competition among vendors. No funding for really new architecture. Today's academics: limited to the best one can derive from vendor products (trained and rewarded accordingly). Missing: profound questioning of the products. Absurdity: industry expects its people to (i) look for guidance to improve products, e.g., from academia, yet (ii) present a brave posture to the world regarding the same products; in turn, this leads academia/funding not to develop answers to (i).

Problem: develop a technology
Definition. Technology: the capability given by the practical application of knowledge. Here: how to build and program parallel machines; parallel algorithmic thinking.
The rest of this talk:
- Snapshots of the technology.
- Order-of-magnitude advantages in ease of programming and speedups.
- Challenges.
Somewhere: an explanation for the current reality.
Recall the definition. Technology: capability given by the practical application of knowledge. XMT: build and program parallel machines; parallel algorithmic thinking. VERY few teach both algorithms AND architecture. Theorists stop at creating knowledge. It is nearly impossible for young architects to develop a platform.

(Exception: UT TRIPS. Now: Microsoft, NVidia.) Diminished competition in architecture. (Unintended) research reward system: conform or perish. No room for questioning vendors. "Every society honors its live conformists and its dead troublemakers" (McLaughlin). Are we really trying to ensure that many-cores are not rejected by programmers?
Einstein's observation: "A perfection of means, and confusion of aims, seems to be our main problem." Conformity incentives are for perfecting means:
- Consider a vendor-backed flawed system: a wonderful opportunity for our originality-seeking publications culture. * The simplest problem requires creativity: more papers. * Cite one another if on similar systems: maximize citations and claim industry impact.
- Ultimate job security: by the time the ink dries on these papers, there is the next flawed "modern state-of-the-art" system. A culture of short-term impact.
- Unchallenged in a power DC meeting: "what a wonderful, dynamic field: 10 yrs ago DEC & Sun were great companies, now gone";

dynamic? Or diminished competition?

Parallel programming today
- Current parallel programming: high-friction navigation by implementation [walk/crawl]. An initial program (1 week) begins trial & error tuning (1/2 year; architecture-dependent).
- PRAM-On-Chip programming: low-friction navigation: mental design and analysis [fly]. Once a constant-factors-minded algorithm is set, implementation and tuning are straightforward.

So, an algorithm in the PRAM model

is presented in terms of a sequence of parallel time units (or rounds, or pulses); we allow p instructions to be performed at each time unit, one per processor. This means that a time unit consists of a sequence of exactly p instructions to be performed concurrently. [SV-MaxFlow-82: way too difficult.]
2 drawbacks of the PRAM mode: (i) it does not reveal how the algorithm will run on PRAMs with a different number of processors, e.g., to what extent more processors will speed the computation, or fewer processors slow it; (ii) fully specifying the allocation of instructions to processors requires a level of detail which might be unnecessary (e.g., a compiler may be able to extract it from lesser detail).

1st round of discounts: the Work-Depth presentation of algorithms

Work-Depth algorithms are also presented as a sequence of parallel time units (or rounds, or pulses); however, each time unit consists of a sequence of instructions to be performed concurrently, and the sequence may include any number of instructions. Why is this enough? See J-92, KKT01, or my class notes. [SV-MaxFlow-82: still way too difficult.] Drawback of the WD mode: fully specifying the serial number of each instruction requires a level of detail that may be added later.

2nd round of discounts: the Informal Work-Depth (IWD) description
Similar to Work-Depth, the algorithm is presented in terms of a sequence of parallel time units (or rounds); however, at each time unit there is a set containing a number of instructions to be performed concurrently: ICE. Descriptions of the set of concurrent instructions can come in many flavors, even implicit ones, where the number of instructions is not obvious.

The main methodological issue addressed here is how to train CS&E professionals to think in parallel. The informal answer: train yourself to provide IWD descriptions of parallel algorithms. The rest is detail (although important) that can be acquired as a skill, by training (perhaps with tools). Why is this enough? Answer: a miracle. See J-92, KKT01, or my class notes:
1. W/p + T time on p processors in algebraic, decision-tree "fluffy" models (the bound is stated below)
2. V81, SV82 conjectured miracle: use these as heuristics for the full-overhead PRAM model
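The W/p + T claim is the standard Work-Depth scheduling bound; here is a one-line rendering of the statement and its justification (my formulation of a standard fact, consistent with the slide):

```latex
\[
  \text{A Work-Depth algorithm with work } W(n) \text{ and depth } T(n)
  \text{ runs on } p \text{ processors in } O\!\left(\tfrac{W(n)}{p} + T(n)\right)
  \text{ time,}
\]
since a time unit with $w_i$ concurrent instructions takes
$\lceil w_i / p \rceil$ rounds on $p$ processors, and
\[
  \sum_{i=1}^{T(n)} \left\lceil \frac{w_i}{p} \right\rceil
  \;\le\; \frac{W(n)}{p} + T(n),
  \qquad W(n) = \sum_{i=1}^{T(n)} w_i .
\]
```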

Example of a parallel PRAM-like algorithm. Input: (i) all world airports; (ii) for each, all its non-stop flights. Find: the smallest number of flights from DCA to every other airport. Basic (actually parallel) algorithm:
Step i: for all airports requiring i-1 flights, for all their outgoing flights, mark (concurrently!) all yet-unvisited airports as requiring i flights. (Note the nesting.)
Gain relative to serial: (first cut) ~T/S! Decisive also relative to coarse-grained parallelism.
Note: (i) "concurrently", as in natural BFS, is the only change to the serial algorithm; (ii) no decomposition/partition.
Mental effort of PRAM-like programming: 1. sometimes easier than serial; 2. considerably easier than for any parallel computer currently sold. Understanding falls within the common denominator of other approaches.
Serial: forces an eye-of-a-needle queue; need to prove it is still the same as the parallel version. O(T) time, where T = total # of flights. Parallel: parallel data structures; inherent serialization S. (A sketch of the parallel step follows.)
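A minimal C sketch of the level-synchronous step above (my illustration; the airport graph, its adjacency-list layout, and all sizes are hypothetical). The doubly nested loop is exactly the part an XMTC program would mark for concurrent execution:

```c
#include <stdio.h>

#define NA 6            /* number of airports (toy example) */
#define UNSEEN (-1)

/* Adjacency lists: flights[first[v] .. first[v+1]-1] leave airport v. */
int first[NA + 1] = {0, 2, 4, 5, 6, 7, 7};
int flights[7]    = {1, 2, 3, 4, 5, 5, 0};

/* level[v] = smallest number of flights from src to airport v. */
void bfs_levels(int src, int level[NA]) {
    int cur[NA], next[NA], ncur = 1, nnext;
    for (int v = 0; v < NA; v++) level[v] = UNSEEN;
    level[src] = 0;
    cur[0] = src;
    for (int i = 1; ncur > 0; i++, ncur = nnext) {       /* step i */
        nnext = 0;
        /* In XMTC both loops below would be a nested pardo/spawn:
         * one thread per outgoing flight of the current level.   */
        for (int j = 0; j < ncur; j++)
            for (int e = first[cur[j]]; e < first[cur[j] + 1]; e++)
                if (level[flights[e]] == UNSEEN) {       /* unvisited    */
                    level[flights[e]] = i;               /* needs i hops */
                    next[nnext++] = flights[e];
                }
        for (int j = 0; j < nnext; j++) cur[j] = next[j];
    }
}

int main(void) {
    int level[NA];
    bfs_levels(0, level);
    for (int v = 0; v < NA; v++)
        printf("airport %d: %d flights\n", v, level[v]);
    return 0;
}
```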

Chronology around the fault line
- Just right: the PRAM model [FW77]; SV-82 and V-Thesis81; PRAM theory (in effect): CLR-90 1st edition, J-92, CLRS-09 3rd edition, NESL, KKT-01, TCPP curriculum 2010; XMT97+, which supports the rich PRAM algorithms literature.
- Too easy: "Paracomputer" [Schwartz80].
- Too difficult: BSP [Valiant90]; LogP [UC-Berkeley93]; Map-Reduce (a success, but not many-core); nearly all parallel machines to date: "machines that most programmers cannot handle" [V-11]; only heroic programmers.
Nested parallelism: an issue for both sides; e.g., Cilk.
Current interest: new "computing stacks": programmer's model, programming languages, compilers, architectures, etc.
Merit of the fault-line image: two pillars holding a building (the stack) must be on the same side of a fault line. Chipmakers cannot expect a wealth of algorithms and high programmer productivity with architectures for which PRAM is too easy (e.g., ones that force programming for locality).
Telling a fault line from the surface:

[Diagram: the surface vs. the fault line. Surface labels: "PRAM too difficult" vs. "PRAM too easy". Models placed on the surface: ICE, WD, PRAM (PRAM: the simplest model, per TCPP) vs. BSP/Cilk. The fault line underneath: sufficient bandwidth vs. insufficient bandwidth.]
Old soft claim, e.g. [BMM94]: the hidden cost of low bandwidth. New soft claim: the surface (PRAM easy/difficult) reveals the side with respect to the

bandwidth fault line.
Missing Many-Core Understanding
Comparison of many-core platforms for: ease-of-programming, and achieving hard speedups (over the best serial algorithm for the same problem) for strong scaling. Strong scaling: solution time ~ 1/#processors for a fixed problem size. Weak scaling: problem size per processor is fixed. Guess what happens to vendors, and to researchers supported by them, once a comparison does not go their way?!
New work: biconnectivity. Not aware of GPU work. 12-processor SMP: < 4X speedups, via the TarjanV log-time PRAM algorithm, practical version, with significant modification; their 1st try ran below serial on the 12 processors. XMT: >9X to <48X speedups, via the TarjanV practical version; more robust for all inputs than BFS, DFS, etc.

Significance: 1. Log-time PRAM graph algorithms are ahead on speedups. 2. The paper makes a similar case for the ShiloachV log-time connectivity algorithm. Also beats GPUs on both speedup and ease (a GPU paper, versus a grad-course programming assignment; even a couple of 10th graders implemented SV).
Ease-of-Programming Benchmark: can any CS major program your many-core? Cannot really avoid it!
Teachability demonstrated so far for XMT [SIGCSE10]:
- To a freshman class with 11 non-CS students. Some programming assignments: merge-sort*, integer-sort* & sample-sort.
Other teachers:
- A magnet-HS teacher downloaded the simulator, assignments, and class notes from the XMT page; self-taught. Recommends: teach XMT first; easiest to set up (simulator), program, and analyze, with the ability to anticipate performance (as in serial), and not just for embarrassingly parallel problems. Also teaches OpenMP, MPI, CUDA. See also the keynote at [email protected] + the interview with the teacher.

- High-school & middle-school (some 10-year-old) students from underrepresented groups, taught by a HS math teacher.
*Also in Nvidia's Satish, Harris & Garland, IPDPS09.
Q&A. Question: why do PRAM-type parallel algorithms matter, when we can get by with existing serial algorithms and parallel programming methods like OpenMP on top of them? Answer: with the latter, you need a strong-willed CS PhD to come up with an efficient parallel program at the end. With the former (the study of parallel algorithmic thinking and PRAM algorithms), high-school kids can write efficient (more efficient if fine-grained & irregular!) parallel programs.
Conclusion: XMT provides a viable answer to the biggest challenges for the field: ease of programming; scalability (up & down); and it facilitates code portability.

SPAA09, good results: XMT vs. a state-of-the-art Intel Core 2. HotPar10/ICPP08 compare with GPUs; XMT+GPU beats all-in-one. Fundamental impact: productivity, programming, SW/HW system architecture, asynch/GALS. Easy to build: one student completed the hardware design plus an FPGA-based XMT computer in slightly more than two years; time to market; implementation cost.
Central issue: how to write code for the future? The answer must provide compatibility with current code, competitive performance on whatever amount of parallelism an application offers, and room for improvement on revised code. Time for agnostic (rather ...)
Soft observation vs. hard observation is a matter of community. In theory, hard things include asymptotic complexity, lower bounds, etc.; in systems, they tend to include concrete numbers. Who is right? Pornography is a matter of geography. My take: each community does something right. Advantages. Theory: reasoning about revolutionary changes. Systems: small

incremental changes, a quantitative approach; often the case.
Conclusion of coming intro slide(s). Productivity = code development time + runtime. Vendors' many-cores are productivity limited. Vendors: monolithic. Concerns: 1. CS stands in awe of vendors' HW as the face of practice; justified only if accepted/adopted. 2. The debate is cluttered and off-point. 3. It may lead to misplaced despair. Need HW diversity of high-productivity solutions; then natural selection. Will explain why US interests mandate a greater role for academia.
Membership in the Intel Academic Community (implementing parallel computing into the CS curriculum):

85% outside the USA. Source: M. Wrinn, Intel, at SIGCSE10.
How was the non-specificity addressed? Answer: GvN47 based coding for whatever future application on mathematical induction coupled with a simple abstraction. Then came: HW, algorithms + SW. [An engineering problem; so, why a mathematician? Hunch: hard for engineers to relate to, then and now. A. Ghuloum (Intel), CACM 9/09: "..hardware vendors tend to understand the requirements from the examples that software developers provide."]
Met the desiderata for code and coding. See, e.g., Knuth67, The Art of Computer Programming, Vol. 1: Fundamental Algorithms; Chapter 1: Basic concepts; 1.1 Algorithms; 1.2 Math Prelims; 1.2.1 Math Induction. Algorithms: 1. Finiteness 2. Definiteness 3. Input & Output 4. Effectiveness. Gold standards. Definiteness: helped by induction.

Effectiveness: helped by the "uniform cost criterion" [AHU74] abstraction.
Two comments on induction: 1. Second nature for mathematicians: proofs & the axiom of the natural numbers. 2. Need to read into GvN47: "..to make the induction complete.."
Talk from 30K feet. Past: math induction plus ISE, the foundation for the first 6 decades of CS. Proposed: math induction plus ICE, a foundation for the future of CS. (Great) parallel system theory work/modeling has been descriptive: how to get the most from what vendors are giving us. This talk: prescriptive.
Versus Serial & Other Parallel. 1st example: the Exchange Problem. 2 bins, A and B; exchange the contents of A and B. E.g., A=2, B=5 becomes A=5, B=2. Algorithm (serial or parallel): X:=A; A:=B; B:=X. 3 ops, 3 steps, space 1.
Array Exchange Problem

2n bins: A[1..n], B[1..n]. Exchange A(i) and B(i) for i = 1..n.
Serial Alg: for i = 1 to n do X:=A(i); A(i):=B(i); B(i):=X /*serial exchange through the eye of a needle*/. 3n ops, 3n steps, space 1.
Parallel Alg: for i = 1 to n pardo X(i):=A(i); A(i):=B(i); B(i):=X(i) /*2-bin exchange in parallel*/. 3n ops, 3 steps, space n. (See the sketch below.)
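A minimal C rendering of the two algorithms above (my illustration; in XMTC the pardo loop would be a spawn block, shown here as three loops whose iterations are all independent):

```c
#include <stdio.h>

#define N 4

/* Serial exchange: one temporary X reused n times -> 3n steps, space 1. */
void serial_exchange(int A[N], int B[N]) {
    for (int i = 0; i < N; i++) {
        int X = A[i]; A[i] = B[i]; B[i] = X;
    }
}

/* Parallel exchange: a temporary X(i) per bin pair -> 3 steps, space n.
 * Each of the three loops is one parallel time unit; in XMTC each would
 * be a pardo/spawn block executing all N operations concurrently.      */
void parallel_exchange(int A[N], int B[N]) {
    int X[N];
    for (int i = 0; i < N; i++) X[i] = A[i];
    for (int i = 0; i < N; i++) A[i] = B[i];
    for (int i = 0; i < N; i++) B[i] = X[i];
}

int main(void) {
    int A[N] = {1, 2, 3, 4}, B[N] = {5, 6, 7, 8};
    parallel_exchange(A, B);
    printf("A[0]=%d B[0]=%d\n", A[0], B[0]);   /* A[0]=5 B[0]=1 */
    return 0;
}
```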

Discussion:
- Parallelism tends to require some extra space.
- The parallel algorithm is clearly faster than the serial one.
- What is simpler and more natural, serial or parallel? A small sample of people answered: serial, but only if you.. majored in CS.
Eye-of-a-needle: a metaphor for the von Neumann mental & operational bottleneck. It reflects an extreme scarcity of HW that is less acute now. In CS, we single-mindedly serialize, needed or not. Recall the story about a boy/girl scout helping an old lady cross the street, even if.. she does not want to cross it. All the machinery (think about compilers) that we build later to get the old lady back to the right side of the street, where she originally was and wanted to remain, may not rise to the challenge. Conclusion: got to talk to the boy/girl scout.
To clarify:
- The business case for supporting existing serial code in the best possible way is clear.
- The question is how to write programs in the future.
What difference do we hope to make? Productivity in parallel computing.
The large parallel machines story. Funding of productivity: $650M, HProductivityCS, ~2002. Met #Gflops goals: up by 1000X since the mid-90s. Met power goals. Also: groomed eloquent spokespeople. Progress on productivity: no agreed benchmarks, no spokesperson; elusive! In fact, not much has changed since "as intimidating and time consuming as programming in assembly language" (NSF Blue Ribbon Committee, 2003), or even the "parallel software crisis" (CACM, 1991). Common-sense engineering: an untreated bottleneck yields diminishing returns on other improvements, and the bottleneck becomes more critical.

Next 10 years: new specific programs on flops and power. What about productivity?! Reality: an economic island, cleared by marketing: DOE applications. Enter mainstream many-cores: every CS major should be able to program many-cores.
Many-Cores are Productivity Limited. ~2003: Wall-Street-traded companies gave up the safety of the only paradigm that worked for them, for parallel computing. The software spiral (the cyclic process of HW improvement leading to SW improvement) is broken. Reality: there has never been an easy-to-program, fast general-purpose parallel computer for single-task completion time. Current parallel architectures never really worked for productivity; uninviting programmers' models simply turn programmers away. Why drag the whole field to a recognized disaster area? Keynote, ISCA09: "10 ways to waste a parallel computer". We can do better: repel the programmer, and don't worry about the rest. New ideas are needed to reproduce the success of the serial

paradigm for many-core computing, where obtaining strong, though not absolutely the best, performance is relatively easy. Must start to benchmark HW+SW for productivity; see the CFP for PPoPP2011, and the joint video-conferencing course with UIUC.
Key for GvN47 (2nd visit of this slide): the engineering solution: a program counter & stored program. Later: seek the same upgrade for a parallel abstraction. Virtual over physical: a distributed solution.

[Diagram: Von Neumann (1946--??): a single virtual program counter (PC) mapped onto a single hardware PC. XMT: many virtual PCs (PC1 .. PC1000000) between Spawn and Join, mapped onto the hardware thread-control units (TCUs). Per TCU: $ := TCU-ID; if $ > n, done; otherwise execute thread $ and use PS (prefix-sum) to get a new $. When PC1 hits Spawn, a spawn unit broadcasts 1000000 and the Spawn-to-Join code to PC1, PC2, ..., PC1000 on a designated bus. A sketch of the per-TCU loop follows.]
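Here is a sketch of the per-TCU scheduling loop from the diagram (my rendering; XMT's hardware prefix-sum (PS) primitive is modeled with a C11 atomic fetch-and-add, the thread counts are hypothetical, and the TCUs are iterated serially where real hardware runs them concurrently):

```c
#include <stdatomic.h>
#include <stdio.h>

#define N_THREADS 20   /* virtual threads announced by Spawn (toy value) */
#define N_TCUS     4   /* hardware thread-control units (toy value)      */

atomic_int ps_counter;            /* PS base: next unclaimed thread id $ */

void execute_thread(int id) {     /* stand-in for the spawned thread body */
    printf("thread %d\n", id);
}

/* What each TCU does between Spawn and Join: start from $ := TCU-ID,
 * then repeatedly claim the next virtual thread ("use PS to get new $")
 * until $ exceeds the announced number of threads.                      */
void tcu_loop(int tcu_id) {
    for (int t = tcu_id; t < N_THREADS;
         t = atomic_fetch_add(&ps_counter, 1))
        execute_thread(t);
}

int main(void) {
    atomic_init(&ps_counter, N_TCUS);      /* ids 0..N_TCUS-1 taken at start */
    for (int tcu = 0; tcu < N_TCUS; tcu++)
        tcu_loop(tcu);                     /* real TCUs run these in parallel */
    return 0;
}
```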

A few more experimental results. AMD Opteron 2.6 GHz, RedHat Linux Enterprise 3, 64KB+64KB L1 cache, 1MB L2 cache (none in XMT), memory bandwidth 6.4 GB/s (2.67X that of XMT). M-Mult is 2000x2000; QSort is 20M. XMT enhancements: broadcast, prefetch + buffer, non-blocking store, non-blocking caches.
XMT wall-clock time (in seconds):
App.     XMT Basic   XMT     Opteron
M-Mult   179.14      63.7    113.83
QSort    16.71       6.59    2.61

Assume an (arbitrary yet conservative) ASIC XMT: 800MHz and 6.4GB/s. Reduce bandwidth to 0.6GB/s and project back by the 800MHz/75MHz clock ratio.
XMT projected time (in seconds):
App.     XMT Basic   XMT     Opteron
M-Mult   23.53       12.46   113.83
QSort    1.97        1.42    2.61
- Simulation of 1024 processors: 100X on a standard benchmark suite for VHDL gate-level simulation [Gu-V06].

- Silicon area of a 64-processor XMT is the same as one commodity processor core (already noted: ~10X relative to Intel Core 2 Duo).
Recall tile-based matrix multiply, C = A x B, with A and B each 1,000 x 1,000, where a tile must fit in cache. How many tiles are needed in today's high-end PC? How to cope with limited cache size? Cache-oblivious algorithms? (A tiling sketch follows this paragraph.) XMT can do what others are doing and remain ahead, or at least on par: use (enhanced) work-stealing, called lazy binary splitting (LBS); see PPoPP 2010. Nesting+LBS is currently the preferred XMT first line of defense for coping with limited cache/memory sizes, number of processors, etc. Still, XMT does a better job on flat parallelism than today's multicores and, as LBS demonstrated, can incorporate work stealing and all other current means harnessed by cache-oblivious approaches. It keeps competitive with resource-oblivious approaches.
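A minimal C sketch of the tile-based multiply referred to above (my illustration; the matrix and tile sizes are toy values chosen so the tiles fit a small cache, whereas the slide's matrices are 1,000 x 1,000):

```c
#include <stdio.h>

#define N    8   /* matrix dimension (toy; slide uses 1000)            */
#define TILE 4   /* tile edge: one tile each of A, B, C must fit cache */

double A[N][N], B[N][N], C[N][N];

/* Tiled C = A x B: iterate over TILE x TILE blocks so the working set
 * stays cache-resident. The i0/j0 block loops touch disjoint parts of
 * C and could be executed concurrently.                               */
void matmul_tiled(void) {
    for (int i0 = 0; i0 < N; i0 += TILE)
        for (int j0 = 0; j0 < N; j0 += TILE)
            for (int k0 = 0; k0 < N; k0 += TILE)
                for (int i = i0; i < i0 + TILE; i++)
                    for (int j = j0; j < j0 + TILE; j++)
                        for (int k = k0; k < k0 + TILE; k++)
                            C[i][j] += A[i][k] * B[k][j];
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 1.0; }
    matmul_tiled();
    printf("C[0][0] = %g\n", C[0][0]);   /* prints 8 for all-ones input */
    return 0;
}
```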

Some supporting evidence (12/2007): large on-chip caches in shared memory. An 8-cluster (128-TCU!) XMT has only 8 load/store units, one per cluster. [IBM CELL: 25.6GB/s bandwidth from 2 channels of XDR; Niagara 2: 42.7GB/s from 4 FB-DRAM channels.] With a reasonable (even a relatively high) rate of cache misses, it is really not difficult to see that off-chip bandwidth is not likely to be a showstopper for, say, a 1GHz 32-bit XMT.
Memory architecture and interconnects: a high-bandwidth memory architecture.
- Use hashing to partition the memory and avoid hot spots (a toy sketch follows this list).
- Understood, BUT a (needed) departure from mainstream practice.
High-bandwidth on-chip interconnects; allow infrequent global synchronization (with IOS). Attractive: lower power. Couple with a strong MTCU for serial code.
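A toy sketch of hashed memory partitioning (my illustration only; the module count and the multiplicative hash are arbitrary stand-ins, not XMT's actual function). The point is that consecutive or strided addresses scatter across modules instead of hammering one:

```c
#include <stdio.h>
#include <stdint.h>

#define LOG_MODULES 6                  /* 2^6 = 64 modules (hypothetical) */

/* Map a cache-line address to a memory module with a multiplicative
 * hash (Knuth's 32-bit golden-ratio constant), so regular access
 * patterns spread out rather than hitting a single hot module.       */
unsigned module_of(uint32_t line_addr) {
    return (line_addr * 2654435761u) >> (32 - LOG_MODULES);
}

int main(void) {
    for (uint32_t a = 0; a < 8; a++)   /* consecutive lines scatter */
        printf("line %u -> module %u\n", a, module_of(a));
    return 0;
}
```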

Naming contest for the new computer: "Paraleap", chosen out of ~6000 submissions.
A single (hard-working) person (X. Wen) completed the synthesizable Verilog description AND the new FPGA-based XMT computer in slightly more than two years, with no prior design experience. This attests to the basic simplicity of the XMT architecture: faster time to market, lower implementation cost.
XMT development, HW track: the interconnection network. Led so far to: an ASAP06 best-paper award for the mesh-of-trees (MoT) study. Using IBM+Artisan tech files: 4.6 Tbps average output at max frequency (1.3 - 2.1 Tbps for alternative networks)! No way to get such results without such access. 90nm ASIC tapeout:

Bare-die photo of the 8-terminal interconnection-network chip: IBM 90nm process, 9mm x 5mm, fabricated August 2007.
Synthesizable Verilog of the whole architecture. Led so far to: a cycle-accurate simulator (slow); then, 11-12KX faster, the 1st commitment to silicon: a 64-processor, 75MHz computer on FPGA, the industry standard for pre-ASIC prototypes; and a 1st ASIC prototype: a 90nm, 10mm x 10mm, 64-processor tapeout in 2008, by 4 grad students.
Bottom line: this cures a potentially fatal problem for the growth of general-purpose processors: how to program them for single-task completion time?
A positive record of over-delivering on proposals:
Period        Proposed             Delivered
NSF 97-02     experimental algs.   architecture
NSF 2003-8    arch. simulator      silicon (FPGA)

DoD 2005-7    FPGA                 FPGA + 2 ASICs
Final thought: we created our own coherent planet. When was the last time that a university project offered a (separate) algorithms class on its own language, using its own compiler and its own computer? Colleagues could not provide an example since at least the 1950s. Have we missed anything? For more info: http://www.umiacs.umd.edu/users/vishkin/XMT/
Merging: Example for Algorithm & Program
Input: two arrays A[1..n], B[1..n]; elements from a totally ordered domain S; each array is monotonically non-decreasing. Merging: map each of these elements into a monotonically non-decreasing array C[1..2n].
Serial merging algorithm SERIAL-RANK(A[1..]; B[1..]): starting from A(1) and B(1), in each round:

1. compare an element from A with an element of B;
2. determine the rank of the smaller among them.
Complexity: O(n) time (and O(n) work...). PRAM challenge: O(n) work, least time. Also (new): fewest spawn-joins.
Merging algorithm (cont'd). Surplus-log parallel algorithm for merging/ranking:
for 1 <= i <= n pardo: compute RANK(i,B) using standard binary search; compute RANK(i,A) using binary search.
Complexity: W = O(n log n), T = O(log n). (A sketch appears below, after the next paragraph.)
The partitioning paradigm. n: input size for a problem. Design a 2-stage parallel algorithm: 1. Partition the input into a large number, say p, of independent small jobs, such that the size of the largest small job is roughly n/p. 2. Actual work: do the small jobs concurrently, using a separate (possibly serial) algorithm for each.
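A minimal C sketch of the surplus-log algorithm (my illustration; RANK(key, X) counts the elements of X smaller than key, found by binary search, and each pardo iteration is independent; for simplicity it assumes no value appears in both arrays, which a <-vs-<= tie-break would fix):

```c
#include <stdio.h>

#define N 4

/* Rank of key in sorted x[0..n-1]: # elements strictly smaller. O(log n). */
int rank_of(int key, const int x[], int n) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (x[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

/* Surplus-log merge: each element finds its own rank in the other array,
 * so its output slot is (own index + rank). W = O(n log n), T = O(log n). */
void merge_ranking(const int A[N], const int B[N], int C[2 * N]) {
    for (int i = 0; i < N; i++)          /* "for 1 <= i <= n pardo" */
        C[i + rank_of(A[i], B, N)] = A[i];
    for (int i = 0; i < N; i++)
        C[i + rank_of(B[i], A, N)] = B[i];
}

int main(void) {
    int A[N] = {1, 3, 5, 7}, B[N] = {2, 4, 6, 8}, C[2 * N];
    merge_ranking(A, B, C);
    for (int i = 0; i < 2 * N; i++) printf("%d ", C[i]);  /* 1 2 ... 8 */
    printf("\n");
    return 0;
}
```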

Linear-work parallel merging, using a single spawn.
Stage 1 (partitioning): for 1 <= i <= n/p pardo [p <= n/log n and p | n]: b(i) := RANK(p(i-1)+1, B) using binary search; a(i) := RANK(p(i-1)+1, A) using binary search.
Stage 2 (actual work). Observe: the overall ranking task is broken into 2p independent slices. Example of a slice: start at A(p(i-1)+1) and B(b(i)); using serial ranking, advance until the termination condition: either some A(pi+1) or some B(jp+1) loses.
Parallel program: 2p concurrent threads, using a single spawn-join for the whole algorithm.
Example. Thread of 20: binary search in B; rank as 11 (index of 15 in B) + 9 (index of 20 in A). Then: compare 21 to 22 and rank 21; compare 23 to 22 to rank 22; compare 23

to 24 to rank 23; compare 24 to 25, but terminate, since the thread of 24 will rank 24.
Linear-work parallel merging (cont'd). Observation: 2p slices, none larger than 2n/p (not too bad, since the average is 2n/2p = n/p).
Complexity: partitioning takes W = O(p log n) and T = O(log n), i.e., O(n) work and O(log n) time for p <= n/log n. The actual work employs 2p serial algorithms, each taking O(n/p) time. Total: W = O(n) and T = O(n/p), for p <= n/log n. (A simplified sketch follows.)
IMPORTANT: the correctness & complexity of the parallel program are the same as for the algorithm. This is a big deal: other parallel programming approaches do not have so simple a concurrency model and need to reason with respect to the program. If the solution beats others, will it be adopted by industry? LATER
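A C sketch of the two-stage idea (my illustration, simplified: each thread handles one block of A or of B rather than the slide's finer 2p-slice balancing, so individual threads are not guaranteed the 2n/p bound, though total work stays O(n); serial loops stand in for the single spawn of 2p threads, and all sizes are toy values with p | n):

```c
#include <stdio.h>

#define N 8            /* elements per array (toy value)        */
#define P 4            /* blocks per array; block size J = N/P  */
#define J (N / P)

static int rank_of(int key, const int x[], int n) {  /* # elements < key */
    int lo = 0, hi = n;
    while (lo < hi) {
        int m = (lo + hi) / 2;
        if (x[m] < key) lo = m + 1; else hi = m;
    }
    return lo;
}

/* Thread i ranks X[i*J .. i*J+J-1] into Y by serial ranking, starting
 * from the Stage-1 binary-search rank of the block's first element.
 * `strict` breaks ties so A-elements precede equal B-elements in C.  */
static void rank_block(const int X[], const int Y[], int C[2 * N],
                       int i, int strict) {
    int r = rank_of(X[i * J], Y, N);           /* Stage 1: O(log n)       */
    for (int k = i * J; k < i * J + J; k++) {  /* Stage 2: serial ranking */
        while (r < N && (strict ? Y[r] < X[k] : Y[r] <= X[k])) r++;
        C[k + r] = X[k];
    }
}

int main(void) {
    int A[N] = {1, 3, 4, 6, 9, 11, 13, 15};
    int B[N] = {2, 5, 7, 8, 10, 12, 14, 16};
    int C[2 * N];
    for (int i = 0; i < P; i++) {   /* in XMTC: one spawn of 2p threads */
        rank_block(A, B, C, i, 1);
        rank_block(B, A, C, i, 0);
    }
    for (int k = 0; k < 2 * N; k++) printf("%d ", C[k]);  /* 1 2 ... 16 */
    printf("\n");
    return 0;
}
```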
