Programming Languages & Software Engineering

A Sophomoric Introduction to Shared-Memory Parallelism and Concurrency
Lecture 1: Introduction to Multithreading & Fork-Join Parallelism
Steve Wolfman, based on work by Dan Grossman

LICENSE: This file is licensed under a Creative Commons Attribution 3.0 Unported License; see http://creativecommons.org/licenses/by/3.0/. The materials were developed by Steve Wolfman, Alan Hu, and Dan Grossman.

Why Parallelism?

(Photo by The Planet, CC BY-SA 2.0)

Why not Parallelism?

(Photo from case study by William Frey, CC BY 3.0)

Concurrency problems were certainly not the only problem here; nonetheless, it's hard to reason correctly about programs with concurrency.

Moral: Rely as much as possible on high-quality pre-made solutions (libraries).

Learning Goals

By the end of this unit, you should be able to:
- Distinguish between parallelism (improving performance by exploiting multiple processors) and concurrency (managing simultaneous access to shared resources).

- Explain and justify the task-based (vs. thread-based) approach to parallelism. (Include asymptotic analysis of the approach and its practical considerations, like "bottoming out" at a reasonable level.)

Outline
- History and Motivation
- Parallelism and Concurrency Intro
- Counting Matches
- Parallelizing
- Better, more general parallelizing

What happens as the transistor count goes up?

[Chart: transistor counts over time, by Wikimedia user Wgsimon, CC BY-SA 3.0 Unported; also shown zoomed in. Sparc T3 micrograph from Oracle; 16 cores.]

(Goodbye to) Sequential Programming

Sequential programming assumes: one thing happens at a time, and the next thing to happen is my next instruction. Removing these assumptions creates challenges & opportunities:
- How can we get more work done per unit time (throughput)?
- How do we divide work among threads of execution and coordinate (synchronize) among them?
- How do we support multiple threads operating on data simultaneously (concurrent access)?
- How do we do all this in a principled way? (Algorithms and data structures, of course!)

What to do with multiple processors?
- Run multiple totally different programs at the same time. (Already doing that, but with time-slicing.)
- Do multiple things at once in one program.

  - Requires rethinking everything, from asymptotic complexity to how to implement data-structure operations.

Outline
- History and Motivation
- Parallelism and Concurrency Intro
- Counting Matches
- Parallelizing
- Better, more general parallelizing

KP Duty: Peeling Potatoes, Parallelism

- How long does it take a person to peel one potato? Say: 15s.
- How long does it take a person to peel 10,000 potatoes? ~2,500 min = ~42 hrs = ~one week full-time.
- How long would it take 100 people with 100 potato peelers to peel 10,000 potatoes?

Parallelism: using extra resources to solve a problem faster.

(Note: these definitions of parallelism and concurrency are not yet standard, but the perspective is essential to avoid confusion!)

Parallelism Example

Parallelism: Use extra computational resources to solve a problem faster (increasing throughput via simultaneous execution).

Pseudocode for counting matches (bad style for reasons we'll see, but it may get roughly a 4x speedup):

    int cm_parallel(int arr[], int len, int target) {
      res = new int[4];
      FORALL (i = 0; i < 4; i++) { // parallel iterations
        res[i] = count_matches(arr + i*len/4,
                               (i+1)*len/4 - i*len/4, target);
      }

      return res[0] + res[1] + res[2] + res[3];
    }

    int count_matches(int arr[], int len, int target) {
      // Normal sequential code to count matches of target.
    }

KP Duty: Peeling Potatoes, Concurrency

- How long does it take a person to peel one potato? Say: 15s.
- How long does it take a person to peel 10,000 potatoes? ~2,500 min = ~42 hrs = ~one week full-time.
- How long would it take 4 people with 3 potato peelers to peel 10,000 potatoes?

Concurrency: Correctly and efficiently manage access to shared resources.

(Better example: Lots of cooks in one kitchen, but only 4 stove burners. We want to allow access to all 4 burners, but not cause spills or incorrect burner settings.)

(Note: these definitions of parallelism and concurrency are not yet standard, but the perspective is essential to avoid confusion!)

Concurrency Example

Concurrency: Correctly and efficiently manage access to shared resources (from multiple, possibly-simultaneous clients).

Pseudocode for a shared chaining hashtable. We must prevent bad interleavings (correctness) but allow some concurrent access (performance):

    template <typename K, typename V> class Hashtable {
      void insert(K key, V value) {
        int bucket = ...;
        // prevent other inserts/lookups in table[bucket]
        // do the insertion
        // re-enable access to table[bucket]
      }

      V lookup(K key) {
        // like insert, but can allow concurrent lookups to the same bucket
      }
    };

Will return to this in a few lectures!
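As a concrete sketch of the idea (an illustration under made-up names and a made-up chaining representation, not the course's eventual solution): one std::mutex per bucket makes inserts to the same bucket take turns while leaving other buckets free.

    #include <functional>
    #include <list>
    #include <mutex>
    #include <utility>
    #include <vector>

    // Hypothetical fixed-size chaining hashtable with one lock per bucket.
    // A bucket's mutex serializes access to that bucket; other buckets stay
    // available to other threads. (Allowing concurrent lookups to the same
    // bucket would take a reader/writer lock instead; more in later lectures.)
    template <typename K, typename V>
    class LockedHashtable {
      static const int NUM_BUCKETS = 128;
      std::vector<std::list<std::pair<K, V>>> table;
      std::mutex locks[NUM_BUCKETS];

      int bucket_of(const K& key) const {
        return static_cast<int>(std::hash<K>{}(key) % NUM_BUCKETS);
      }

    public:
      LockedHashtable() : table(NUM_BUCKETS) {}

      void insert(const K& key, const V& value) {
        int b = bucket_of(key);
        std::lock_guard<std::mutex> guard(locks[b]); // prevent other access to table[b]
        table[b].push_back(std::make_pair(key, value));
      } // guard's destructor re-enables access to table[b]
    };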

OLD Memory Model

[Diagram:]
- The Stack: local variables, control-flow info, pc. (pc = program counter, the address of the current instruction.)
- The Heap: dynamically allocated data.

Shared Memory Model

We assume (and C++11 specifies) shared memory with explicit threads. NEW story, per thread: its own stack (local variables, control-flow info, its own pc). All threads share one heap (dynamically allocated data).

Note: we can share local variables by sharing pointers to their locations.
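A minimal sketch of that note, assuming a C++11 compiler with -pthread: the child thread writes through a pointer to a local variable on the parent's stack, and join (which we'll meet properly in a few slides) makes the parent wait before it reads.

    #include <iostream>
    #include <thread>

    // The child writes through 'out', which points at a local
    // variable living on the parent's stack.
    void child_work(int* out) { *out = 42; }

    int main() {
      int answer = 0;                          // lives on main's stack
      std::thread child(child_work, &answer);  // fork
      child.join();                            // wait; child is done with the pointer
      std::cout << answer << "\n";             // safely prints 42
      return 0;
    }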

Other models

We will focus on shared memory, but you should know several other models exist and have their own advantages:
- Message-passing: Each thread has its own collection of objects. Communication is via explicitly sending/receiving messages. (Cooks working in separate kitchens, mailing around ingredients.)
- Dataflow: Programmers write programs in terms of a DAG. A node executes after all of its predecessors in the graph. (Cooks wait to be handed results of previous steps.)

- Data parallelism: Have primitives for things like "apply function to every element of an array in parallel."

Note: our parallelism solution will have a dataflow feel to it.
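A sketch of what that data-parallel primitive can look like in the OpenMP available in recent g++s (the directive is real; the doubling loop is a made-up example):

    // Apply "double it" to every element of an array in parallel:
    // OpenMP splits the loop's iterations across its worker threads.
    void double_all(int a[], int n) {
      #pragma omp parallel for
      for (int i = 0; i < n; i++)
        a[i] = 2 * a[i];
    }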

Outline
- History and Motivation
- Parallelism and Concurrency Intro
- Counting Matches
- Parallelizing
- Better, more general parallelizing

Problem: Count Matches of a Target

How many times does the number 3 appear?

    3 5 9 3 2 0 4 6 1 3

    // Basic sequential version.
    int count_matches(int array[], int len, int target) {
      int matches = 0;
      for (int i = 0; i < len; i++) {
        if (array[i] == target)
          matches++;
      }
      return matches;
    }

How can we take advantage of parallelism?

First attempt (wrong... but grab the code!)

    void cmp_helper(int * result, int array[], int lo, int hi, int target) {
      *result = count_matches(array + lo, hi - lo, target);
    }

    int cm_parallel(int array[], int len, int target) {
      int divs = 4;
      std::thread workers[divs];
      int results[divs];
      for (int d = 0; d < divs; d++)
        workers[d] = std::thread(&cmp_helper, &results[d], array,
                                 (d*len)/divs, ((d+1)*len)/divs, target);
      int matches = 0;
      for (int d = 0; d < divs; d++)
        matches += results[d];
      return matches;
    }

Notice: we use a pointer to shared memory to communicate across threads!

BE CAREFUL sharing memory!

Shared Memory: Data Races

(Same code as above.) Race condition: What happens if one thread tries to write to a memory location while another reads (or multiple try to write)? KABOOM (possibly silently!)
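An aside, and an assumption about your toolchain rather than anything from the slides: recent g++ and clang++ can often detect races like this at runtime if you compile with -fsanitize=thread (ThreadSanitizer) alongside -pthread and -g, which makes "possibly silently" rather less silent.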

Shared Memory and Scope/Lifetime

(Same code as above.) Scope problems: What happens if the child thread is still using the variable when it is deallocated (goes out of scope) in the parent? KABOOM (possibly silently??)

Run the Code!

Now, let's run it. KABOOM! What happens, and how do we fix it?

Fork/Join Parallelism

std::thread defines methods you could not implement on your own:
- The constructor calls its argument in a new thread (forks).
- join blocks until/unless the receiver is done executing (i.e., its constructor's argument function returns).

The joining thread is stuck until the other one finishes. That other thread could already be done (the join returns immediately) or could run for a long time.

[Diagram across several slides: a fork starts a second thread; a join waits for it; and then the original thread proceeds normally.]
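A minimal, runnable sketch of those two operations (the function name and messages are made up):

    #include <iostream>
    #include <thread>

    void say_hello() {  // runs in the child thread
      std::cout << "hello from the forked thread\n";
    }

    int main() {
      std::thread child(say_hello);  // fork: say_hello starts running concurrently
      std::cout << "parent keeps working meanwhile\n";
      child.join();                  // join: block until say_hello returns
      return 0;
    }

(The two output lines may appear in either order; that is exactly the point.)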

Second attempt (patched!)

    int cm_parallel(int array[], int len, int target) {
      int divs = 4;
      std::thread workers[divs];
      int results[divs];
      for (int d = 0; d < divs; d++)
        workers[d] = std::thread(&cmp_helper, &results[d], array,
                                 (d*len)/divs, ((d+1)*len)/divs, target);
      int matches = 0;
      for (int d = 0; d < divs; d++) {
        workers[d].join();  // wait for worker d before reading its result
        matches += results[d];
      }
      return matches;
    }

Outline
- History and Motivation
- Parallelism and Concurrency Intro
- Counting Matches
- Parallelizing
- Better, more general parallelizing

Success! Are we done?

Answer these:
- What happens if I run my code on an old-fashioned one-core machine?

- What happens if I run my code on a machine with more cores in the future?

(Done? Think about how to fix it and do so in the code.)

Chopping (a Bit) Too Fine

[Diagram: 12 secs of work chopped into four 3s pieces.] We thought there were 4 processors available. But there are only 3. Result?

Chopping Just Right

[Diagram: 12 secs of work chopped into three 4s pieces.] We thought there were 3 processors available. And there are. Result?

Success! Are we done?

Answer these:
- What happens if I run my code on an old-fashioned one-core machine?
- What happens if I run my code on a machine with more cores in the future?

Let's fix these! (Note: std::thread::hardware_concurrency() and omp_get_num_procs().)
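One plausible fix, sketched under the assumption that the machine reports its core count (hardware_concurrency() may return 0 when the count is unknown, hence the fallback):

    #include <thread>

    // Choose how many chunks to make based on the actual machine,
    // instead of hard-coding divs = 4.
    int choose_divs() {
      unsigned int cores = std::thread::hardware_concurrency();
      return cores > 0 ? static_cast<int>(cores) : 4;  // fall back to a guess
    }

Then int divs = choose_divs(); replaces int divs = 4; in cm_parallel.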

Success! Are we done?

Answer this: Might your performance vary as the whole class tries problems, depending on when you start your run?

(Done? Think about how to fix it and do so in the code.)

Is there a Just Right?

[Diagram: 12 secs of work chopped into three 4s pieces, but some of the 3 processors are busy with other work: "I'm busy. I'm busy."] We thought there were 3 processors available. And there are. Result?

Chopping So Fine It's Like Sand or Water

(Of course, we can't predict the busy times!)

[Diagram: 12 secs of work chopped into 10,000 pieces, spread across a few processors, some of which are busy.] Result?

Success! Are we done?

Answer this: Might your performance vary as the whole class tries problems, depending on when you start your run? Let's fix this!

Analyzing Performance

    void cmp_helper(int * result, int array[], int lo, int hi, int target) {
      *result = count_matches(array + lo, hi - lo, target);
    }

    int cm_parallel(int array[], int len, int target) {
      int divs = len;  // Yes, this is silly. We'll justify later.
      std::thread workers[divs];
      int results[divs];
      for (int d = 0; d < divs; d++)
        workers[d] = std::thread(&cmp_helper, &results[d], array,
                                 (d*len)/divs, ((d+1)*len)/divs, target);
      int matches = 0;
      for (int d = 0; d < divs; d++) {
        workers[d].join();
        matches += results[d];
      }
      return matches;
    }

It's Asymptotic Analysis Time! (n == len, # of processors = ∞)
- How long does dividing up/recombining the work take?
- How long does doing the work take? (With n threads, how much work does each one do?)
- Time Θ(n) with an infinite number of processors?
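Why Θ(n) even with unbounded processors, as a back-of-the-envelope sketch: the main thread still forks the n threads one at a time, and then sums the n results one at a time.

    T∞(n) = Θ(n) [fork n threads] + Θ(1) [each thread's work] + Θ(n) [sum n results]
          = Θ(n)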

That sucks!

Zombies Seeking Help

A group of (non-CSist) zombies wants your help infecting the living. Each time a zombie bites a human, it gets to transfer a program. The new zombie in town has the humans line up and bites each in line, transferring the program: "Do nothing except say 'Eat Brains!!'" Analysis? How do they do better? (Asymptotic analysis was so much easier with a brain!)

A better idea

[Diagram: a balanced binary tree of + operations combining the array's elements.]

The zombie apocalypse is straightforward using divide-and-conquer. Note: the natural way to code it is to fork two tasks, join them, and get results. But the natural zombie way is to bite one human and then each recurse. (As is so often true, the zombie way is better.)

Divide-and-Conquer Style Code (doesn't work in general... more on that later)

    void cmp_helper(int * result, int array[], int lo, int hi, int target) {
      if (hi - lo <= 1) {
        *result = count_matches(array + lo, hi - lo, target);
        return;
      }
      int left, right;
      int mid = lo + (hi - lo)/2;
      std::thread child(&cmp_helper, &left, array, lo, mid, target);
      cmp_helper(&right, array, mid, hi, target);
      child.join();
      *result = left + right;
    }

    int cm_parallel(int array[], int len, int target) {
      int result;
      cmp_helper(&result, array, 0, len, target);
      return result;
    }

Analysis of D&C Style Code

(Same code as above.) It's Asymptotic Analysis Time! (n == len, # of processors = ∞)
- How long does dividing up/recombining the work take? Um...?

Easier Visualization for the Analysis

[Diagram: the same balanced binary tree of + operations.]

How long does the tree take to run with an infinite number of processors? (n is the width of the array.)

Analysis of D&C Style Code (continued)

- How long does doing the work take? (With n threads, how much work does each one do?)
- Time Θ(lg n) with an infinite number of processors: exponentially faster than our Θ(n) solution! Yay!

So why doesn't the code work?
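Before answering that: where those figures come from, as recurrences (a sketch; with infinitely many processors the two recursive halves run at the same time, so only one half counts toward the elapsed time):

    Elapsed time, ∞ processors:  T∞(n) = T∞(n/2) + Θ(1)   =>  T∞(n) = Θ(lg n)
    Total work, one processor:   T1(n) = 2·T1(n/2) + Θ(1)  =>  T1(n) = Θ(n)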

Chopping Too Fine Again

[Diagram: 12 secs of work chopped into n pieces (n == array length).] Result?

KP Duty: Peeling Potatoes, Parallelism Remainder

- How long does it take a person to peel one potato? Say: 15s.
- How long does it take a person to peel 10,000 potatoes? ~2,500 min = ~42 hrs = ~one week full-time.
- How long would it take 100 people with 100 potato peelers to peel 10,000 potatoes?

KP Duty: Peeling Potatoes, Parallelism Problem

- How long does it take a person to peel one potato? Say: 15s.
- How long does it take a person to peel 10,000 potatoes? ~2,500 min = ~42 hrs = ~one week full-time.
- How long would it take 10,000 people with 10,000 potato peelers to peel 10,000 potatoes if we use the linear solution for dividing work up? If we use the divide-and-conquer solution?

Being realistic

Creating one thread per element is way too expensive. So, we use a library where we create tasks ("bite-sized" pieces of work) that the library assigns to a reasonable number of threads. But creating one task per element is still too expensive. So, we use a sequential cutoff, typically ~500-1000. (This is like switching from quicksort to insertion sort for small subproblems.)

Note: we're still chopping into Θ(n) pieces, just not into n pieces.

Being realistic: Exercise

How much does a sequential cutoff help?
- With 1,000,000,000 (~2^30) elements in the array and a cutoff of 1: About how many tasks do we create?
- With 1,000,000,000 elements in the array and a cutoff of 16 (a ridiculously small cutoff): About how many tasks do we create?
- What percentage of the tasks do we eliminate with our cutoff?
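One way to sanity-check your answers, treating the task tree as a balanced binary tree whose leaves are the pieces at the cutoff (a tree with L leaves has L - 1 internal nodes, so about 2L tasks total):

    cutoff 1:   L = 2^30            =>  tasks ≈ 2^31 ≈ 2 billion
    cutoff 16:  L = 2^30/16 = 2^26  =>  tasks ≈ 2^27 ≈ 134 million
    eliminated: 1 - 2^27/2^31 = 15/16 ≈ 94%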

That library, finally

C++11's threads are usually too heavyweight (implementation dependent). OpenMP 3.0's main contribution was to meet the needs of divide-and-conquer fork-join parallelism. Available in recent g++s. See provided code and notes for details. Efficient implementation is a fascinating but advanced topic!

Learning Goals

By the end of this unit, you should be able to:
- Distinguish between parallelism (improving performance by exploiting multiple processors) and concurrency (managing simultaneous access to shared resources).
- Explain and justify the task-based (vs. thread-based) approach to parallelism. (Include asymptotic analysis of the approach and its practical considerations, like "bottoming out" at a reasonable level.)

P.S. We promised we'd justify assuming # processors = ∞.

Outline
- History and Motivation
- Parallelism and Concurrency Intro
- Counting Matches
- Parallelizing
- Better, more general parallelizing

- Bonus code and parallelism issue!

Example: final version

    int cmp_helper(int array[], int len, int target) {
      const int SEQUENTIAL_CUTOFF = 1000;
      if (len <= SEQUENTIAL_CUTOFF)
        return count_matches(array, len, target);
      int left, right;
      #pragma omp task untied shared(left)
      left = cmp_helper(array, len/2, target);
      right = cmp_helper(array + len/2, len - (len/2), target);
      #pragma omp taskwait
      return left + right;
    }

    int cm_parallel(int array[], int len, int target) {
      int result;
      #pragma omp parallel
      #pragma omp single
      result = cmp_helper(array, len, target);
      return result;
    }

Side Note: Load Imbalance

Does each bite-sized piece of work take the same time to run:
- When counting matches?
- When counting the number of prime numbers in the array?

Compare the impact of different runtimes on the "chop up perfectly by the number of processors" approach vs. "chop up super-fine."
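Finally, a toolchain note (an assumption about your setup, not from the slides): recent g++ builds OpenMP task code like the final version above when given the -fopenmp flag, e.g., g++ -std=c++11 -fopenmp count_matches.cpp -o cm.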
