2019-03-12

Mar 13 In-Class Exercise Thread.

Post your solutions to the Mar 13 In-Class Exercise to this thread.
Best,
Chris
2019-03-13

-- Mar 13 In-Class Exercise Thread
1. How many map reduce rounds are needed to simulate a 20 step PRAM computation?
A: By the simulation theorem, a t-step PRAM computation can be run in O(t) MapReduce rounds in DMRC. Each PRAM step takes 2 rounds, so 20 steps require 20 * 2 = 40 rounds.
2. Our description didn't say how accumulators should be handled. Propose a method to handle them.
3. What would simulating the command LoadProcid k look like?
4. In a given timestep t are all PRAM processors doing the same instruction? If not, then what's happening in the simulation?
No; in a given timestep t, processors may be executing different instructions. In the simulation they all move through the same map and reduce rounds, but each applies its own instruction.
(Edited: 2019-03-13)

-- Mar 13 In-Class Exercise Thread
 1. The number of rounds is 2 * 20 = 40. Two rounds of map-reduce simulate one step of PRAM computation.
 2. Add another tuple into the global memory to represent accumulators.
 It would need to store the processor, accumulator number, and value.
 3. Store the value of the processor ID into the accumulator tuple.
 4. No, PRAM processors could be doing different instructions. In the simulation, the processors go through the same rounds, but the outputs are computed differently.
(Edited: 2019-03-13)

-- Mar 13 In-Class Exercise Thread
How many map reduce rounds are needed to simulate a 20 step PRAM computation?
Solution: Theorem. Any CREW PRAM algorithm using `O(n^{2-2\epsilon})` total memory, `O(n^{2-2\epsilon})` processors, and `t(n)` time can be run in `O(t)` rounds in DMRC. Here `t(n) = 20` steps, and each PRAM timestep is simulated by two map-reduce rounds (each with its own mapper and reducer), so `20 * 2 = 40` rounds are required.
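As a quick sanity check on the arithmetic, here is a minimal sketch (function name hypothetical) that counts map-reduce rounds at two per simulated PRAM timestep:

```python
# Hypothetical round counter: each PRAM timestep is simulated by
# two map-reduce rounds, so a t-step computation costs 2*t rounds.
def mr_rounds_for_pram(t_steps: int) -> int:
    rounds = 0
    for _ in range(t_steps):
        rounds += 1  # first map-reduce round for this timestep
        rounds += 1  # second map-reduce round for this timestep
    return rounds

print(mr_rounds_for_pram(20))  # → 40
```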

Our description didn't say how accumulators should be handled. Propose a method to handle them.
Solution: Our global memory does not live in any mapper or reducer; it is currently described as a collection of ordered pairs/tuples. We can handle accumulators (i.e. local memory) by creating and passing around quadruples `(i, acc, n, v)`, where `i` is the processor number, `acc` is a tag marking the tuple as an accumulator, `n` is the number of the accumulator, and `v` is its value. We would need to update this quadruple after each write step.
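A minimal sketch of this representation (all names are assumptions for illustration): accumulators are kept as tagged quadruples alongside the ordinary global-memory tuples, and a write step updates the quadruple in place.

```python
# Sketch: accumulators stored as quadruples (i, "acc", n, v),
# keyed by (processor id, accumulator number) for easy lookup.
accumulators = {}

def write_acc(i: int, n: int, v: int) -> None:
    """Record that processor i's accumulator n now holds value v."""
    accumulators[(i, n)] = (i, "acc", n, v)

write_acc(3, 0, 42)
print(accumulators[(3, 0)])  # → (3, 'acc', 0, 42)
```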

What would simulating the command LoadProcid k look like?
Solution: In the case of LoadProcid `k`, we simulate the instruction by having reducer `\rho_1^t` update processor `i`'s accumulator quadruple: `(i, acc, k, v) \to (i, acc, k, i)`.
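The reducer's action on the quadruple can be sketched as a one-line rewrite (function name hypothetical): the value field is replaced by the processor id carried in the tuple itself.

```python
# Sketch of the LoadProcid k rewrite: (i, "acc", k, v) -> (i, "acc", k, i).
def load_procid(quad):
    i, tag, k, _v = quad      # unpack processor id, tag, acc number, old value
    return (i, tag, k, i)     # new value is the processor id itself

print(load_procid((5, "acc", 2, 99)))  # → (5, 'acc', 2, 5)
```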

In a given timestep `t` are all PRAM processors doing the same instruction? If not, then what's happening in the simulation?
Solution: No, each processor might be executing a different instruction. The simulation still proceeds normally, since the instructions are independent and each processor's part of the round applies its own instruction.
(Edited: 2019-03-13)

-- Mar 13 In-Class Exercise Thread
How many map reduce rounds are needed to simulate a 20 step PRAM computation? 40
Our description didn't say how accumulators should be handled. Propose a method to handle them. Add another (i, acc, n, v) into the global memory to represent accumulators.
What would simulating the command LoadProcid k look like? Reducer p at time t: (i, acc, k, v) -> (i, acc, k, i)
In a given timestep t are all PRAM processors doing the same instruction? If not, then what's happening in the simulation? No. Simulations are independent.

-- Mar 13 In-Class Exercise Thread
* How many map reduce rounds are needed to simulate a 20 step PRAM computation?
It will take about 40 rounds. Each PRAM step takes 2 rounds of MR.
* Our description didn't say how accumulators should be handled. Propose a method to handle them.
Accumulators can be handled by passing them between processors as tuples, updating them after every step involving writes. Each tuple could hold the processor id, a marker identifying it as an accumulator, the number of that accumulator, and its value.
* What would simulating the command LoadProcid k look like?
The processor's reducer could be performing
(id, acc, k, v) -> (id, acc, k, id)
* In a given timestep t are all PRAM processors doing the same instruction? If not, then what's happening in the simulation?
No, at a given time step t the processors could be simulating different, independent instructions.
(Edited: 2019-03-13)

-- Mar 13 In-Class Exercise Thread
1. O(t) rounds are needed to simulate a PRAM computation in t time. In a 20 step PRAM computation, 40 MapReduce rounds are needed as a step has two MapReduce rounds.
2. Handle accumulators by passing a tuple (i, is_acc, acc_num, value)
3. Simulate LoadProcid k by having the first reducer take the processor i from the input tuple and use the accumulator tuple (i, is_acc, k, i) to write the value i into accumulator k.
4. In a given timestep, all PRAM processors may not be doing the same instruction.
2019-03-17

-- Mar 13 In-Class Exercise Thread
1. The number of rounds is 2 * 20 = 40, since each PRAM step takes two rounds of MapReduce.
2. Add (is_acc, acc_num, value) tuple into the global memory as accumulators.
3. Store the value of the procID into the tuple.
4. At a given time, all PRAM processors may not be doing the same instruction.

-- Mar 13 In-Class Exercise Thread
 1. It takes 20*2=40 rounds of map reduce as each step of PRAM needs 2 rounds of map reduce.
 2. Add a tuple to global memory containing (pid, is_acc, acc_num, value), where pid is the processor id, acc_num is the number of the accumulator, and value is the value of this accumulator.
 3. We use the added tuple to load processor x's id as (x, is_acc, k, x).
 4. No, PRAM processors may be executing different instructions, as they are independent.

-- Mar 13 In-Class Exercise Thread
1. Each step of PRAM takes a map round and a reduce round. So a 20 step PRAM algorithm will need 40 rounds total.
2. Our given PRAM scheme only assumes that we're passing global registers in the tuples. In order to allow for local accumulators, we can adjust our tuples to store a value which says if this is a local value or global, and if it is local then we also need to store the id of the accumulator.
3. LoadProcid k simply requires us to set the value in the tuple to the id of the processor.
4. In a given time step, the PRAM processors might not be doing the same thing. Some could be executing map instructions while others are executing reduce instructions.