Concurrent Processing Final Part 2

part 2 of concurrent processing final exam

  1. 2 common patterns of parallel processes
    • master-slave architecture
    • data-parallel architecture
  2. data parallel architecture
    • processes do the same work on different chunks of the data
    • the input data is "striped" across the processes
    • minimal coordination between the processes is required (see the sketch below)
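    A minimal C sketch of the striping idea, assuming a hypothetical input of N elements; each rank derives its own slice from its rank and the communicator size:

      #include <mpi.h>
      #include <stdio.h>

      #define N 1000  /* hypothetical total input size */

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          /* each process claims one stripe of the input */
          int chunk = N / size;
          int lo = rank * chunk;
          int hi = (rank == size - 1) ? N : lo + chunk;  /* last rank absorbs the remainder */
          printf("rank %d works on elements [%d, %d)\n", rank, lo, hi);

          MPI_Finalize();
          return 0;
      }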
  3. how does each process know it's part of the same parallel program?
    specific processor nodes are assigned to the application at runtime
  4. communicator
    • an object connecting groups of processes in the MPI session
    • each process is assigned an identifier or rank
  5. mpi initialization
    • starts a new session
    • ensures all slave processes receive the command line arguments passed to the master process
    • can only be called once (see the skeleton below)
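    A minimal skeleton of the call sequence; passing &argc and &argv is what lets the runtime hand the master's command line arguments to every process:

      #include <mpi.h>

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);   /* starts the session; legal only once per program */
          /* ... parallel work goes here ... */
          MPI_Finalize();           /* ends the session; no MPI calls allowed after this */
          return 0;
      }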
  6. mpi_comm_size
    • gets the number of processes in the communicator
    • master uses this when addressing slaves (see the sketch below)
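    A sketch of the master/slave addressing pattern, with rank 0 assumed to be the master and the work values purely hypothetical:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (rank == 0) {
              /* master uses the communicator size to loop over every slave rank */
              for (int dest = 1; dest < size; dest++) {
                  int work = dest;  /* hypothetical work item */
                  MPI_Send(&work, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
              }
          } else {
              int work;
              MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              printf("rank %d got work item %d\n", rank, work);
          }

          MPI_Finalize();
          return 0;
      }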
  7. mpi tags
    • user-defined tag is sent with each message
    • used to help the receiving process distinguish the type of message (see the sketch below)
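    A sketch with two made-up tag values so the receiver can tell work apart from shutdown; run with at least two processes:

      #include <mpi.h>
      #include <stdio.h>

      #define TAG_WORK 1  /* hypothetical tag values */
      #define TAG_DONE 2

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              int item = 42;
              MPI_Send(&item, 1, MPI_INT, 1, TAG_WORK, MPI_COMM_WORLD);
              MPI_Send(&item, 1, MPI_INT, 1, TAG_DONE, MPI_COMM_WORLD);
          } else if (rank == 1) {
              for (;;) {
                  int item;
                  MPI_Status st;
                  MPI_Recv(&item, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                  if (st.MPI_TAG == TAG_DONE) break;  /* the tag tells the message type */
                  printf("got work item %d\n", item);
              }
          }

          MPI_Finalize();
          return 0;
      }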
  8. non-blocking send / receive functions
    • don't wait for confirmation that the message was successfully delivered or received
    • lets the calling thread proceed with other work
    • like a voice mail
  9. mpi_request
    • used as the last parameter in non-blocking send/recv functions
    • used to track the completion of the operation (see the sketch below)
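    A sketch of the request handle in use: mpi_irecv returns immediately, and the MPI_Request passed as its last parameter is later handed to MPI_Wait; needs two processes:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, value = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              value = 99;
              MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Request req;
              MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);  /* returns at once */
              /* ... overlap other work here ... */
              MPI_Wait(&req, MPI_STATUS_IGNORE);  /* block until the receive completes */
              printf("received %d\n", value);
          }

          MPI_Finalize();
          return 0;
      }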
  10. advantages of non-blocking message-passing calls
    • allows calling process to continue to perform work while waiting for data
    • prevents deadlock
  11. disadvantages of non-blocking message-passing calls
    • harder to use (complicated code)
    • have to wait for all non-blocking calls to complete before finalizing mpi
    • must be careful when canceling pending operations
  12. can you mix blocking with non-blocking calls?
    yes; a blocking send can be matched by a non-blocking receive, and vice versa
  13. mpi_iprobe
    • allows calling process to check if a non-blocking message is already received
    • typically called right before mpi_recv()
    • flag = true if there is a message waiting (see the sketch below)
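    A sketch of the polling pattern, again with two processes; the receiver spins on mpi_iprobe and only calls mpi_recv once flag comes back true:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              int msg = 7;
              MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              int flag = 0;
              MPI_Status st;
              while (!flag) {
                  /* non-blocking check; other work could go between polls */
                  MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &st);
              }
              int msg;
              MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              printf("probed, then received %d\n", msg);
          }

          MPI_Finalize();
          return 0;
      }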
  14. broadcast function
    • mpi_bcast
    • the same call both sends (at the root) and receives (at all other ranks)
    • broadcasts from one process to all others
    • blocking function (see the sketch below)
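    A sketch of a broadcast; every rank makes the identical call, and only the root's buffer matters on entry:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, value = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) value = 123;  /* hypothetical data known only to the root */
          /* blocks until the value has been distributed to all ranks */
          MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
          printf("rank %d now has value %d\n", rank, value);

          MPI_Finalize();
          return 0;
      }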
  15. gather function
    • mpi_gather
    • the same call both sends (at every rank) and receives (at the root)
    • gathers data from all processes to one root process
    • blocking function (see the sketch below)
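    A sketch of a gather, with a made-up per-process result; only the root needs a receive buffer, and the results land in rank order:

      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          int mine = rank * rank;  /* hypothetical per-process result */
          int *all = NULL;
          if (rank == 0) all = malloc(size * sizeof(int));

          /* every rank sends one int; the root receives one from each, in rank order */
          MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

          if (rank == 0) {
              for (int i = 0; i < size; i++) printf("rank %d sent %d\n", i, all[i]);
              free(all);
          }

          MPI_Finalize();
          return 0;
      }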
  16. reduce function
    • mpi_reduce
    • reduces the values from all processes to one value on the root
    • blocking function (see the sketch below)
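    A sketch of a sum reduction; each rank contributes one hypothetical value and only the root ends up with the combined result:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, sum = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          int mine = rank + 1;  /* hypothetical per-process value */
          /* combine every rank's value into one result on rank 0 */
          MPI_Reduce(&mine, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0) printf("sum over all ranks = %d\n", sum);

          MPI_Finalize();
          return 0;
      }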
  17. all-reduce function
    • no root param
    • every process receives the same reduction of the values in the input buffers (see the sketch below)
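    The same reduction as an all-reduce; note there is no root parameter, so every rank prints the identical sum:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, sum = 0;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          int mine = rank + 1;  /* hypothetical per-process value */
          MPI_Allreduce(&mine, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
          printf("rank %d sees sum %d\n", rank, sum);

          MPI_Finalize();
          return 0;
      }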
  18. group
    an ordered set of processes with process ranks in the range 0 to N-1
  19. communicator
    an object that encompasses a group of processes that may communicate with each other
  20. 2 ways to create a communicator
    • mpi_comm_create()
    • mpi_comm_split()
  21. mpi_comm_create()
    creates a new communicator based on a predefined group.
  22. mpi_comm_split()
    creates a new communicator but bypasses the need to specify a group
  23. color
    • used in mpi_comm_split
    • assigns existing processes within an existing communicator to a new sub-communicator
    • all processes given the same color end up in the same new communicator (see the sketch below)
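    A sketch of a split, assuming an even/odd grouping as the color; the world rank doubles as the ordering key, and MPI_Comm_free releases the sub-communicator afterwards:

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          int color = rank % 2;  /* hypothetical grouping: even vs. odd ranks */
          MPI_Comm subcomm;
          MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);

          int subrank;
          MPI_Comm_rank(subcomm, &subrank);
          printf("world rank %d -> color %d, sub rank %d\n", rank, color, subrank);

          MPI_Comm_free(&subcomm);  /* release the communicator's memory */
          MPI_Finalize();
          return 0;
      }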
  24. mpi_comm_free
    used to free the memory of a communicator
  25. mpi_type_commit
    commits a derived datatype so it can be used in communication
  26. mpi_type_contiguous()
    pass a contiguous block of elements, e.g. any row of a row-major array
  27. mpi_type_vector()
    pass an evenly strided block of elements, e.g. any column of the array
  28. mpi_type_indexed()
    pass some other subset of the array, using arbitrary block lengths and displacements (see the sketch below)
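    A sketch of the derived-type workflow for a hypothetical 4x4 row-major array: mpi_type_vector describes one strided column, mpi_type_commit makes it usable, and the same commit/free pattern applies to mpi_type_contiguous and mpi_type_indexed; needs two processes:

      #include <mpi.h>
      #include <stdio.h>

      #define ROWS 4
      #define COLS 4

      int main(int argc, char **argv) {
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* ROWS blocks of 1 element, a stride of COLS apart: one column */
          MPI_Datatype column;
          MPI_Type_vector(ROWS, 1, COLS, MPI_INT, &column);
          MPI_Type_commit(&column);  /* commit before using it in communication */

          int a[ROWS][COLS];
          if (rank == 0) {
              for (int i = 0; i < ROWS; i++)
                  for (int j = 0; j < COLS; j++) a[i][j] = i * COLS + j;
              MPI_Send(&a[0][1], 1, column, 1, 0, MPI_COMM_WORLD);  /* send column 1 */
          } else if (rank == 1) {
              int col[ROWS];
              MPI_Recv(col, ROWS, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              for (int i = 0; i < ROWS; i++) printf("col[%d] = %d\n", i, col[i]);
          }

          MPI_Type_free(&column);
          MPI_Finalize();
          return 0;
      }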