Semester 6 - Concurrent Processing 2 Midterm


  1. Beowulf Cluster
    • a high-performance parallel computing cluster 
    • a group of networked PCs configured and employed to work in parallel
  2. Distributed memory parallel computer
    Beowulf Cluster
  3. Beowulf Cluster pros
    • cheap
    • scalable
  4. Beowulf cluster cons
    • nodes must be linked by a high-performance network
    • complex software
  5. general concurrency
    • platform with multiple processors
    • processors are working on unrelated tasks simultaneously
    • not TRUE parallel processing
  6. True Parallelism
    processors simultaneously perform parts of the same task
  7. Pseudo Parallelism
    • executing multiple parallel processes on a single processor
    • illusion
  8. Message Passing
    coordination of the subtasks requires intercommunication between the parallel processes
  9. MPI
    • Message Passing Interface
    • communications protocol for programming parallel computers
    • language bindings for C, C++, and Fortran
  10. two common patterns when implementing an application that uses parallel processes
    • master-slave architecture
    • data-parallel architecture
  11. master-slave architecture
    • one process is responsible for managing the other processes
    • other processes are slaves
  12. data parallel architecture
    • processes do the same work on a different chunk of data
    • 'blocking' or 'striping' the input data
    • minimal coordination between the processes is required
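    A minimal sketch of the "blocking" idea in card 12, in plain C (the array length N and the chunk arithmetic are illustrative, not from the cards): each of size processes claims one contiguous block of the input and works on it independently.

      #define N 1000   /* illustrative array length */

      /* compute the half-open index range [start, end) owned by this rank */
      void my_block(int rank, int size, int *start, int *end)
      {
          int chunk = (N + size - 1) / size;            /* ceiling division  */
          *start = rank * chunk;                        /* first index owned */
          *end   = (*start + chunk < N) ? *start + chunk : N;
      }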
  13. How does each process know it's part of the same parallel program?
    specific nodes of the cluster (processors) are assigned to the application at runtime
  14. a process with rank 0
    is the master process
  15. Communicator
    an object connecting groups of processes in the MPI session.
  16. default communicator
    MPI_COMM_WORLD
  17. must be called by each process before terminating
    MPI_Finalize()
  18. MPI_Comm_rank(MPI_Comm comm, int *rank)
    • obtains the rank (identifier) for the current process
    • answers the question "Who am I?"
  19. rank and size
    if the size is 4, then the ranks are 0, 1, 2, 3
  20. Communicator Size
    • MPI_Comm_size(MPI_COMM_WORLD, &size);
    • gets the number of processes in the communicator
    • used in the master process
  21. Process Rank
    • MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    • gets the rank of the current process
    • if (rank == 0) { /* then master process */ } (see the sketch below)
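    A minimal sketch tying cards 13-21 together (assumes a C program built with mpicc and launched with mpirun/mpiexec): every process calls MPI_Init, asks the default communicator MPI_COMM_WORLD for its rank and size, and calls MPI_Finalize before exiting.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, size;

          MPI_Init(&argc, &argv);                  /* must be called first */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* "Who am I?"          */
          MPI_Comm_size(MPI_COMM_WORLD, &size);    /* "How many of us?"    */

          if (rank == 0)
              printf("Master: %d processes in the communicator\n", size);
          else
              printf("Slave %d of %d reporting\n", rank, size);

          MPI_Finalize();                          /* each process must call this
                                                      before terminating   */
          return 0;
      }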
  22. MPI Tags
    • a user-defined tag is sent with each message
    • helps distinguish whether a message carries more data or signals the receiver to quit
  23. MPI_Status
    • a data structure
    • MPI_Status status;
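    A sketch of cards 22-23 using a blocking send/receive pair: the tag names TAG_DATA and TAG_QUIT are made up for illustration, but the MPI calls and the MPI_Status fields are standard.

      #include <mpi.h>

      #define TAG_DATA 1    /* hypothetical: "more data follows"      */
      #define TAG_QUIT 2    /* hypothetical: "no more work, shut down" */

      void tag_example(int rank)
      {
          int value = 42;
          MPI_Status status;

          if (rank == 0) {
              /* master sends one int to process 1, tagged as data */
              MPI_Send(&value, 1, MPI_INT, 1, TAG_DATA, MPI_COMM_WORLD);
          } else if (rank == 1) {
              /* accept any source/tag, then inspect the status fields */
              MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
              if (status.MPI_TAG == TAG_QUIT) {
                  /* sender signalled that there is no more work */
              }
          }
      }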
  24. Non-Blocking Send / Receive Functions
    • don't wait for confirmation that the message was successfully delivered or received.
    • allows messages to be passed asynchronously using a separate worker thread.
  25. synchronous functions
    • blocking
    • having a personal conversation
  26. asynchronous functions
    • non-blocking
    • like leaving/picking-up a voice mail
  27. non-blocking send MPI_Request
    • used as a parameter in a non-blocking send function
    • used to track the completion of the send operation
  28. non-blocking receive MPI_Request
    • used to track the completion of the receive operation
    • does not include a status parameter
  29. MPI_Test flag
    boolean value: 0 = incomplete, non-zero = complete
  30. MPI_Test
    tests if a non-blocking send/receive has completed
  31. MPI_Wait
    waits until a non-blocking send/receive is completed
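    A sketch of the non-blocking pattern from cards 24-31: post MPI_Isend / MPI_Irecv, keep doing useful work, poll with MPI_Test, and call MPI_Wait before reusing the buffer or finalizing.

      #include <mpi.h>

      void nonblocking_example(int rank)
      {
          int outgoing = 7, incoming = 0, flag = 0;
          MPI_Request req;
          MPI_Status  status;

          if (rank == 0) {
              MPI_Isend(&outgoing, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
              /* ... useful work while the message is in flight ... */
              MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete before reuse/finalize */
          } else if (rank == 1) {
              MPI_Irecv(&incoming, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
              MPI_Test(&req, &flag, &status);      /* flag == 0 means not done yet */
              if (!flag) {
                  /* ... more useful work ... */
                  MPI_Wait(&req, &status);         /* block until it completes */
              }
              /* incoming is now safe to use */
          }
      }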
  32. advantages of non-blocking message-passing calls
    • improves performance by allowing the calling process to continue work while waiting for data to be transferred.
    • prevents deadlock
  33. disadvantages of non-blocking message-passing calls
    • complicated code
    • must be careful to wait for all non-blocking calls to be completed before finalizing MPI
    • must be careful to cancel calls that will never complete before finalizing
  34. when to use non-blocking message passing
    • whenever the calling thread could perform useful work while waiting for a message to send or be received
    • whenever there is a risk of deadlock
  35. mixing blocking with non-blocking
    it's okay to do!
  36. calling MPI_Finalize() before all non-blocking functions have completed
    will cause a run-time error
  37. What is MPI_Iprobe and what will flag do?
    • allows the calling process to check if a non-blocking message is ready to be received without calling MPI_Recv()
    • flag will be true if there is a message waiting
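    A sketch of card 37: the calling process asks MPI_Iprobe whether a message is waiting before committing to an MPI_Recv.

      #include <mpi.h>

      void probe_example(void)
      {
          int flag = 0, value;
          MPI_Status status;

          MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
          if (flag) {
              /* a message is waiting; status says who sent it and with what tag */
              MPI_Recv(&value, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          } else {
              /* nothing waiting yet -- keep doing other work */
          }
      }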
  38. Amdahl's law
    used to find the maximum expected improvement to an overall system when only part of the system is improved.
  39. Amdahl's law speedup max
    e.g. if 95% of a program can be parallelized, speedup is limited to at most 20× (1 / 0.05) no matter how many processors are used (formula sketched below)
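    The 20× figure follows from the usual form of Amdahl's law (a sketch, with P the parallelizable fraction and N the number of processors):

      S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}

    With P = 0.95 the limit is 1 / 0.05 = 20, i.e. at most a 20× speedup no matter how many processors are added.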
  40. MPI_Bcast
    Broadcasts data from one process to all other processes
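    A sketch of MPI_Bcast (card 40): the root (rank 0) fills the buffer, and after the call every process in the communicator holds the same value.

      #include <mpi.h>

      void bcast_example(int rank)
      {
          int config = 0;
          if (rank == 0)
              config = 123;   /* only the root has the value beforehand */
          MPI_Bcast(&config, 1, MPI_INT, 0, MPI_COMM_WORLD);
          /* every process now has config == 123 */
      }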
  41. MPI_Gather
    gathers data from all processes to one root process
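    A sketch of MPI_Gather (card 41): each process contributes one int and the root collects them in rank order; only the root needs the receive buffer.

      #include <mpi.h>
      #include <stdlib.h>

      void gather_example(int rank, int size)
      {
          int my_result = rank * rank;      /* each process computes something */
          int *all_results = NULL;

          if (rank == 0)
              all_results = malloc(size * sizeof(int));   /* root-only buffer */

          MPI_Gather(&my_result, 1, MPI_INT,
                     all_results, 1, MPI_INT, 0, MPI_COMM_WORLD);

          /* on rank 0, all_results[i] now holds the value sent by rank i */
          if (rank == 0)
              free(all_results);
      }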
  42. MPI_Reduce
    • reduces all values within a communicator
    • blocking function
  43. MPI_Allreduce
    • similar to MPI_Reduce, except there is no root parameter
    • the combined result is delivered to every process in the communicator (see the sketch below)
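    A sketch contrasting cards 42 and 43: both calls combine one value from every process (here with MPI_SUM); MPI_Reduce delivers the result only to the root, while MPI_Allreduce delivers it to everyone and takes no root argument.

      #include <mpi.h>

      void reduce_example(int rank)
      {
          int local = rank + 1, total = 0;

          /* result lands only on rank 0 */
          MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

          /* result lands on every process; note there is no root argument */
          MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
      }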
  44. shared memory supercomputer
    multiple processing cores attached to a single machine, all sharing one memory
  45. shared vs distributed memory supercomputer
    • shared = multiple processors/cores in a single computer sharing one memory
    • distributed = a group of computers networked together to work in parallel
  46. MPI-1 vs MPI-2
    • MPI-1 = basic message passing functions, C and Fortran language bindings
    • MPI-2 = dynamic process management, C++ bindings, parallel I/O
  47. MPI-1
    • Basic message passing functions
    • C and Fortran language bindings
  48. MPI-2
    • Dynamic process management
    • C++ bindings
    • Parallel I/O