Hadoop Tutorials.CO.IN

Understanding Map-Reduce Programming

by Abhishek Dharga

MapReduce is a method for distributing a task across multiple nodes where each node processes data stored on that node.
It consists of two phases:

  • Map
  • Reduce
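The two phases can be sketched with the classic word-count example. This is plain Python simulating what the framework does (the function names and the driver loop are illustrative, not Hadoop API):

```python
# Minimal word-count sketch (plain Python, no Hadoop) showing the two phases.
from collections import defaultdict

def map_phase(line):
    # Map: emit an intermediate (key, value) pair for each word in the record.
    for word in line.split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reduce: aggregate all values collected for one key.
    return (word, sum(counts))

lines = ["big data big cluster", "big data"]

# Simulate the framework: run map over every record, group pairs by key...
grouped = defaultdict(list)
for line in lines:
    for word, count in map_phase(line):
        grouped[word].append(count)

# ...then run reduce once per key.
result = dict(reduce_phase(w, c) for w, c in grouped.items())
print(result)  # {'big': 3, 'data': 2, 'cluster': 1}
```

The grouping step in the middle stands in for Hadoop's shuffle, which guarantees that all values for a given key arrive at the same reducer.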



The jobtracker coordinates all the jobs run on the system by scheduling tasks to run on tasktrackers. Tasktrackers run tasks and send progress reports to the jobtracker, which keeps a record of the overall progress of each job. If a task fails, the jobtracker can reschedule it on a different tasktracker.
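The retry behaviour described above can be sketched as a small simulation. Everything here is hypothetical (the tracker functions and status strings are invented for illustration); the point is only that a failed task attempt is rescheduled on a different tasktracker:

```python
# Hypothetical sketch of jobtracker retry behaviour: when a tasktracker
# reports failure, the task is rescheduled on the next available tracker.
def run_task(task, tasktrackers):
    for tracker in tasktrackers:      # trackers tried in scheduling order
        status = tracker(task)        # the tracker runs the task, reports status
        if status == "SUCCEEDED":
            return tracker.__name__   # record where the task finally ran
    return None                       # the job fails if every attempt fails

def flaky_tracker(task):
    # Simulates a tasktracker on a failing node.
    return "FAILED"

def healthy_tracker(task):
    # Simulates a working tasktracker.
    return "SUCCEEDED"

# The task fails on the first tracker and is rescheduled on the second.
print(run_task("map-0001", [flaky_tracker, healthy_tracker]))  # healthy_tracker
```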

Hadoop divides the input into fixed-size pieces called input splits, or just splits. The split size is usually the same as the HDFS block size, for reasons of optimization. Hadoop creates one map task for each split, which runs the user-defined map function for each record in the split. Each record is assembled by Hadoop's record reader as a key-value pair.
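The split and record ideas can be made concrete with a small sketch. This mimics (but does not use) Hadoop's text input handling, where the key is the byte offset of a line and the value is the line itself:

```python
# Sketch: computing block-sized splits for a file, and reading records
# as (byte offset, line) key-value pairs, as a text record reader would.
BLOCK_SIZE = 128 * 1024 * 1024  # a common HDFS block size; it is configurable

def compute_splits(file_size, split_size=BLOCK_SIZE):
    # One split per block-sized chunk; the last split may be smaller.
    return [(start, min(split_size, file_size - start))
            for start in range(0, file_size, split_size)]

def records(text):
    # Record reader: key = byte offset of the line, value = the line itself.
    offset = 0
    for line in text.splitlines(keepends=True):
        yield (offset, line.rstrip("\n"))
        offset += len(line)

# A 300 MB file yields three splits: two of 128 MB and one of 44 MB.
print(compute_splits(300 * 1024 * 1024))

print(list(records("first line\nsecond line\n")))
# [(0, 'first line'), (11, 'second line')]
```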

Hadoop always tries its best to run the map task on a node where the input split resides. This is called the data locality optimization, and it improves performance. If all three nodes hosting replicas of the map task's input split are busy with other tasks, the job scheduler looks for a free map slot on a node in the same rack as one of the replicas. Very occasionally even this is not possible, so an off-rack node is used, which results in an inter-rack network transfer. This is possible because of Hadoop's rack awareness feature.
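The three-level preference order (node-local, then rack-local, then off-rack) can be sketched like this. The node and rack names are invented for illustration; real schedulers also weigh slots, load, and fairness:

```python
# Illustrative sketch of the scheduler's placement preference for a map task:
# node-local first, then rack-local, then off-rack as a last resort.
def place_task(replica_nodes, free_nodes, rack_of):
    # 1. Node-local: a free node that actually holds a replica of the split.
    for node in replica_nodes:
        if node in free_nodes:
            return (node, "node-local")
    # 2. Rack-local: a free node in the same rack as one of the replicas.
    replica_racks = {rack_of[n] for n in replica_nodes}
    for node in free_nodes:
        if rack_of[node] in replica_racks:
            return (node, "rack-local")
    # 3. Off-rack: any free node; the split must cross racks over the network.
    return (next(iter(free_nodes)), "off-rack")

rack_of = {"n1": "r1", "n2": "r1", "n3": "r2", "n4": "r2", "n5": "r3"}

# All three replica holders (n1, n2, n3) are busy; n4 shares rack r2 with n3.
print(place_task(["n1", "n2", "n3"], ["n4", "n5"], rack_of))
# ('n4', 'rack-local')
```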

Map tasks write their output to the local disk of the node where the mapper runs, not to HDFS. Map output is intermediate output, and once the job is complete it can be thrown away.

If the node running the map task fails before the map output has been consumed by the reduce task, then Hadoop, via the jobtracker, will automatically rerun the map task on another node to re-create the map output.



The input to a reduce task is normally the output from multiple mappers. Reduce tasks don't have the advantage of data locality. Therefore, the sorted and shuffled map outputs have to be transferred across the network to the node where the reduce task is running, where they are merged and then passed to the user-defined reduce function. The output of the reducer is stored in HDFS.
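The reduce side can be sketched in plain Python: fetch the sorted intermediate output of several mappers, merge them by key, and call the reduce function once per key group. The data and function names are illustrative:

```python
# Sketch of the reduce side: merge sorted map outputs by key, then run the
# user-defined reduce function once per key group.
import heapq
from itertools import groupby
from operator import itemgetter

# Each mapper's output arrives already sorted by key (as Hadoop guarantees).
mapper_outputs = [
    [("big", 1), ("data", 1)],
    [("big", 2), ("cluster", 1)],
]

# Merge the sorted streams -- the sort/merge step of the shuffle.
merged = heapq.merge(*mapper_outputs, key=itemgetter(0))

def reduce_fn(key, values):
    # User-defined reduce: sum the values for one key.
    return (key, sum(values))

final = [reduce_fn(k, (v for _, v in group))
         for k, group in groupby(merged, key=itemgetter(0))]
print(final)  # [('big', 3), ('cluster', 1), ('data', 1)]
```

In Hadoop the merged, reduced output would then be written to HDFS rather than printed.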




