The Hadoop Map/Reduce framework has a master/slave architecture: a highly available job orchestrator (YARN) and several worker servers, one per node in the cluster. YARN is the point of interaction between users and the framework. Users submit map/reduce jobs to YARN's ResourceManager, which puts them in a queue of pending jobs and executes them on a first-come/first-served basis. YARN manages the assignment of the map/reduce and Spark tasks to the workers.
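To make the submission path concrete, the sketch below is a minimal MapReduce driver (the canonical WordCount; the class name and the input/output paths taken from the command line are illustrative). Packaging it in a jar and running `hadoop jar wordcount.jar WordCount /in /out` submits it to YARN, which queues the job and assigns its map and reduce tasks to the workers.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordCount {

    // Mapper: emits (word, 1) for every token in its input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the per-word counts produced by the mappers.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion submits the job to the cluster and blocks until it finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```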
A Hue frontend, Oozie, and Hive are also available.
Hadoop's Distributed File System (HDFS) is designed to reliably store very large files across machines in a large cluster. HDFS stores each file as a sequence of blocks; all blocks in a file except the last one are the same size. Blocks belonging to a file are replicated for fault tolerance. The block size and replication factor are configurable per file. Files in HDFS are "write once" and have strictly one writer at any time.
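As a sketch of this per-file configurability, the Java FileSystem API lets a writer choose both values at create time (the path, replication factor, and block size below are illustrative, not values used by this cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Illustrative values: three copies of each 128 MB block.
        Path file = new Path("/tmp/example.dat");
        short replication = 3;
        long blockSize = 128L * 1024 * 1024;

        // create(path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out =
                 fs.create(file, true, 4096, replication, blockSize)) {
            out.writeUTF("hello HDFS");
        }

        // The replication factor can also be changed after the file is written.
        fs.setReplication(file, (short) 2);
    }
}
```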
Hadoop & gCube
Hadoop nodes are exploited by gCube services, which then provide higher-level functionality through the iMarine VREs. gCube services can execute Hadoop Map/Reduce jobs using the gCube Execution Engine, which implements a dedicated adaptor to interface with the Hadoop job scheduler. In addition, a new framework called WPS-Hadoop has been developed to allow executing different types of environmental and geospatial algorithms on Hadoop.
A Spark 2 environment is also available on the same Hadoop cluster.
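As a minimal illustration of the Spark 2 environment (the HDFS path is hypothetical), a Spark application can read data from the same HDFS and run on the same YARN scheduler when launched with `spark-submit --master yarn`:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkOnYarnExample {
    public static void main(String[] args) {
        // SparkSession is the Spark 2 entry point.
        SparkSession spark = SparkSession.builder()
                .appName("hdfs-line-count")
                .getOrCreate();

        // Read a text file from HDFS and count its lines.
        Dataset<Row> lines = spark.read().text("hdfs:///tmp/example.txt");
        System.out.println("line count: " + lines.count());

        spark.stop();
    }
}
```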
The following Hadoop clusters are available on the D4Science infrastructure (an e-Infrastructure operated by the D4Science.org initiative) thanks to iMarine:
| Partner | Distribution | YARN | Worker Nodes | HDFS Size | Total RAM | Total virtual CPU cores |