Summary: Apache Hadoop is a freely available, open source toolset that enables big data analysis. This Hadoop Fundamentals LiveLessons tutorial demonstrates the core components of Hadoop, including the Hadoop Distributed File System (HDFS) and MapReduce. In addition, the tutorial demonstrates how to use Hadoop at several levels, including the native Java interface, C++ pipes, and the universal streaming program interface. The use of higher-level tools, including the Pig scripting language and the Hive "SQL-like" interface, is also demonstrated. Finally, the steps for installing Hadoop on a desktop virtual machine, in a cloud environment, and on a local stand-alone cluster are presented. Topics covered in this tutorial apply to Hadoop version 2 (i.e., MR2 or YARN). The source code repository for this LiveLesson can be found at www.clustermonkey.net/download/LiveLessons/Hadoop_Fundamentals/.

About the Author: Douglas Eadline, PhD, began his career as a practitioner and a chronicler of the Linux Cluster HPC revolution and now documents big data analytics. Starting with the first Beowulf HOWTO document, Dr. Eadline has written hundreds of articles, white papers, and instructional documents covering virtually all aspects of HPC computing. Prior to starting and editing the popular ClusterMonkey.net web site in 2005, he served as Editor-in-Chief for ClusterWorld Magazine and was Senior HPC Editor for Linux Magazine. Currently, he is a consultant to the HPC industry and writes a monthly column in HPC Admin Magazine. Both clients and readers have recognized Dr. Eadline's ability to present a "technological value proposition" in a clear and accurate style. He has practical hands-on experience in many aspects of HPC, including hardware and software design, benchmarking, storage, GPU, cloud, and parallel computing.
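To give a flavor of the streaming interface mentioned in the summary, the following is a minimal word-count sketch that could be run through Hadoop streaming. The file names (mapper.py, reducer.py), HDFS paths, and streaming jar location are illustrative assumptions, not taken from the lesson materials; the jar location in particular varies between Hadoop distributions.

    #!/usr/bin/env python
    # mapper.py -- emit each word on stdin with a count of 1 (tab separated)
    import sys
    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word, 1))

    #!/usr/bin/env python
    # reducer.py -- sum the counts for each word; Hadoop delivers the
    # mapper output sorted by key, so equal words arrive together
    import sys
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

The scripts can be tested locally with a pipeline such as "cat file.txt | python mapper.py | sort | python reducer.py" before submitting them to the cluster with a command along these lines (assuming a Hadoop version 2 directory layout and example input/output paths):

    hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
        -files mapper.py,reducer.py \
        -input input -output output \
        -mapper "python mapper.py" -reducer "python reducer.py"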