Why is Hadoop used for Big Data Analytics?
- Big data analytics is the process of examining large data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information.
- Hadoop is a framework to store and process big data. It is specifically designed to provide the distributed storage and parallel processing that big data requires.
Hadoop is well suited to storing and processing big data because:
- Schema-less storage – Hadoop stores huge files as they are (raw), without requiring a schema to be specified up front.
- High scalability – Nodes can be added as data grows, increasing storage and processing capacity dramatically.
- High availability – In Hadoop, data remains available despite hardware failure. If a machine or a few hardware components fail, the data can still be read from another node that holds a replica.
- Reliable – Data is reliably stored on the cluster despite machine failure.
- Economical – Hadoop runs on a cluster of commodity hardware, which is not very expensive.
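The availability and reliability points above both come down to replication. The sketch below is illustrative Python, not Hadoop code; the node and block names are hypothetical. It shows why a block that is copied to several nodes (HDFS defaults to 3 replicas) stays readable when one machine fails.

```python
# Illustrative sketch (not Hadoop internals): a block replicated on
# several nodes stays readable as long as one replica survives.
nodes = {"node1", "node2", "node3", "node4"}

# Hypothetical placement: block "b1" has three replicas.
block_locations = {"b1": {"node1", "node2", "node3"}}

def readable_after_failure(block, failed_nodes):
    """A block is still readable if at least one replica is on a live node."""
    return bool(block_locations[block] - failed_nodes)

print(readable_after_failure("b1", {"node2"}))                      # True
print(readable_after_failure("b1", {"node1", "node2", "node3"}))    # False
```

Only when every node holding a replica fails at once is the block lost, which is why commodity (failure-prone) hardware is acceptable.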
What is Hadoop?
- Hadoop is an open source project from Apache Software Foundation.
- It provides a software framework for distributing and running applications on clusters of servers, inspired by Google’s MapReduce programming model and its file system, GFS (the Google File System).
- Hadoop was originally written for the Nutch search engine project.
- Hadoop is open source framework written in Java. It efficiently processes large volumes of data on a cluster of commodity hardware.
- Hadoop can be set up on a single machine, but its real power comes with a cluster; it can be scaled from a single machine to thousands of nodes. Hadoop consists of two key parts:
- Hadoop Distributed File System (HDFS)
- Map-Reduce
Hadoop Distributed File System (HDFS)
- HDFS is a highly fault tolerant, distributed, reliable, scalable file system for data storage.
- HDFS stores multiple copies of data on different nodes; a file is split into blocks (64 MB by default in Hadoop 1.x, 128 MB in later versions) and stored across multiple machines.
- A Hadoop cluster typically has a single NameNode, which manages the file system metadata, and a number of DataNodes, which store the blocks, to form the HDFS cluster.
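The block-splitting idea above can be sketched in a few lines of plain Python. This is illustrative only, not HDFS internals; small numbers stand in for the 64 MB default so the arithmetic is easy to follow.

```python
# Illustrative sketch: how a file is divided into fixed-size blocks.
# BLOCK_SIZE stands in for HDFS's 64 MB default (Hadoop 1.x).
BLOCK_SIZE = 64

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes of the blocks a file of file_size would occupy."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(block_size, remaining))
        remaining -= block_size
    return blocks

# A 200 "MB" file becomes three full blocks plus one partial block;
# each of these blocks is then replicated across different DataNodes.
print(split_into_blocks(200))  # [64, 64, 64, 8]
```

Note that the last block only occupies as much space as it needs, so small files do not waste a full block of storage.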
Map-Reduce
- Map-Reduce is a programming model designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks.
- It is also a paradigm for distributed processing of large data sets over a cluster of nodes.
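The model above can be seen in the classic word-count example, sketched here in plain Python rather than the Hadoop API: a map phase emits (key, value) pairs, a shuffle groups the pairs by key, and a reduce phase aggregates each group independently.

```python
# Minimal word-count sketch of the Map-Reduce model (plain Python,
# not the Hadoop API). Each phase is an independent, parallelizable task.
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in the line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group all values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big ideas", "data everywhere"]
pairs = [p for line in lines for p in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'everywhere': 1}
```

Because every map call and every per-key reduce is independent, Hadoop can run them on different nodes of the cluster and combine the results.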