Answer:

fichoh

Hadoop could be described as an open-source framework for handling big data. Using the MapReduce model, Hadoop distributes data across its nodes, which are a network of connected computers. Because the data is spread across these systems, Hadoop can store and process very large datasets in parallel. This collection of networked nodes is called the HADOOP CLUSTER.
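As a sketch of how the MapReduce model works in practice, below is the classic word-count job written against the Hadoop Java API (it assumes the Hadoop client libraries are on the classpath and that input/output paths are passed as arguments; the paths themselves are placeholders). Each mapper runs on a node against its local slice of the data, and the reducers combine the partial results from across the cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs on each node over its local split of the input,
  // emitting (word, 1) pairs.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: receives all counts for a given word (shuffled in from
  // every mapper in the cluster) and sums them.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory (e.g. on HDFS)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory (must not exist yet)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```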

Hadoop leverages the power and capabilities of its distributed file system (HDFS), which handles the reading and writing of large files from log files, databases, raw files, and other forms of streaming data, typically brought in with ingestion tools such as Apache Flume, Striim, and so on. HDFS divides the ingested data into smaller blocks and distributes them across the systems in the cluster.
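To make the HDFS side concrete, here is a minimal sketch of writing and reading a file through the Hadoop FileSystem Java API. It assumes the cluster's connection settings (core-site.xml / hdfs-site.xml) are on the classpath, and the path /tmp/example.txt is a hypothetical example, not something from the answer above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
  public static void main(String[] args) throws Exception {
    // Cluster settings are read from the Hadoop config files on the
    // classpath; fs.defaultFS would point at the cluster's NameNode.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/example.txt"); // hypothetical path for illustration

    // Write: the client streams bytes, and HDFS splits them into blocks
    // that are replicated across DataNodes in the cluster.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello hadoop\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read: the blocks are fetched back from whichever DataNodes hold them.
    try (BufferedReader in = new BufferedReader(
             new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }
  }
}
```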

Learn more: https://brainly.com/question/15548105