Data is generally classified as big when its total size runs from terabytes (TB) into petabytes (PB) and beyond. Large organizations with huge amounts of data use software such as Hadoop and Spark, processing the data on a large cluster of commodity hardware. The term cluster refers to a group of systems connected via a LAN; the multiple nodes in the cluster work together to perform jobs. Hadoop and Spark have gained popularity worldwide for managing big data and today account for a large share of the big data market.
Categories of Big Data:
Big data is commonly grouped into three categories: structured data (e.g., relational database tables), semi-structured data (e.g., JSON and XML files), and unstructured data (e.g., images, videos, and free text).
Examples of Big Data:
1) The New York Stock Exchange generates about 1 TB of new trade data per day.
2) Social Media: Statistics show that 500+ terabytes of new data are ingested into the databases of the social media site Facebook every day.
3) Jet Engines / Travel Portals: A single jet engine generates 10+ terabytes (TB) of data during 30 minutes of flight time. Across the thousands of flights operated per day, data generation reaches many petabytes (PB).
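The jump from terabytes per engine to petabytes per day can be sanity-checked with some back-of-the-envelope arithmetic. A short sketch, assuming the 10 TB per 30 minutes figure above; the daily flight count is an illustrative assumption, not a figure from this article:

```python
# Back-of-the-envelope estimate of daily jet-engine data volume.
# Assumptions (illustrative): 10 TB per engine per 30 minutes of flight,
# and 25,000 half-hour engine-flight segments per day (hypothetical).
TB_PER_HALF_HOUR = 10
SEGMENTS_PER_DAY = 25_000

daily_tb = TB_PER_HALF_HOUR * SEGMENTS_PER_DAY
daily_pb = daily_tb / 1_000  # 1 PB = 1,000 TB (decimal units)

print(f"{daily_tb:,} TB/day = {daily_pb:,.0f} PB/day")
```

Even with conservative assumptions, the total lands in the hundreds of petabytes per day, which is why such workloads need distributed storage and processing.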
Overview of Hadoop Ecosystem:
The Hadoop ecosystem consists of the following components:
1) HDFS (Hadoop Distributed File System)
2) Apache MapReduce
3) Apache Pig
4) Apache HBase
5) Apache Hive
6) Apache Sqoop
7) Apache Flume
8) Apache Zookeeper
9) Apache Kafka
10) Apache Oozie
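To make the MapReduce component above concrete, here is a minimal pure-Python sketch of the map, shuffle, and reduce phases that Hadoop MapReduce runs at cluster scale. This is a toy illustration of the programming model, not the Hadoop API itself:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big cluster", "hadoop spark cluster"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)
```

On a real cluster, the map and reduce functions run in parallel on many nodes, with HDFS supplying the input splits and the framework handling the shuffle over the network.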
iClass Gyansetu provides classes with corporate trainers who work at top companies.
Gyansetu provides free placement services.
The Big Data Hadoop and Spark course takes around 4-5 months to learn.
Phone No: +91-8130799520 / 9999201478