Managing Big Data using Hadoop and Spark
Learn ingestion, storage, processing, and analysis of Big Data using the Hadoop and Spark ecosystems
Tailored to your team's needs
The program is focused on ingestion, storage, processing, and analysis of Big Data using the Hadoop and Spark ecosystems, including HDFS, MapReduce, YARN, Sqoop, Flume, Hive, Spark Core, Pig, Impala, HBase, and Kafka.
The intended audience for this course:
Participants should preferably have prior software development experience along with basic knowledge of SQL and Unix commands. Knowledge of Python or Scala would be a plus.
Apache, Apache Kafka, Apache Spark, Kafka, Spark, and other associated open source project names are trademarks of the Apache Software Foundation. DataCouch is not affiliated with, endorsed by, or otherwise associated with the Apache Software Foundation (ASF) or any of its projects.