Tailored to your team's needs
The program focuses on the ingestion, storage, processing, and analysis of big data using the Hadoop and Spark ecosystems, including HDFS, MapReduce, YARN, Sqoop, Flume, Hive, Spark Core, Pig, Impala, HBase, and Kafka.
The intended audience for this course:
Participants should preferably have prior software development experience, along with a basic knowledge of SQL and Unix commands. Knowledge of Python or Scala would be a plus.
Apache, Apache Kafka, Apache Spark, Apache Trino, Apache Iceberg, Apache Hive, Kafka, Spark, Trino, Iceberg, Hive, and other associated open-source project names are trademarks of the Apache Software Foundation. Starburst, Starburst Data, Starburst Enterprise, and Starburst Galaxy are registered trademarks of Starburst Data, Inc. All rights reserved. DataCouch is not affiliated with, endorsed by, or otherwise associated with the Apache Software Foundation (ASF) or any of its projects.