
Big Data Integration using Talend

Categories:
Big Data Technology
Course Overview:

Learning Big Data frameworks is hard, and working directly with frameworks such as Hadoop and Spark is even harder.

Talend lets us build pipelines for these frameworks visually while remaining vendor agnostic. It generates the underlying framework code automatically behind the scenes, so developers need not worry about the nitty-gritty of each framework; this improves team productivity and reduces cost.

The course is focused on building Big Data integration applications in Talend through a zero-coding approach.

Purpose:

The program focuses on getting you familiar with the Big Data aspects of Talend Open Studio, which will greatly enhance your data handling and integration capabilities. You will create ETL jobs using Hive, Pig, Spark, and more that connect to almost any data source; filter, modify, and unite data; build standalone jobs that run on a schedule or are triggered by an event; and make jobs more user-friendly for non-technical users.
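The filter/modify/unite operations mentioned above can be sketched in plain Python. This is an illustration of what such an ETL job computes, not Talend-generated code; the sample rows and field names are hypothetical.

```python
# Illustrative sketch of three core ETL operations: unite, filter, modify.
# Sample data and column names are made up for the example.
orders_us = [
    {"id": 1, "amount": 120.0, "country": "US"},
    {"id": 2, "amount": 15.0, "country": "US"},
]
orders_eu = [
    {"id": 3, "amount": 200.0, "country": "DE"},
]

# Unite: merge the two sources into one stream (as Talend's tUnite does).
orders = orders_us + orders_eu

# Filter: keep only orders above a threshold (as tFilterRow does).
large = [o for o in orders if o["amount"] >= 100.0]

# Modify: derive a new column on each row (as a tMap expression does).
for o in large:
    o["amount_with_tax"] = round(o["amount"] * 1.2, 2)

print(large)
```

In Talend these three steps would each be a component on the design canvas, connected by row links; the generated job performs the same row-by-row work.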

Productivity Objectives:  

Upon completion of this course, you should be able to:

  • Integrate Talend with a Big Data distribution
  • Navigate Talend Studio
  • Read and write data to/from HDFS (HDFS, HBase)
  • Read and write tables to/from HDFS (Hive, Sqoop)
  • Process Hive tables
  • Process data stored on HDFS with Pig
  • Process data using Talend Spark jobs
  • Build Kafka jobs

The intended audience for this course:

  • Developers
  • Big Data Developers
  • Data Engineers
  • Integration Engineers
  • Architects
  • Data Stewards

Course Outline:

Getting started with a basic Big Data Job
  • Creating a Job
  • Adding components to the Job
  • Connecting the components together
  • Configuring the components
  • Executing the Job
  • Creating Hadoop Cluster
  • Apache Weblog Insights
  • Reading and Writing to Hive on Hadoop
  • Get and Put files into and from HDFS
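The "Apache Weblog Insights" exercise above boils down to parsing access-log lines and aggregating per IP. A minimal stand-alone sketch of that logic, with hypothetical log lines (a Talend job would do the same via tApacheLogInput and an aggregate component):

```python
import re
from collections import Counter

# Hypothetical sample lines in Apache common log format.
LOGS = [
    '192.168.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '192.168.0.2 - - [10/Oct/2024:13:55:40 +0000] "GET /about.html HTTP/1.1" 404 512',
    '192.168.0.1 - - [10/Oct/2024:13:56:02 +0000] "POST /login HTTP/1.1" 200 128',
]

# ip, identity, user, [timestamp], "method path protocol", status, bytes
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]+" (\d{3}) (\d+)$')

def parse(line):
    """Parse one common-log-format line into a dict, or None if malformed."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    ip, method, path, status, size = m.groups()
    return {"ip": ip, "method": method, "path": path,
            "status": int(status), "bytes": int(size)}

rows = [r for r in (parse(line) for line in LOGS) if r]
hits_per_ip = Counter(r["ip"] for r in rows)
print(hits_per_ip.most_common())
```

Writing `rows` to a file and moving it onto the cluster corresponds to the "Put files into HDFS" step (`hdfs dfs -put` on the command line).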
Getting started with a Hive Job
  • Hive Overview
  • Use cases
  • Working with a simple Hive row input
  • Loading data into HDFS for Hive
  • Creating an external table and loading data
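To make the Hive flow above runnable without a cluster, the sketch below uses sqlite3 as a local stand-in: create a table, load rows, query them. The HiveQL in the comment shows what a Talend tHiveRow component would actually execute against data already loaded into HDFS; table and path names are hypothetical.

```python
import sqlite3

# Local stand-in for the Hive flow, using sqlite3 so the example runs
# anywhere. Against a real cluster, tHiveRow would run HiveQL such as:
#   CREATE EXTERNAL TABLE weblogs (ip STRING, status INT)
#   ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'
#   LOCATION '/user/talend/weblogs';
# pointing the table at files previously loaded into HDFS.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weblogs (ip TEXT, status INTEGER)")

# Load data into the table, as the HDFS-load step feeds Hive.
rows = [("192.168.0.1", 200), ("192.168.0.2", 404), ("192.168.0.1", 200)]
conn.executemany("INSERT INTO weblogs VALUES (?, ?)", rows)

# Query the loaded data, as a tHiveInput component would.
ok = conn.execute(
    "SELECT ip, COUNT(*) FROM weblogs WHERE status = 200 GROUP BY ip"
).fetchall()
print(ok)
```

The key idea carried over from Hive: the table definition (schema) and the data files are separate, so an external table simply describes files that already exist.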
Getting started with HBase, Pig, and HCatalog Jobs
  • HBase Overview
  • HCatalog Overview
  • HBase input output
  • HCatalog load
  • HCatalog Output
  • Weblog Generator
  • HCatalog Create
  • HCatalog Load
Getting started with a Pig Job
  • Creating a Pig Job
  • Pig Count Code
  • Pig Count IP
  • Pig Read Result
  • Pig aggregate
  • Pig code
  • Pig cross
  • Pig distinct
  • Pig filterColumns
  • Pig filterRow
  • Pig fullOuterJoin
  • Pig innerJoin
  • Pig leftOuterJoin
  • Pig rightOuterJoin
  • Pig sort
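Each Pig job above maps to one relational operation. As a reference for what those operations compute, here are plain-Python equivalents of FILTER, DISTINCT, JOIN (inner and left outer), and ORDER BY, over hypothetical sample relations:

```python
# Plain-Python equivalents of the Pig operations listed above.
# Sample relations are made up for the example.
users = [("u1", "alice"), ("u2", "bob"), ("u3", "carol")]
clicks = [("u1", "/home"), ("u1", "/docs"), ("u3", "/home"), ("u3", "/home")]

# FILTER clicks BY page == '/home';
home_clicks = [c for c in clicks if c[1] == "/home"]

# DISTINCT home_clicks;
distinct_home = sorted(set(home_clicks))

# JOIN users BY id, clicks BY user_id;  (inner join)
inner = [(uid, name, page)
         for uid, name in users
         for cid, page in clicks if uid == cid]

# LEFT OUTER JOIN: keep users with no clicks, padding with None.
left = []
for uid, name in users:
    pages = [page for cid, page in clicks if cid == uid]
    if pages:
        left.extend((uid, name, p) for p in pages)
    else:
        left.append((uid, name, None))

# ORDER inner BY user id;
ordered = sorted(inner)
print(ordered)
```

In Talend, each of these becomes a tPig* component (tPigFilterRow, tPigDistinct, tPigJoin, tPigSort), and the generated Pig Latin runs on the cluster as MapReduce jobs.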
Getting started with a Kafka Job
  • Kafka Overview
  • Salient Features of Kafka
  • Kafka Use cases
  • Kafka Producer
  • Kafka Consumer
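The producer and consumer jobs above follow one pattern: one side appends messages to a topic, the other reads them in order. This in-memory sketch stands in for a broker so the pattern is runnable anywhere; a real Talend job would use tKafkaOutput and tKafkaInput configured with broker addresses and a topic name.

```python
from collections import deque

# In-memory stand-in for a Kafka topic (a real job talks to a broker).
topic = deque()

def produce(message: str) -> None:
    """Append a message to the topic, as a Kafka producer would."""
    topic.append(message.encode("utf-8"))

def consume():
    """Read the next message from the topic, as a Kafka consumer would."""
    if not topic:
        return None
    return topic.popleft().decode("utf-8")

produce("order:1001")
produce("order:1002")
first = consume()
second = consume()
print(first, second)  # messages arrive in the order produced
```

The essential Kafka properties illustrated: messages are bytes, they are consumed in production order, and producer and consumer are decoupled through the topic.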
Getting started with a Sqoop Job
  • Sqoop Overview
  • Creating a Sqoop Job
  • Use cases
  • Sqoop MySQL TO HDFS PrepareTableList
  • Orchestrator Example
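A Talend tSqoopImport job ultimately corresponds to a Sqoop CLI invocation. The sketch below builds (but does not run) such a command for each table in a list, mirroring the "PrepareTableList" orchestration above; the host, database, user, and table names are placeholders.

```python
# Build Sqoop import commands for a list of tables (placeholders throughout).
def sqoop_import_cmd(table: str):
    return [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/shop",  # hypothetical MySQL DB
        "--username", "etl_user",
        "--table", table,
        "--target-dir", f"/user/talend/{table}",       # HDFS destination
        "-m", "1",  # one mapper, so no split-by column is needed
    ]

# Orchestrator: generate one import command per table.
tables = ["customers", "orders"]
commands = [sqoop_import_cmd(t) for t in tables]
for cmd in commands:
    print(" ".join(cmd))
```

Running these (e.g. via `subprocess.run`) would land each MySQL table as files under its own HDFS directory, which is exactly what the MySQL-to-HDFS exercise does.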
Getting started with a Spark Batch Job
  • Creating a Spark Batch Job
  • Carparts Demoprep
  • Carparts ETL
  • Carparts01 Spark
  • LoadCarPartsinHDFS
  • Load US Spending Files
  • US Agency Spending Files
  • Prep Real Gov File for Demo Copy
  • MR Count Code
  • MR Count IP
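The "MR Count" jobs above all follow the classic map/shuffle/reduce flow: emit a (key, 1) pair per record, group by key, then sum. A plain-Python sketch of that flow, counting hypothetical IP addresses:

```python
from collections import defaultdict

# Map/shuffle/reduce sketch of an "MR Count" job. Sample records are made up.
records = ["192.168.0.1", "192.168.0.2", "192.168.0.1", "192.168.0.1"]

# Map phase: emit (key, 1) pairs.
mapped = [(ip, 1) for ip in records]

# Shuffle phase: group values by key.
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: sum each key's values.
counts = {key: sum(values) for key, values in grouped.items()}
print(counts)  # {'192.168.0.1': 3, '192.168.0.2': 1}
```

Whether Talend generates MapReduce or Spark code for the job, the computation per key is the same; Spark simply keeps intermediate data in memory across stages.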

Prerequisites

Participants should have a basic knowledge of Linux/Unix commands.

Course Information

Duration

2 Days

Mode of Delivery

Instructor-led/Virtual
