The main focus of the course will be on how to use Big Data tools, but we will
also cover how to install and configure some of the Big Data related
frameworks on on-premise and cloud-based infrastructure.
Thanks to the hype, Hadoop is in the news all the time. But there are many
frameworks that build on Hadoop (like Pig and Hive) and many frameworks that are
alternatives to Hadoop (like Twitter's Storm and LinkedIn's Samza), addressing the
limitations of the MapReduce model. Some of these frameworks will also be
discussed during the course to give a big picture of what Big Data is about.
Time will also be spent on NoSQL databases, starting with why to choose NoSQL
over an RDBMS and moving on to advanced concepts such as importing data in bulk
from an RDBMS into a NoSQL database. Different NoSQL databases will be compared,
and HBase will be discussed in much more detail.
A VM (Virtual Machine) will be provided to all participants, with Big Data
frameworks (Hadoop etc.) installed and configured on CentOS, along with data sets
and code to process them. The VM makes the Big Data learning curve less steep.
The training will help participants get through the Cloudera Certified
Developer for Apache Hadoop (CCDH) certification with minimal effort.
Prerequisites:
Knowledge of Java is a definite plus for getting started with Big Data, but not
mandatory. Hadoop provides Streaming, which allows MapReduce programs to be
written in non-Java languages like Perl and Python, and there are also
higher-level abstractions like Hive and Pig, which provide SQL-like and
procedural interfaces.
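To make the Streaming model concrete, here is a minimal local sketch of the classic word-count job in Python. The `mapper` and `reducer` functions mirror what two Streaming scripts would do; chaining them in-process (rather than via `hadoop jar hadoop-streaming.jar`) is an illustrative assumption, not how a real cluster job runs.

```python
def mapper(lines):
    # Map step: emit one "word\t1" record per word, exactly as a
    # Streaming mapper would write tab-separated records to stdout.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(records):
    # Reduce step: sum the counts per word. In a real job the shuffle
    # phase sorts records by key before they reach the reducer, so a
    # reducer can also stream key groups without a dict.
    counts = {}
    for record in records:
        word, count = record.rsplit("\t", 1)
        counts[word] = counts.get(word, 0) + int(count)
    return counts

if __name__ == "__main__":
    # In a real Streaming job, mapper and reducer live in separate
    # scripts reading sys.stdin; here we chain them locally on a sample.
    sample = ["to be or not to be"]
    for word, total in sorted(reducer(mapper(sample)).items()):
        print(f"{word}\t{total}")
```

Because each step only reads and writes plain text records, the same two functions could be dropped into standalone scripts and run under Hadoop Streaming with any language on the cluster nodes.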
Similarly, knowledge of Linux is a definite plus, but only the basics of Linux
are needed to get started with the different Big Data frameworks.
A laptop/desktop with a minimum of 3 GB RAM, 10 GB of free hard disk space and a
decent processor is required. These specifications are enough to run the Big Data
VM and the frameworks smoothly.
Target Audience:
Any Data Analyst who would like to extend or transfer their existing knowledge
to the Big Data space.
Any Architect who would like to design applications in conjunction with Big
Data, or Big Data applications themselves.
HDFS Interfaces
Web Interface
Command Line Interface
File System
Administrative
Input Splits
Word co-occurrence
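The word co-occurrence topic above is usually taught via the "pairs" MapReduce pattern: each word emits a count of 1 for every neighbor within a window, and the reduce step sums per pair. A minimal local sketch (a single function, not a distributed job; the `window` parameter is an assumed illustration):

```python
from collections import defaultdict

def cooccurrence_pairs(lines, window=2):
    # Pairs approach: for every word, count each neighbor that falls
    # within `window` positions; the summing done here locally is what
    # the reduce step would do across the cluster.
    counts = defaultdict(int)
    for line in lines:
        words = line.split()
        for i, word in enumerate(words):
            lo = max(0, i - window)
            hi = min(len(words), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    counts[(word, words[j])] += 1
    return dict(counts)
```

The alternative "stripes" pattern groups all neighbors of a word into one associative array per key, trading memory for fewer intermediate records; both variants are standard classroom examples.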
Pig
Introduction to Pig
Why Pig rather than plain MapReduce
Pig Components
Pig Execution Modes
Pig Shell - Grunt
Pig Latin, Writing Pig Latin scripts
Pig Data Types
Storage Types
Diagnosing Pig commands
Macros
UDF and External Scripts
Hive
Introduction and Architecture
Different modes of executing Hive queries
Metastore implementations
HiveQL (DDL & DML operations)
External vs Internal Tables
Views
Partitions & Buckets
UDF
Comparison of Pig and Hive
Flume
Overview of Flume
NoSQL Databases
Introduction to NoSQL databases
Types of NoSQL databases and their features
Brewer's CAP Theorem
Advantages of NoSQL vs. traditional RDBMS
ACID vs BASE
Different types of NoSQL databases
Key value
Columnar
Document
Graph
HBase
Introduction to HBase
Why use HBase
HBase Architecture - read and write paths
HBase vs. RDBMS
Installation and Configuration
Schema design in HBase - column families, hot spotting
Accessing data with HBase Shell
Accessing data with the HBase API - reading, adding and updating data from the
shell and the Java API
HBase Coprocessors (Endpoints, Observers)
POC
Click stream analysis
Analyzing the Twitter data with Hive