COURSES OFFERED

BIG DATA - HADOOP & PYSPARK

The Big Data - Hadoop & PySpark Course covers everything from the very basics of Big Data to the level needed to work on live projects. The course is a must for anyone in the IT industry and for prospective Big Data experts. It offers the right blend of Big Data concepts and detailed hands-on work with HDFS, MapReduce, Hive, Sqoop, NoSQL, Kafka, and PySpark.

This course includes:

☞ 30+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Data Introduction

  • Data Sources
  • Data Storage & Processing
  • Data VS Information
  • Assessment

Introduction to Big Data & Hadoop Ecosystem

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model
  • Assessment

HDFS - Hadoop Distributed File System

  • What is HDFS?
  • Nodes in HDFS
  • Storing Files in HDFS
  • Challenges in Distributed Systems
  • Managing Data Node and Name Node Failures
  • HDFS Commands
  • Assessment

Map Reduce

  • Map Reduce Introduction
  • Map reduce Flow Examples
  • Map Reduce Implementation
  • Mappers and Reducers
  • Map Reduce - Shuffle, Sort and Partitions
  • Map Reduce - Combiners
  • Assessment

YARN

  • What is YARN ?
  • YARN Resource Manager and Node Manager
  • YARN Application Master Process
  • Scheduling Policies
  • Assessment

Hive

  • Transactional Processing vs Analytical Processing
  • Hive as an open source Data Warehouse
  • Hive Architecture
  • Hive Data Types
  • Different Types of Table in Hive
  • Loading Data in Hive
  • Complex Data Types in Hive
  • Denormalized Storage in Hive
  • Optimization of Queries in Hive - Partitioning and Bucketing
  • Assessment

No-SQL Introduction

  • What led to the No SQL
  • What is No-SQL
  • Characteristics of No-SQL Databases
  • CAP Theorem
  • No-SQL Models
  • Assessment

SQOOP

  • What is SQOOP ?
  • SQOOP Architecture
  • Why do we need Sqoop ?
  • How Sqoop works ?
  • Sqoop Import
  • Sqoop Export
  • Use of Sqoop
  • Assessment

Introduction to Kafka

  • What is Kafka
  • Components of Kafka
  • Kafka Core APIs
  • Kafka Example
  • Assessment

Programming with Python

  • Introduction to Python
  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures

Introduction to Spark

  • Why Spark?
  • Advantages of Spark
  • What is Spark?
  • Components of Spark
  • History of Spark
  • Assessment

Overview of the Spark

  • Spark Architecture
  • Spark Session
  • Spark Language APIs
  • Data Frame and Partitions
  • Transformations & Actions
  • Assessment

Spark Structured API Overview

  • Structured APIs
  • Schema
  • Spark Types
  • Structured API Execution
  • Assessment

Operations on DataFrames

  • Columns and Expressions
  • Working with Records
  • Creating DataFrames
  • Methods to Manipulate Columns
  • DataFrame Transformations Rows and Columns
  • Assessment

Working with Different Types of Data

  • Working with Booleans
  • Working with Numerical Data
  • String Manipulations
  • Working with Dates and Time Stamps
  • Tackling Nulls
  • Working with Complex Data
  • User Defined Functions
  • Assessment

Aggregations in Spark

  • Simple Aggregation Functions
  • Statistical Aggregation Functions
  • GroupBy - Calculations based on Groups of Data
  • Grouping of Sets - Rollups and Cube
  • Window Functions
  • Assessment

Joins in Spark

  • What are Joins ?
  • Aggregation vs Joins
  • Inner Joins
  • Outer Joins
  • Left Semi and Anti joins
  • Spark Join Challenges
  • Spark Join Communication Strategies
  • Assessment

Spark SQL Introduction

  • Spark Managed Tables
  • Creating Tables
  • Inserting data into Tables
  • Running Spark SQL Queries
  • Assessment

RDDs

  • What are Low-level APIs and when to use them?
  • What are RDDs ?
  • Creating RDDs
  • RDD Transformations and Actions
  • Assessment

Distributed Variables

  • What are distributed variables ?
  • Broadcast Variables
  • Accumulators
  • Broadcast and Accumulators Examples
  • Assessment

Spark on a cluster

  • Spark Execution Modes
  • Internals of the Spark Application
  • Outside the Spark Application
  • Pipelining and Shuffle Persistence
  • Assessment

Monitoring and Debugging

  • Components to Monitor
  • What to Monitor ?
  • The Spark UI
  • Common issues and a high-level approach to fixing them
  • Assessment

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will understand the Big Data ecosystem and its important components such as HDFS, Hive & PySpark. You will be able to create tables, work with complex types, and apply partitioning and bucketing in Hive. You will know the architecture and the different components of Spark, and you will be able to perform data engineering and data analysis tasks using DataFrames in Spark.
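
For instance, a minimal PySpark sketch of the kind of Hive and DataFrame work described above might look as follows (the database, table, and column names are illustrative, not taken from the course material, and a Hive metastore is assumed to be available):

    # Minimal PySpark sketch of the Hive partitioning and DataFrame skills above.
    # Database, table, and column names are illustrative only.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-partitioning-demo")
             .enableHiveSupport()      # assumes a Hive metastore is configured
             .getOrCreate())

    spark.sql("CREATE DATABASE IF NOT EXISTS demo")

    # A small DataFrame standing in for raw sales data
    sales = spark.createDataFrame(
        [("2024-01-01", "IN", 120.0), ("2024-01-01", "US", 80.0), ("2024-01-02", "IN", 95.0)],
        ["sale_date", "country", "amount"],
    )

    # Write it as a Hive-managed table partitioned by country
    (sales.write
          .mode("overwrite")
          .partitionBy("country")
          .saveAsTable("demo.sales_partitioned"))

    # A simple analytical query over the partitioned table
    spark.sql("""
        SELECT country, SUM(amount) AS total_amount
        FROM demo.sales_partitioned
        GROUP BY country
    """).show()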

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

BIG DATA - PYSPARK DEVELOPER

The Big Data - PySpark Developer course covers the basics of Python, an introduction to Apache Spark, operations on DataFrames, aggregations, and PySpark joins, all with hands-on experience. The course is a must for anyone who plans to become a PySpark developer.
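
As a rough illustration of the kind of DataFrame operations, aggregations, and joins covered here, consider the following PySpark sketch (the data and column names are invented for the example):

    # PySpark sketch: an inner join followed by a grouped aggregation.
    # The tiny in-memory datasets are invented for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pyspark-dev-demo").getOrCreate()

    orders = spark.createDataFrame(
        [(1, "c1", 250.0), (2, "c2", 100.0), (3, "c1", 75.0)],
        ["order_id", "customer_id", "amount"],
    )
    customers = spark.createDataFrame(
        [("c1", "Asha"), ("c2", "Ravi")],
        ["customer_id", "name"],
    )

    result = (orders.join(customers, on="customer_id", how="inner")
                    .groupBy("name")
                    .agg(F.sum("amount").alias("total_spent"),
                         F.count("order_id").alias("order_count")))
    result.show()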

This course includes:

☞ Live session

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Data Introduction

  • Data Sources
  • Data Storage & Processing
  • Data VS Information
  • Assessment

Introduction to Big Data & Hadoop Ecosystem

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model
  • Assessment

Programming with Python

  • Introduction to Python
  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures

Introduction to Spark

  • Why Spark?
  • Advantages of Spark
  • What is Spark?
  • Components of Spark
  • History of Spark
  • Assessment

Overview of the Spark

  • Spark Architecture
  • Spark Session
  • Spark Language APIs
  • Data Frame and Partitions
  • Transformations & Actions
  • Assessment

Spark Structured API Overview

  • Structured APIs
  • Schema
  • Spark Types
  • Structured API Execution
  • Assessment

Operations on DataFrames

  • Columns and Expressions
  • Working with Records
  • Creating DataFrames
  • Methods to Manipulate Columns
  • DataFrame Transformations Rows and Columns
  • Assessment

Working with Different Types of Data

  • Working with Booleans
  • Working with Numerical Data
  • String Manipulations
  • Working with Dates and Time Stamps
  • Tackling Nulls
  • Working with Complex Data
  • User Defined Functions
  • Assessment

Aggregations in Spark

  • Simple Aggregation Functions
  • Statistical Aggregation Functions
  • GroupBy - Calculations based on Groups of Data
  • Grouping of Sets - Rollups and Cube
  • Window Functions
  • Assessment

Joins in Spark

  • What are Joins ?
  • Aggregation vs Joins
  • Inner Joins
  • Outer Joins
  • Left Semi and Anti joins
  • Spark Join Challenges
  • Spark Join Communication Strategies
  • Assessment

Spark SQL Introduction

  • Spark Managed Tables
  • Creating Tables
  • Inserting data into Tables
  • Running Spark SQL Queries
  • Assessment

RDDs

  • What are Low-level APIs and when to use them?
  • What are RDDs ?
  • Creating RDDs
  • RDD Transformations and Actions
  • Assessment

Distributed Variables

  • What are distributed variables ?
  • Broadcast Variables
  • Accumulators
  • Broadcast and Accumulators Examples
  • Assessment

Spark on a cluster

  • Spark Execution Modes
  • Internals of the Spark Application
  • Outside the Spark Application
  • Pipelining and Shuffle Persistence
  • Assessment

Monitoring and Debugging

  • Components to Monitor
  • What to Monitor ?
  • The Spark UI
  • Common issues and a high-level approach to fixing them
  • Assessment

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

You will know the architecture and the different components of Spark, and you will be able to perform data engineering and data analysis tasks using DataFrames.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

BIG DATA - HADOOP & PYSPARK - INSTRUCTOR-LED LIVE ONLINE

This structured, classroom-like course provides an interactive experience with a live instructor, enabling personalized, hands-on learning. Its biggest advantages are immediate feedback and student networking, leading to an immersive learning experience within the stipulated timeframe and with limited distractions. This career-centric course will guide you through the basics of Hadoop with detailed hands-on work in HDFS, MapReduce, Hive, PySpark, NoSQL (MongoDB), an introduction to streaming with Kafka, the Apache Airflow scheduling tool, and cloud data engineering on Microsoft Azure, providing realistic, penetrating, and industry-compliant knowledge. So dive in and become indispensable in the Big Data world!
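
To give a feel for the Apache Airflow module covered later in the syllabus, here is a minimal, purely illustrative DAG; the DAG id, schedule, and task body are assumptions rather than course material, and Airflow 2.x is assumed:

    # Purely illustrative Apache Airflow DAG (Airflow 2.x assumed).
    # The DAG id, schedule, and task body are placeholders, not course content.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        print("pretend we pulled data from a source and loaded it into HDFS")

    with DAG(
        dag_id="daily_ingest_demo",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",   # newer Airflow releases also accept `schedule`
        catchup=False,
    ) as dag:
        ingest = PythonOperator(
            task_id="extract_and_load",
            python_callable=extract_and_load,
        )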

This course includes:

☞ 70+ hours of Live Classes

☞ Right blend of concepts and hands-on

☞ Hands-on examples to complement the concepts

☞ 100+ self-paced videos to enhance the learning

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Data Introduction

  • Data Sources
  • Data Storage & Processing
  • Data VS Information

Introduction to Big Data & Hadoop Ecosystem

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model

HDFS - Hadoop Distributed File System

  • What is HDFS?
  • Nodes in HDFS
  • Storing Files in HDFS
  • Challenges in Distributed Systems
  • Managing Data Node and Name Node Failures
  • HDFS Commands

Map Reduce

  • Map Reduce Introduction
  • Map reduce Flow Examples
  • Map Reduce Implementation
  • Mappers and Reducers
  • Map Reduce - Shuffle, Sort and Partitions
  • Map Reduce - Combiners

YARN

  • What is YARN ?
  • YARN Resource Manager and Node Manager
  • YARN Application Master Process
  • Scheduling Policies

Hive

  • Transactional Processing vs Analytical Processing
  • Hive as an open source Data Warehouse
  • Hive Architecture
  • Hive Data Types
  • Different Types of Table in Hive
  • Loading Data in Hive
  • Complex Data Types in Hive
  • Denormalized Storage in Hive
  • Optimization of Queries in Hive - Partitioning and Bucketing

Programming with Python

  • Introduction to Python
  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures

Introduction to Spark

  • Why Spark?
  • Advantages of Spark
  • What is Spark?
  • Components of Spark
  • History of Spark

Overview of the Spark

  • Spark Architecture
  • Spark Session
  • Spark Language APIs
  • Data Frame and Partitions
  • Transformations & Actions

Spark Structured API Overview

  • Structured APIs
  • Schema
  • Spark Types
  • Structured API Execution

Operations on DataFrames

  • Columns and Expressions
  • Working with Records
  • Creating DataFrames
  • Methods to Manipulate Columns
  • DataFrame Transformations Rows and Columns

Working with Different Types of Data

  • Working with Booleans
  • Working with Numerical Data
  • String Manipulations
  • Working with Dates and Time Stamps
  • Tackling Nulls
  • Working with Complex Data
  • User Defined Functions

Aggregations in Spark

  • Simple Aggregation Functions
  • Statistical Aggregation Functions
  • GroupBy - Calculations based on Groups of Data
  • Grouping of Sets - Rollups and Cube
  • Window Functions

Joins in Spark

  • What are Joins ?
  • Aggregation vs Joins
  • Inner Joins
  • Outer Joins
  • Left Semi and Anti joins
  • Spark Join Challenges
  • Spark Join Communication Strategies

Spark SQL Introduction

  • Spark Managed Tables
  • Creating Tables
  • Inserting data into Tables
  • Running Spark SQL Queries

RDDs

  • What are Low-level APIs and when to use them?
  • What are RDDs ?
  • Creating RDDs
  • RDD Transformations and Actions

Distributed Variables

  • What are distributed variables ?
  • Broadcast Variables
  • Accumulators
  • Broadcast and Accumulators Examples

Spark on a cluster

  • Spark Execution Modes
  • Internals of the Spark Application
  • Outside the Spark Application
  • Pipelining and Shuffle Persistence

Monitoring and Debugging

  • Components to Monitor
  • What to Monitor ?
  • The Spark UI
  • Common issues and a high-level approach to fixing them

No-SQL Introduction

  • What led to the No SQL
  • What is No-SQL
  • Characteristics of No-SQL Databases
  • CAP Theorem
  • No-SQL Models
  • Mongo DB

Introduction to Streaming - Kafka

  • What is Kafka
  • Components of Kafka
  • Kafka Core APIs
  • Kafka Example

Scheduling Tool - Apache Airflow

  • Airflow Introduction
  • Airflow Hands-On

Cloud Data Engineering

  • Introduction to Cloud
  • Introduction to ADF
  • Introduction to Datalakes
  • Introduction to Azure Synapse

FREQUENTLY ASKED QUESTIONS

Who should take this Big Data Course?

The market for Big Data analytics is growing across the world, and this strong growth pattern translates into a great opportunity for all IT professionals. This Big Data - Hadoop & PySpark course can be pursued by professionals as well as freshers.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will understand the Big Data ecosystem and its important components such as HDFS, Hive & PySpark. You will be able to create tables, work with complex types, and apply partitioning and bucketing in Hive. You will know the architecture and the different components of Spark, and you will be able to perform data engineering and data analysis tasks using DataFrames in Spark, along with NoSQL databases (MongoDB), Apache Airflow, and cloud data engineering.

What are the prerequisites for the Big Data - Hadoop & PySpark course?

There are no prerequisites as such for the Big Data - Hadoop & PySpark course. However, prior knowledge of RDBMS and SQL will be helpful, though it is not mandatory.

Why should you go for a Big Data - Hadoop & PySpark online course?

Big Data is the fastest-growing and most promising technology for handling large volumes of data for data analytics. This Big Data Hadoop course will help you get up and running with some of the most in-demand professional skills.

BIG DATA - HADOOP, PYSPARK & MICROSOFT AZURE- INSTRUCTOR-LED LIVE ONLINE

A comprehensive guide through the fundamentals of Hadoop, where participants immerse themselves in practical activities involving the Hadoop Distributed File System (HDFS), PySpark, and NoSQL databases such as MongoDB. The intricacies of Airflow and cloud data engineering, specifically with Microsoft Azure's ADF, Synapse, Databricks, and Data Lake Storage, are thoroughly examined, providing a realistic and profound knowledge foundation. Embark on this transformative journey and position yourself as an indispensable professional in the dynamic realm of Big Data!

This course includes:

☞ 80+ hours of Live Classes

☞ Right blend of concepts and hands-on

☞ Hands-on examples to complement the concepts

☞ 100+ self-paced videos to enhance the learning

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Data Introduction

  • Data Sources
  • Data Storage & Processing
  • Data VS Information

Introduction to Big Data & Hadoop Ecosystem

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model

Programming with Python

  • Introduction to Python
  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures

In-Depth Exploration of Spark

  • Why Spark?
  • Advantages of Spark
  • What is Spark?
  • Components of Spark
  • History of Spark
  • Spark Architecture
  • Spark Session
  • Spark Language APIs
  • Data Frame and Partitions
  • Transformations & Actions
  • Structured APIs
  • Schema
  • Spark Types
  • Structured API Execution
  • Columns and Expressions
  • Working with Records
  • Creating DataFrames
  • Methods to Manipulate Columns
  • DataFrame Transformations Rows and Columns
  • Working with Booleans
  • Working with Numerical Data
  • String Manipulations
  • Working with Dates and Time Stamps
  • Tackling Nulls
  • Working with Complex Data
  • User Defined Functions
  • Simple Aggregation Functions
  • Statistical Aggregation Functions
  • GroupBy - Calculations based on Groups of Data
  • Grouping of Sets - Rollups and Cube
  • Window Functions
  • What are Joins ?
  • Aggregation vs Joins
  • Inner Joins
  • Outer Joins
  • Left Semi and Anti joins
  • Spark Join Challenges
  • Spark Join Communication Strategies
  • What are Low-level APIs and when to use them?
  • What are RDDs ?
  • Creating RDDs
  • RDD Transformations and Actions
  • What are distributed variables ?
  • Broadcast Variables
  • Accumulators
  • Broadcast and Accumulators Examples
  • Spark Execution Modes
  • Internals of the Spark Application
  • Outside the Spark Application
  • Pipelining and Shuffle Persistence
  • Components to Monitor
  • What to Monitor ?
  • The Spark UI
  • Common issues and a high-level approach to fixing them

Introduction to No-SQL Databases

  • What led to the No SQL
  • What is No-SQL
  • Characteristics of No-SQL Databases
  • CAP Theorem
  • No-SQL Models
  • SQL vs NoSQL

Introduction to MongoDB

  • Introduction to MongoDB
  • CRUD Operations
  • Indexing
  • Replication

Introduction to Scheduling Tool - Apache Airflow

  • Airflow Introduction
  • Airflow Basic Architecture
  • Airflow Task Flow API
  • Databases and Executors

Introduction to Cloud Data Computing

  • Introduction to Cloud
  • Deployment Model of Cloud Services
  • Public, Private and Hybrid Cloud Models
  • Cloud Service Models- PaaS,SaaS,IaaS
  • Introduction to Microsoft Azure
  • Services in Azure

Unveiling Azure Data Lake Storage

  • Creating a Storage Account
  • Working with Containers and Blobs
  • Types of Blobs
  • Introduction to Azure Data Lake Storage Gen2
  • Enable Azure Data Lake Storage Gen2 in Azure Storage
  • Azure Data Lake Store vs Azure Blob Storage
  • Understand the stages for processing Big Data
  • Use Azure Data Lake Storage Gen2 in data analytics workloads

An In-Depth Introduction to Azure Data Factory (ADF)

  • Storage Service and Account
  • Azure Key Vault
  • What is Data Factory?
  • Data Factory Key Components
  • Pipeline and Activity
  • Linked Service on Data Set
  • Integration Runtime Provision Required Azure Resources
  • Create Resource Group
  • Create Storage Account
  • Hands-On Session using Different Scenarios on ADF

Navigating Azure Databricks

  • Introduction
  • Get started with Azure Databricks
  • Identify Azure Databricks workloads
  • Understand Key Concepts
  • Get to know Spark
  • Create a Spark cluster
  • Use Spark in Notebooks
  • Use Spark to work with data files
  • Visualize the data
  • Get started with Delta Lake
  • Create Delta Lake tables
  • Create and Query Catalog Tables
  • Use Delta Lake for streaming data
  • Get started with SQL Warehouse
  • Create databases and tables
  • Create queries and dashboards
  • Understand Azure Databricks notebooks and pipelines
  • Create a linked service for Azure Databricks
  • Use a Notebook activity in a pipeline
  • Use parameters in a notebook

Exploring Azure Synapse: A Comprehensive Introduction

  • Introduction
  • Design a data warehouse schema
  • Create data warehouse tables
  • Load data warehouse tables
  • Query a data warehouse
  • Load staging tables
  • Load dimension tables
  • Load slowly changing dimensions
  • Load fact tables
  • Perform post load optimization
  • Scale compute resources in Azure Synapse Analytics
  • Pause compute in Azure Synapse Analytics
  • Manage workloads in Azure Synapse Analytics
  • Introduction to Azure Synapse Analytics
  • What is Azure Synapse Analytics
  • How Azure Synapse Analytics works
  • When to use Azure Synapse Analytics
  • Query files using a serverless SQL Pool
  • Understand pipeline in Azure Synapse Studio
  • Define Dataflows
  • Run a pipeline
  • Understand Synapse Notebooks and Pipelines
  • Use a Synapse notebook activity in a pipeline
  • Use parameters in a notebook

FREQUENTLY ASKED QUESTIONS

Who should take this Big Data Course?

The market for Big Data analytics is growing across the world, and this strong growth pattern translates into a great opportunity for all IT professionals. This Big Data - Hadoop, PySpark & Microsoft Azure course can be pursued by professionals as well as freshers.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will understand the Big Data ecosystem and its important components such as HDFS & PySpark. You will know the architecture and the different components of Spark, and you will be able to perform data engineering and data analysis tasks using DataFrames in Spark, along with NoSQL databases (MongoDB), Apache Airflow, and cloud data engineering.
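
As a small illustration of the MongoDB CRUD work mentioned above, the following pymongo sketch shows the basic operations; the connection string, database, and collection names are placeholders, and a locally running MongoDB instance is assumed:

    # pymongo sketch of basic CRUD operations; all names are placeholders
    # and a locally running MongoDB instance is assumed.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    customers = client["demo_db"]["customers"]

    customers.insert_one({"customer_id": "c1", "name": "Asha", "city": "Bengaluru"})  # create
    print(customers.find_one({"customer_id": "c1"}))                                  # read
    customers.update_one({"customer_id": "c1"}, {"$set": {"city": "Pune"}})           # update
    customers.delete_one({"customer_id": "c1"})                                       # delete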

What are the prerequisites for the Big Data - Hadoop, PySpark & Microsoft Azure course?

There are no prerequisites as such for the Big Data - Hadoop, PySpark & Microsoft Azure course. However, prior knowledge of RDBMS and SQL will be helpful, though it is not mandatory.

Why should you go for a Big Data - Hadoop, PySpark & Microsoft Azure online course?

Big Data is the fastest-growing and most promising technology for handling large volumes of data for data analytics. This Big Data Hadoop course will help you get up and running with some of the most in-demand professional skills.

BIG DATA - HADOOP & PYSPARK FOR COLLEGE STUDENTS

The Big Data - Hadoop & PySpark Course covers everything from the very basics of Big Data to the level needed to work on live projects. The course is a must for anyone in the IT industry and for prospective Big Data experts. It offers the right blend of Big Data concepts and detailed hands-on work with HDFS, MapReduce, Hive, Sqoop, NoSQL, Kafka, and PySpark.

This course includes:

☞ 30+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Data Introduction

  • Data Sources
  • Data Storage & Processing
  • Data VS Information
  • Assessment

Introduction to Big Data & Hadoop Ecosystem

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model
  • Assessment

HDFS - Hadoop Distributed File System

  • What is HDFS?
  • Nodes in HDFS
  • Storing Files in HDFS
  • Challenges in Distributed Systems
  • Managing Data Node and Name Node Failures
  • HDFS Commands
  • Assessment

Map Reduce

  • Map Reduce Introduction
  • Map reduce Flow Examples
  • Map Reduce Implementation
  • Mappers and Reducers
  • Map Reduce - Shuffle, Sort and Partitions
  • Map Reduce - Combiners
  • Assessment

YARN

  • What is YARN ?
  • YARN Resource Manager and Node Manager
  • YARN Application Master Process
  • Scheduling Policies
  • Assessment

Hive

  • Transactional Processing vs Analytical Processing
  • Hive as an open source Data Warehouse
  • Hive Architecture
  • Hive Data Types
  • Different Types of Table in Hive
  • Loading Data in Hive
  • Complex Data Types in Hive
  • Denormalized Storage in Hive
  • Optimization of Queries in Hive - Partitioning and Bucketing
  • Assessment

No-SQL Introduction

  • What led to the No SQL
  • What is No-SQL
  • Characteristics of No-SQL Databases
  • CAP Theorem
  • No-SQL Models
  • Assessment

SQOOP

  • What is SQOOP ?
  • SQOOP Architecture
  • Why do we need Sqoop ?
  • How Sqoop works ?
  • Sqoop Import
  • Sqoop Export
  • Use of Sqoop
  • Assessment

Introduction to Kafka

  • What is Kafka
  • Components of Kafka
  • Kafka Core APIs
  • Kafka Example
  • Assessment

Programming with Python

  • Introduction to Python
  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures

Introduction to Spark

  • Why Spark?
  • Advantages of Spark
  • What is Spark?
  • Components of Spark
  • History of Spark
  • Assessment

Overview of the Spark

  • Spark Architecture
  • Spark Session
  • Spark Language APIs
  • Data Frame and Partitions
  • Transformations & Actions
  • Assessment

Spark Structured API Overview

  • Structured APIs
  • Schema
  • Spark Types
  • Structured API Execution
  • Assessment

Operations on DataFrames

  • Columns and Expressions
  • Working with Records
  • Creating DataFrames
  • Methods to Manipulate Columns
  • DataFrame Transformations Rows and Columns
  • Assessment

Working with Different Types of Data

  • Working with Booleans
  • Working with Numerical Data
  • String Manipulations
  • Working with Dates and Time Stamps
  • Tackling Nulls
  • Working with Complex Data
  • User Defined Functions
  • Assessment

Aggregations in Spark

  • Simple Aggregation Functions
  • Statistical Aggregation Functions
  • GroupBy - Calculations based on Groups of Data
  • Grouping of Sets - Rollups and Cube
  • Window Functions
  • Assessment

Joins in Spark

  • What are Joins ?
  • Aggregation vs Joins
  • Inner Joins
  • Outer Joins
  • Left Semi and Anti joins
  • Spark Join Challenges
  • Spark Join Communication Strategies
  • Assessment

Spark SQL Introduction

  • Spark Managed Tables
  • Creating Tables
  • Inserting data into Tables
  • Running Spark SQL Queries
  • Assessment

RDDs

  • What are Low-level APIs and when to use them?
  • What are RDDs ?
  • Creating RDDs
  • RDD Transformations and Actions
  • Assessment

Distributed Variables

  • What are distributed variables ?
  • Broadcast Variables
  • Accumulators
  • Broadcast and Accumulators Examples
  • Assessment

Spark on a cluster

  • Spark Execution Modes
  • Internals of the Spark Application
  • Outside the Spark Application
  • Pipelining and Shuffle Persistence
  • Assessment

Monitoring and Debugging

  • Components to Monitor
  • What to Monitor ?
  • The Spark UI
  • Common issues and a high-level approach to fixing them
  • Assessment

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will understand the Big Data ecosystem and its important components such as HDFS, Hive & PySpark. You will be able to create tables, work with complex types, and apply partitioning and bucketing in Hive. You will know the architecture and the different components of Spark, and you will be able to perform data engineering and data analysis tasks using DataFrames in Spark.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

BASICS OF OPERATING SYSTEMS & LINUX COMMANDS

The course is for anyone in the IT industry as well as students. It is the right blend of computer basics, operating system concepts, and Linux commands, with hands-on practice. Once you go through the course, you will gain intermediate-level knowledge of Linux.

This course includes:

☞ 3+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Computer Basics

  • What is Computer?
  • History of Computer
  • Components of Computer
  • Functionalities of a Computer
  • Software Components
  • Assessment

Operating System

  • What is Operating System
  • Types of OS
  • Assessment

Linux

  • Directory commands
  • File Commands
  • Listing Files
  • Assessment
  • Copy and Move Commands
  • Linux Commands - More
  • Linux Commands - Cat command
  • Assessment
  • Regular Expressions with GREP
  • GREP Hands-on
  • Comparison Commands - Diff
  • Comparison Commands - Cmp
  • Assessment
  • Linux Commands with Pipes
  • Linux Commands - Man
  • Process Monitoring
  • Process Monitoring Hands-on
  • Assessment
  • File Permissions Part 1
  • File Permissions Part 2
  • Networking Command
  • Admin Command
  • Compression Commands
  • Assessment

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

You will gain knowledge of computer basics and operating systems, along with hands-on practice with Linux commands.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

INTRODUCTION TO BIG DATA AND HADOOP ECOSYSTEM

This course covers the basics of Big Data and the Hadoop Ecosystem. The course is a must for anyone in the IT industry and for prospective Big Data experts. It covers an introduction to Big Data, the fundamentals of Big Data, the foundations of Big Data and the Hadoop Ecosystem, the Hadoop Distributed File System, and MapReduce.

This course includes:

☞ Live session

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

INTRODUCTION TO BIG DATA

  • Introduction

THE FUNDAMENTALS

  • Data VS Information
  • Data Storage and Processing
  • Data Sources
  • Big Data Introduction
  • Fundamentals Assessment

THE FOUNDATIONS OF BIG DATA

  • Emergence of the Big Data
  • Basic Terminologies
  • Central Theme of Big Data
  • Requirements of a Programming Model
  • Understand Distributed Processing through a Story
  • Foundations Assessment 2

ENVIRONMENT AND INSTALLATION

  • Oracle Virtual Machine Installation
  • Google Cloud Platform Setup
  • How to install Ubuntu operating system on Virtual Box

HADOOP ECOSYSTEM

  • Introduction to Hadoop Ecosystem
  • Hadoop Ecosystem Assessment 1

HADOOP DISTRIBUTED FILE SYSTEM

  • What is HDFS?
  • Nodes in HDFS
  • HDFS Assessment 1
  • Storing Files in HDFS
  • HDFS Assessment 2
  • Challenges in Distributed Systems
  • Managing the Data Node Failure
  • HDFS Assessment 3
  • Managing Name Node Failure
  • HDFS Commands Part 1
  • HDFS Commands Part 2
  • HDFS Assessment 4

MAP REDUCE

  • Introduction to Map Reduce
  • Map Reduce Flow Example 1
  • Map Reduce Example 2 - User View Count
  • Map Reduce Mappers and Reducers
  • MapReduce Assessment 1
  • Shuffle-Sort-Partitions
  • Map Reduce Combiners
  • Combiner with Caution
  • Map Reduce Wrap Up
  • MapReduce Assessment 2

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

You will be able to relate to the Big Data ecosystem and its components, and you will be able to work on HDFS.
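
The HDFS work is normally done from a terminal; as a rough sketch (the paths are made up, and a running Hadoop installation with the hdfs CLI on the PATH is assumed), the same commands can also be driven from Python:

    # Rough sketch of the HDFS shell commands covered in the course, driven from
    # Python. Paths are placeholders; a running Hadoop setup with the `hdfs`
    # command on the PATH is assumed.
    import subprocess

    def hdfs(*args):
        """Run an `hdfs dfs` command and return its standard output."""
        result = subprocess.run(["hdfs", "dfs", *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

    print(hdfs("-mkdir", "-p", "/user/demo/input"))              # create a directory
    print(hdfs("-put", "local_file.txt", "/user/demo/input/"))   # upload a local file
    print(hdfs("-ls", "/user/demo/input"))                       # list its contents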

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

APACHE HIVE

The course covers the basics of Apache Hive. The course is a must for anyone in the IT industry who needs to upgrade their Big Data knowledge. It covers an introduction to Hive, Hive architecture, data types in Hive, user-defined functions, and partitioning and bucketing.

This course includes:

☞ Live session

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

INTRODUCTION

ENVIRONMENT AND INSTALLATIONS

  • Oracle Virtual Machine Installation
  • Google Cloud Platform Setup
  • How to install Ubuntu operating system on Virtual Box

HIVE

  • Transactional and Analytical Processing
  • What is Data warehouse?
  • Introducing Hive
  • Hive Assessment 1
  • Hive Hands-on 1
  • Hive Hands-on 2
  • Hive Hands-on Assessment 1
  • Hive vs RDBMS
  • Hive Architecture
  • Hive Metastore
  • Hive Assessment 2
  • Hive Hands-on 3
  • Hive Hands-on Assessment 2
  • Primitive Datatypes in Hive
  • How Storage Works in Hive
  • Different Types of Tables in Hive
  • Hive Assessment 3
  • Hive Hands-on 4
  • Hive Hands-on 5
  • Hive Hands-on Assessment 3
  • Inserting Data into Hive Tables
  • Hive Complex Datatypes
  • Hive User Defined Functions
  • Hive Assessment 4
  • Hive Hands-on 6 - Inserting Data into Tables
  • Hive Hands-on 7 - Complex Datatypes
  • Hive Hands-on 8 - Retrieving Elements from Complex Datatype Columns and Explode
  • Hive Hands-on Assessment 4
  • Denormalized Storage in Hive
  • Hive Query Optimization - Theory
  • Hive Partitioning & Bucketing Theory - Part 1
  • Hive Partitioning & Bucketing Theory - Part 2
  • Hive Hands-on Assessment 5
  • Hive Hands-on 9 - Partitioning Part 1
  • Hive Hands-on 10 - Partitioning Part 2
  • Hive Hands-on 11 - Bucketing
  • Hive Hands-on Assessment 5

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

You will be able to create tables, work with complex types, and implement query optimization techniques such as partitioning and bucketing.
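
For a taste of the complex-type and explode work, here is a short sketch; the course itself works in the Hive shell, but the same HiveQL is run through PySpark below only to keep all examples in one language, and the table and data are invented:

    # HiveQL for a complex (array) column and explode, run through PySpark
    # purely for illustration; the table and data are invented.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-complex-types-demo")
             .enableHiveSupport()      # assumes a Hive metastore is configured
             .getOrCreate())

    spark.sql("""
        CREATE TABLE IF NOT EXISTS employees (
            name   STRING,
            skills ARRAY<STRING>
        )
    """)
    spark.sql("INSERT INTO employees VALUES ('Asha', array('hive', 'spark'))")

    # Flatten the array column with explode, as in the hands-on sessions
    spark.sql("""
        SELECT name, skill
        FROM employees
        LATERAL VIEW explode(skills) s AS skill
    """).show()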

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

RDBMS - SQL

In this course you will learn the fundamentals of RDBMS and data management with SQL. Learn everything you need to construct queries with the most popular data manipulation language - SQL. The course is the right blend of concepts and hands-on practice, taking you a step towards becoming a pro in SQL.
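
As a small taste of what constructing queries means in practice, here is a self-contained sketch; it uses Python's built-in sqlite3 module only to provide a runnable playground, and the table and data are invented:

    # A tiny runnable SQL playground using Python's sqlite3 module.
    # The table and data are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    cur.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT, salary REAL)")
    cur.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)", [
        (1, "Asha", "IT", 75000),
        (2, "Ravi", "HR", 50000),
        (3, "Meena", "IT", 82000),
    ])

    # Aggregation with GROUP BY and a HAVING clause
    cur.execute("""
        SELECT dept, COUNT(*) AS headcount, AVG(salary) AS avg_salary
        FROM employees
        GROUP BY dept
        HAVING COUNT(*) > 1
    """)
    print(cur.fetchall())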

This course includes:

☞ 10+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Introduction

  • Data vs Information
  • Why do we need Data?
  • Different ways to store Data
  • Systems to store Data

RDBMS

  • History of RDBMS
  • What is RDBMS?
  • What is Database?
  • Assessment
  • Database Components
  • Assessment
  • Data Types
  • Assessment
  • Database Objects
  • Assessment
  • Database Design
  • Assessment
  • Keys
  • Assessment
  • Constraints
  • Assessment
  • Normalization
  • Assessment
  • Transactions
  • Assessment

SQL

  • Introduction to SQL
  • Assessment
  • Types of SQL
  • DDL Commands
  • Assessment
  • DML Commands
  • Assessment
  • DCL Commands
  • Assessment
  • TCL Commands
  • DQL Commands
  • Assessment
  • SQL Statements
  • Assessment
  • Alias
  • Assessment
  • Distinct
  • Assessment
  • Operators and Types
  • Assessment
  • Aggregation
  • Assessment
  • Clauses
  • Assessment
  • Null Values
  • Assessment
  • Sub Queries
  • Assessment
  • Join and Types
  • Assessment
  • Set Operators and Types
  • Assessment
  • String Functions
  • Assessment

FREQUENTLY ASKED QUESTIONS

Do I need prior programming experience?

No prior programming experience is necessary. We will take you step by step through everything there is to know about SQL.

What if I have questions during the course?

In this course you will never be alone. Our team will be with you every step of the way, ready to answer your questions through email.

Will I have lots of practice?

This course comes packed with plenty of exercises so you can practice effectively and actually use MySQL.

INTRODUCTION TO DATAWAREHOUSE

A data warehouse is a database used for data analysis and reporting; it is a central repository for all the data needed for reporting and analysis. In this course you will learn all about data warehouses.
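
To make the dimension/fact table idea covered later in the syllabus concrete, here is a minimal sketch; sqlite3 stands in for the warehouse and the schema and data are invented:

    # Minimal fact/dimension (star schema) illustration using sqlite3.
    # The schema and data are invented for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    cur.execute("CREATE TABLE dim_product (product_id INTEGER, product_name TEXT)")
    cur.execute("CREATE TABLE fact_sales (sale_id INTEGER, product_id INTEGER, amount REAL)")
    cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "Laptop"), (2, "Phone")])
    cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                    [(10, 1, 900.0), (11, 2, 400.0), (12, 1, 950.0)])

    # A typical reporting query: join the fact table to a dimension and aggregate
    cur.execute("""
        SELECT p.product_name, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_product p ON p.product_id = f.product_id
        GROUP BY p.product_name
    """)
    print(cur.fetchall())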

This course includes:

☞ 2+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Introduction

  • Data vs Information
  • Why do we need Data?
  • Data Storage and Processing
  • Transactional vs Analytical Processing
  • What is Data warehouse?
  • History of Datawarehouse
  • OLTP & OLAP

Introduction to ETL

  • Extraction
  • Assessment
  • Transformation
  • Types of Transformation
  • Load Data
  • Assessment
  • Data Warehouse Architecture Introduction
  • Data Warehouse Architectures based on layers
  • Data Warehouse Architecture based on Tiers
  • Assessment
  • What is Data Mart?
  • Storing Data in Data Warehouse
  • Dimension Tables and Fact Tables
  • Dimension and Fact table with Example
  • Assessment

Data Warehouse Schema

  • Intro To DWH Schema
  • Star Schema
  • Snowflake Schema
  • Fact Constellation Schema
  • SCD Introduction
  • SCD Type 0 and 1
  • SCD Type 2
  • SCD Type 3
  • SCD Type 4
  • SCD Type 6
  • ETL Tools
  • Scheduling Tools
  • Reporting tools

FREQUENTLY ASKED QUESTIONS

Do I need prior programming experience?

No prior programming experience is necessary.

What if I have questions during the course?

In this course you will never be alone. Our team will be with you every step of the way, ready to answer your questions through email.

Will I have lots of practice?

This course comes packed with plenty of exercises so you can practice effectively and actually use MySQL.

DATA WAREHOUSE / ETL TESTING

Data Warehouse/ETL Testing takes you step by step through computer basics, operating systems, Linux and its commands, RDBMS, SQL in depth, an introduction to ETL testing, the difference between OLAP and OLTP, data warehousing concepts and their workflow, the difference between data warehouse testing and conventional testing, and ETL testing itself.

As a part of the ETL Testing course, you will be exposed to a real-life industry project which gives you an in-depth understanding of data warehousing and its concepts. The ETL Testing course will help you become a successful ETL testing expert.
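
One of the simplest checks covered in ETL testing is a record-count (data completeness) comparison between source and target; the sketch below uses sqlite3 to stand in for both databases, and the table names and pass/fail rule are illustrative:

    # Record-count (data completeness) check, with sqlite3 standing in for the
    # source and target databases. Table names and the rule are illustrative.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE source_customers (id INTEGER, name TEXT)")
    cur.execute("CREATE TABLE target_customers (id INTEGER, name TEXT)")
    cur.executemany("INSERT INTO source_customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
    cur.executemany("INSERT INTO target_customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])

    source_count = cur.execute("SELECT COUNT(*) FROM source_customers").fetchone()[0]
    target_count = cur.execute("SELECT COUNT(*) FROM target_customers").fetchone()[0]

    # Completeness check: the load should neither drop nor duplicate records
    assert source_count == target_count, f"count mismatch: {source_count} vs {target_count}"
    print("record count check passed:", source_count, "rows")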

This course includes:

☞ 20+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Computer Basics

  • What is Computer?
  • History of Computer
  • Components of Computer
  • Functionalities of a Computer
  • Software Components
  • Assessment

Operating System

  • What is Operating System
  • Types of OS
  • Assessment

Linux

  • Directory commands
  • File Commands
  • Listing Files
  • Assessment
  • Copy and Move Commands
  • Linux Commands - More
  • Linux Commands - Cat command
  • Assessment
  • Regular Expressions with GREP
  • GREP Hands-on
  • Comparison Commands - Diff
  • Comparison Commands - Cmp
  • Assessment
  • Linux Commands with Pipes
  • Linux Commands - Man
  • Process Monitoring
  • Process Monitoring Hands-on
  • Assessment
  • File Permissions Part 1
  • File Permissions Part 2
  • Networking Command
  • Admin Command
  • Compression Commands
  • Assessment

RDBMS

  • History of RDBMS
  • What is RDBMS?
  • Assessment
  • What is Database?
  • Assessment
  • Database Components
  • Assessment
  • Data Types
  • Assessment
  • Database Objects
  • Assessment
  • Database Design
  • Assessment
  • Keys
  • Assessment
  • Constraints
  • Assessment
  • Normalization
  • Assessment
  • Transactions
  • Assessment

SQL

  • Introduction to SQL
  • Types of SQL
  • Assessment
  • DDL Commands
  • Assessment
  • DML Commands
  • Assessment
  • DCL Commands
  • Assessment
  • TCL Commands
  • DQL Commands
  • Assessment
  • SQL Statements
  • Assessment
  • Alias
  • Assessment
  • Distinct
  • Assessment
  • Operators and Types
  • Assessment
  • Aggregation
  • Assessment
  • Clauses
  • Assessment
  • Null Values
  • Assessment
  • Sub Queries
  • Assessment
  • Join and Types
  • Assessment
  • Set Operators and Types
  • Assessment
  • String Functions
  • Assessment

Introduction to ETL

  • Extraction
  • Assessment
  • Transformation
  • Types of Transformation
  • Load Data
  • Assessment
  • Data Warehouse Architecture Introduction
  • Data Warehouse Architectures based on layers
  • Data Warehouse Architecture based on Tiers
  • What is Data Mart?
  • Assessment
  • Storing Data in Data Warehouse
  • Dimension Tables and Fact Tables
  • Dimension and Fact table with Example
  • Assessment

Data Warehouse Schema

  • Intro To DWH Schema
  • Star Schema
  • Snowflake Schema
  • Fact Constellation Schema
  • SCD Introduction
  • SCD Type 0 and 1
  • SCD Type 2
  • SCD Type 3
  • SCD Type 4
  • SCD Type 6
  • ETL Tools
  • Scheduling Tools
  • Reporting tools

Testing

  • What is Quality
  • Good Software
  • What is Software Testing?
  • Role of Software Testing
  • Introduction to SDLC
  • SDLC Phases
  • What is STLC?
  • STLC Phases
  • Expectations from Software
  • Categories of the Requirements
  • Non Functional Testing
  • Functional Testing
  • Test Case
  • Introduction to Test case Design Techniques
  • High Level Test Case Design Technique
  • Lower Level Test Case Design Technique
  • Software Testing Phases
  • Unit Testing
  • Functional Testing
  • Integration Testing
  • System Testing
  • User Acceptance Testing
  • Regression Testing
  • What is Defect?
  • Defect Life Cycle
  • Defect Parameters

ETL Testing

  • What is Datawarehouse?
  • ETL Intro
  • What is ETL Testing?
  • Business Requirements in Datawarehouse ETL Mapping
  • ETL testing with example
  • DataWarehouse Testing Life Cycle
  • Major Testing Phases
  • Metadata check
  • Meta Data Validation Hands-on
  • Data Mismatch Check
  • Data Completeness Validation with Record Count
  • Record Count Check Handson
  • Data Completeness Validation with Column Data Profiling
  • Column Data Profiling Validation Hands-on
  • Data Completeness Validation with Data Duplicate Check
  • Duplicate Validation Hands-on
  • Data Accuracy
  • Data Accuracy Check Hands-on1
  • Data Consistency
  • Data Transformations Validation
  • Data Transformations Validation - Example

ETL Testing Project

  • Creating Tables
  • Loading Records
  • Testing
  • ETL Testing Queries

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will be able to work on SQL queries, write test cases, and work on an ETL testing project.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

PROJECT ON DATA WAREHOUSE/ETL TESTING

In this course you will learn about real-time project scenarios in Data Warehouse/ETL testing. As a part of the course, you will be exposed to real-life industry scenarios which give you an in-depth understanding of data warehousing/ETL testing and its concepts. The Project on Data Warehouse/ETL Testing course will help you become a successful ETL testing expert.

This course includes:

☞ 2+ hours of on-demand videos

☞ Live clarification sessions every fortnight

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Interview questions to help you prepare

Syllabus

Introduction

  • Welcome

Project on SQL Server

  • Introduction to Project
  • Prerequisites of the Project
  • What is SQL Server?
  • SQL Server Branch Table Preparation - Source and Targets
  • SQL Server Branch Table with User Interface
  • SQL Server Loading all the Tables Preparation
  • SQL Server Branch Table Testing Approach
  • SQL Server Meta Data Check Branch Table
  • SQL Server Branch Table Count Null Duplicate Hands-On
  • SQL Server Branch Table Transformation Validation Hands-on
  • SQL Server Hands-on Branch Table Columns with As-Is Mapping
  • SQL Server Customer Table Metadata Check Hands-On
  • SQL Server Customer Table Count Null and Duplicate Check Hands-On
  • SQL Server Customer Table Transformation1 Hands-On
  • SQL Server Customer Table Transformation2 Hands-On
  • SQL Server Transformation Validation for Customer table
  • SQL Server Customer Table Validation for columns without transformation
  • SQL Server Accounts Table Meta Data Check
  • SQL Server Count Null Duplicate Check Hands-On
  • SQL Server Accounts Table Transformation1
  • SQL Server Accounts Table Transformation2
  • SQL Server Accounts Table Transformation3
  • SQL Server Accounts Validation for the Non transformed columns
  • SQL Server Payments Table Transformation Validation for both the columns
  • SQL Server Payments table validation for non transformed columns

Project on MySQL

  • Introduction to Project
  • Prerequisites of the Project
  • Branch Table Preparation Source and Targets
  • Hands-On Preparations
  • Branch Table Testing Approach
  • Branch Table Meta Data Check Hands-On
  • Branch Table Count Null Duplicate Hands-On
  • Branch Table Transformation Validation Hands-On
  • Branch Table Validating columns without Transformation Hands-On
  • Customer Table MetaData count duplicate Hands-On
  • Customer Table Hands-On - Count Null and Duplicate Check
  • Customer table Transformation validation Hands-On1
  • Customer Table Transformation Hands-On Final
  • Transformation Validation for Customer table Trans3
  • Accounts Table MetaData Check Hands-On1
  • Accounts Table Count Null Duplicate Check Hands-On2
  • Accounts Table Transformation Validation Hands-On3
  • Accounts Table Transformation Validation Left pad Hands-on4
  • Payments Table Hands-On Validation
  • Payment table count Null duplicate check Hands-On
  • Payments Table Transformation validation Hands-On Final

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What will I be able to do upon completing the Training?

On successfully completing the course, you will be able to work on SQL queries, write test cases, and work on an ETL testing project.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

INTRODUCTION TO PYTHON PROGRAMMING

Python is a programming language with many features that are useful both for experienced developers and for those just starting out. It has a simple syntax that makes it easy to learn, and powerful libraries that allow for robust programming. Our Introduction to Python Programming course takes you from the very basics to writing your own programs in Python, covering all the foundational topics needed to get started.
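
As a taste of where the course ends up, here is one of the classic beginner programs from the syllabus, printing a Fibonacci series, written with only the basics the course covers:

    # Print the first n Fibonacci numbers using only variables, loops, and functions.
    def fibonacci(n):
        """Return the first n Fibonacci numbers as a list."""
        series = []
        a, b = 0, 1
        for _ in range(n):
            series.append(a)
            a, b = b, a + b
        return series

    print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]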

This course includes:

☞ 30+ on-demand videos

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Introduction

  • Welcome to the Course

Environment and Installation

  • Oracle Virtual Machine Installation
  • Ubuntu Operating System Installation in Virtual Box
  • Installation of Jupyter Notebook on Ubuntu

Introduction to Python

  • Executing Python code
  • Syntax: Comments, Indentation
  • Variables
  • Datatypes
  • Operators
  • Control Flow structures
  • Loops Theory and Hands-On
  • Functions Theory and Hands-On

Python Programs Made Easy

  • Python program for Swapping the Numbers
  • Python program to print Fibonacci Series
  • Python program to calculate the Factorial of a Number
  • Python program to find a Prime Number
  • Python program to find an Anagram
  • Python program to find a Palindrome

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.

AZURE CLOUD DATA ENGINEERING RECORDED SESSION

The Azure Cloud Data Engineering course covers the topics from the very basics to the level needed to work on live projects. The course is a must for anyone in the IT industry and for prospective cloud data experts. It is the right blend of Azure concepts and detailed hands-on work with cloud computing, Data Lake Storage, Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
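
As a hedged sketch of the hands-on side, the snippet below reads data from Azure Data Lake Storage Gen2 with PySpark (for example from an Azure Databricks notebook); the storage account, container, path, and the country column are placeholders, and authentication is assumed to be configured already:

    # Hedged sketch: reading Parquet data from Azure Data Lake Storage Gen2 with
    # PySpark. The account, container, path, and column names are placeholders,
    # and authentication (key, service principal, or managed identity) is assumed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adls-read-demo").getOrCreate()

    path = "abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2024/"
    sales = spark.read.parquet(path)

    sales.groupBy("country").count().show()   # a quick sanity-check aggregation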

This course includes:

☞ 30+ hours of live class recordings

☞ Right blend of concepts and hands-on

☞ Certificate of completion

☞ Built-in assessments to test your knowledge

Syllabus

Introduction

  • Introduction to Cloud Computing
  • Introduction to Cloud Computing - Compute power
  • Introduction to Cloud Computing - Deployment Models
  • Cloud categories

Data Lake Storage

  • Data Engineering and Data Lake
  • Data Lake Features - ETL &  ELT
  • Azure Data Lake Storage Services

FREQUENTLY ASKED QUESTIONS

What all course material will be provided by the academy?

Participants will have lifetime access to the videos and assessments provided as a part of the course.

How experienced is the trainer?

Our trainers are practitioners from the industry who are passionate about training.

What background knowledge is necessary to do this course?

You should be interested and enthusiastic to learn; apart from that, no prior knowledge is required.

What software is required to do this course?

Our course relies on open-source software tools such as Ubuntu. To install Ubuntu, you need a reasonably capable laptop or desktop computer.
