EXECUTIVE SUMMARY
TECHNICAL SKILLS
Big Data: Hadoop, HDFS, MapReduce, Hive, Impala, Tez, Spark, Sqoop, Distributions
Query Tools: SQL Developer, SQL Navigator, SQL*Plus, T-SQL, SQL
Responsibilities:
Highly knowledgeable in the end-to-end functioning of the Capital Markets Counterparty
Credit Risk Management process.
Worked on the Potential Future Exposure of Collateral Securities and Asset Management
project, using multiple source systems such as ENDUR, CALYPSO, FENICS, GMI OTC and
BROADRIDGE.
Worked on the design, development and delivery of software solutions in the FX
Derivatives group for use by business users.
Worked extensively on Hadoop with the Cloudera distribution
Coordinated with the business and source-system teams to gather requirements
Used Sqoop to import data from RDBMS source systems and Spark for data cleansing,
then loaded the data into Hive staging and base tables; developed permanent connectors
to automate this process (a representative PySpark sketch follows the Environment line below)
Handled different file types such as JSON, XML, flat files and CSV, using appropriate
SerDes or parsing logic to load them into Hive tables
Implemented Hive UDFs and performed tuning using partitioning and bucketing for
better query performance
Analysed the data using Impala queries and Spark to view transaction information
and validate the data
Implemented optimized map joins to combine data from different sources and perform
cleaning operations before applying the algorithms
Coordinated with the Cloudera and admin teams to fix Cloudera update issues
Implemented a Spark job to extract data from RDBMS systems, which reduced the job
processing time
Developed workflows in Oozie to manage and schedule jobs on the Hadoop cluster,
extracting data from sources on a daily, weekly, monthly, quarterly and annual basis
Used Autosys scheduler to schedule the workflows
Analysed production jobs in case of abends and fixed the issues within 30 minutes
Reduced the daily batch cycle time for some systems from 2 hours to under 15 minutes
using the fork-and-join concept in Oozie
Worked on a POC to migrate the existing big data platform (on-premises Cloudera) to
Azure
Exported data from Hive tables to the EDW using Sqoop
Created staging tables and developed workflows to extract data from different source
systems in Hadoop and load it into these tables; the data from the staging tables is
then exported via SFTP to a third-party system to execute data models
Used JIRA and Kanban boards to track tasks, code check-ins, code deployments and
documentation
Worked in an Agile development environment with monthly sprint cycles, dividing and
organizing tasks; participated in daily scrums and other design-related meetings
Participated in Cloudera updates and tested the regions once they were complete
Used SQL Explorer and TOAD to view source data in DB2, Oracle and Netezza
Environment: Hadoop, Cloudera, Hive, Pig, Sqoop, Kafka, Spark, Oozie, Python,
PySpark, UNIX, Shell scripting, RDBMS, Azure, Autosys
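
A minimal PySpark sketch of the Sqoop-to-Hive ingestion pattern described above (files landed by Sqoop, cleansing in Spark, load into Hive staging and base tables). The paths, databases, tables and columns (/data/landing/transactions, stg.transactions, base.transactions, counterparty_id, trade_date) are assumptions for illustration, not the production objects:

    # Illustrative only: paths, databases, tables and columns are assumed names.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("rdbms_to_hive_staging")
             .enableHiveSupport()
             .getOrCreate())
    spark.conf.set("hive.exec.dynamic.partition", "true")
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

    # Read the files Sqoop landed in HDFS.
    raw = spark.read.parquet("/data/landing/transactions")

    # Basic cleansing: trim the business key, drop rows without it,
    # and normalise the trade date.
    clean = (raw
             .withColumn("counterparty_id", F.trim(F.col("counterparty_id")))
             .filter(F.col("counterparty_id").isNotNull())
             .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd")))

    # Load the staging table, then insert into the partitioned base table.
    clean.write.mode("overwrite").saveAsTable("stg.transactions")
    spark.sql("""
        INSERT OVERWRITE TABLE base.transactions PARTITION (trade_date)
        SELECT counterparty_id, notional, currency, trade_date
        FROM stg.transactions
    """)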
Responsibilities:
Implemented ad-hoc Hive queries to handle member hospital data from different data
sources such as Epic and Centricity
Implemented Hive UDFs and performed tuning for better query performance
Analysed the data by running Hive queries and Pig scripts to study patient
and practitioner behaviour
Implemented optimized map joins to combine data from different sources and perform
cleaning operations before applying the algorithms
Used Sqoop to import and export data between Netezza and Oracle databases and
HDFS/Hive
Implemented a POC to introduce Spark transformations (a short sketch follows the Environment line below)
Worked on transforming MapReduce output into HBase via bulk-load operations
Developed workflows in Oozie to manage and schedule jobs on the Hadoop cluster for
generating reports on a nightly, weekly and monthly basis
Implemented test scripts to support test-driven development and continuous integration
Used JIRA and Confluence to update tasks and maintain documentation
Worked in an Agile development environment with two-week sprint cycles, dividing
and organizing tasks; participated in daily scrums and other design-related meetings
Used Sqoop to export the analysed data to a relational database for use by the data
analytics team
Environment: Hadoop, Cloudera, Hive, Sqoop, Flume, HBase, Spark, Oozie, Linux, Python,
UNIX
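
A short PySpark sketch of the kind of transformation introduced in the POC mentioned above; the database, table and column names (stg.member_visits, practitioner_id, visit_ts) are assumptions for illustration:

    # Illustrative only: database, table and column names are assumed.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("spark_transformations_poc")
             .enableHiveSupport()
             .getOrCreate())

    visits = spark.table("stg.member_visits")

    # Re-express an existing Hive/Pig aggregation as Spark transformations:
    # visit counts per practitioner per month.
    monthly = (visits
               .withColumn("visit_month", F.date_trunc("month", F.col("visit_ts")))
               .groupBy("practitioner_id", "visit_month")
               .agg(F.count("*").alias("visit_count")))

    monthly.write.mode("overwrite").saveAsTable("analytics.practitioner_monthly_visits")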
Wells Fargo & Co., Charlotte, NC 11/2014 – 04/2016
Sr. SQL Developer
Responsibilities:
Worked in Capital Markets and gained expertise in Counterparty Credit Risk
Management
Performed mid-year and annual CCAR exercises as per the stress values provided
by the Federal Government
Involved in gathering and analysing requirements and preparing business rules
Coordinated with the front-end design team to provide the necessary stored
procedures and packages, along with insight into the data
Wrote Unix shell scripts to process files on a daily basis: renaming files, extracting
dates from file names, unzipping, and removing characters from the files before loading
them into base tables (a Python rendering of these steps is sketched after the Environment line below)
Involved in the continuous enhancements and fixing of production problems
Partitioned the fact tables and materialized views to enhance the performance
Handled errors using exception handling and added checkpoints extensively for ease
of debugging and for displaying error messages in the application
Used Tortoise SVN to check in all code changes to the repository and generated build
lives using Anthill Pro; later deployed the code onto the Grid and Coherence machines
Extracted data from multiple data sources such as .bin, .dat and XML files; loaded the
data into staging tables using Autosys jobs and Unix Perl/shell scripts
Created new simulation and valuation models for calculating exposure values used
in the credit risk process
Received a Shared Success Award for contributions to the team
Environment: Oracle 9i/10g, SQL Server 2012, UNIX, HP ALM, Autosys, Tibco
DataSynapse, Anthill, Tortoise SVN
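
The daily file-preparation scripts above were written in Unix shell; the following is a hedged Python rendering of the same steps (date extraction, unzip, character clean-up, rename), with the directories and file-name convention assumed for illustration:

    # Illustrative only: directories and the file-name convention are assumptions.
    import gzip
    import re
    from pathlib import Path

    LANDING = Path("/data/landing")
    READY = Path("/data/ready")
    PATTERN = re.compile(r"^(?P<feed>\w+)_(?P<date>\d{8})\.dat\.gz$")

    for gz_file in LANDING.glob("*.dat.gz"):
        match = PATTERN.match(gz_file.name)
        if not match:
            continue  # leave unexpected files for manual review

        feed, business_date = match.group("feed"), match.group("date")

        # Unzip, strip carriage returns, and rename to the convention the
        # base-table load job expects.
        target = READY / f"{feed}.{business_date}.load"
        with gzip.open(gz_file, "rt") as src, target.open("w") as dst:
            for line in src:
                dst.write(line.replace("\r", ""))

        gz_file.unlink()  # drop the processed archive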
Responsibilities:
Interacted with users to gather business requirements; analysed, designed and
developed the data-feed and reporting systems
Designed and developed packages, stored procedures and functions for efficiently
loading, validating and cleansing the data; also created users and roles as needed
for the applications
Created cursors, collections and database triggers to maintain complex integrity
constraints and implement complex business rules
Worked on performance tuning using partitioning and indexing (local and global
indexes on partitioned tables)
Performed data extraction, transformation and loading (ETL) from source to target
systems
Worked on Windows Batch scripting, scheduling jobs and monitoring logs
Created UNIX shell scripts for Informatica feeds and to execute Oracle procedures (sketched in Python after the Environment line below)
Designed processes to extract, transform, and load data to the Data Mart
Performed ETL processing using Informatica PowerCenter
Optimized SQL used in reports and file extracts to improve performance significantly
Used various transformations such as Aggregator, Update Strategy, Lookup, Expression,
Stored Procedure, Filter, Source Qualifier, Sequence Generator and Router
Environment: Oracle 11g, Informatica, SQL, PL/SQL, SQL Loader, SQL*Plus, Autosys,
Tortoise SVN, TOAD and UNIX
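
A small Python sketch of invoking one of the Oracle procedures behind a feed (the production wrappers were UNIX shell scripts); the connection details and procedure name are assumptions, not the actual objects:

    # Illustrative only: credentials, DSN and procedure name are assumed.
    import os
    import cx_Oracle

    conn = cx_Oracle.connect(
        user=os.environ["ORA_USER"],
        password=os.environ["ORA_PASS"],
        dsn="ORCLPDB1",  # assumed TNS alias
    )
    cur = conn.cursor()
    try:
        # Hypothetical package procedure that validates and loads a staged feed.
        cur.callproc("feed_pkg.load_daily_feed", ["POSITIONS"])
        conn.commit()
    finally:
        cur.close()
        conn.close()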
Responsibilities:
Worked closely with architects, designers and developers to translate data
requirements into the physical schema definitions for Oracle
Trapped errors such as INVALID_CURSOR and VALUE_ERROR using exception
handlers
Tuned the database and SQL scripts for optimal performance; redesigned and rebuilt
schemas to meet performance targets
Extensively involved in performance tuning and optimization of SQL queries
Wrote stored procedures, functions, packages and triggers in PL/SQL to implement
business rules and processes, and performed code debugging using TOAD
Set up batch and production jobs through Autosys
Created shell scripts to access and set up the runtime environment and to run Oracle
stored procedures and packages
Executed batch files for loading database tables from flat files using SQL*Loader (a Python wrapper is sketched after the Environment line below)
Created UNIX shell scripts to automate the execution process
Environment: Oracle 9i, SQL, PL/SQL, TOAD, Unix, SQL Server 2003, XML
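
A hedged Python wrapper around the SQL*Loader batch-load step described above; the control, data and log file paths and the connect string are placeholders, not the actual job configuration:

    # Illustrative only: file paths and credentials are placeholders.
    import subprocess

    result = subprocess.run(
        [
            "sqlldr",
            "userid=etl_user/***@ORCL",        # placeholder connect string
            "control=/etl/ctl/positions.ctl",  # column mappings for the target table
            "data=/etl/in/positions.dat",
            "log=/etl/log/positions.log",
            "bad=/etl/log/positions.bad",
        ],
        check=False,
    )
    # SQL*Loader exits 0 on success and 2 when some rows were rejected.
    if result.returncode != 0:
        raise RuntimeError(f"SQL*Loader finished with exit code {result.returncode}")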
Infosys Limited, Hyderabad, India 07/2007–09/2009
Senior Systems Engineer - Java Development
Responsibilities:
Key responsibilities included requirements gathering, designing and developing the
Java application.
Identified and fixed transactional issues caused by incorrect exception handling and
concurrency issues caused by unsynchronized blocks of code.
Created a Java application module to authenticate users of this application and to
synchronize handsets with the Exchange server.
Performed unit testing, system testing and user acceptance testing.
Gathered specifications for the Library site from different departments and users of
the services.
Wrote SQL scripts to create and maintain the database, roles, users, tables, views,
procedures and triggers.
Designed and implemented the UI using HTML and Java.
Worked on the database interaction layer for insert, update and retrieval operations
on data.
EDUCATIONAL QUALIFICATION
CERTIFICATIONS