AVP - Data Engineering

Bangalore
Hybrid
Full-Time
Senior: 8 to 12 years
Posted on Nov 07 2024

About the Job

Skills

Big Data
SQL
ETL
Data Mining
Data Quality
Data Engineering
Python

Expert Data Engineer with 10+ years of experience driving innovation through robust data solutions and scalable architectures. Proven expertise in building reliable data infrastructure, enhancing ETL pipelines, and delivering business intelligence solutions that turn raw data into actionable insights.

SKILLS 

● DE Stack: Spark, Hadoop, Airflow, Hive, Amazon S3, Azure Databricks, PySpark, ADLS, ADF, Data Warehousing, Data Modeling, Tidal, Informatica

● Programming languages: SQL, PL/SQL, Python, Core Java, Shell Script 

● Cloud Services: AWS, Azure 

● Data warehousing: Hive, Redshift 

● Database: Oracle 10g/12C, MySQL 

● Tools/Utilities: PuTTY, SQL Developer, SQL*Loader, JIRA

EDUCATIONAL QUALIFICATION 

● Bachelor of Engineering (Electronics and Communication) from M.I.E.T Nagpur University 

PROJECT DETAILS 

TCS, Bangalore 

Dec’21 – till date

Project 1: Commercial Data Innovation 

Role: Lead Data Engineer  

Description: The primary objective of the Commercial Operations Data Integration Project is to streamline and enhance the analysis of commercial data by integrating data from SAP systems into Databricks. This will enable advanced analytics and data-driven decision-making.

Responsibilities: 

● Designed and implemented data pipelines using Azure Databricks, PySpark, and ADF for large-scale data processing, analysis, and reporting (see the pipeline sketch after this section).

● Conducted performance tuning and optimization of Spark jobs to enhance data processing speed and scalability.

● Built a robust data platform for creating data pipelines, which decreased development effort by 80%.

● Collaborated with cross-functional teams to gather requirements, create data models, and ensure data quality and consistency.

● Implemented data security and compliance measures to protect sensitive data and meet regulatory requirements.

Environment: Spark, Databricks, Azure, Python, SQL, ADLS, ADF (Azure Data Factory), SAP.
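For illustration, a minimal sketch of the kind of Databricks/PySpark pipeline described above: read an SAP extract landed in ADLS, apply basic transformations, and write a Delta table for downstream analytics. The paths, column names, and table name are assumptions for the sketch, not values from the actual project.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sap_commercial_ingest").getOrCreate()

# Assumed ADLS landing path for the SAP extract.
raw = spark.read.format("parquet").load(
    "abfss://raw@example.dfs.core.windows.net/sap/commercial/"
)

cleaned = (
    raw.dropDuplicates(["document_id"])            # assumed business key
       .withColumn("load_date", F.current_date())  # audit column for lineage
       .filter(F.col("amount").isNotNull())        # basic quality gate
)

# Write as Delta, partitioned for typical date-range queries.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("load_date")
        .saveAsTable("commercial.sap_orders_clean"))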




Nous Infosystems, Bangalore

July’20 – Dec’21 

Project: Omnichannel 

Role: Senior Data Engineer  

Description: The primary objective of the project is to collect media data from multiple sources such as Facebook, Google Ads, and LinkedIn in order to create and manage multiple marketing campaigns effectively.

Responsibilities: 

● Built a data validation framework to ensure the accuracy and integrity of incoming data, reducing errors by 80% (see the validation sketch after this section). Implemented best practices for data governance and data quality assurance within the data lake ecosystem.

● Designed and developed comprehensive data models for data lakes, facilitating efficient storage, retrieval, and analysis of diverse structured datasets.

● Performed data profiling and cleaning of data received from various sources.

● Provided technical leadership and mentorship to team members, fostering professional growth and skill development.

Environment: Spark, Databricks, Azure, SQL, ADLS, Facebook, LinkedIn, Salesforce, Google Ads, Adobe Analytics.
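A minimal sketch of a rule-based validation layer like the framework described above: each rule flags rows that violate an expectation, and a per-rule violation count is produced so bad feeds can be quarantined before they reach the curated layer. The rule names, columns, and landing path are assumptions for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("media_data_validation").getOrCreate()

# Assumed landing path for one of the media sources.
df = spark.read.json("abfss://landing@example.dfs.core.windows.net/google_ads/")

# Each rule is a boolean Column that is True for a violating row.
rules = {
    "missing_campaign_id": F.col("campaign_id").isNull(),
    "negative_spend": F.col("spend") < 0,
    "bad_date": F.to_date("event_date", "yyyy-MM-dd").isNull(),
}

# Count violations per rule in a single pass over the data.
report = df.select(
    *[F.sum(cond.cast("int")).alias(name) for name, cond in rules.items()]
)
report.show()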

Harman, Bangalore 

Feb’19 – July’20

Project: Travelopia 

Role: Sr. Data Engineer  

Description: Travelopia is a pioneer in the specialist travel sector, with an extensive portfolio of independently operated brands, most of which are leaders in their sector. Sailing adventures, safaris, Arctic expeditions: its brands are as diverse as they are exciting, creating unforgettable experiences for customers across the world. The portfolio focuses on high-end brands and luxury customer experiences.

Responsibilities: 

● Ingested business data from source systems at predefined intervals, storing it in designated HDFS locations as Raw-Layer tables.

● Orchestrated data cleaning processes using scheduled jobs, transferring cleansed data to Clean-Layer tables for further processing.

● Implemented business logic on clean data to extract relevant information before modeling the data into Hive tables based on business requirements (see the layering sketch after this list).

● Developed and maintained data models in Hive to support reporting and trend analysis, ensuring data accuracy and integrity.

● Collaborated with business stakeholders to understand requirements and ensure Hive tables met analytical needs for report generation and trend analysis.
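A sketch of that Raw-Layer to Clean-Layer to model flow, expressed as Spark SQL over Hive tables. The table and column names are assumptions, since the real schemas are project-specific.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("travel_layered_model")
         .enableHiveSupport()
         .getOrCreate())

# Promote validated rows from the raw layer into the clean layer.
spark.sql("""
    INSERT OVERWRITE TABLE clean.bookings
    SELECT booking_id, brand, TRIM(customer_name) AS customer_name,
           CAST(booking_date AS DATE) AS booking_date, amount
    FROM raw.bookings
    WHERE booking_id IS NOT NULL AND amount > 0
""")

# Apply business logic to build a reporting-ready model table.
spark.sql("""
    INSERT OVERWRITE TABLE model.bookings_by_brand
    SELECT brand, booking_date, COUNT(*) AS bookings, SUM(amount) AS revenue
    FROM clean.bookings
    GROUP BY brand, booking_date
""")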

Deloitte, Bangalore 

Feb’18 - Feb’19 

Project Name: E-ONE HUB (Wells Fargo) 




Project Description: Wells Fargo & Company is an American multinational financial services company headquartered in San Francisco, California, with central offices throughout the country. It is the world's second-largest bank by market capitalization and the third-largest bank in the U.S. by total assets. The project involves validating regulatory reports, which are uploaded to the cloud-based Lombard Report utility and ultimately submitted to the US Federal Reserve.

Role: Data Engineer 

● Executed ETL (Extract, Transform, Load) processes based on client specifications using Hive queries or Spark scripts to meet project requirements (see the sketch after this list).

● Designed and developed comprehensive data models for data lakes, facilitating efficient storage, retrieval, and analysis of diverse structured datasets.
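One hedged illustration of such a spec-driven ETL step: read a staged extract, apply a client-specified transform, and reconcile totals before handoff, since a regulatory report must not silently drop value. The staging table, field names, and aggregation are assumptions, not details of the actual engagement.

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("regulatory_etl")
         .enableHiveSupport()
         .getOrCreate())

src = spark.table("staging.trade_positions")  # assumed Hive staging table

# Client-specified transform: normalise currency codes, aggregate by desk.
out = (src.withColumn("currency", F.upper(F.col("currency")))
          .groupBy("desk", "currency")
          .agg(F.sum("notional").alias("total_notional")))

# Reconciliation gate: abort if the aggregate loses notional value.
src_total = src.agg(F.sum("notional")).first()[0]
out_total = out.agg(F.sum("total_notional")).first()[0]
assert abs(src_total - out_total) < 1e-6, "reconciliation failed"

out.write.mode("overwrite").saveAsTable("reporting.positions_by_desk")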

Capgemini 

Jun’13 to Feb’18 

Project: Boehringer Ingelheim 

Role: Developer 

Description: Boehringer Ingelheim is a large pharmaceutical company. The key objective of the project is to migrate data from the Siebel database to the Veeva cloud platform. Informatica and SQL tools are used to transform the data before importing it into Veeva (Salesforce).

Responsibilities: 

● Used various transformations such as Source Qualifier, Expression, Aggregator, Joiner, Filter, Lookup, Router, and Update Strategy while designing and optimizing mappings.

● Developed sessions using the Task Developer and Workflow Designer in Workflow Manager and monitored the results using Workflow Monitor.

● Modified several of the existing mappings based on user requirements and maintained existing mappings, sessions, and workflows.

● Prepared SQL queries to validate the data in both source and target databases (see the sketch after this list).

● Worked on TOAD and Oracle SQL Developer to develop queries.

● Created test cases for the developed mappings and then created an integration testing document.
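A small sketch of the kind of source-versus-target validation described in that bullet: compare row counts for each migrated table on both sides. The connection strings and table names are placeholders rather than project values, and cx_Oracle is assumed only because the source side was an Oracle database.

import cx_Oracle

def row_count(conn, table):
    # Table names come from a fixed, trusted list (not user input), so
    # simple interpolation is acceptable in this one-off check script.
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

# Placeholder credentials and DSNs for the source and target databases.
src = cx_Oracle.connect("user", "password", "siebel-db:1521/SRC")
tgt = cx_Oracle.connect("user", "password", "staging-db:1521/TGT")

for table in ["ACCOUNTS", "CONTACTS", "ACTIVITIES"]:  # assumed table list
    s, t = row_count(src, table), row_count(tgt, table)
    status = "OK" if s == t else "MISMATCH"
    print(f"{table}: source={s} target={t} {status}")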

About the company

Covenant is a premier Executive Search and Recruitment Firm with 350+ Full-Time Recruitment Specialists with a proven track record of strong delivery across the globe. We are specialists in the arena of Human Capital, unmatched in all metrics: Head Hunting, Turnaround Time, Unceasing Availability, Cost-Effectiveness, Steadfast Quality of Service, provided by a team of Principal Consultants and Ana ...

Industry

Management Consulting

Company Size

201-500 Employees

Headquarter

Chennai, Tamil Nadu
