Mumbai
Cochin
Full-Time
Mid-Level: 4 to 6 years
Posted on Dec 14, 2024

About the Job

Skills

Big Data
Spark
Kafka
SQL
Apache NiFi
RDBMS
Agile
Git

Job Summary:

As a Data Engineer, you will work with our big data team to develop quality services and build highly available, resilient big data platforms. You will help teams rapidly prototype, deliver, and run high-impact, high-value services.

Ideally, you will be someone who has hands-on experience building and optimizing big data pipelines, architectures, and data sets.


Key Responsibilities

Define, design, and develop services and solutions for large-scale data ingestion, storage, and management, working with RDBMS, NoSQL databases, log files, and events.

Define, design, and run robust data pipelines/batch jobs in a production environment.

Develop highly scalable, highly concurrent, and low-latency systems.

Apply strong analytical skills to work with unstructured datasets.

Work with third-party and other internal provider services to support a variety of integrations.

Design and implement workflows and data pipelines using tools like Apache NiFi.

Build and manage messaging and streaming systems using Kafka.

Create batch processing pipelines using Spark and real-time stream processing solutions with Spark Streaming or Flink.

Work with product teams on a range of tools and services, improving products to meet user needs.

Participate in sprint planning, working with developers and project teams to ensure projects are deployable and can be monitored from the outside.


Essential Skills

Experience with Apache NiFi for designing workflows and managing data pipelines.

Proficiency in working with Kafka for messaging and streaming use cases.

Strong understanding of Spark for batch processing and Flink for real-time stream processing.

Familiarity with Debezium for change data capture (CDC) and real-time data synchronization.

Good understanding of data architecture principles.

Experience with big data environments, including advising the Analytics team on best practices and new technologies.

Experience storing data in systems such as S3 (MinIO) and Kafka.

Familiarity with the basic principles of distributed computing and data modeling.

Working knowledge of message queuing, stream processing, and highly scalable data stores.

Scripting or programming skills.

In-depth knowledge of Spark, Kafka, Airflow, Apache Avro, Parquet, ORC.

Good understanding of OLAP data model design.

Experience working in an agile environment.

Knowledge of version control systems such as Git.


Desirable Skills

Experience in large-scale Analytics, Insight, Decision Making, and Reporting solutions based on Big Data technology.


An understanding of infrastructure (including hosting, container-based deployments, and storage architectures) would be advantageous.



About the company

Brainhunter Recruitment (India) Pvt. Ltd. is your leading source for high-quality contract and permanent talent across IT & Non-IT sectors. With 17+ years of experience and Preferred Vendor status with 100+ prestigious firms, we specialize in providing high-growth industries with pre-screened, top-tier hires in fields like AI, Machine Learning, Data Science, and Digital Transformation. Our high ...

Industry

IT Services and IT Consul...

Company Size

11-50 Employees

Headquarter

Mumbai
