Data Engineer II

47 Applications

Bangalore
Full-Time
Hybrid
Mid-Level: 4 to 6 years
25L - 30L (Per Year)
Posted on Apr 02 2024

About the Job

Skills

Python (Programming Language)
SQL
Extract, Transform, Load (ETL)
NoSQL
Snowflake Cloud
Data Warehouse Architecture
Data Modeling




About Us

Redica Systems is a SaaS start-up serving more than 200 customers within the life science sector, with a specific focus on Pharmaceuticals and MedTech. Embracing a hybrid model, our workforce is distributed globally, with headquarters in Pleasanton, CA. 

Redica's data analytics platform empowers companies to improve product quality and navigate evolving regulations. Using proprietary processes, we harness one of the industry's most comprehensive datasets, sourced from hundreds of health agencies and through Freedom of Information Act (FOIA) requests. 

Our customers use Redica Systems to more effectively and efficiently manage their inspection preparation, monitor their supplier quality, and perform regulatory surveillance.

More information is available at redica.com.

The Role 

We’re looking for an experienced Data Engineer II to join our team as we continue to develop the first-of-its-kind quality and regulatory intelligence (QRI) platform for the life science industry.

The ideal candidate will have experience designing, building, and maintaining data pipelines and infrastructure while remaining hands-on in the code. 

Core Responsibilities 

  • Develop a full understanding of the technical architecture and its subsystems 
  • Create the necessary infrastructure for optimal extraction, transformation, and loading (ETL) of data from various sources
  • Work in an Agile Scrum environment, with a keen focus on delivering sustainable, high-performance, scalable, and easily maintainable enterprise solutions
  • Prioritize and address technical challenges, working closely with engineering managers
  • Proactively guide technical choices within your domain of expertise
  • Recommend and validate different ways to improve data reliability, efficiency, and quality 
  • Identify optimal approaches for resolving data quality or consistency issues
  • Ensure successful system delivery to the production environment and assist the operations and support team in resolving production issues, as necessary
  • Handle the acquisition of data from a variety of sources, intelligent change monitoring, data mapping, transformations, and analysis
  • Design, test, and maintain data stores, databases, processing systems, and microservices
  • Integrate various sub-systems or components to deliver end-to-end solutions
  • Collaborate with NLP/ML teams to integrate data pipeline with NLP/ML services

About You

  • Tech Savvy: Effectively anticipates and adopts innovations in business-building technology solutions, staying up-to-date with data advancements and incorporating them into work processes
  • Manages Complexity: Actively synthesizes complex information, identifying patterns and developing effective strategies to solve data-related problems
  • Decision Quality: Consistently makes good and timely decisions that propel organizational progress and maintain data integrity
  • Collaborates: Actively engages in collaborative problem-solving, leveraging diverse perspectives to find innovative solutions that advance shared goals and data engineering initiatives
  • Optimizes Work Processes: Actively seeks opportunities to enhance and streamline current work processes for managing data pipelines, ETL (Extract, Transform, Load) processes, and data warehousing
  • Drives Results: Strives to continuously improve performance and exceed expectations to contribute to overall success and meet data-related deliverables
  • Strategic Mindset: Consistently demonstrates a strategic mindset by envisioning future possibilities and successfully translating them into breakthrough data strategies, contributing to the organization's long-term success
  • Engaged: Not only shares our values but also possesses the essential competencies needed to thrive at Redica

Qualifications

  • A minimum of 4 years of experience in Data Engineering
  • 3+ years of experience with an emphasis on code/system architecture and quality output
  • Experience designing and building data pipelines, data APIs, and ETL/ELT processes
  • Exposure to data modeling and data warehouse concepts
  • Hands-on experience in Python
  • Hands-on experience with AWS SageMaker, supporting the building of batch and real-time ML pipelines (SageMaker / MLflow)
  • Hands-on experience setting up, configuring, and maintaining SQL and NoSQL databases (MySQL/MariaDB, PostgreSQL, MongoDB, Snowflake)
  • Computer Science, Computer Engineering, or similar technical degree

Bonus Points

  • Experience with the data engineering stack within AWS is a major plus (S3, Lake Formation, Lambda, Fargate, Kinesis Data Streams / Data Firehose, DynamoDB, Neptune)
  • Experience with event-driven data architectures
  • Experience with the ELK stack is a major plus (Elasticsearch, Logstash, Kibana)


About the Company

Redica Systems is a data analytics platform that helps regulated industries improve quality and stay on top of evolving regulations. Our proprietary processes transform one of the industry's most complete data sets, aggregated from hundreds of health agencies and unique Freedom of Information Act (FOIA) sourcing, into meaningful answers and insights that reduce regulatory and compliance risk.

Industry

Information Services

Company Size

51-200 Employees

Headquarters

Pleasanton, California