Senior Data Engineer II
Redica Systems is a data analytics platform built to help life sciences companies improve their quality and stay on top of evolving regulations. Our proprietary processes transform one of the industry’s most complete data sets, aggregated from hundreds of health agencies and unique Freedom of Information Act (FOIA) sourcing, into meaningful answers and insights that reduce regulatory and compliance risk.
Founded in 2010, Redica Systems serves over 200 customers in the Pharma, BioPharma, MedTech, and Food and Cosmetics industries, including 19 of the top 20 Pharma companies and 9 of the 10 top MedTech companies. Redica Systems’ headquarters are in Pleasanton, CA, but we are a geographically distributed company. More information is available at redica.com.
We’re looking for an experienced Senior Data Engineer II to join our team as we continue to develop the first-of-its-kind quality and regulatory intelligence (QRI) platform for the life sciences industry. The ideal candidate will have experience leading and mentoring a team of developers and maintaining a high bar of quality while remaining hands-on in the code.
Responsibilities
● Maintain a full understanding of the technical architecture and its sub-systems
● Work as a lead in an Agile Scrum environment, with a keen focus on delivering sustainable, high-performance, scalable, and easily maintainable enterprise solutions
● Help engineering managers prioritize technical issues
● Proactively guide technical decisions within your domain of expertise
● Recommend and validate ways to improve data reliability, efficiency, and quality
● Identify optimal approaches for resolving data quality or consistency issues
● Ensure successful system delivery to the production environment and assist the operations and support team in resolving production issues as necessary
● Lead the acquisition of data from a variety of sources, intelligent change monitoring, data mapping, transformation, and analysis
● Develop, test, and maintain architectures for data stores, databases, processing systems, and microservices
● Integrate sub-systems and components to deliver end-to-end solutions
● Integrate data pipelines with NLP/ML services
Key Competencies
● Tech Savvy: Anticipates and adopts innovations in business-building technology, staying current with data advancements and incorporating them into work processes
● Manages Complexity: Synthesizes solutions from complex information, identifying patterns and developing effective strategies for data-related problems
● Decision Quality: Consistently makes good and timely decisions that propel organizational progress and maintain data integrity
● Collaborates: Engages in collaborative problem-solving, leveraging diverse perspectives to find innovative solutions that achieve shared goals and advance data engineering initiatives
● Optimizes Work Processes: Seeks opportunities to enhance and streamline processes for managing data pipelines, ETL (Extract, Transform, Load) workflows, and data warehousing
● Drives Results: Strives to continuously improve performance and exceed expectations to meet data-related deliverables and contribute to overall success
● Strategic Mindset: Envisions future possibilities and translates them into breakthrough data strategies that support the organization's long-term success
● Engaged: Shares our values and possesses the competencies needed to thrive at Redica
Qualifications
● 5+ years of senior or lead developer experience with an emphasis on technical mentorship, code/system architecture, and quality output
● Extensive experience designing and building data pipelines, data APIs, and ETL/ELT processes
● Extensive experience with data modeling and data warehouse concepts
● Deep, hands-on experience in Python
● Hands-on experience setting up, configuring, and maintaining SQL and NoSQL databases (MySQL/MariaDB, PostgreSQL, MongoDB, Snowflake)
● Computer Science, Computer Engineering, or similar technical degree
● Experience with the AWS data engineering stack is a major plus (S3, Lake Formation, Lambda, Fargate, Kinesis Data Streams/Data Firehose, DynamoDB, Neptune)
● Experience with event-driven data architectures
● Experience with the ELK stack (Elasticsearch, Logstash, Kibana) is a major plus
All your information will be kept confidential according to EEO guidelines.