Senior Data Engineer

  • Full-time

Company Description

Atida is one of the largest pan-European online health and wellbeing groups. We are leading the way in the digital health landscape by creating a unique destination and ecosystem of products and services that supports people’s holistic wellbeing. Our online pharmacies and parapharmacies are consumers’ favourites across 7 markets, already serving more than 4 million health enthusiasts on a journey to feel their best inside and out.

We aim to support people all over the world to make better decisions about their health by providing trusted expertise, breaking taboos and normalising the “health” conversation so everyone can live life feeling happy, healthy and confident.

Headquartered in Amsterdam, we have over 900 employees in 8 locations in Europe and we are growing fast organically and through acquisitions. Working at Atida means you’ll enjoy a high-paced, high-variety role where you can make a big impact quickly.

You’ll be joining us on our mission to support consumers’ wellbeing and you’ll help transform the health industry. We are a dynamic, diverse and multinational team and you’ll be collaborating with colleagues across all of Europe.

Does this sound exciting to you? Apply now!

Job Description

We are looking for an Azure Data Engineer to join our Future Data Program. You will be responsible for building new data pipeline architecture and optimized data flows using the Azure cloud stack. With your breadth of technical experience and depth of subject-matter expertise in Microsoft Data Platform cloud solutions, you will have a large influence on the growth of our data team. You will have working experience with Azure DevOps and CI/CD, paired with knowledge of Agile and/or Scrum delivery methods.

Your role will involve:

  • Data Ingestion and Storage including Azure Data Factory, Azure Databricks, Azure Data Lake, Kafka and Spark Streaming, Azure EventHub/IoT Hub, and Azure Stream Analytics
  • Data Engineering including Azure Data Factory and Azure Databricks, using Python and T-SQL (a minimal sketch follows this list). Experience with Azure Functions, C#/.NET, and SSIS is a plus.
  • Cloud Big Data Analytics in Azure Synapse Analytics, Azure Analysis Services, and Snowflake
  • Advanced Analytics and Model Management including Azure Databricks, Azure ML/MLFlow as well as deployment of models using Azure Kubernetes Service
  • Relational Databases including Azure SQL, SQL Server
  • NoSQL Databases including Cosmos DB
  • Data Governance, Data Catalog, Master Data Management
  • Working experience with Visual Studio, PowerShell Scripting, and ARM templates
  • Knowledge of Lambda and Kappa architecture patterns.
  • Extensive experience connecting to various data sources and structures: APIs, NoSQL, RDBMS, Blob Storage, Data Lake, etc.
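
To make the day-to-day work concrete, here is a minimal, hypothetical sketch of the kind of batch ingestion described above: a PySpark job on Azure Databricks that conforms raw JSON landed in Azure Data Lake Storage and writes it to a curated Delta zone. The storage paths, container names and the "orders" dataset are illustrative assumptions, not Atida systems.

```python
# Illustrative sketch only: a minimal PySpark batch-ingestion job on Azure Databricks.
# Paths, containers and the "orders" dataset are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-curated-ingestion").getOrCreate()

# Read raw JSON landed in an Azure Data Lake Storage Gen2 container
# (the abfss path is a placeholder; lake access is assumed to be configured on the cluster).
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
raw_df = spark.read.json(raw_path)

# Light conformance: type casting, a derived partition column, deduplication
# and an ingestion timestamp.
curated_df = (
    raw_df
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])
)

# Write to a curated zone as Delta, partitioned by date, so downstream
# Synapse / Analysis Services models can consume it.
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders/"
(
    curated_df
    .write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save(curated_path)
)
```

In practice a job like this would typically be orchestrated by an Azure Data Factory pipeline that triggers the Databricks notebook or job on a schedule.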

Qualifications

You will be an experienced data pipeline builder, ready to support Business Analysts and Data Architects. Self-directed and comfortable supporting the data needs of multiple teams, systems and products, you will also be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. As a strong communicator you will enjoy this customer-facing role and leveraging excellent stakeholder management skills.

You will also have experience in the following:

  • Advanced Azure knowledge and experience migrating data products from on-premises to Azure.
  • Experience building and optimizing big data pipelines, architectures and data sets using PySpark.
  • Experience building real-time data pipelines using Event Hub, storage queues and Azure Stream Analytics (see the streaming sketch after this list).
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, dependency and workload management.
  • Cloud Big Data Analytics in Azure Synapse Analytics, Azure Analysis Services
  • Experience with big data tools: Hadoop, Spark, Kafka.
  • Experience with object-oriented/functional scripting languages; Python preferred.
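
As a rough illustration of the real-time side of the role, the sketch below reads events from Azure Event Hubs through its Kafka-compatible endpoint with Spark Structured Streaming and appends them to the data lake as Delta. The namespace, event hub name, connection string and storage paths are placeholder assumptions, not a description of Atida's pipelines.

```python
# Illustrative sketch only: streaming from Azure Event Hubs (Kafka-compatible endpoint)
# with Spark Structured Streaming. All names and secrets below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("eventhub-stream-ingestion").getOrCreate()

EH_NAMESPACE = "example-namespace"        # placeholder Event Hubs namespace
EH_NAME = "clickstream"                   # placeholder event hub name
EH_CONN_STR = "Endpoint=sb://..."         # placeholder; keep real secrets in Key Vault

kafka_options = {
    "kafka.bootstrap.servers": f"{EH_NAMESPACE}.servicebus.windows.net:9093",
    "subscribe": EH_NAME,
    "kafka.sasl.mechanism": "PLAIN",
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.jaas.config": (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="$ConnectionString" password="{EH_CONN_STR}";'
    ),
}

# Each Kafka record's value holds the event payload; keep it as a string plus
# the broker-side enqueue timestamp.
events = (
    spark.readStream
    .format("kafka")
    .options(**kafka_options)
    .load()
    .select(
        F.col("value").cast("string").alias("body"),
        F.col("timestamp").alias("enqueued_at"),
    )
)

# Append raw events to a Delta table in the lake; the checkpoint location
# tracks progress so the stream can restart safely.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://checkpoints@examplelake.dfs.core.windows.net/clickstream/")
    .outputMode("append")
    .start("abfss://raw@examplelake.dfs.core.windows.net/clickstream/")
)
```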