Data Engineer (preferably B2B Agreement)

  • Full-time
  • Division: CTO
  • Department: Business Intelligence

Company Description

Yggdrasil is a provider of superior online gaming solutions for iGaming operators. The business was founded in 2013 and is today one of the industry’s most respected and acclaimed suppliers. In 2019 Yggdrasil introduced three pioneering value propositions: YG Franchise, YG Masters and YG Game IP, of which the last two are powered by Yggdrasil’s enabling technology solution GATI (Game Adaptation Tool & Interface). Since its inception, Yggdrasil has embarked on a rapid growth trajectory enabled by a strong corporate culture focused on innovation, creativity, quality and technology leadership.

Job Description

We’re looking for a dynamic Data Engineer to take the lead in designing and implementing advanced data architectures that power our analytics and decision-making. Join our collaborative team to shape the future of data infrastructure, tackle exciting challenges, and make a real impact with your expertise.

What You'll Do:

  • Lead the development and implementation of data solutions, including data warehousing, ETL pipelines, and data lakes;
  • Design and build data pipelines and infrastructure that enable efficient processing, storage, and analysis of large volumes of data;
  • Collaborate with the data team and other business stakeholders to understand data requirements and design solutions that meet their needs;
  • Collaborate closely with System Engineers, DBAs, and DevOps teams to align data infrastructure with overall company architecture;
  • Work with software engineers to integrate data pipelines and infrastructure into larger software systems;
  • Monitor, maintain, and troubleshoot existing data pipelines and solutions to ensure continuous, reliable data flow and system performance;
  • Optimize data processing and storage systems for performance and scalability;
  • Implement and maintain data security and privacy measures to protect sensitive data;
  • Develop and maintain documentation for data pipelines, infrastructure, and processes;
  • Stay up to date with emerging trends and technologies in the data engineering field and recommend new tools and techniques to improve our data solutions.

Who You Are:

  • At least 3 years of experience in data engineering or a related field;
  • Expertise in SQL, Spark/Python, or other data processing technologies;
  • Experience designing and building data pipelines and infrastructure using cloud platforms such as AWS, GCP or Azure;
  • Strong understanding of data warehousing, ETL, and data lake architecture and design principles;
  • Experience with data streaming technologies such as Apache Kafka, Apache Flink, or similar tools for real-time data processing;
  • Experience with relational database technologies such as MySQL, PostgreSQL, or Oracle;
  • Experience with data modeling and schema design.

Tech stack:

  • Google Cloud:
    • BigQuery;
    • Composer / Airflow;
    • DataProc / Spark;
    • DataFlow / Beam;
    • Pub/Sub;
    • Cloud Functions;
  • Debezium;
  • Languages: Python, some Scala and Java, SQL;
  • Infrastructure tools like Terraform, k8s, Grafana.

Encouragement to Apply:

We understand that confidence gaps and imposter syndrome can deter amazing candidates from applying. Please apply anyway — we’d love to hear from you.
