Senior Big Data Engineer

  • Full-time

Company Description

Join Sigma Software’s AdTech Competence Center, a team of 300+ experts delivering innovative, high-load, and data-driven advertising technology solutions. Since 2008, we’ve been helping leading AdTech companies and startups design, build, and scale their technology products.

We focus on fostering deep domain expertise, building long-term client partnerships, and growing together as a global team of professionals passionate about AdTech, data, and cloud-based solutions. 

Does this sound like an exciting opportunity? Keep reading, and let’s discuss your future role! 

CUSTOMER

Our client is an international AdTech company developing modern, privacy-safe, and data-driven advertising platforms. The team works with AWS and cutting-edge data technologies to build scalable, high-performance systems. 

PROJECT

The project revolves around the development of a next-generation AdTech platform that powers real-time, data-driven advertising. It leverages AWS, Python, and distributed data frameworks to process large-scale datasets efficiently and securely — enabling businesses to make smarter, faster, and more informed marketing decisions.

Job Description

  • Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark 
  • Work with large-scale data storage on AWS (S3, DynamoDB) and MongoDB
  • Ensure high-quality, consistent, and reliable data flows between systems 
  • Optimize performance, scalability, and cost efficiency of data solutions 
  • Collaborate with backend developers and DevOps engineers to integrate and deploy data components 
  • Implement monitoring, logging, and alerting for production data pipelines 
  • Participate in architecture design, propose improvements, and mentor mid-level engineers


Qualifications

  • 5+ years of experience in data engineering or backend development
  • Strong knowledge of Python and SQL
  • Hands-on experience with AWS (S3, Glue, Lambda, DynamoDB)
  • Practical knowledge of PySpark or other distributed processing frameworks
  • Experience with NoSQL databases (MongoDB or DynamoDB)
  • Good understanding of ETL principles, data modeling, and performance optimization
  • Understanding of data security and compliance in cloud environments
  • Fluent in English (Upper-Intermediate level or higher)

Additional Information

PERSONAL PROFILE

  • Strong communication and collaboration skills in cross-functional environments
  • Proactive, accountable, and driven to deliver high-quality results