Senior Data Engineer

  • 40 Water St, Boston, MA 02109, USA
  • Full-time

Company Description

Part of Publicis Groupe (Euronext Paris: FR0000130577; CAC 40 index), Publicis Spine was established in 2017 to serve the data, analytics, and technology needs of Publicis Groupe agencies and their clients. Its mission is to grow clients’ businesses through transformative data applications, and it is the home of Publicis Groupe’s proprietary technology platform, Publicis PeopleCloud. Publicis Spine brings a consistent, transparent, best-in-class approach to data, analytics solutions, partnerships, and technology through a closely joined network of engineers, technology experts, product designers, analysts, and data scientists, all empowering marketing and digital business transformation.

Job Description

Publicis Spine is looking for a talented Senior Data Engineer for an exciting opportunity on the data engineering team. The successful candidate will design workflows for the data and analytics tools that are a major part of the 2019 roadmap, while managing data and infrastructure to efficiently query datasets numbering in the billions of records. Candidates will be evaluated on their ability to design large distributed technical solutions and to architect, manage, monitor, and optimize data pipeline projects, producing actionable data and data pipelines that support the larger organization.

Core Responsibilities

In this role, you will be expected to drive Publicis Spine's mission to grow our clients' businesses through the transformative application of data. Your key priorities will include but are not limited to: 

  • Architect, design, and maintain data pipelines through the product lifecycle
  • Optimize and monitor existing data pipelines running on AWS infrastructure
  • Write Python/Scala applications for data processing and job scheduling
  • Understand and manage massive data stores
  • Integrate the products of data projects into APIs built in Ruby on Rails
  • Expose large data sets
  • Enjoy being challenged and solving complex problems on a daily basis
  • Design efficient and robust ETL workflows
  • Manage real-time streaming applications and data flow
  • Investigate, procure, and ramp up on new technologies
  • Work in teams and collaborate with others to clarify requirements
  • Build analytics tools that use the data pipelines to provide meaningful insights into the data


To successfully accelerate and impact the business objectives in this role, you must be a seasoned leader and change agent who brings fresh perspectives grounded in prior experience. Qualification requirements include but are not limited to:

  • 4-7 years of data engineering or data science experience (preferably in Python and/or Scala)
  • Strong programming and software engineering background
  • Proficient understanding of distributed computing principles
  • Strong experience with relational SQL and NoSQL databases
  • Knowledge of Big Data Architectures: Hive/Hadoop, Redis, etc.
  • Experience with Big Data tools and concepts: Spark, HDFS, MapReduce etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Lambda, Kinesis
  • Experience with fast search and analytics engines: Elasticsearch, Lucene, etc.
  • Experience with data streams (Kinesis or Kafka)
  • Experience with data pipeline workflow management tools like AWS Data Pipeline (Airflow a plus)
  • Familiarity with various machine learning concepts a plus
  • Excellent oral and written communication skills
  • Bachelor's Degree in Mathematics, Computer Science/Engineering, or Statistics

Additional Information

All your information will be kept confidential according to EEO guidelines.

Privacy Policy