- Wood Ln, Shepherd's Bush, London W12, UK
Publicis Media is one of the four solutions hubs of Publicis Groupe, alongside Publicis Communications, Publicis.Sapient and Publicis Healthcare. Led by Steve King, CEO, Publicis Media comprises Starcom, Zenith, Digitas, Spark Foundry, Blue 449 and Performics, powered by digital-first, data-driven global practices that together deliver client value and business transformation. Publicis Media is committed to helping its clients navigate the modern media landscape and is present in more than 100 countries with over 23,500 employees worldwide.
Starcom is the Human Experience Company. A world-renowned media communications agency, we believe the alchemy of people and technology creates experiences people love, and actions brands need. With more than 5,000 employees worldwide, Starcom partners with the world's leading marketers and new establishment brands, including Airbnb, Coca-Cola, Fiat Chrysler Automobiles, Kellogg Company, Kraft Heinz, Samsung, Visa and more. Starcom is part of Publicis Media, one of four solution hubs within Publicis Groupe, and has offices within Publicis One.
We are looking for a Data Engineer to join our Data Team. The successful candidate will be responsible for optimizing our data and data pipeline architecture, and for improving data flow and collection for cross-functional teams.
The Data Engineer will sit within the Data Science team and support data collection, manipulation and analysis across different brand teams in an FMCG context.
The Data Engineer will support the redesign of internal processes and may be involved in automating tasks for the operations team.
They must be comfortable supporting the data needs of multiple teams, systems and products.
- Work within the data science team and very closely with data analysts to support greater functionality in our data systems and models;
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc;
- Develop, test and maintain optimal data pipeline architectures (such as large-scale processing systems) and ensure that these meet business requirements;
- Assemble large, complex data sets leveraging internal and external data sources;
- Build or support the infrastructure required for optimal ETL from a wide variety of data sources using appropriate technologies;
- Understand and address data security issues.
We are looking for a candidate with a degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, and with working experience using the following software and tools:
- Relational SQL and NoSQL databases: Postgres, Google BigQuery, MongoDB, etc.
- AWS cloud services (EC2, EMR, RDS, Redshift) and/or Google Cloud Services;
- Programming and scripting languages: Python, Java, C++, Scala, etc.;
- Big data tools: Hadoop, Spark, Kafka, etc.
- Good knowledge of software architecture design for larger-scale scripts;
- Time management and organizational skills: the candidate must be able to scope projects and proactively identify project opportunities;
- Ability to produce documentation;
- Experience supporting and working with cross-functional teams in a dynamic environment, preferably in Media;
- Microsoft Excel (some knowledge of or previous experience with VBA is also a plus).
Additional desired skills
- Ability to analyse and gain insights from data;
- Interest in Data Visualization;
- An understanding of machine learning and statistical modelling techniques;
- FMCG experience is also a plus.