Data Engineer (Data Platform)
- Carrer de la Ciutat de Granada, Barcelona, Spain
- Department: Technology
- Contract Type: Long term/Permanent contract
Adevinta is a marketplace specialist, operating digital marketplaces in 16 countries in Europe, Latin America and North Africa. Our leading local brands include Leboncoin in France, InfoJobs in Spain, Subito in Italy, Jofogás in Hungary, and Segundamano in Mexico, among many others. Adevinta’s local marketplaces thrive through global connections and networks of knowledge.
One of the missions of Adevinta's Hub is to develop the global product platforms and technology infrastructure necessary to create developer pipelines, big data processing, media management, payment, security and identity systems. With over 250 million monthly active users, we are able to harness huge amounts of data to provide insights on a global scale. Combined with our deep local expertise, this is a winning combination.
We are currently looking for a highly experienced data engineer to join our team of data engineers in Barcelona. As a member of the Data team, you will join our journey as we discover how to build state-of-the-art data processing solutions for Adevinta, with emphasis on volume, velocity and privacy. The event data we collect from all Adevinta sites around the world is essential to our business, feeding marketplaces with low-latency ("realtime") updates, visitor insights analysis and targeted advertising. We solve exciting problems at scale, gathering up to 900 million events per day worldwide while keeping users' privacy and data security in mind.
You will help develop a modern data processing pipeline at scale for Adevinta sites around the world, for a variety of purposes such as classification, insights, and understanding and modelling user behaviour. As part of the global Adevinta organisation, you will also have the opportunity to learn from and share knowledge with data scientists and engineers across Adevinta. We encourage a diverse, collaborative and creative work environment, where you will develop and push the state of the art in big data processing while building reliable and highly scalable services.
Join us to create an amazing data platform team.
- A BSc degree in Computer Science (or equivalent job experience)
- Strong analytical / problem solving skills
- Experience with batch and streaming data processing tools (Spark, Kafka, Kafka Streams, Luigi, etc.)
- Experience with “big data” and NoSQL data store design and optimisation (S3, Hive, Redshift, Cassandra, etc.)
- Experience with AWS or any other cloud platform
- Experience with Agile methodologies
- Familiarity with SOLID principles and test-driven development (TDD)
- Understanding of Software Development best practices
- Proven ability and experience developing highly structured computer programs (preferably in Scala), with an interest in learning more
- Experience with AMQP messaging technologies (e.g. RabbitMQ)
- Linux, shell scripting
- Kubernetes (K8s) experience
- Experience in building and maintaining systems at scale: service discovery, load balancing, secret management, dynamic request routing, circuit breakers and deployment schemes (rolling updates, canary, etc.)
- Experience with modern development tools like Git, Travis, Terraform or equivalent
- Invent: identify hidden areas of improvement in any process or system, including changing established rules or procedures
- Create: lead the creation of data processing services and exploration tools
- Protect: improve the reliability and availability of Adevinta systems by gathering hard data, designing systems and creating or adapting code for increased service reliability and performance
- Survey: Implement monitoring solutions so that production systems can be observed 24/7
- Share: Share knowledge and advise Adevinta engineers on how best to use the data platform tools, answer their engineering questions, and foster a mixed development and operations culture
- Engineer: Install, configure, fine-tune and optimise a wide range of technology solutions