Senior Data Engineer - Azure

  • Full-time

Company Description

Merkle | Aquila is a data analytics company focused on extracting the maximum value from data, translating it into decisions which empower our clients to take better actions.

We are a growing team based in Edinburgh, London, Derby and Bristol. As we expand into new markets, we are always on the lookout for talented and experienced practitioners to work on a number of exciting projects. We offer a relaxed but dynamic environment, with a culture of mutual respect and support. Our people love working here because of the interesting variety of our projects, the great people and the freedom to deliver.

In June 2017 we joined the Merkle organisation as part of its drive to become a worldwide leader in personalised data-driven digital marketing. Merkle, part of the Dentsu Aegis Network, is a major global player in this market, and together we are working to build a unified operation with market-leading capabilities, delivering best-in-class services worldwide.

Job Description

The Role

We are looking for candidates with experience in deploying enterprise level cloud-based data platforms to join our growing Data Engineering team at Merkle|Aquila. Successful candidates will understand modern data platform architecture and have solved complex technical problems to deliver on our clients’ requirements.  

You must be a team player, developing simple, reusable and scalable solutions using appropriate technologies and engineering best practices.

Life as a Senior Data Engineer at Merkle | Aquila

In this role you will have the opportunity to work with clients from a wide range of sectors, understand their specific requirements, liaise with data scientists and analytical consultants, and design, develop and deliver technical solutions. You will typically work with a few clients across different stages of maturity, and the work calls on a wide-ranging set of skills.

The main remit of the role is to ingest, transform, and prepare data from different source systems for consumption by analytical processes. You will be required to transition analytical datasets into automated load processes, to be surfaced and presented to end-user systems, collection endpoints, and/or reporting tools. You will also support the configuration and deployment of the infrastructure architecture and help to configure the systems used by the analytics team.

You will be offered the opportunity to learn new skills through in-house and external training providers. You will work with cloud platforms such as Microsoft Azure and AWS, and tools such as Databricks. You will deliver prepared analytical datasets and code, and provision and maintain BI reporting platforms such as Tableau and Power BI. You will do this as part of a dynamic, highly skilled, and flexible team working on exciting and challenging projects.

A few of the benefits

Whether it’s the joy of working with people at the top of their game or the Merkle | Aquila social calendar, people love working here – and we hope you will too!

  • Career development through Merkle University and other tools, with access to courses, textbooks and mentorship
  • Company Pension, life insurance, health insurance and other corporate benefits
  • 28 days Annual Leave plus your birthday off
  • Free breakfast, fruit and most importantly, biscuits!
  • Four Thirsty - have a drink on us on a Friday afternoon!
  • A selection of other industry standard benefits

Qualifications

What we are looking for in you

You will have a deep understanding of distributed computing, data and application architectures, basic networking, security, and infrastructure. You will demonstrate excellent problem-solving skills and have experience writing code, ETL processes, and monitoring processes in a production-ready environment.

Ideally, you will be able to demonstrate knowledge of Big Data with a focus on data engineering; experience optimizing data pipelines, ETL, data modelling and analytical tools is a plus.

Essential: 

  • Experience implementing production-level deployments at enterprise scale on any cloud platform (Azure desirable)
  • Experience of IaaS components, SaaS and IaC
  • Experience of CI/CD pipelines, e.g. Jenkins, Docker, DevOps, Bitbucket
  • A strong understanding of data modelling, data structures, databases, data lakes, and data ingestion and transformation routines
  • Cross-functional teamwork, both internally and with external clients
  • Proficiency working in an Agile environment 

Desirable:

  • Scala/PySpark/Python programming in Apache Spark 
  • C#, Node.js, or similar application development experience
  • Data Warehouse design and/or implementation 
  • Data Factory, Glue, or SSIS ETL package development 

Additional Information

At the point of application, the candidate must have the legal right to work in the UK, as we are unable to sponsor visas at this time.

Merkle does not discriminate against job applicants on the basis of age, disability, gender reassignment, marital or civil partner status, pregnancy or maternity, race, colour, nationality, ethnic or national origin, religion or belief, sex or sexual orientation. Experience stipulated in this job description serves as a guide only and all applications will be considered on their merits, irrespective of experience. 

As part of our Diversity and Inclusion agenda, and as an Equal Opportunities employer, if you require reasonable adjustments during the selection process please engage directly with your Recruiter.
