Data Engineer
- Full-time
- Department: Digital, Data and Cloud
Company Description
Version 1 has celebrated over 26 years in Technology Services and continues to be trusted by global brands to deliver solutions that drive customer success. Version 1 has several strategic technology partners including Microsoft, AWS, Oracle, Red Hat, OutSystems and Snowflake. We’re also an award-winning employer reflecting how employees are at the heart of Version 1.
We’ve been awarded: Innovation Partner of the Year Winner 2023 Oracle EMEA Partner Awards, Global Microsoft Modernising Applications Partner of the Year Award 2023, AWS Collaboration Partner of the Year - EMEA 2023 and Best Workplaces for Women by Great Place To Work in UK and Ireland 2023.
As a consultancy and service provider, Version 1 is a digital-first environment and we do things differently. We’re focused on our core values; using these we’ve seen significant growth across our practices and our Digital, Data and Cloud team is preparing for the next phase of expansion. This creates new opportunities for driven and skilled individuals to join one of the fastest-growing consultancies globally.
Job Description
This is an exciting opportunity for an experienced Data Engineer with a background in large-scale data solutions to join a team delivering transformative solutions for a key Version 1 customer.
We are seeking engineers who can support multiple initiatives and have the flexibility to move between projects to meet delivery demands.
Qualifications
Essential:
- Minimum 8 to 10 years of experience in a Data Engineer role
- Understanding of Snowflake architecture, data modelling and administration.
- At least 2-3 years implementing Snowflake data solutions.
- Experience in designing and implementing efficient ETL/ELT pipelines.
- Experience optimizing the performance of Snowflake databases, including designing and implementing data structures and using indexes appropriately.
- Experience developing data solutions on AWS (AWS Batch/Lambda, RDS, AWS Glue, Redshift (+Spectrum)) or Azure (Azure Data Factory, Synapse).
- Comfortable working with a range of data sources and formats, e.g. JSON, XML, flat files, API integration.
- Understanding of dimensional modelling for Data Warehousing (Kimball).
- Experience in design, development and deployment of data warehouses or relational/analytical/multi-dimensional data marts.
- Strong understanding of BI principles, architectures and enterprise scale BI solutions.
- Advanced RDBMS experience including complex stored procedures, functions, query optimization, indexing strategy etc.
- Proficient in writing SQL, stored procedures and views; creating and optimising complex queries, analysing query performance, and using partitioning and clustering.
- Experience in effectively coaching junior developers.
- Good problem solving and data analysis skills.
- Excellent written and oral communication skills, ability to communicate complex concepts.
- Ability to translate business requirements into technical solutions.
- Full project lifecycle experience, from initial concept through to deployment and support.
Desirable:
- Direct experience with Hadoop, graph databases, AWS Redshift or Azure Synapse is a distinct advantage.
- Familiarity with Databricks.
- Programming experience; in particular with Python.
- Experience working with offshore teams.
- Consultancy experience.
- Knowledge of Agile/Scrum.
- Knowledge of enterprise BI platforms such as Tableau, QlikView, Power BI.