Data Engineer - GCP

  • Full-time
  • Role: Data Engineer - GCP

Company Description

T-Systems Information and Communication Technology India Private Limited (T-Systems ICT India Pvt. Ltd.) is a proud recipient of the prestigious Great Place To Work® Certification™. A wholly owned subsidiary of T-Systems International GmbH, T-Systems India operates across Pune, Bangalore, and Nagpur, with a dedicated team of 3,500+ employees providing services to group customers. T-Systems offers integrated end-to-end IT solutions, driving the digital transformation of companies in all industries, including automotive, manufacturing, logistics, and transportation, as well as healthcare and the public sector, and develops vertical, company-specific software solutions for these sectors. T-Systems International GmbH is an information technology and digital transformation company with a presence in over 20 countries and revenue of more than €4 billion. With over 20 years of experience in the transformation and management of IT systems, T-Systems is a world-leading provider of digital services. As a subsidiary of Deutsche Telekom and a market leader in Germany, T-Systems International offers secure, integrated information technology and digital solutions from a single source.

Job Description

We are looking for an experienced Data Engineer on GCP to join a fast-paced, innovative cloud enablement team supporting the migration of on-prem Cloudera workloads to Google Cloud Platform using serverless orchestration.

The successful candidate will have a minimum of 6 years' experience, with the last 2-3 years as a Data Engineer on GCP.

Mandatory Skills & Experience:

  • Experience with BigQuery, an ML platform (Vertex AI or similar), Dataflow, and Dataproc.
  • Expertise in architecting solutions for modern big data applications on on-prem and cloud platforms.
  • Experience migrating on-prem Hadoop workloads to cloud platforms.
  • Proven experience designing and building scalable infrastructure and platforms to collect, process, and analyse very large volumes of data (structured, unstructured, and real-time streaming data).
  • Knowledge of in-memory processing systems such as Apache Spark and Apache Beam (Dataflow on GCP), alongside BigQuery.
  • Expert-level skills with streaming pipelines using Pub/Sub and Dataflow or similar technologies; a minimal sketch follows this list.
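
As a rough, non-authoritative illustration of the streaming-pipeline skill above, here is a minimal Apache Beam (Python) sketch that reads JSON events from a Pub/Sub subscription, applies fixed windows, and appends rows to a BigQuery table. All resource names below (project, subscription, bucket, table) are hypothetical placeholders, not details of this role.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows


def run():
    # Hypothetical project/bucket/region values for illustration only.
    options = PipelineOptions(
        streaming=True,
        project="my-gcp-project",
        runner="DataflowRunner",            # swap in "DirectRunner" for local tests
        region="europe-west1",
        temp_location="gs://my-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            # Read raw bytes from a (hypothetical) Pub/Sub subscription.
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-gcp-project/subscriptions/events-sub")
            # Decode and parse each message as a JSON object.
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Group the unbounded stream into 1-minute fixed windows.
            | "Window" >> beam.WindowInto(FixedWindows(60))
            # Append rows to an existing (hypothetical) BigQuery table.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()

On Dataflow this runs as a fully managed streaming job; the same pipeline can be exercised locally with the DirectRunner before deployment.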


Desired Skills & Experience:

  • Experience with Dataflow, Apache Spark, BigQuery, Pub/Sub, and Kafka.
  • Experience with document, key-value, and relational stores.
  • Experience with SAFe/Spotify-based Agile methodologies.