Senior Cloud Data Engineer

  • Full-time

Company Description

Amol Technologies is a growing global software development company headquartered in Nashik, India. We are a team of highly skilled technologists focused on our clients' needs. With a deep talent pool of experienced software engineers, we have the technical expertise, commitment, and industry knowledge to deliver custom applications on time and within budget.

Job Description

We are looking for a Senior Cloud Data Engineer with 8–12 years of strong experience in designing enterprise-grade data platforms on AWS / GCP / Azure. The candidate must have hands-on expertise in building scalable data lakes, real-time streaming systems, and cloud-native analytics solutions.

Location: Mumbai | Pune | Bangalore

This role requires architecture-level thinking along with hands-on implementation capability.

Key Responsibilities

  • Design and architect end-to-end cloud data platforms
  • Build scalable ETL/ELT pipelines (batch + real-time)
  • Lead migration from on-premises to cloud data platforms
  • Design data lake, lakehouse, and warehouse architectures
  • Implement data governance, security, and compliance policies
  • Optimize performance and cost across cloud services
  • Mentor junior data engineers
  • Work closely with Business, BI, and ML teams
  • Implement CI/CD for data pipelines

Cloud Expertise (At Least One Required)

AWS Stack

Strong experience with:

  • Amazon S3
  • AWS Glue
  • Amazon Redshift
  • Amazon EMR
  • AWS Lambda
  • Amazon Kinesis

GCP Stack

Strong experience with:

  • BigQuery
  • Cloud Storage
  • Dataflow
  • Dataproc
  • Pub/Sub

Azure Stack

Strong experience with:

  • Azure Data Factory
  • Azure Synapse Analytics
  • Azure Data Lake
  • Azure Databricks
  • Azure Event Hubs

Technical Skills

  • Advanced SQL & Data Modeling (Star/Snowflake, Data Vault)
  • Python / PySpark (strong hands-on)
  • Apache Spark (cluster tuning & optimization)
  • Airflow / Orchestration tools
  • Real-time streaming architecture
  • CI/CD (GitHub Actions / Azure DevOps / Jenkins)
  • Docker & Kubernetes
  • REST API integrations
  • Data Quality & Monitoring frameworks
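
As an illustration of the last item above, a data-quality check in a batch pipeline might be sketched as follows (a minimal example with hypothetical column names; production pipelines would more likely use a dedicated framework such as Great Expectations or Deequ):

```python
# Minimal data-quality check: validates a batch of records against two
# simple rules (non-null columns, unique keys) and reports violations.
# Illustrative sketch only, not a full monitoring framework.

def check_batch(records, not_null=(), unique=()):
    """Return a list of human-readable rule violations for a batch."""
    violations = []
    for col in not_null:
        missing = sum(1 for r in records if r.get(col) is None)
        if missing:
            violations.append(f"{col}: {missing} null value(s)")
    for col in unique:
        seen = [r.get(col) for r in records]
        dupes = len(seen) - len(set(seen))
        if dupes:
            violations.append(f"{col}: {dupes} duplicate value(s)")
    return violations

batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 1, "amount": None},  # duplicate key and a null amount
]
print(check_batch(batch, not_null=("amount",), unique=("order_id",)))
# → ['amount: 1 null value(s)', 'order_id: 1 duplicate value(s)']
```

A check like this would typically run as a pipeline step after ingestion, failing the run or raising an alert when violations are found.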

Mandatory Experience

  • 8–12 years in Data Engineering
  • Minimum 3–5 years in Cloud Data Platforms
  • Experience in handling TB–PB scale datasets
  • Experience designing Data Lakes & Warehouses
  • Strong understanding of IAM & security controls
  • Exposure to DevOps & Infrastructure as Code

Good to Have

  • Cloud Certification (AWS/GCP/Azure Professional level)
  • Experience in Banking / FinTech / Healthcare domain
  • Experience with Lakehouse architecture
  • Experience with ML data pipelines


Soft Skills

  • Leadership capability
  • Stakeholder management
  • Excellent communication skills
  • Solution-oriented mindset