Databricks Architect - 15 years exp. - Contract
Job Description
Role: Databricks Architect
Duration: 12 months
Location: Troy, MI (Remote)
Minimum experience: 14-15 years
Role Overview
We are looking for a Databricks Architect to design and lead modern Lakehouse data platforms on Databricks. The role focuses on building scalable, high-performance data pipelines and enabling analytics and AI use cases on a cloud-native data platform.
Key Responsibilities
- Architect and implement Databricks Lakehouse solutions for large-scale data platforms
- Design and optimize batch and streaming data pipelines using Apache Spark (PySpark/SQL)
- Implement Delta Lake best practices (ACID, schema enforcement, time travel, performance tuning)
- Build and manage Databricks jobs, workflows, notebooks, and clusters
- Enable data governance using Unity Catalog (access control, lineage)
- Integrate Databricks with cloud data services (ADLS / S3, ADF, Synapse, etc.)
- Support analytics, BI, and AI/ML workloads (MLflow exposure is a plus)
- Lead solution design discussions and mentor data engineering teams
Must-Have Skills
- 10+ years in data engineering / data architecture
- 5+ years of strong hands-on experience with Databricks
- Expertise in Apache Spark, PySpark, and SQL
- Strong experience with Delta Lake & Lakehouse architecture
- Cloud experience on Azure Databricks / AWS Databricks
- Proven experience in designing high-volume, scalable data pipelines
Good-to-Have
- Unity Catalog, MLflow, Databricks Workflows
- Streaming experience (Kafka / Event Hubs)
- CI/CD for Databricks (Azure DevOps / GitHub)
Additional Information
All your information will be kept confidential according to EEO guidelines.