DevOps Engineer
- Location: Head Office
- Work Schedule: Full-time
Company Description
We are TBC — a technology company where a bold and determined team creates customer-oriented services, products, and opportunities.
Through innovation and technology, we fulfill our mission and make people’s lives easier.
We support your growth, help you achieve your goals, and empower you to create your own success story — all that matters is that you believe.
Job Description
We are looking for a DevOps Engineer to join our Data Department, which is building Big Data and AI Platforms from scratch.
You will play a key role in designing, deploying, and scaling the core infrastructure that powers big data processing, machine learning pipelines, and enterprise analytics.
This position is centered on Microsoft Azure and involves collaborating with data engineers, ML engineers, and analytics teams to build a secure, automated, and cost-efficient foundation for the organization’s future Data & AI ecosystem.
Key Responsibilities
- Design, deploy, and operate core infrastructure for a new Data and AI Platform — covering data ingestion, transformation, ML model training, and analytical workloads.
- Architect and manage Azure resources — including subscriptions, IAM, networking, monitoring, and FinOps governance — to support large-scale data and ML environments.
- Build and operate Kubernetes-based platforms (AKS) to orchestrate microservices across data, ML, and analytics layers (see the pod-health sketch after this list).
- Implement and maintain microservices and event-driven architectures, leveraging Ingress controllers, service meshes, and distributed load balancing.
- Develop and maintain Infrastructure as Code (IaC) using Terraform and Terragrunt, building modular, reusable, and environment-specific components that follow the DRY principle.
- Establish GitOps workflows with Argo CD and Azure DevOps, ensuring fully automated, auditable, and consistent deployments across all environments (a sync-status sketch follows this list).
- Implement monitoring and observability stacks (Prometheus, Grafana, Azure Monitor) for end-to-end visibility into data, compute, and network layers (an exporter sketch follows this list).
- Apply FinOps principles: perform cost analysis, tagging, budgeting, and optimization (a tag-audit sketch follows this list).
- Collaborate with Data and ML teams to deploy and manage core platforms and tools such as Databricks, MLflow, and vector-enabled databases for AI workloads.
- Manage high-performance load balancers for real-time ML inference and large-scale data services.
- Ensure secure network architecture across hybrid environments, managing VNETs, subnets, private endpoints, DNS (Azure Private Resolver), and VPN routing.
- Contribute to the design of scalable, cost-effective, and reliable data infrastructure — from concept to production.
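To make a few of the responsibilities above concrete, here are short illustrative sketches; they are minimal examples of the kind of automation this role involves, not prescribed implementations. First, for the Kubernetes item: a pod-health check using the official kubernetes Python client, assuming kubeconfig access to the AKS cluster.

```python
# Minimal sketch: list pods that are not Running or Succeeded across the cluster.
# Assumes kubeconfig access (pip install kubernetes); adapt to your environment.
from kubernetes import client, config

def list_unhealthy_pods():
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    list_unhealthy_pods()
```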
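For the GitOps item, a sketch of an auditability check against Argo CD's REST API. The host URL and token are placeholders, and the response shape is assumed from the Argo CD application API; verify against your instance before relying on it.

```python
# Minimal sketch: report Argo CD applications that are not Synced and Healthy.
# ARGOCD_URL is a hypothetical host; ARGOCD_TOKEN is a read-only API token.
import os
import requests

ARGOCD_URL = os.environ.get("ARGOCD_URL", "https://argocd.example.internal")
TOKEN = os.environ["ARGOCD_TOKEN"]

def report_drift():
    resp = requests.get(
        f"{ARGOCD_URL}/api/v1/applications",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for app in resp.json().get("items", []):
        name = app["metadata"]["name"]
        sync = app.get("status", {}).get("sync", {}).get("status", "Unknown")
        health = app.get("status", {}).get("health", {}).get("status", "Unknown")
        if sync != "Synced" or health != "Healthy":
            print(f"{name}: sync={sync}, health={health}")

if __name__ == "__main__":
    report_drift()
```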
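For the observability item, a sketch of a custom exporter built on the prometheus_client library; the metric name and the lag source are illustrative placeholders for whatever platform signal needs end-to-end visibility.

```python
# Minimal sketch: expose a custom platform metric for Prometheus to scrape.
# pip install prometheus-client; the lag values here are simulated stand-ins.
import random
import time
from prometheus_client import Gauge, start_http_server

PIPELINE_LAG = Gauge(
    "data_pipeline_lag_seconds",  # hypothetical metric name
    "Ingestion lag per pipeline, in seconds",
    ["pipeline"],
)

def collect_lag() -> dict:
    # Stand-in for a real probe (e.g. queue depth or job scheduler timestamps).
    return {"events": random.uniform(0, 120), "cdc": random.uniform(0, 30)}

if __name__ == "__main__":
    start_http_server(9100)  # metrics become available at :9100/metrics
    while True:
        for pipeline, lag in collect_lag().items():
            PIPELINE_LAG.labels(pipeline=pipeline).set(lag)
        time.sleep(15)
```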
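For the FinOps item, a sketch of a tag audit over a single Azure subscription using the azure-identity and azure-mgmt-resource SDKs; the required-tag set is a hypothetical tagging policy, not a TBC standard.

```python
# Minimal sketch: flag Azure resources missing the tags used for cost allocation.
# pip install azure-identity azure-mgmt-resource; requires reader access.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAGS = {"cost-center", "environment", "owner"}  # hypothetical policy
SUBSCRIPTION_ID = os.environ["AZURE_SUBSCRIPTION_ID"]

def find_untagged_resources():
    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    for resource in client.resources.list():
        missing = REQUIRED_TAGS - set((resource.tags or {}).keys())
        if missing:
            print(f"{resource.id}: missing tags {sorted(missing)}")

if __name__ == "__main__":
    find_untagged_resources()
```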
Qualifications
- 3+ years of hands-on DevOps experience, preferably in data or AI-focused environments.
- Proven experience building or scaling cloud-native data and ML platforms from the ground up.
- Deep knowledge of cloud infrastructure concepts, governance, and automation.
- Strong background in Linux administration, containerization, and Kubernetes orchestration (AKS, on-prem, or hybrid).
- Expertise with Terraform and Terragrunt, building modular IaC for multi-environment deployments.
- Proficiency with CI/CD and GitOps automation using Argo CD, Flux, or Azure DevOps.
- Hands-on experience with observability tools (Prometheus, Grafana, Azure Monitor).
- Solid understanding of networking principles (VNETs, DNS, routing, VPNs, firewalls).
- Excellent problem-solving, automation, and documentation skills.
Additional Information
Nice to Have
- Experience with Databricks for data engineering, analytics, or ML workflows.
- Exposure to data lakehouse architectures (Delta Lake, Parquet, Synapse).
- Experience with FinOps automation tools (Kubecost, CloudHealth, Azure Cost Management).
- Familiarity with policy-as-code frameworks (OPA, Azure Policy, Conftest).
- Experience with service mesh and API gateway technologies (Istio, Linkerd, Kong).
- Familiarity with autoscaling frameworks (Karpenter, Cluster Autoscaler) for dynamic data workloads.
- Experience implementing FinOps practices — cost monitoring, usage optimization, and budgeting.
- Experience with message brokers and streaming platforms (Kafka, NATS, Azure Service Bus).
- Collaboration experience with Data Engineering, Analytics, or MLOps teams.
- Exposure to LLM-based or AI-driven workloads and model deployment strategies.
TBC processes the personal data of the candidate in order to determine the suitability of the candidate for the vacancy, in accordance with the requirements of the Law of Georgia on Personal Data Protection. Information about the candidate may also be processed to determine suitability for future vacancies, and is stored for a maximum of 3 years. If you do not want further data processing, or wish to change or delete your data, please contact us through the communication channels listed at the following link: https://tbcbank.ge/en/privacy-policy
TBC shares candidate information with companies included in TBC Bank Group PLC; these subsidiary companies also process personal data in accordance with the law. If you do not wish to share your data with TBC Group companies, please contact us via the same link: https://tbcbank.ge/en/privacy-policy
TBC conducts its candidate selection process in compliance with the requirements of the Law on the Elimination of All Forms of Discrimination and the principles of equal treatment, and does not discriminate against any person on any grounds.
If you see signs of discriminatory treatment, contact us: 032 2 27 27 27 *1998.
We will contact only those candidates who pass the first stage of selection and are granted candidate status.