AI Engineer
- Full-time
Company Description
Kata.ai is an Indonesian Conversational Artificial Intelligence company focused on creating technology that enhances the understanding of human conversations and improves the way humans collaborate with machines. Kata.ai's Natural Language Processing (NLP) technology powers multi-purpose chatbots (virtual customer service / virtual friend) for major Indonesian corporations across industries, such as Unilever (FMCG), Telkomsel (Telco), Bank BRI (Financial Services), and Alfamart (Retail).
The company's proprietary Kata Bot Platform can be leveraged to create feature-rich chatbots on top of Kata.ai's robust and scalable AI technology platform, ensuring that companies of any size can easily build their own chatbots on any messaging platform. With this platform, businesses can focus on designing engaging interactions for their customers, while Kata.ai handles all the technology aspects of the chatbots.
Established in 2015, the company has become a trusted partner for major corporations such as Microsoft, Accenture, and Line. In 2020, the company received Series B funding from TransPacific Technology Fund and MDI Ventures.
Job Description
You will design, build, and deploy production-grade AI systems, including LLM-powered conversational agents, RAG pipelines, NLP workflows, and voice AI integrations. These systems deliver intelligent, reliable, and measurable AI solutions for enterprise clients in the government, financial services, healthcare, and telecommunications sectors, enabling Kata's clients to automate customer interactions at scale with high accuracy, low latency, and strong business impact.
Qualifications
Qualifications & Education :
- Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, Computational Linguistics, or related field
- Master's degree in AI/ML is a plus
- Relevant certifications (GCP AI/ML, DeepLearning.AI, etc.) are advantageous
Technical Skills :
- LLM Integration: OpenAI GPT-4o, Anthropic Claude, Google Gemini, or open-source models (LLaMA, Mistral, Qwen)
- AI Frameworks: LangChain, LlamaIndex, CrewAI, or similar agent/RAG orchestration frameworks
- Prompt Engineering: System prompt design, few-shot prompting, chain-of-thought, structured output (JSON mode, function calling)
- RAG Pipelines: Document chunking, embedding strategies, retrieval optimization, reranking
- Vector Databases: Pinecone, Weaviate, Qdrant, or pgvector
- Voice AI: LiveKit Agents SDK, STT integrations (Deepgram, Google Speech-to-Text, Whisper), TTS integrations (ElevenLabs, Google TTS)
- Languages: Python (required); FastAPI for AI service exposure
- Cloud: GCP or Azure for AI/ML workload deployment — Vertex AI, Azure OpenAI, Cloud Run
- Evaluation Frameworks: RAGAS, DeepEval, custom eval pipelines, or LLM-as-judge approaches
- Containerization: Docker; basic Kubernetes for deploying AI services
- Monitoring: AI-specific observability — LangSmith, Langfuse, or custom logging for tracing LLM calls in production
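To make the RAG expectations above concrete, here is a minimal, self-contained sketch of the chunk → embed → retrieve flow. It is purely illustrative: it uses fixed-size word windows and a toy bag-of-words "embedding" with cosine similarity, whereas production pipelines at this level would use a learned embedding model and one of the vector databases listed (Pinecone, Weaviate, Qdrant, or pgvector). All function names here are hypothetical.

```python
# Toy sketch of a RAG retrieval step: overlapping fixed-size chunking,
# bag-of-words "embeddings", and cosine-similarity top-k retrieval.
# Real systems replace embed() with a learned embedding model and
# store vectors in a vector database instead of a Python list.
import math
from collections import Counter


def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]


def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

In an interview setting, a candidate would be expected to discuss where this sketch breaks down, e.g. why semantic embeddings beat lexical overlap, how chunk size and overlap affect recall, and where a reranking stage fits.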
Additional Information
Skills & Competencies :
- Strong analytical thinking — able to diagnose and improve AI system behavior through systematic evaluation and iteration
- Ability to balance research-oriented experimentation with production-grade engineering rigor
- Good communication skills — able to explain AI system behavior, limitations, and tradeoffs clearly to non-technical Product and Project stakeholders
- Quality-first mindset — proactively defines success metrics and failure modes before building
- Collaborative and cross-functional — works closely with Backend, Frontend, and QA to ensure AI components integrate cleanly into the full product
- Agile/Scrum mindset with comfort operating in sprint-based delivery cycles
- Keeps up with the fast-moving AI landscape and proactively identifies relevant new tools, models, or techniques applicable to the team