Data Engineer - Fraud AI Platform

  • Jakarta, Indonesia
  • Full-time

Company Description

Traveloka is a technology company based in Jakarta, Indonesia. Founded in 2012 by ex-Silicon Valley engineers, the company aims to revolutionize human mobility with technology. Today, Traveloka is expanding its reach by operating in six countries and experimenting with new endeavors that will create a large impact on the markets and industries we touch.

Job Description

The Data Team at Traveloka has seen enormous growth over the past two years and now comprises a large, diverse team of engineers, data scientists, machine learning engineers, analysts, and product managers across the Platforms, Data Science, Machine Learning and AI, and Analytics groups of the organization.

The Data team separates signal from noise in our structured and unstructured data. We research, shape, and use this data to create algorithms, insights, and platforms that make people's lives better by empowering discovery and mobility.

Your Mission

Your primary mission is to:

  • Build data pipelines that aid our data scientists and product managers in fraud discovery and intelligence. 
  • Create data stores that handle real-time streaming of petabytes of data as input for training and testing machine learning models. 
  • Create a deployment framework that enables real-time retrieval as a source of input for production and re-evaluation of machine learning models. 

The team / What's in it for you! 

The team comprises (senior) data scientists and machine learning engineers, and is looking to scale horizontally. The team currently embraces a squad structure and aims to solve problems with a long-term product strategy.

The team believes it is always about the problem, not the person; it strongly encourages criticism, dislikes bureaucracy, and is a strong advocate of data-driven decision making. 

You will have the opportunity to:

  • Accelerate your career growth and steepen your learning curve. 
  • Help build data products from 0 to 1 and create impactful work. 
  • Enjoy a large degree of freedom in choosing your suite of tools to implement your own ideas and solutions. 

Qualifications

  • 4+ years of experience building data pipelines to support analytics and machine learning models. 
  • Understands the trade-offs between choices of tools and systems: for example, batch vs. real-time processing, Beam vs. Spark vs. SQL, REST vs. gRPC, or Bigtable vs. BigQuery vs. Cloud Storage as a data store. 
  • Familiar with asynchronous systems, serverless data processing tools, data freshness, and data retrieval. 
  • Proficient in programming languages such as Java, Python, or Go, or in data processing languages.
  • Knowledgeable in machine learning model lifecycle management and machine learning toolkits (such as Kubeflow).
  • Unafraid of ambiguity; comfortable working in a rapidly changing environment. 


Preferred Qualifications

  • A go-getter who does not wait for things to happen, but takes full control and ownership. 
  • Actively seeks out new knowledge and technologies, and understands things from first principles.
  • Familiarity with data version control solutions such as Pachyderm.
  • Understands and can navigate cloud technologies; experience with GCP in particular will be a strong plus. 
  • Ability to understand the context of the business, the product, and the data being generated. 
  • Strong communicator with a proven track record of building products from 0 to 1. 

Additional Information

Join our ambitious, growing team in building high-impact consumer products and services, building real-time and big data systems and platforms, coming up with creative solutions to business and engineering problems, deriving insights from massive amounts of data, and transforming industries with technology.