Senior Hadoop Administrator

  • Full-time
  • Department: IT

Company Description

PubMatic is the automation solutions company for an open digital media industry. Featuring the leading omni-channel revenue automation platform for publishers and enterprise-grade programmatic tools for media buyers, PubMatic’s publisher-first approach enables advertisers to access premium inventory at scale. Processing nearly one trillion ad impressions per month, PubMatic has created a global infrastructure to activate meaningful connections between consumers, content and brands. Since 2006, PubMatic’s focus on data and technology innovation has fueled the growth of the programmatic industry as a whole. Headquartered in Redwood City, California, PubMatic operates 11 offices and six data centers worldwide.

Job Description

PubMatic's data center team is looking for a Senior Hadoop Administrator who will be responsible for assisting with the design, implementation, and ongoing support of the company's Big Data platforms.

As a Senior Hadoop Administrator, you will be responsible for installing, configuring, and maintaining multiple Hadoop clusters. You will own the design and architecture of the Big Data platform, work with development teams to optimize Hadoop services, and deploy code to multiple environments.

Major Duties and Responsibilities

  • Evaluate and recommend systems software and hardware for the enterprise, including capacity modeling
  • Architect our Hadoop infrastructure to meet changing requirements for scalability, reliability, performance, and manageability
  • Work with core production support personnel in IT and Engineering to automate deployment and operation of the infrastructure; manage, deploy, and configure infrastructure with Ansible or other automation toolsets
  • Create metrics and measures of utilization and performance
  • Perform capacity planning and implement new and upgraded hardware and software releases, as well as storage infrastructure
  • Monitor the Linux community and report important changes and enhancements to the team
  • Work closely with a global team of highly motivated and skilled personnel; interaction and dialog are essential in this dynamic environment
  • Research and recommend innovative and, where possible, automated approaches to system administration tasks; identify approaches that leverage our resources, provide economies of scale, and simplify remote and global support

Qualifications

Minimum Qualifications

  • 7+ years of professional experience supporting medium- to large-scale production Linux environments
  • 5+ years of professional experience working with Hadoop (HDFS and MapReduce) and the related technology stack
  • A deep understanding of Hadoop design principles, cluster connectivity, security, and the factors that affect distributed system performance
  • MapReduce development experience is a plus
  • Solid understanding of automation tools (Puppet, Chef, Ansible)
  • Expertise with at least one, if not most, of the following languages: Python, Perl, Ruby, or Bash
  • Prior experience with remote monitoring and event handling using Nagios and the ELK stack
  • Solid ability to create automation with Chef, Puppet, Ansible, or shell scripts
  • Good collaboration & communication skills, with the ability to participate in an interdisciplinary team
  • Strong written communications and documentation skills
  • Knowledge of best practices related to security, performance, and disaster recovery

Additional Information

All your information will be kept confidential according to EEO guidelines.