Staff Systems Engineer - Hadoop Administration
Visa is a world leader in digital payments, facilitating more than 215 billion payments transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable and secure payments network, enabling individuals, businesses and economies to thrive.
When you join Visa, you join a culture of purpose and belonging – where your growth is priority, your identity is embraced, and the work you do matters. We believe that economies that include everyone everywhere, uplift everyone everywhere. Your work will have a direct impact on billions of people around the world – helping unlock financial access to enable the future of money movement.
Join Visa: A Network Working for Everyone.
Product Reliability Engineering (PRE) is part of Visa's technology organization. The division is responsible for maintaining and supporting Visa's data assets and provides support for value-added products and services to drive innovation for our partners and clients, both within Visa and globally. The Product Reliability Engineering Big Data Platform Team is part of PRE and supports open source Big Data and Kafka clusters at Visa.
As a Staff Big Data Engineer, you will be responsible for monitoring, troubleshooting, automating, and continuously developing software tools to improve the availability and resiliency of open source Big Data platforms at Visa. In this hands-on role, you will administer these platforms, ensure their performance and reliability, and increase their operational efficiency.
- Perform Big Data administration and engineering activities on multiple open source Hadoop, Kafka, HBase and Spark clusters.
- Demonstrate strong troubleshooting and debugging skills.
- Collaborate across teams: build and maintain relationships with customer teams, the user community, architects, and engineering teams, and jointly work on key deliverables to ensure production scalability and stability.
- Perform effective root cause analysis of major production incidents and develop learning documentation.
- Identify and implement high-availability (HA) solutions for services with single points of failure (SPOF).
- Plan and perform capacity expansions and upgrades in a timely manner, avoiding scaling issues and bugs.
- Automate repetitive tasks to reduce manual effort and avoid human error.
- Tune alerting and set up observability to proactively identify issues and performance problems.
- Work closely with L3 teams in reviewing new use cases and cluster hardening techniques to build robust and reliable platforms.
- Leverage DevOps tools, disciplines (incident, problem and change management) and standards in day-to-day operations.
- Ensure the Hadoop platform can effectively meet performance and SLA requirements.
- Perform security remediation, automation and self-healing as required.
This is a hybrid position. Hybrid employees can alternate their time between remote and office work. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
Basic Qualifications: 5+ years of relevant work experience and a Bachelor's degree, OR at least 2 years of work experience with an Advanced degree (e.g. Masters, MBA, JD, MD), OR 0 years of work experience with a PhD, OR 8+ years of relevant work experience.
Preferred Qualifications: 6 or more years of work experience with a Bachelor's Degree, or 4 or more years of relevant experience with an Advanced Degree (e.g. Masters, MBA, JD, MD), or up to 3 years of relevant experience with a PhD.
Hands-on experience working as a Hadoop systems engineer managing Hadoop platforms.
Experience in building, managing and tuning performance of Hadoop platforms.
Extensive knowledge of the Hadoop ecosystem, including ZooKeeper, HDFS, YARN, Hive and Spark.
Excellent shell and Python programming skills for automating repetitive DevOps tasks.
Understanding of security tools like Kerberos and Ranger.
Experience with the Hortonworks distribution or open source Hadoop preferred.
Hands-on experience in debugging Hadoop issues both on platform and applications.
Knowledge of Kafka, HBase and Kubernetes is a plus.
Understanding of Linux, networking, CPU, memory and storage.
Knowledge of Java and Python is good to have.
Excellent interpersonal, verbal, and written communication skills.
This position is not ideal for a Hadoop developer.
Work Hours: Varies upon the needs of the department.
Travel Requirements: This position requires travel 5-10% of the time.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
Visa will consider for employment qualified applicants with criminal histories in a manner consistent with applicable local law, including the requirements of Article 49 of the San Francisco Police Code.

U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is 117,700.00 to 153,000.00 USD, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401(k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.