Staff Systems Engineer (Big Data Administration)
- Full-time
- Job Family Group: Technology and Operations
Company Description
As the world's leader in digital payments technology, Visa's mission is to connect the world through the most creative, reliable and secure payment network - enabling individuals, businesses, and economies to thrive. Our advanced global processing network, VisaNet, provides secure and reliable payments around the world, and is capable of handling more than 65,000 transaction messages a second. The company's dedication to innovation drives the rapid growth of connected commerce on any device, and fuels the dream of a cashless future for everyone, everywhere. As the world moves from analog to digital, Visa is applying our brand, products, people, network and scale to reshape the future of commerce.
At Visa, your individuality fits right in. Working here gives you an opportunity to impact the world, invest in your career growth, and be part of an inclusive and diverse workplace. We are a global team of disruptors, trailblazers, innovators and risk-takers who are helping drive economic growth in even the most remote parts of the world, creatively moving the industry forward, and doing meaningful work that brings financial literacy and digital commerce to millions of unbanked and underserved consumers.
You're an Individual. We're the team for you. Together, let's transform the way the world pays.
Job Description
Essential Functions
- Perform Big Data administration and engineering activities on multiple Hadoop, Kafka, HBase, and Spark clusters
- Perform performance tuning and continuously improve operational efficiency
- Monitor platform health, generate performance reports, and drive continuous improvements
- Work closely with development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
- Develop and enhance platform best practices
- Ensure the Hadoop platform can effectively meet performance and SLA requirements
- Take responsibility for the Big Data production environment, which includes Hadoop (HDFS and YARN), Hive, Spark, Livy, Solr, Oozie, Kafka, Airflow, NiFi, HBase, etc.
- Perform optimization, debugging and capacity planning of a Big Data cluster
- Perform security remediation, automation, and self-healing as required
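In practice, duties like monitoring platform health and generating performance reports often come down to small shell/awk routines wrapped around the stock Hadoop CLIs. The sketch below is illustrative only, not Visa tooling: it parses the kind of output `hdfs dfsadmin -report` produces (a sample is inlined so the script is self-contained) and flags high HDFS utilization. The function names and the 85% threshold are assumptions for the example.

```shell
#!/bin/sh
# Illustrative health-check sketch: in production the report text would come
# from `hdfs dfsadmin -report`; here a sample is inlined for demonstration.

# Extract the aggregate "DFS Used%" value (first occurrence) from a report on stdin.
dfs_used_pct() {
    awk -F': ' '/^DFS Used%/ { gsub(/%/, "", $2); print $2; exit }'
}

# Extract the datanode count from the "Live datanodes (N):" summary line.
live_datanodes() {
    awk '/^Live datanodes \(/ { gsub(/[^0-9]/, ""); print; exit }'
}

# Sample fragment of `hdfs dfsadmin -report` output (values are made up).
SAMPLE_REPORT='Configured Capacity: 1099511627776 (1 TB)
DFS Used: 879609302220 (819.2 GB)
DFS Used%: 80.00%
Live datanodes (3):'

used=$(printf '%s\n' "$SAMPLE_REPORT" | dfs_used_pct)
nodes=$(printf '%s\n' "$SAMPLE_REPORT" | live_datanodes)
echo "DFS used: ${used}% across ${nodes} live datanodes"

# Assumed example threshold of 85%; a real check would feed an alerting system.
if awk -v u="$used" 'BEGIN { exit (u >= 85) ? 0 : 1 }'; then
    echo "ALERT: HDFS utilization above threshold"
fi
```

A real deployment would run a script like this from cron or a monitoring agent and page on the alert path rather than echoing to stdout.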
Qualifications
Basic Qualifications:
- 4 years of work experience with a Bachelor's Degree or at least 2 years of work experience with an Advanced degree (e.g. Masters, MBA, JD, MD) or 0 years of work experience with a PhD degree
Preferred Qualifications:
- 7-10 years of work experience and a Bachelor's Degree or 6 years of work experience with an Advanced Degree (e.g. Masters, MBA, JD, MD) or 3 years of experience with a PhD.
- Minimum 6 years of work experience in maintaining, optimizing, and resolving issues on large-scale Big Data clusters, supporting business users and batch processes
- Hands-on experience with NoSQL databases (HBase) is a plus
- Prior experience with Linux/Unix OS services and administration, as well as shell and awk scripting, is a plus
- Excellent oral and written communication, presentation, analytical, and problem-solving skills
- Self-driven, with a proven track record and the ability to work independently and as part of a team
- Experience with the Hortonworks distribution or open-source Hadoop preferred