Senior Data Engineer

  • Full-time

Company Description

Join the thousands of innovators, advocates and forces who are making an impact every day at one of the biggest footwear brands in the world. Whether you love to connect with consumers on the retail floor or want to drive our award-winning powerhouse in new directions, the SKECHERS team is the place to be. Learn more about our brand at skx.com. 

Job Description

JOB PURPOSE

Are you excited about high-performance Big Data implementations? The Skechers USA Data Engineering team is growing, and we need Data Engineers who are thorough and agile, capable of breaking down and solving problems, and have a strong will to get things done. On the Data Engineering team, you will work on real-world problems across on-premises and multi-cloud tech stacks where reliability, accuracy, and speed are paramount, take end-to-end responsibility for your systems, and influence the direction of technology that impacts customers around the world. 

We are looking for a Senior Data Engineer with both conceptual and hands-on experience with structured, semi-structured, and complex data processing and streaming frameworks, as well as RDBMS and NoSQL data stores. As a member of our Data Services team, you will join a service group responsible for the continuing organizational expansion of our data processing projects. The ideal candidate is a quick-learning self-starter who is enthusiastic about the full spectrum of data development, including data transport, data processing, and data warehouse/ETL integration. This is a demanding role that requires hands-on experience developing data processing applications for deployment on Linux. You will be responsible for day-to-day operations and new development. We are seeking a candidate with strong software development life cycle skills. This position includes 24x7 production support. 


ESSENTIAL JOB RESULTS

•    Collaborate with data stewards, data architects, and data engineers to design, implement and deliver successful data solutions
•    Drive engineering best practices, set standards and propose larger projects which may require cross-team collaboration
•    Define technical requirements and implementation details for the underlying data lake, data warehouse and data marts
•    Design and implement the full cycle of data services, from data ingestion, data processing, and ETL to data delivery for reporting 
•    Identify, troubleshoot and resolve production data integrity and performance issues
•    Design, develop and support various data platform applications
•    Design and develop applications to process large amounts of critical information in batch and near real-time to power business insights

SUPERVISORY RESPONSIBILITIES

•    None

Qualifications

JOB REQUIREMENTS

•    Experience with managed services for data ingestion/processing, with hands-on experience working in an AWS environment and operational experience with Kinesis/Kafka, S3, Glue, and Athena
•    Experience with the following data processing technologies: Spark, Kafka, Kinesis
•    A solid understanding of NoSQL data stores, with extensive experience working with SQL and scripting languages (Python, shell, etc.) 
•    Proven experience with distributed systems driving large-scale data processing and analytics
•    Expertise with RDBMS and data warehousing (strong SQL)
•    Experience working with BI and data warehousing tools, building data pipelines and real-time data streams
•    Experience with Linux KSH/Bash scripting and Java
•    Experience working with any of the following ETL toolsets: SyncSort, Talend, Informatica 
•    Experience with any of the following message/file formats: Parquet, Avro, ORC, Protobuf
•    Expertise in Python, PySpark, or similar programming languages
•    Excellent communication skills (verbal, written, presentation) across all levels of the organization, and the ability to translate ambiguous concepts into tangible ideas
•    Experience with version control systems like Git
•    Proficient in writing technical specifications and documentation
•    Experience with Presto, Hive, Impala, or a similar SQL-based engine for Big Data
•    Experience with Cassandra, MongoDB, or similar NoSQL databases
•    Experience with Scala, Node.js


EXPERIENCE & EDUCATION

•    10+ years of experience defining, designing and delivering data pipelines and solutions
•    8+ years of experience working with Linux based operating systems
•    6+ years of relevant experience developing and integrating frameworks and database technologies that support highly scalable data processing
•    5+ years of programming experience with Python
•    3+ years of documented experience in a data engineering role on a variety of big data projects
•    3+ years of experience with on-premises MPP data warehousing systems (e.g. Vertica, Teradata) or 2+ years of experience with cloud-based data warehousing systems (e.g. AWS Redshift, Snowflake, Google BigQuery)
•    Proficient in any flavor of SQL
•    Demonstrable ability in data modeling, ETL development, data warehousing, and batch and real-time data processing
•    Demonstrable experience with stream processing and workload management for data transformation, augmentation, analysis, etc.
•    At least 2 years of experience working within cloud environments, preferably AWS
•    B.S. in Computer Science, Computer Information Systems, Engineering, or another technical field, or equivalent work experience
 

Additional Information

PHYSICAL DEMANDS

While performing the duties of this job, the employee is regularly required to stand; use hands to finger, handle, or feel; and talk or hear. The employee frequently is required to walk, sit, reach with hands and arms, stoop, and kneel. The employee is occasionally required to sit for long periods of time. 

All your information will be kept confidential according to EEO guidelines.
