Senior AWS Data Engineer
- Burlington, MA, USA
- Employees can work remotely
- Shift: 1st
QuickPivot (a Vericast company) is the premier customer data platform for brands looking to know, target, and engage their customers. Our technology empowers marketers to rapidly make data-driven decisions and develop advanced cross-channel campaigns that drive timely, relevant customer experiences. QuickPivot allows brands to consolidate, cleanse, match, and enrich all of their first-party data and activate it to any endpoint to support strategic marketing campaigns. The QuickPivot CDP is backed by a services team laser-focused on one thing: client success.
Winner of several industry innovation awards, the QuickPivot platform enables marketers to deliver coordinated customer experiences across all channels, measure results in real time, and refine marketing programs to improve results. As brands look for cost-effective ways to drive rapid campaign creation and execution, QuickPivot is emerging as the vendor of choice. That’s why clients like Orvis, Annie Selke, Allen Edmonds, the NHL, and over 20 channel partners are turning to QuickPivot.
We're seeking a creative Senior AWS Data Engineer who will provide strong software development and data analysis capabilities to our analytics and data science team. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This individual understands client data structures and business goals, providing general analytics and development support, including visualizations and dashboards.
- You will work with cross-functional teams supporting software design, development, integrations, and maintenance of key analytics modules, including:
- Identity resolution engine
- Data enrichment and suppression engine
- Machine learning engine
- You will support our developers and data scientists on data initiatives and ensure that our data delivery architecture remains optimal, consistent, and available throughout ongoing projects.
- You will help establish best practices and patterns across engineering teams and develop analytic data visualizations.
- You are self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
- Must have 5+ years of development experience with solid programming skills in languages and frameworks such as Python, Node.js, Scala, Spark, and PySpark for building scalable, flexible ETL jobs and data workflows
- Proficient in AWS services: S3, EC2, EMR, Lambda, Step Functions, CloudWatch, Redshift/Spectrum, Athena, Aurora (RDS), DynamoDB, SQS, and AWS Glue, or comparable cloud architectures
- Proficient with relational databases such as Aurora or MySQL and non-relational databases such as DynamoDB or MongoDB
- Experience designing and developing microservices architectures
- Write and refine code to enhance the performance and reliability of data extraction and processing.
- Experience with infrastructure as code using CloudFormation or Terraform
- Experience working in Agile teams and working independently with business stakeholders, providing solutions and regular updates
Looking for Someone Who:
- Thrives while leading in a fast-paced, regularly changing environment.
- Has strong analytical, conceptual, and problem-solving skills
- Is most comfortable when there’s too much to do. You aren’t easily overwhelmed, and you prioritize tasks naturally. You’re not afraid to focus on one thing at a time when it’s needed.
- Has high dynamic range. You can see the big picture but are comfortable diving deep into the details. You’re not above rolling up your sleeves and doing the grunt work.
- Is excited by thoughtful design and appreciates attention to detail and fine craft.
- Shows confidence AND humility. You’re able to make fast decisions with the wisdom to change course as soon as it’s needed.