Senior Data Engineer

  • Full-time

Company Description

Ex Parte provides our customers with the data and insight to make smart and informed decisions on the most important legal issues facing their organizations.

We are looking for talented, enthusiastic senior data engineers who share our passion for big data, AI, and machine learning and are excited by seemingly impossible challenges. As an early employee, you must be highly entrepreneurial and thrive in a fast-paced environment where the solutions aren’t predefined.

Every year, corporations spend more than $250B on litigation in the United States alone. And yet, critical decisions such as whether to litigate or settle, where to file suit, or which attorney to hire are all made the same way they were 100 years ago.

We are applying artificial intelligence, machine learning, and natural language processing to provide our customers with the insight they need to make highly informed decisions and gain a winning advantage. Think of it like Moneyball, but for a market more than 20x the size of Major League Baseball.

Job Description

You’ll be part of a team that owns the AI self-service portal and the related set of microservices. You’ll work with a cross-functional team of engineers to contribute to our data platform. The role also offers the opportunity to grow into either more back-end or ML/AI work depending on your interests and experience.

In this role, you can expect to:

·       Work directly with product owners and data experts to build products that solve complex client problems

·       Build and support a distributed platform supporting all Ex Parte data

·       Work across our product, primarily on the data pipelines

·       Interface directly with internal teams

·       Evaluate software and implementation options and document them for technical teams

·       Work with data analysts to collect insight on possible data collection efficiencies and identify automation potential for manual workflows

·       Integrate qualitative best practices into program design and development

·       Be a part of a distributed team (we’re in North America and Europe)

·       Work with Azure cloud and Databricks

·       Develop technical architectures and specific implementations to meet business needs

Qualifications

Requirements:

·       3+ years of experience with Spark, Python, or Scala

·       3+ years of experience with RDBMS and T-SQL

·       3+ years of experience with NoSQL and data lakes

·       Strong familiarity with map/reduce programming models

·       Proficiency in writing production-quality code

·       Deep expertise in database schema design, optimization, and scalability

·       Experience with Azure or AWS cloud-based service-oriented architecture

·       Solid understanding of testing pyramid (unit, integration, black box, service)

·       Experience working in an Agile/Scrum environment

·       Strong analytical and problem-solving skills

·       Good time management and organizational skills

·       Ability to work on challenging issues independently or in a team environment

·       Ability to learn and adapt quickly to new technologies and environments

·       Strong communication skills

Nice-to-haves:

·       Experience with Databricks or Azure ML

·       Experience applying machine learning algorithms to solve complex data mining problems

·       Experience with JavaScript/TypeScript, HTML5, and CSS3

·       Experience with .NET Core and Entity Framework Core

·       Experience with Power BI Embedded

·       Understanding of cloud platforms and providers, specifically Microsoft Azure

·       Knowledge of CI/CD tools in Azure DevOps

·       Bachelor's Degree in Computer Science, Engineering, Mathematics or related field

Additional Information

All your information will be kept confidential according to EEO guidelines.