Big Data Snowflake Architect

  • Contract

Company Description

Diverse Lynx is a go-to place for all your IT needs. We combine industry-leading expertise with personal dedication to your requirements. Established in 2002, we are headquartered in Princeton, NJ, and our offices are seamlessly connected through VoIP and T1 lines. Our transparent approach and customer-first attitude keep you at the forefront of your industry, and our belief in value-added relationships has allowed us to partner with our clients for the long haul. If you are looking to augment staff for your existing projects, our team can help you get across the finish line. With experience in industries such as banking, insurance, financial services, and pharmaceuticals, we hit the ground running.

Job Description

Position: Snowflake Architect
Location: San Francisco, CA
Job Type: Full Time / Contract

Responsibilities:
The Snowflake Architect will build and configure enterprise-level Snowflake environments.
The focus will be on choosing optimal Snowflake implementations, then implementing, maintaining, and monitoring them and integrating them with the architecture used across our client's organization.
Design, architect, and implement high-volume, high-scale data analytics and machine learning Snowflake solutions in the cloud.
Bring new ideas to cloud, big data, and machine learning software development.
Understand customer requirements, and design and develop features that meet business goals.
Build high-quality, highly reliable software to meet the needs of the largest customers.
Analyze and improve the performance, scalability, and high availability of large-scale distributed systems and the query-processing engine.

Required Skills:
Must have extensive experience with Snowflake.
Proficient understanding of distributed computing principles.
Management of Hadoop cluster, with all included services.
Ability to solve any ongoing issues with operating the cluster.
Proficiency with Hadoop v2, MapReduce, HDFS.
Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming.
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
Experience with Spark and Scala.
Experience with integration of data from multiple data sources.
Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB.
Knowledge of various ETL techniques and frameworks, such as Flume.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O.
Good understanding of Lambda Architecture, along with its advantages and drawbacks.
Experience with Cloudera/MapR/Hortonworks.

Additional Information

All your information will be kept confidential according to EEO guidelines.