Data Engineer/Developer (Spark, Scala, Python)

Contech Systems Online

  • US
  • Post Date: 19 August 2020
Job Overview

DATA ENGINEER/DEVELOPER – Right to hire
Location: Newark, NJ
Target Start Date: ASAP
Years of Experience: 4-6 years (Intermediate Level)
If interested in discussing, please call Rita 914-461-1670 or email.

Our client is searching for a data engineer/developer to build a next-generation data platform from the ground up on AWS as part of a small, focused team. Our ideal candidate will have strong knowledge of computer science fundamentals with programming experience in Scala, Python, and Spark. The right candidate for this role will see this challenge as a unique and valuable opportunity to help drive our global technology transformation.

Required Qualifications

  • Bachelor’s degree in computer science or a related field required; Master’s degree preferred
  • Knowledge of data structures, algorithms, and functional programming
  • Passion to learn new things, experiment with new ideas, and build a world-class data platform
  • 5 years of experience programming in Scala, Python, or Java
  • 2 years of experience with Scala, Spark, and functional programming
  • Deep knowledge of Spark internals such as partitioning, DAGs, and lazy evaluation
  • Strong experience with relational databases, SQL, and query optimization
  • Knowledge of data warehousing, dimensional data modeling, and business intelligence is a plus
  • Knowledge/experience with event-driven programming and the Akka actor model
  • Excellent verbal and written communication skills

Desired Qualifications

  • Experience with AWS infrastructure, Docker, ECS/EKS, EMR, Kafka/Kinesis
  • Front-end development experience with Angular, JavaScript, Reactive Programming
  • Knowledge of, or desire to learn, Investment Management, Fixed Income, and Finance
  • Familiarity with NoSQL and Elasticsearch

Responsibilities

  • Build complex data ingestion pipelines using Scala, Spark, Parquet, and S3
  • Design scalable processes in an event-driven architecture to support Fixed Income applications
  • Develop near real-time streaming analytics using Kafka/Kinesis
  • Act as Subject Matter Expert and help the rest of the team leverage the platform and migrate applications to it
  • Establish end-to-end data lineage and a data catalog; work with the data governance team to set up data quality checks and metrics
  • Create a self-service notebook environment with Zeppelin/Jupyter for exploratory data analytics and rapid interactive development
  • Troubleshoot any performance issues and ensure efficient data organization
  • Build efficient web-based tools for monitoring and tracking
