Big Data Developer/Lead-Spark/Hive/Impala/Cloudera/Kafka

Job Overview

Our client, a leading global financial services company, has approximately 200 million customer accounts and does business in more than 140 countries. They provide consumers, corporations, governments and institutions with financial products and services, including consumer banking and credit, corporate and investment banking, securities brokerage, transaction services, and wealth management.

Key Responsibilities

– Work with the core team to deliver solutions using Impala, Hive, Parquet, Kafka and related Big Data technologies
– Gather and understand requirements, analyze and convert functional requirements into concrete technical tasks, and provide reasonable effort estimates
– Responsible for end-to-end project delivery on schedule and with the required level of quality
– Report on all projects to senior management and cross-functional key stakeholders
– Coordinate the management of cross-functional interdependencies and lead the execution of communication plans to all key stakeholders
– Work proactively with global teams to address project requirements, and articulate issues/challenges with enough lead time to address project delivery risks
– Provide expertise in technical analysis and resolve technical issues during project delivery
– Conduct code reviews and test case reviews, and ensure the code developed meets requirements
– Responsible for systems analysis, architecture, design, coding, unit testing and other SDLC activities

Qualifications

– Graduate degree in Computer Science, Information Systems or an equivalent quantitative field

Skills (Must-Have Experience)

– 10-15 years' relevant experience in technology development and project delivery
– Should have been involved in enterprise-scale, multi-region project development and tracking initiatives
– Experience in banking/capital markets, Risk or Finance is necessary
– Experience leading large Big Data development programs on the Cloudera platform, with hands-on experience across big data tech stacks including Spark (on Scala and Java), Hive and Impala
– Strong experience with relational and NoSQL databases
– Experience with development methodologies such as SDLC and Agile, including key milestone artifacts, size estimation, BRD/FRD structure, full project plan preparation and test methodologies (functional, regression, performance)
– Overview of programming paradigms (object-oriented, functional, etc.); should have driven a large build-out and implemented one to two programs end to end in a global model
– Complete project lifecycle exposure
– Exposure to and experience in enterprise-level platform development
– Ability to manage high-performance teams in a high-pressure delivery environment
– Experience in systems analysis and programming of software applications
– Experience in managing and implementing successful projects
– Ability to work under pressure and manage deadlines or unexpected changes in expectations

Job ID: 143189

Never pay anyone for a job application, test or interview.