Job Overview
We're assembling some of the industry's best and brightest AWS cloud and software engineering talent and pairing that with cutting-edge science and technology to push the boundaries of data, IoT, and AI/ML platforms to accelerate the discovery of transformative medicines. This newly formed team is responsible for creating an integrated data pipeline, API, and microservices ecosystem using serverless architectures, languages like Python, and AWS capabilities like Athena, DynamoDB, Firehose, Kinesis, Glue, SageMaker, and Lambda. As a member of the team, you'll be hands-on exploring new capabilities, building rapid prototypes, iterating on longer-lasting solutions, improving our automation capabilities, and establishing our operating frameworks.

Essential Job Duties

Build and maintain serverless data pipelines, derived datasets, and data discovery platforms at scale using AWS cloud services
Build and maintain serverless APIs and microservices to simplify data mastering, access, interrogation, cleansing, and exchange
Develop and implement tests to ensure data quality across all integrated data sources
Contribute to software development efficiencies by advancing our agile development practices, automated build and test frameworks, privacy-by-design frameworks, and secure coding expertise
Collaborate directly with technical peers and non-technical end users to understand requirements, invent solutions, and create value quickly

Qualifications

Bachelor's degree in Computer Science or a related discipline required; PhD or Master's degree in a related discipline preferred
3+ years of relevant cloud data engineering or software engineering experience
Experience using AWS cloud services for data processing, storage, computation, monitoring, event processing, machine learning, and messaging, such as Glue, Kinesis, Kinesis Firehose, S3, Athena, DynamoDB, Redshift, Neptune, SageMaker, API Gateway, Lambda, and others
Experience creating, versioning, and supporting RESTful services and APIs at scale
Experience ingesting and integrating data from many sources using streams, flat files, APIs, and databases
Expert-level understanding of Python and SQL, demonstrated by the ability to understand and apply advanced concepts
Experience determining and using the optimal database (relational, graph, columnar, document, …) based on requirements
Understanding of contemporary data file formats such as Parquet
Experience preparing data for use in a research setting a plus
Experience with full-stack development a plus
Experience in the life sciences industry a plus
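To make the stack named above more concrete, here is a minimal, illustrative sketch of the kind of serverless component this posting describes: a Python AWS Lambda handler that consumes a batch of Kinesis records and writes each payload to DynamoDB. The table name ("example-events") and the JSON payload shape are assumptions for illustration only and are not taken from the posting.

import base64
import decimal
import json

import boto3  # bundled with the AWS Lambda Python runtime

# Hypothetical table name, used here for illustration only.
TABLE = boto3.resource("dynamodb").Table("example-events")


def handler(event, context):
    """Consume a batch of Kinesis records and persist each payload to DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        # Kinesis delivers each payload base64-encoded; decode and parse it.
        raw = base64.b64decode(record["kinesis"]["data"])
        # The DynamoDB resource API rejects Python floats, so parse JSON numbers as Decimal.
        item = json.loads(raw, parse_float=decimal.Decimal)
        TABLE.put_item(Item=item)
    return {"records_processed": len(records)}

In an architecture like the one described here, a function of this shape would typically sit behind a Kinesis event source mapping, with services such as Glue and Athena handling downstream cataloguing and querying.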