PillPack is an independently operated subsidiary of Amazon, acquired in 2018. At PillPack, you will have the opportunity to make a tangible impact on both the company as well as the quality of our customers’ lives. We have a variety of roles of differing scope available across several teams and would love the opportunity to help you learn more about what’s available. Join our team and help create the future of medicine.
Do you want to be at the forefront of engineering big data solutions that take analytics at PillPack to the next level? Do you have solid analytical thinking, practice metrics-driven decision making, and want to solve problems with solutions that will meet the growing needs of the analytics space? We are looking for a top-notch Data Engineer to be part of our data warehousing and analytics team. We are building real-time analytical platforms using big data tools and AWS technologies.
The ideal candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and, above all else, is passionate about data and analytics. They are an expert in data modeling, ETL design, and business intelligence tools, and partner with the business to identify strategic opportunities where improvements in data infrastructure create outsized business impact. They are a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoy working on a fast-paced team. We’re excited to talk to those up to the challenge!
· Experience with scripting languages like Python.
· Ability to manage competing priorities simultaneously and drive projects to completion.
· Bachelor’s degree or higher in a quantitative/technical field (e.g. Computer Science, Statistics, Engineering).
· Understanding of big data technologies and frameworks such as Hive, Spark, Hadoop, SQL-on-big-data engines, and Redshift.
· Understanding of near-real-time analytics.
· Understanding of ETL, Data Modeling, and large-scale data processing concepts.
· Extreme proficiency in writing performant SQL against large data volumes.
· Ability to communicate complex ideas simply, coherently and fluently both in writing and verbally.
· Ability to manage a large and varied operational workload and stakeholder expectations.
· Experience with large-scale data processing, data structure optimization, and scalability of algorithms is a plus.
· Experience with real-time ingestion pipeline technologies (SNS, SQS, Kinesis, etc.).