Job Overview
Summary

- Perform data engineering tasks using PySpark to aggregate multiple data sources into a centralized repository on AWS.
- Maintain and improve machine learning models and PySpark-based personalization/assignment/measurement tools.
- Develop new features, augment existing tools, and create new models based on business requirements.

Technical Qualifications

- Data engineering: data-flow pipelines (experience as a builder of new systems)
- Software engineering skills (pull requests, unit testing, automated model tests, etc.)
- Deep experience designing, executing, and measuring experiments
- Experience creating experiment-results visualizations/reports and applying results to drive business value
- Statistics knowledge (e.g., parametric vs. non-parametric tests)
- Modeling experience preferred (uplift modeling, propensity score matching, linear/logistic regression, and other supervised and unsupervised algorithms)

Languages/Tools

- Python (pandas, numpy)
- SQL (analytic functions, subqueries, etc.)
- Git / GitHub
- Spark (using PySpark)
- Visualization (e.g., Tableau)
- Hadoop: AWS EMR (Debian), Hive
- Excel/PowerPoint
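The core task in the summary (joining multiple data sources and rolling them up into one table) can be sketched with pandas, which the posting also lists. The table names and columns below are invented for illustration only; the same join-then-groupby pattern maps directly onto PySpark's DataFrame API (`join`, `groupBy`, `agg`) at scale.

```python
import pandas as pd

# Hypothetical source tables standing in for the "multiple data sources";
# names and columns are illustrative, not from the posting.
orders = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "amount": [10.0, 20.0, 5.0, 7.5],
})
users = pd.DataFrame({
    "user_id": [1, 2, 3],
    "segment": ["a", "b", "a"],
})

# Join the sources, then aggregate per segment -- the same shape of work
# the posting describes doing with PySpark on AWS.
combined = orders.merge(users, on="user_id", how="inner")
per_segment = (
    combined.groupby("segment", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_amount"})
)
print(per_segment)
```

In PySpark the equivalent would be `orders.join(users, "user_id").groupBy("segment").agg(F.sum("amount"))`, writing the result to the centralized repository (e.g., S3/Hive) instead of printing it.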