Overview:
Berkley Hunt has partnered with a Series A firm backed by a Tier 1 VC to find a Founding Machine Learning Engineer with a proven track record of designing and deploying ML models and systems. This role is integral to the company's mission, focusing on sophisticated challenges in security, traditional ML, and Large Language Models (LLMs).
Who You Are:
- You possess at least 4 years of experience in roles such as Data Scientist, Research Scientist, or Research/ML Engineer, with a preference for expertise in trust and safety.
- Proficiency in SQL, Python, and data analysis/data mining tools is essential.
- You have hands-on experience building trust and safety AI/ML systems, particularly behavioral classifiers or anomaly detection.
- Strong communication skills are crucial, enabling you to convey complex technical concepts to non-technical stakeholders.
- You are deeply committed to considering the societal impacts and long-term ramifications of your work.
Desirable Skills:
- Familiarity with ML frameworks like Scikit-Learn, TensorFlow, or PyTorch is highly desirable.
- Previous experience in full-stack engineering for developing internal tooling is advantageous.
- Exposure to high-performance, large-scale ML systems and language modeling using transformers is beneficial.
- Knowledge of reinforcement learning and large-scale ETL processes is a plus.
Responsibilities:
- Collaborate on the development of safety and oversight mechanisms for AI systems, with a focus on detecting harmful behaviors and safeguarding user well-being.
- Train ML models to identify unwanted or anomalous behavior from users and API partners, and integrate them seamlessly into production systems.
- Enhance automated detection and enforcement systems in alignment with safety, transparency, and oversight principles, as well as terms of service and acceptable use policies.
- Proactively analyze user reports, using ML models to detect and act on inappropriate accounts.
- Share insights on abuse patterns with research teams to improve model performance during training.
- Lead efforts in model training and fine-tuning to optimize performance.
- Deploy and scale model inference capabilities effectively.
- Engage in the exploration of adversarial machine learning techniques.
- Contribute to resolving complex LLM security challenges, including multimodal threats, red teaming, and data loss prevention.