Meta is seeking a Research Scientist to join our Llama Applied Multimodal team. We are looking for recognized experts in generative AI, multimodal reasoning, and NLP, with experience in areas such as multimodal model training; data processing for pretraining and fine-tuning; LLM alignment; reinforcement learning for model tuning; efficient training and inference; and image and video generation. The ideal candidate will have an interest in producing and applying new science to help us develop and deploy large multimodal models.
Fundamental Multimodal Research Scientist - Generative AI Responsibilities
- Lead, collaborate on, and execute research that pushes forward the state of the art in multimodal reasoning and generation.
- Work towards ambitious long-term research goals while identifying intermediate milestones. Directly contribute to experiments, including designing experimental details, writing reusable code, running evaluations, and organizing results.
- Mentor other team members. Play a significant role in healthy cross-functional collaboration.
- Prioritize research that can be applied to Meta's product development.
Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
- PhD in computer vision, machine learning, NLP, computer science, applied statistics, or a related field.
- Experience writing software and executing complex experiments involving large AI models and datasets.
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
Preferred Qualifications
- Direct experience in generative AI, computer vision, and multimodal research.
- First-author publications at peer-reviewed AI conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, EMNLP, ACL).