Job Information

Meta AI Research Scientist (Technical Leadership), Multimodal - Monetization GenAI in Menlo Park, California

Summary:

The Monetization GenAI Video Gen & Visual Search group, part of the Ads pillar, is building the next generation of generative AI for video - from creation to understanding to transformation - at a scale that reaches billions of users. We develop proprietary foundation models for video generation, video-to-video editing, audio generation (music, voiceover, dubbing), and multimodal video understanding powered by large language models. We produce industry-leading video models, and our research translates into products that drive hundreds of millions of dollars in annual advertising revenue. We are looking for a technical leader in AI research to define the research vision across video and audio generation, drive breakthrough research, and shape the technical direction of the organization.

Required Skills:

AI Research Scientist (Technical Leadership), Multimodal - Monetization GenAI Responsibilities:

  1. Lead end-to-end AI research and model development for video-centric generative AI across Meta's advertising surfaces

  2. Drive advancements in video generation & enhancement

  3. Develop video-to-video & audio generation capabilities

  4. Advance video & visual understanding through novel research

  5. Conduct foundation model research to support generative AI innovation

  6. Define research agendas and pioneer new directions in video/audio generation and multimodal understanding

Minimum Qualifications:

  1. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience

  2. PhD in Computer Science, AI/ML, or a relevant technical field

  3. Experience as a technical lead, driving major technical initiatives with cross-functional impact and influencing strategy across multiple teams

  4. 4+ years of experience training large language and/or vision models, with extensive and recent experience training multimodal LLMs

  5. Research expertise in video generation/understanding, multimodal learning, or diffusion models

  6. Demonstrated significant industry influence in the field of AI and/or recently published research in leading peer-reviewed conferences (e.g., ACL, NeurIPS, ICML, ICLR, AAAI, KDD, CVPR, ICCV)

Preferred Qualifications:

  1. First-author publications at top peer-reviewed conferences (e.g., ACL, NeurIPS, ICML, ICLR, AAAI, KDD, CVPR, ICCV)

  2. Experience working on frontier-quality/state-of-the-art multimodal LLMs (MLLMs)

  3. Experience with audio/speech generation or processing

  4. Experience with unified/multi-task foundation model architectures

  5. Industry research lab experience

Public Compensation:

$219,000/year to $301,000/year + bonus + equity + benefits

Industry: Internet

Equal Opportunity:

Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.

Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@meta.com.
