Member of Technical Staff - Multimodal VLM/LLM Researcher
• Mid-Level
• Remote
• Data
✨ The Role in One Sentence
Black Forest Labs is seeking a Multimodal VLM/LLM Researcher to push the boundaries of generative AI by developing and training state-of-the-art vision-language models.
📋 What You'll Likely Do
- 40%: Develop and train multimodal vision-language models within the FLUX technology stack. 
- 30%: Conduct research to creatively combine vision and language models for enhanced generative capabilities. 
- 30%: Collaborate with cross-functional teams to implement and deploy models at scale. 
🧑‍💻 Profiles Doing This Job
- High Priority: Expertise in training and fine-tuning large-scale vision-language models. 
- High Priority: Proficiency in PyTorch or similar deep learning frameworks. 
- High Priority: Experience with distributed training systems and large-scale model optimization. 
📈 How This Role Will Look on Your CV
- Developed and deployed state-of-the-art multimodal vision-language models in a leading AI research lab. 