The Opportunity
We’re working with an AI startup in San Francisco that’s pushing the boundaries of generative AI and NLP. Backed by top-tier VCs, they’re well funded and scaling fast. If you’re looking for a hands-on ML role working on cutting-edge large language models (LLMs), reinforcement learning, and real-world AI applications, this is it.
They need an ML Engineer who thrives in a fast-moving startup environment, loves research-driven development, and wants to see their models power real products used by millions.
Why You Should Join
- Build production-level AI models that go beyond academic research
- Work alongside PhDs and ex-Big Tech ML engineers
- Get in early—massive equity upside and full ownership of key AI features
- Solve high-impact LLM and deep learning problems in a practical way
Tech Stack: Python, PyTorch, TensorFlow, JAX, Hugging Face, AWS/GCP, Kubernetes, Docker
What They’re Looking For
- Strong experience building and deploying ML models (LLMs, transformers, CNNs, RNNs)
- Hands-on with PyTorch or TensorFlow in production
- Deep understanding of scalability & optimization (think MLOps, model deployment, inference optimization)
- Experience with distributed computing (Ray, Spark, Dask) is a plus
- Prior experience in a startup or fast-moving environment
Location: California (Remote OK, but strong preference for SF Bay Area)
Interested? Let’s chat. This is an exclusive opportunity with an AI team that’s redefining the industry.