We’re looking for an AI / ML / MIR Engineer to help Hyph understand music the way musicians do. You’ll design systems that analyze, tag, and model musical stems: detecting key, tempo, time signature, sections, and style, while also exploring deep representations that capture the DNA of sound itself.
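To make the scope concrete, here is a minimal, illustrative sketch of classical MIR-style key and tempo estimation on a stem, using librosa and the standard Krumhansl-Kessler key profiles. The file path, function name, and approach are hypothetical examples of this kind of work, not Hyph’s actual pipeline.

```python
# Illustrative sketch only: baseline key and tempo estimation with librosa.
import librosa
import numpy as np

# Krumhansl-Kessler key profiles (major / minor), a standard MIR baseline.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def analyze_stem(path: str) -> dict:
    """Estimate tempo (BPM) and key for a single audio stem."""
    y, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    # Average chroma over time, then correlate it against the key profile
    # rotated to each of the 12 possible tonics, in both modes.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    scores = [(np.corrcoef(chroma, np.roll(profile, i))[0, 1], NOTES[i], mode)
              for profile, mode in ((MAJOR, "major"), (MINOR, "minor"))
              for i in range(12)]
    _, root, mode = max(scores)
    return {"tempo_bpm": float(np.atleast_1d(tempo)[0]),
            "key": f"{root} {mode}"}
```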
You might be a fit if you:
Are fluent in Python and experienced with modern ML frameworks (we use PyTorch)
Have strong data engineering and programming skills: you write clean, efficient code and love scaling experiments into production
Understand or are eager to learn music theory concepts
Know your way around signal processing or the physics of sound
Are comfortable building models for complex data (audio and/or images)
Enjoy experimentation, fast iteration, and owning projects end-to-end
Thrive in an environment where creative and technical people collaborate closely
Bonus points:
Background in music information retrieval (MIR) or computational musicology
Experience with audio representation learning
Hands-on experience with digital signal processing (DSP)
Contributions to open-source ML or music tech projects
You play an instrument or produce music and think like both an engineer and an artist
What you’ll do
In one sentence: Teach Hyph to understand music at scale.
Build and train ML systems that extract musical features (mode, scale, root key, tempo, time signature, sections, genre, etc.) from audio stems (see the sketch after this list)
Develop internal tools to streamline our music tagging pipeline and empower the creative team
Explore deep representations of sound
Collaborate with the backend and music teams to bring intelligent music understanding into the Hyph creation engine
Contribute to architecture, experimentation frameworks, and model evaluation pipelines
Stay current with new methods in audio AI and signal processing, and know when to apply them pragmatically
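As a hint of what building and training these systems might look like in practice, below is a minimal PyTorch sketch of a multi-task tagger over log-mel spectrograms. The architecture, head names, and class counts are hypothetical illustrations, not Hyph’s production model.

```python
# Illustrative sketch: a small CNN over log-mel spectrograms with one
# prediction head per musical attribute (key, genre, tempo).
import torch
import torch.nn as nn

class StemTagger(nn.Module):
    """Multi-task tagger for audio stems (hypothetical example)."""

    def __init__(self, n_mels: int = 128, n_genres: int = 20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.key_head = nn.Linear(64, 24)    # 12 roots x major/minor
        self.genre_head = nn.Linear(64, n_genres)
        self.tempo_head = nn.Linear(64, 1)   # BPM as regression

    def forward(self, mel: torch.Tensor) -> dict[str, torch.Tensor]:
        # mel: (batch, 1, n_mels, time)
        z = self.backbone(mel)
        return {
            "key": self.key_head(z),
            "genre": self.genre_head(z),
            "tempo": self.tempo_head(z),
        }

model = StemTagger()
out = model(torch.randn(4, 1, 128, 431))  # ~10 s clips at a typical hop size
```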
Our tech stack
Backend: Python, FastAPI
ML: Python, PyTorch
Data: Postgres
Cloud: AWS
Mobile: Flutter
Frontend: Angular (TypeScript)