NeuReality is looking for a Senior Machine Learning Engineer to work on LLM-based systems with a strong emphasis on integration, end-to-end workflows, and production readiness. This role combines performance optimization (prefill/decode, throughput, latency) with hands-on work integrating components across the AI platform.
A significant part of the role involves building end-to-end integration flows and tests, particularly around token generation pipelines and system orchestration, and contributing to intelligent system behavior such as hardware selection and execution strategies. The role also includes developing system-level logic in Python for multi-tenant management, caching strategies, and service lifecycle management across the platform.
Responsibilities:
- Build and maintain end-to-end integration flows across the AI inference pipeline (serving, orchestration, APIs, and infrastructure)
- Design, implement, and optimize LLM inference workflows, including prefill and decode stages
- Improve system performance with a focus on throughput, latency, and interactivity
- Write production-grade components in Python and integrate them into the broader system
- Contribute to system-level logic such as smart hardware selection and execution strategies
- Integrate models (open source and custom), services, and APIs into cohesive, reliable end-to-end application pipelines
Requirements:
- 4+ years of experience in software engineering or machine learning engineering
- Strong proficiency in Python
- Strong experience with LLM inference systems and performance optimization
- Hands-on experience with system integration and end-to-end workflows
- Experience with inference frameworks such as vLLM, TensorRT, or SGLang
- Experience working with GPU/accelerator-based systems
Preferred Qualifications:
- Hands-on experience with Dynamo or LLM-D for LLM inference and serving
- Familiarity with Kubernetes and cloud environments