As a Machine Learning Engineer, you will bridge the gap between data science prototypes and production-grade machine learning systems. You will be responsible for deploying, scaling, and monitoring machine learning models in a production environment, ensuring reliability, performance, and maintainability across enterprise use cases.
What You’ll Do and How You’ll Succeed
- Productionise ML models from Jupyter notebooks into scalable and reliable services.
- Build feature engineering pipelines and feature stores for real-time and batch inference.
- Implement model serving infrastructure, including REST APIs, batch scoring, and streaming inference.
- Design and maintain CI/CD pipelines for ML models as part of MLOps practices.
- Monitor model performance, detect drift, and trigger automated retraining where needed.
- Optimise model inference for latency and cost through techniques such as model compression, quantisation, and caching.
- Integrate ML models with existing systems such as CRM, billing, and campaign management platforms.
We’d Love to Hear From You If...
Experience
- You have 4+ years of experience in software engineering, with 2+ years in ML engineering.
Technical Expertise
- You have strong Python capability and solid software engineering practices, including testing, version control, and code review.
- You have experience with containerisation technologies such as Docker and Kubernetes, as well as cloud deployment.
- You are familiar with ML serving frameworks such as MLflow, Seldon, TensorFlow Serving, or Triton.
- You have experience with Databricks, Spark, or similar distributed computing platforms.
- You have knowledge of API design and microservices architecture.
Ways of Working
- You can translate machine learning prototypes into production-ready systems with strong reliability and maintainability.
- You take a structured approach to monitoring, optimisation, and operationalising ML models at scale.