Fiona Chow

Machine Learning Application - Roundtable Summary

 

We've had a fruitful year of roundtables covering topics relating to Management, DevOps and Security. Together with our friends at Skyscanner and Arnold Clark, we wrapped up our 2018 roundtable sessions by sharing knowledge and challenges on applying machine learning in our respective fields from recommendation systems to image recognition to predictive analytics.

 

Here we share our top 5 takeaways from the session!


1. Machine learning experiments need to be reproducible
Keeping track of machine learning experiments manually can easily get out of hand, especially when we try to speed up development by training different models, or different sets of hyperparameters, at the same time.

Versioning pipelines, data, models and configurations is the first step towards making experiments reproducible.
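As a minimal sketch of that idea (a plain file-based setup, not any particular tool; `log_run` and its fields are hypothetical names), each run can record its configuration, a fingerprint of the data, and the resulting metrics together, so the experiment can later be replayed:

```python
import hashlib
import json
import time
from pathlib import Path

def log_run(config: dict, data_path: str, metrics: dict, out_dir: str = "runs") -> Path:
    """Record everything needed to reproduce one training run."""
    # Fingerprint the training data so we know exactly which version was used.
    data_hash = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()[:12]
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "config": config,          # hyperparameters, model type, etc.
        "data_sha256": data_hash,  # which dataset snapshot was trained on
        "metrics": metrics,        # resulting scores for this run
    }
    run_dir = Path(out_dir)
    run_dir.mkdir(exist_ok=True)
    out = run_dir / f"run_{data_hash}_{int(time.time())}.json"
    out.write_text(json.dumps(record, indent=2))
    return out
```

Purpose-built tools (MLflow, DVC and the like) do the same bookkeeping with far more polish, but the principle is identical: no run without a recorded config and data version.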

2. Models are not a standalone product
A proof of concept lets us dive right into the machine learning magic, but when it comes to productionising that concept, it is more important to first have a solid infrastructure for storing and plumbing data before jumping into the learning.

3. Machine Learning Operations (MLOps) is still in its early stages
There are gold standards for implementing Continuous Integration/Continuous Deployment (CI/CD) in DevOps, but the equivalent has not yet been nailed down for MLOps. Because machine learning differs in nature from traditional application development, there is a limit to how much DevOps practices can lend themselves to MLOps. New workflows and development paradigms will be needed to create that same gold standard.

4. Monitoring and maintaining deployed models
Work does not end once models have been deployed to production. Model performance declines over time as the live data drifts away from what the model was trained on, so deployed models need to be continuously monitored and retrained when they degrade. Finding the right metric, or combination of metrics, for monitoring live models is a challenge.
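One simple and widely used drift check (one option among many, not the method discussed at the roundtable) is the Population Stability Index, which compares the binned distribution of a feature or of model scores in production against the training baseline:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are proportions per bin (each summing to 1).
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant shift worth investigating or retraining on.
    """
    eps = 1e-6  # guard against log(0) and division by zero for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )
```

Tracking a handful of such statistics per feature, alongside business metrics, gives an early warning well before labelled feedback confirms that accuracy has dropped.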

5. Debugging poorly performing models is not an easy task
In most cases, debugging errors in applications is fairly straightforward: exceptions and stack traces usually point to the offending line of code. This is not the case for machine learning, where a lot more effort must go into investigating the data and labels before getting even a slight hint of the problem.
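A practical first step in that investigation is a quick audit of the dataset itself, since class imbalance, missing labels and duplicate rows often explain poor performance before the model is ever at fault. A hedged sketch (the `audit_labels` helper and its report fields are illustrative, not a standard API):

```python
from collections import Counter

def audit_labels(rows: list[dict], label_key: str = "label") -> dict:
    """Surface common data problems that masquerade as model bugs."""
    labels = [r.get(label_key) for r in rows]
    # Count exact duplicate rows, a common source of train/test leakage.
    seen, duplicates = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {
        "class_counts": dict(Counter(labels)),  # heavily imbalanced classes?
        "missing_labels": labels.count(None),   # unlabelled rows?
        "duplicate_rows": duplicates,           # leakage between splits?
    }
```

Running a report like this on both the training and evaluation splits, and diffing the two, narrows down whether the problem lives in the data or in the model.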

 

---

 

Join us!
If you’re a start-up interested in finding out more about our roundtable discussions, or would like to join our next one, contact us here!

 
