Fiona Chow

Machine Learning Prague 2019

 

ML Prague is one of Central Europe’s biggest Machine Learning conferences, focusing on practical applications of AI, Machine Learning and Deep Learning.

The event brought together speakers and Machine Learning practitioners from a wide range of industries. Each shared their experience with different applications of Machine Learning and offered valuable views on current and future developments in the space.

We have summarised two key insights which we believe Machine Learning practitioners from any field or industry can benefit from:

Training deeper networks is not sustainable
With access to ever more powerful and inexpensive machines, it is now possible to train much deeper networks for supervised learning and achieve improved model performance.

However, deep networks are very data-hungry - they require significant volumes of accurately labelled data. As most practitioners know, curating and labelling data already takes up to 80% of our time. While crowdsourced labelling can make the task less expensive, it is only suitable for simpler tasks where specialist expertise is not required. Where domain expertise is required, the general consensus was that we will reach a point at which the gains from additional data no longer justify the time spent obtaining accurate labels.

MIT Professor Tomaso Poggio shared this view, adding that more research should focus on developing networks with implicit supervised learning abilities, mimicking the way children are able to learn from small amounts of data.

Increasing need for model explainability to gain users’ trust
A great point raised by Srivatsan Santhanam of SAP was that, for mass adoption of Machine Learning models to succeed, it is becoming increasingly important to gain users’ trust and belief in a model’s results.

As Machine Learning practitioners, we judge a model to be performing well when it achieves or surpasses a set of metrics - precision, recall, F1 score and so on. However, the same metrics do not necessarily give business users the confidence they need to rely on the results. End users need to understand how a result was arrived at before they can start to trust it.
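To make the contrast concrete, here is a minimal sketch of the kind of metrics practitioners typically report, assuming scikit-learn and purely illustrative binary labels (none of this is taken from any speaker’s or Bird.i’s pipeline):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Illustrative ground-truth labels and model predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# The numbers practitioners optimise and report.
print("precision:", precision_score(y_true, y_pred))  # correct positives / predicted positives
print("recall:   ", recall_score(y_true, y_pred))     # correct positives / actual positives
print("F1 score: ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```

Accurate as these numbers may be, they say nothing about why a particular prediction was made, which is exactly the gap end users feel.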


Hence, in order to achieve mass adoption, practitioners should focus on developing tools or methods that increase model explainability.
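The talks did not prescribe a particular technique, but as one hedged illustration of what such tooling can look like, the sketch below uses scikit-learn’s permutation importance on a stock dataset and an off-the-shelf classifier (all placeholders for illustration only) to surface which input features a model actually relies on:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a standard dataset and fit an off-the-shelf classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in score.
# Features whose shuffling hurts most are the ones the model relies on, and
# they can be reported to end users alongside the prediction itself.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top_features:
    print(f"{name}: {importance:.3f}")
```

Reporting a handful of such feature attributions next to a prediction is one simple way of giving end users the “how” behind a result.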

These insights certainly resonate with the team at Bird.i. In developing our intelligence products, we made a conscious decision to steer away from deeper networks because we understood their practical challenges - the increased time and resources required to curate and label satellite images, train the models, and deploy our products.

Thanks to the visual nature of our product, we’re able to supply our users with supporting evidence when required, helping us gain a degree of trust. In addition, we are investigating how neural network explainability can become an integral part of the results we hand over to our customers.

 



Interested in how our Intelligence Products could benefit your business? Contact us for more information or to request a customised demo!

