DevFest Singapore 2019 Takeaways

Last Saturday, I attended DevFest Singapore 2019 at Google Developer Space. Among the twenty-odd technical sessions in the lineup, I would like to share two which I found particularly interesting from an AI perspective.

Firstly, Thanh Hien May from Google tackled the difficult (to me!) subject of fairness in machine learning model development and discussed how biases might be detected. A large part of her talk revolved around the appropriately named What-If Tool (WIT). First released in September 2018, WIT offers a way to probe the behaviour of machine learning models visually, with minimal need to code. Model transparency is a starting point towards understanding fairness and bias, and I am curious to learn more about it.

Thanh Hien May from Google.

On a more fun note, Preston Lim and Tan Kai Wei from GovTech introduced the deep learning model they developed to apply vivid colours to vintage black and white photographs. Trained on images from our local Singapore context, the model can readily be tried by the public through the ColouriseSG website. For more details on how the model was developed and deployed, there is an excellent blog post by the team which I encourage you to read.

Tan Kai Wei from GovTech.

I decided to give the colourisation model a spin by taking some colour photographs, converting them to B&W and then running them through the colourisation process to see if I could get back the original. I stacked them side-by-side (original on left) to make the comparison.
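The preparation step above is easy to script. Here is a minimal sketch, assuming Pillow is available; the colourisation itself was done through the ColouriseSG website, so this only handles the B&W conversion and the side-by-side stacking (original on the left).

```python
from PIL import Image

def side_by_side_comparison(original_path, output_path):
    """Convert a colour photo to B&W and stack the pair for comparison."""
    original = Image.open(original_path).convert("RGB")
    # Convert to greyscale, then back to RGB so both panels share a mode
    grey = original.convert("L").convert("RGB")
    w, h = original.size
    # Original on the left, B&W version on the right
    combined = Image.new("RGB", (w * 2, h))
    combined.paste(original, (0, 0))
    combined.paste(grey, (w, 0))
    combined.save(output_path)
    return combined
```

In practice I uploaded the greyscale panel to ColouriseSG and stacked the returned colourised image instead, but the mechanics are the same.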

Source:

Interestingly, the sky and trees in the picture above came out looking natural. Clearly, the model has learned to recognise a cloudy sky and natural foliage. However, the roof tiles of the building did not come out orange, which is what I would expect to be the most common colour. Next, I decided to try it on Singapore’s favourite fruit.

The greenness of the king of fruits is fairly discernible, and it makes me wonder how many durian images the model has been trained on. The skin colour of the people also appears natural for the most part. A big challenge in this particular picture is the colour of the 福 (“blessing”) decor on the wall. This should always be in bright red, with no exceptions. I think perhaps more data from Chinatown would be required. 🙂

Our deep learning model performs best on higher resolution images that prominently feature human subjects and natural scenery.

As the team blog post makes clear, the model works very well in certain cases and falls short in others. Hopefully, the team can continue to improve upon it as it has proven popular even among overseas users. Do check it out yourself!