AI Singapore: Past, Present, Future
In our inaugural podcast, I had the opportunity to speak…
Last month, in a Facebook post, Prime Minister Lee Hsien…
Machine learning algorithms can take significant amounts of time…
As an economist by training, I did a lot of…
Embracing AI, sharpening competitiveness, improving lives
The annual PyCon Singapore…
BiDAF is a popular machine learning model for Question Answering…
As many of you know, I conduct training in a…
This article illustrates the workings of BiDAF, an NLP model…
A compiled visualisation of common convolutional neural networks.
It was with great pleasure that AI Singapore invited Aurélien…
Recurrent neural networks are a class of artificial neural networks…
Gradient descent is an optimisation method for finding the minimum of a function. It is commonly used in deep learning models to update the weights of the neural network through backpropagation.
In this post, I will summarise the common gradient descent optimisation algorithms used in popular deep learning frameworks (e.g. TensorFlow, Keras, PyTorch, Caffe). The aim is to make them easy to read and digest (using consistent nomenclature), since there aren’t many such summaries out there, and to serve as a cheat sheet if you want to implement them from scratch.
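The vanilla update rule these optimisers all build on is w ← w − η∇L(w), where η is the learning rate. As a minimal sketch of that idea (the function names and the toy objective below are illustrative, not taken from the article):

```python
# Vanilla gradient descent: repeatedly step opposite the gradient.
# grad_fn is assumed to return the gradient of the loss at w.
def gradient_descent(grad_fn, w, lr=0.1, steps=100):
    for _ in range(steps):
        w = w - lr * grad_fn(w)  # w := w - learning_rate * gradient
    return w

# Toy example: minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3.0), w=0.0)
print(round(w_star, 4))  # converges towards the minimum at w = 3.0
```

The algorithms the post surveys (momentum, Adam, etc.) vary how that step is computed, but each reduces to this same update loop.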