Daily Reads of November 2018

  1. A Medium article on visualizing high-dimensional data using PCA and t-SNE in Python [26/11/2018]
  2. Google AI open-sources BERT, a pretraining model for NLP tasks. It mainly benefits domains with small amounts of labeled data, such as sentiment analysis and question answering: BERT is pretrained to produce a generalized language representation, which we then fine-tune on our own task, requiring far less labeled data. [article][code][02/11/2018]
  3. OpenAI develops Random Network Distillation for reinforcement learning, which uses curiosity as an intrinsic reward, letting agents train without an extrinsic future-reward function or any explicitly given objective. [article][code] [02/11/2018]
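The visualization pipeline from item 1 can be sketched as follows: a minimal, hedged example using scikit-learn (assumed available), reducing with PCA first to denoise and speed up t-SNE, then projecting to 2-D. The dataset and parameter choices are illustrative, not taken from the article.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

digits = load_digits()  # 1797 samples, 64 features
X = digits.data

# Step 1: PCA down to a moderate number of components.
X_pca = PCA(n_components=30, random_state=0).fit_transform(X)

# Step 2: t-SNE from 30 dimensions to 2 for plotting.
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_pca)

print(X_2d.shape)  # (1797, 2): each sample now has a 2-D coordinate to scatter-plot
```

The two-stage reduction is the common pattern the article describes: t-SNE scales poorly with input dimensionality, so PCA is applied first.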
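The curiosity mechanism from item 3 can be illustrated with a toy sketch: in Random Network Distillation, a frozen random "target" network maps observations to features, and a trainable "predictor" learns to match it. The prediction error serves as the intrinsic reward, high for novel states and shrinking as states are revisited. Single linear layers stand in for the paper's networks here; this is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(8, 4))  # frozen random target network
W_pred = np.zeros((8, 4))           # predictor, trained online

def curiosity_bonus(obs, lr=0.01):
    """Return the RND-style intrinsic reward for obs and update the predictor."""
    global W_pred
    target_feat = obs @ W_target        # features from the fixed random network
    pred_feat = obs @ W_pred            # predictor's current guess
    err = pred_feat - target_feat
    bonus = float(err @ err)            # squared prediction error = curiosity bonus
    W_pred -= lr * np.outer(obs, err)   # one gradient step on the MSE
    return bonus

obs = rng.normal(size=8)
bonuses = [curiosity_bonus(obs) for _ in range(50)]
# The bonus decays as the same state is revisited: novelty wears off.
```

This is the core intuition: the agent is rewarded for visiting states where its predictor is still wrong, i.e. states it has not yet explored.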
