Introduction to Neural Network Architecture

Have you ever wondered how machine learning models have come to handle so many different kinds of work? How is it that machines can learn tasks like language, vision, and translation in such a short span of time, and what has driven these improvements? The obvious answers - big data and big processors - are only part of the story. To understand the full picture, we need a closer look at the models driving the AI revolution. This talk is aimed at people who are familiar with the basics of feed-forward neural networks. It offers an in-depth explanation of how information is represented so that machines can learn from it, how machines make sense of that information, and the challenges involved.