21 Mar 2017
Ground floor, Informatics Forum, 10 Crichton Street, Edinburgh.
This workshop follows on from the previous Edinburgh Deep Learning workshops, each of which attracted between 150 and 200 people. It will take place on 21st March 2017 and run from 09:00 until 17:15.
Update: The workshop has concluded. Thank you to all those who attended. Videos are available for the majority of the talks below.
Deep learning methods continue to dominate the field of machine learning, are now commonplace in many research areas, and attract significant media attention. They are remarkably successful at a diverse range of tasks, particularly in supervised settings: they have surpassed humans at Go, drive cars, play video games, and detect objects in images and videos with astonishing accuracy. One major area of current interest is applying deep learning in unsupervised settings, or making use of unsupervised information. This is due in part to the success of Generative Adversarial Networks, which can learn to generate high-dimensional artificial images that are mistaken for real ones. This workshop will explore the latest developments in supervised and unsupervised deep learning, along with the challenges and future directions of the field, keeping machine learners on the cutting edge. It also provides an opportunity for cross-disciplinary discussion and collaboration.
| Time | Session |
|-------|---------|
| 09.00 | Introduction: Amos Storkey (University of Edinburgh) |
| 09.15 | Oriol Vinyals (DeepMind): Recent Advances in One Shot Learning (Abstract, Video) |
| 10.00 | Harrison Edwards (University of Edinburgh): Towards a Neural Statistician (Video) |
| 10.45 | Coffee and Posters |
| 11.15 | Aapo Hyvärinen (University College London / University of Helsinki): Nonlinear ICA using temporal structure: a principled framework for unsupervised deep learning (Abstract, Video) |
| 12.00 | Chris Maddison (University of Oxford / DeepMind): Particle Value Functions |
| 12.20 | Saumya Jetley (University of Oxford): Straight to Shapes: Real-time Detection of Encoded Shapes (Video) |
| 12.40 | Emma Hart (SICSA): Artificial Intelligence Research Theme (Video) |
| 14.00 | Andrew Zisserman (University of Oxford / DeepMind): Learning to lip read by watching TV (Abstract) |
| 14.45 | José Miguel Hernández-Lobato (University of Cambridge): Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks (Video) |
| 15.05 | Alison Lowndes (Nvidia): Nvidia Hardware/Software Update (Video) |
| 15.15 | Sidharth Kashyap (Intel): Performance Optimization of Tensorflow Framework on Modern Intel Architectures (Video) |
| 15.25 | Richard Carter (DataLab) |
| 15.30 | Coffee and Posters |
| 16.00 | Ferenc Huszar (Twitter Cortex): Deep Learning for Image Processing: From MSE to Adversarial Variational Inference (Abstract, Video) |
| 16.45 | Andrew Brock (Heriot-Watt University): Neural Photo Editing with Introspective Adversarial Networks (Video) |
| 17.05 | Final Comments and Close |
| Presenter | Poster |
|-----------|--------|
| Renzo Andri (ETH Zurich) | The Deep Internet-of-Things: Why the Deep Learning community should care about hardware |
| Yanis Bahroun (Loughborough University) | Online Representation Learning with Multi-layer Hebbian Networks for Image Classification Tasks |
| Shabab Bazrafkan (National University of Ireland, Galway) | Semi-Parallel Deep Neural Networks |
| Francesco Conti (ETH Zurich) | An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics |
| Ondrej Dusek (Heriot-Watt University), Spotlight | Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings (Video) |
| Matthew Graham (University of Edinburgh) | Inference in differentiable generative models |
| Catherine Higham (University of Glasgow), Spotlight | Achieving Single-Pixel Camera Video Rates using Deep Learning (Video) |
| Katarzyna Janocha (Jagiellonian University), Spotlight | On Loss Functions for Deep Neural Networks in Classification (Video) |
| Christoph Kading (Friedrich Schiller University) | Active and Continuous Exploration with Deep Neural Networks and Expected Model Output Changes |
| Joseph Lemley (National University of Ireland, Galway) | Smart Augmentation: Learning an Optimal Data Augmentation Strategy |
| Noura Moubayed (Durham University) | SMS Spam Filtering using Probabilistic Topic Modelling and Stacked Denoising Autoencoder |
| Nick Pawlowski (Imperial College London), Spotlight | Efficient Variational Bayesian Neural Network Ensembles for Outlier Detection (Video) |
| Marwin Segler (University of Münster), Spotlight | Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies (Video) |
| Sunna Torge (TU Dresden) | Deep Learning and High-Performance Computing |
| Duo Wang (University of Cambridge) | X-CNN: Cross-Modal Convolutional Neural Networks for Sparse Datasets |
| Ce Zhang (Lancaster University) | A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification |
This workshop is sponsored by Data Lab, the Institute for Adaptive and Neural Computation (ANC), the Scottish Informatics and Computer Science Alliance (SICSA Data Science and SICSA AI), the EPSRC Centre for Doctoral Training in Data Science, and the EPSRC Doctoral Training Centre in Neuroinformatics.
Participants will need to make their own travel arrangements. By train, the nearest station is Edinburgh Waverley, less than a 15-minute walk from the Forum. See National Rail Enquiries for train information. For those coming from further afield, information about travel to and from Edinburgh Airport is available. A taxi between the airport and the city centre costs about 22 to 24 GBP one way. There is an express bus from the airport, called Airlink, that terminates in the city centre and costs 7 GBP for a return journey. The journey between the airport and the city centre takes approximately 30 minutes (allow a little longer during rush hour). There is also a tram to the city centre; it takes a little longer than the bus and is slightly more expensive.
If you wish to contribute a talk or poster to this workshop, please send a title and a reference to an arXiv or OpenReview paper. We welcome papers that will be, or have been, published elsewhere; this is a forum for discussion and dissemination, not for publication. Alternatively, you can provide a paper or a two-page description.
Please send submissions to amos+deep@@inf.ed.ac.uk, replacing the double @ with a single @. Please send contributions before 21st February (the sooner the better). We will then be in contact about your submission. We may not be able to include all submissions.
Update: The submission deadline has passed. Thank you to those that submitted work.
If you wish to attend, please register at the Eventbrite site. Registration is free.
Update: The event is now fully booked.