Monday 7:30pm – 9:30pm & Saturday 9:00am
Tuesday 7:30pm – 9:30pm & Sunday 9:00am – 11:00am
Price: 1900 RMB / 297 USD
In the first week of our Bootcamp we will focus on learning how to navigate Google Colab (the online version of Jupyter Notebook that lets us easily train neural network models on a GPU). We will learn how to connect Google Drive to Colab and install all the fast.ai dependencies. Along with that, we will be introduced to the high-level fast.ai library, which allows us to train supervised models for Image Classification, Natural Language, and Tabular (structured data) problems in just a few lines of code.
Training the model itself is not a complicated process if we have prepared the datasets correctly for training and for validating predictions. We will learn how to split the data so we can see whether the trained models work well on new data.
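The idea behind the split can be sketched in plain Python (a minimal, framework-free version of what fastai's `RandomSplitter` does; the function name and percentages are illustrative):

```python
import random

def train_valid_split(items, valid_pct=0.2, seed=42):
    """Shuffle the dataset and hold out a fraction for validation,
    so the model is evaluated on data it never saw during training."""
    rng = random.Random(seed)
    idxs = list(range(len(items)))
    rng.shuffle(idxs)
    cut = int(len(items) * valid_pct)
    valid = [items[i] for i in idxs[:cut]]
    train = [items[i] for i in idxs[cut:]]
    return train, valid

train, valid = train_valid_split(list(range(100)))
print(len(train), len(valid))  # 80 20
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing experiments.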
The biggest takeaway from the first week will be a new mindset about Machine Learning as a whole game, in which we play with hyperparameters, configurations, and techniques, run many experiments, and learn practically what the math concepts really mean by inspecting inputs and outputs and by tracking metrics.
That’s probably the most important lesson in the entire bootcamp. Regardless of how strong your math or coding background is, we will be able to build end-to-end Deep Learning web applications from manually collected data or from data gathered through an image search engine. What matters here is developing the right approach to designing our machine learning products. We will be introduced to the Drivetrain Approach to learn whether our models' predictions make sense in real-life situations.
We will also learn about different Data Augmentation methods that help our models generalize better to unforeseen situations (e.g., what if the model has to classify a picture unlike any it has seen before?). Furthermore, we will study the fast.ai API and learn how to properly utilize DataLoaders and how to work with the DataBlock API. Depending on the results, we will learn how to further improve our model.
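To make the augmentation idea concrete, here is a minimal, framework-free sketch of one transform, a horizontal flip (fastai's `aug_transforms` applies this kind of transform, plus rotation, zoom, and more, to whole batches on the GPU; the tiny nested-list "image" below is purely illustrative):

```python
import random

def hflip(img):
    """Mirror each row of pixel values (horizontal flip)."""
    return [row[::-1] for row in img]

def maybe_hflip(img, p=0.5, rng=random):
    """Apply the flip at random, so each training epoch
    sees slightly different versions of the same image."""
    return hflip(img) if rng.random() < p else img

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # [[3, 2, 1], [6, 5, 4]]
```

Because the label (e.g., "cat") is unchanged by a flip, the model effectively gets extra training examples for free.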
This week’s content is designed to reflect on our learnings from last week and to dive into data ethics in Machine and Deep Learning.
We will study four main topics in Data Ethics: Recourse and Accountability, Feedback Loops, Bias, and Disinformation, and discuss them using case studies. These studies will help us identify and address ethical issues while analyzing the projects we are working on and those at the implementation stage.
In this lesson, we will discuss the most foundational and important theoretical aspects that apply to every neural network model. It will be the hardest lesson in the bootcamp, because we will start digging into the math and theoretical concepts of Deep Learning.
We will study the roles of arrays, tensors, and broadcasting. We will learn about stochastic gradient descent (SGD), discuss the choice of a loss function for the MNIST digit classification task, and cover the role of mini-batches. We will also describe the math that a basic neural network is doing.
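The core training loop above can be sketched without any framework. This toy example (all numbers are illustrative) uses SGD with mini-batches and a mean-squared-error loss to recover the line y = 2x + 3:

```python
import random

# Noise-free data following y = 2x + 3; SGD should recover a=2, b=3.
data = [(x, 2 * x + 3) for x in range(-10, 11)]
a, b = 0.0, 0.0
lr = 0.005
rng = random.Random(0)

for step in range(3000):
    batch = rng.sample(data, 4)  # a mini-batch of 4 examples
    # Gradients of the MSE loss with respect to a and b,
    # averaged over the mini-batch.
    ga = sum(2 * (a * x + b - y) * x for x, y in batch) / len(batch)
    gb = sum(2 * (a * x + b - y) for x, y in batch) / len(batch)
    # The SGD update: step against the gradient, scaled by the learning rate.
    a -= lr * ga
    b -= lr * gb

print(round(a, 2), round(b, 2))  # close to 2.0 and 3.0
```

Each step looks at only a small random batch rather than the whole dataset, which is exactly the trade-off mini-batches make between gradient accuracy and speed.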
On the practical side, we will return to the image classification project to further improve the model's accuracy using our newly acquired knowledge. We will also be required to train a full MNIST digit classifier with the lowest error rate possible, competing against our fellow students.