Insegnamento a.a. 2023-2024

20596 - MACHINE LEARNING

Department of Decision Sciences

Course taught in English
DSBA (8 credits - II sem. - OB  |  SECS-S/01)
Course Director:
DANIELE DURANTE

Classes: 23 (II sem.)
Instructors:
Class 23: DANIELE DURANTE


Suggested background knowledge

For a fruitful and effective learning experience, a basic preliminary knowledge of mathematics and linear algebra, descriptive statistics, probability and random variables, simple and multiple linear regression, likelihood-based inference, and generalized linear models is strongly recommended. Students should also be familiar with basic statistical software.

Mission & Content Summary

MISSION

In 2009, the Chief Economist of Google, Hal Varian, said that Data Science would be the most attractive job of the following ten years. He also claimed that understanding, processing and extracting value from data were going to be hugely important skills in many careers. He was right. Indeed, the Data Scientist has been listed among the top jobs in the United States for several years now. The reason for this huge demand is simple and can be found in the words of Eric Schmidt, then CEO of Google: "we create as much information in two days now as we did from the dawn of man through 2003". But information (data) is not knowledge. This fundamental translation process requires skills in database management, statistical learning, machine learning and computational statistics, along with good intuition and the ability to deal with data, understand the analytic goals and interpret the final outputs. The course in Machine Learning aims at fostering these skills and providing students with the instruments and the mind-set to successfully deal with the wide range of data analytic problems they may find in their future jobs.

CONTENT SUMMARY

  • Introduction: An introduction to Machine Learning.
  • Linear methods: High-dimensional linear regression; Logistic regression; Discriminant analysis.
  • Model assessment and selection: Bias-variance trade-off; Training, test and validation sets; Cross-validation; Bootstrap.
  • Regularization and shrinkage: Subset selection; Ridge regression; Lasso; Elastic-net and related algorithms.
  • Methods beyond linearity: Regression and smoothing splines; K-nearest neighbors; Local linear regression; Kernel methods; GAM; MARS.
  • Tree-based methods: Regression and classification trees; Bagging; Random forests; Gradient boosting methods; Stacking.
  • Beyond tree-based methods: Support vector machines; Projection pursuit; Neural networks.

 

The above methods are also implemented during LAB sessions on real-world case studies. Code and implementation in classical statistical software are also part of the course topics.
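As a purely illustrative (and non-examinable) sketch of the kind of implementation carried out during the LAB sessions, the Python code below fits a lasso regression with the shrinkage parameter chosen by cross-validation and assesses it on a held-out test set. The data are simulated, and the specific library (scikit-learn) and settings are assumptions made here for illustration, not the official lab code.

```python
# Minimal sketch: regularized regression with cross-validated tuning and
# test-set assessment. Illustrative only; assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoCV

# Simulate a high-dimensional regression problem: 200 observations,
# 50 predictors, only 5 of which are truly informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# Split into training and test sets (model assessment on held-out data).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Choose the lasso penalty by 5-fold cross-validation on the training set.
model = LassoCV(cv=5).fit(X_train, y_train)

print("selected penalty (alpha):", model.alpha_)
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)))
print("test-set R^2:", model.score(X_test, y_test))
```

The same general workflow (fit on training data, tune by cross-validation, assess on a held-out test set) underlies most of the methods listed above.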


Intended Learning Outcomes (ILO)

KNOWLEDGE AND UNDERSTANDING

At the end of the course students will be able to...
  • Explain the methodology and theory underlying the classical Machine Learning methods.
  • Illustrate the technical aspects related to the implementation of classical Machine Learning methods.
  • Recognize the distinctive properties of each Machine Learning technique.
  • Identify the most suitable Machine Learning technique for a given data analytic problem.
  • Summarize differences and similarities between multiple Machine Learning techniques.

APPLYING KNOWLEDGE AND UNDERSTANDING

At the end of the course students will be able to...
  • Examine the relevant research questions underlying a real-data analytic problem.
  • Choose a Machine Learning technique coherent with the analytic question and apply it to the data at hand.
  • Identify relevant structures underlying the data and effectively predict unobserved events.
  • Discuss the empirical output produced by a Machine Learning technique.
  • Connect different Machine Learning techniques to improve predictive performance in complex analytic problems.

Teaching methods

  • Face-to-face lectures
  • Exercises (exercises, database, software etc.)
  • Case studies /Incidents (traditional, online)
  • Individual assignments
  • Interactive class activities on campus/online (role playing, business game, simulation, online forum, instant polls)

DETAILS

Classical face-to-face lectures focus on the presentation and discussion of the Machine Learning techniques covered by the course, with the main focus on methodology, theory and computational methods. To improve the learning experience and motivate interaction, illustrative case studies and in-class exercises may also be considered.

 

A series of lab sessions, with the students working on their own laptops, is also provided. These classes typically (but not always) consist of two main parts:

 

  1. The students are guided in the implementation of the Machine Learning techniques on standard statistical software, such as R and Python.
  2. After the guided implementation, an in-class individual assignment (performed on the Bocconi Data Science Challenges platform) asks the students to solve a specific predictive problem from a data analytic case study, leveraging suitable Machine Learning tools (an illustrative sketch of this kind of workflow is given below). This interactive class activity is expected to improve the autonomy of the students in answering a variety of real-world analytic questions, and serves as a self-assessment opportunity. Other online data competitions may be offered as non-compulsory individual homework, to provide additional training material for interested students.
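For illustration only, a minimal sketch of the predictive workflow behind such data challenges is given below: train a model on labelled data, self-assess it by cross-validation, and produce predictions on unlabelled data for submission. The file names, target column and model choice are hypothetical and do not reflect the actual format used on the Bocconi Data Science Challenges platform.

```python
# Hypothetical data-challenge workflow (illustrative, not the platform's format).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical files provided by the challenge organizers.
train = pd.read_csv("train.csv")   # features plus an assumed binary "target" column
test = pd.read_csv("test.csv")     # same features, no target

X, y = train.drop(columns=["target"]), train["target"]

# Quick self-assessment of the approach via 5-fold cross-validated accuracy.
model = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Refit on all training data and save the predictions to be submitted.
model.fit(X, y)
pd.DataFrame({"prediction": model.predict(test)}).to_csv("submission.csv", index=False)
```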

Assessment methods

  Assessment method                                                           | Continuous assessment | Partial exams | General exam
  • Written individual exam (traditional/online)                              |                       |               |      x
  • Individual assignment (report, exercise, presentation, project work etc.) |                       |               |      x

ATTENDING AND NOT ATTENDING STUDENTS

Due to the nature of the course, attending and non-attending students are evaluated with the same criteria, through a final general exam only. This assessment consists of two main parts.

 

  1. Traditional written individual exam, which consists of open- and closed-answer questions and small exercises. The focus is on evaluating students based on their methodological, theoretical and computational understanding of the Machine Learning techniques presented in the face-to-face lectures.
  2. Individual assignment based on a data challenge where students are asked to develop and apply a data analytic strategy to answer a predictive problem. Such a data challenge is a longer and more structured version of those proposed in the lab sessions, and takes place towards the end of the course. This assignment is managed via the Bocconi Data Science Challenges platform, and the evaluation considers the predictive performance of the analytic approach proposed by the student along with the quality of a document describing the methods considered, the code, the final results and related comments.

 

Grading rule: Let X denote the grade of the traditional written individual exam and let Y be the grade of the individual assignment. Then, if Y is greater than or equal to X, the final grade is 0.3*Y+0.7*X. Otherwise, if Y is less than X, the final grade is X.
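As a simple illustration of this rule, the short (unofficial) Python snippet below computes the final grade from hypothetical values of X and Y.

```python
# Illustration of the grading rule stated above (not official grading software):
# X is the written exam grade, Y the individual assignment grade.
def final_grade(x, y):
    # The assignment counts (with weight 0.3) only if it does not lower the grade;
    # otherwise the written exam alone determines the final grade.
    return 0.3 * y + 0.7 * x if y >= x else x

print(final_grade(26, 30))  # 0.3*30 + 0.7*26 = 27.2
print(final_grade(28, 24))  # assignment lower than exam, so the final grade is 28
```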


Teaching materials


ATTENDING AND NOT ATTENDING STUDENTS

The course relies mostly on two books, which complement each other and are available online for free.

 

  •  T. HASTIE, R. TIBSHIRANI, J. FRIEDMAN (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, Second Edition.
  •  G. JAMES, D. WITTEN, T. HASTIE, R. TIBSHIRANI (2013). An Introduction to Statistical Learning with Applications in R. Springer.

 

Other useful secondary references are listed below:

 

  • K.P. MURPHY (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
  • C.M. BISHOP (2006). Pattern Recognition and Machine Learning. Springer.

 

Slides and clarification notes summarizing the contents presented in class are also provided. Students who are interested in individually deepening specific concepts are provided with additional reading materials upon request. These additional materials are not part of the final evaluation.

Last change 08/11/2023 14:02