20597 - NATURAL LANGUAGE PROCESSING
Course taught in English
To feel comfortable in this course, you should have a good knowledge of Python programming, as well as basic linear algebra (what vectors and matrices are and how they are multiplied) and probability theory (what a probability distribution is, what conditional probability is). Additional familiarity with data structures such as Counter and defaultdict makes many of the assignments easier to solve.
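As a small illustration of why these data structures help, the sketch below counts word frequencies and groups words by their first letter (the token list is a made-up toy example, not course material):

```python
# Toy example: the kind of bookkeeping that Counter and defaultdict simplify.
from collections import Counter, defaultdict

tokens = "the cat sat on the mat the end".split()

counts = Counter(tokens)           # word -> frequency
by_letter = defaultdict(list)     # first letter -> list of words
for word in tokens:
    by_letter[word[0]].append(word)

print(counts.most_common(1))       # [('the', 3)]
```

Without these classes, each lookup would need an explicit "if key not in dict" check.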
Natural Language Processing tools are becoming ubiquitous: from everyday assistants like Siri or Alexa, to decision-making processes in industry and politics, to text analysis tools in social science research. Machine-learning-based text analysis tools offer a wide range of possibilities and are a growing field of expertise. Whether the goal is exploring text to find structures and topics, or building a classifier to predict the sentiment or author characteristics of a text, this course provides an overview of, and hands-on experience with, the relevant techniques.
Preparation: how to work with text:
- Data formats.
- Storage and retrieval.
Exploration: finding structure in the data:
- Topic models.
- Word embeddings.
Prediction: finding patterns to predict new values:
- Text classification (sentiment analysis, author attributes).
- Logistic Regression.
- Feed-forward Neural Nets.
- Convolutional Neural Nets.
- Structured perceptron.
- Recurrent Neural Nets.
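To give a flavour of the prediction part, the sketch below trains a bag-of-words logistic regression classifier from scratch with stochastic gradient descent. The tiny sentiment dataset is purely illustrative, not course material:

```python
# Minimal sketch: bag-of-words logistic regression for binary sentiment
# classification, trained with stochastic gradient descent on toy data.
import math
from collections import Counter

train = [
    ("a great wonderful movie", 1),
    ("great acting and a wonderful plot", 1),
    ("a terrible boring movie", 0),
    ("boring plot and terrible acting", 0),
]

# Build a fixed vocabulary from the training texts.
vocab = sorted({w for text, _ in train for w in text.split()})

def featurize(text):
    """Map a text to a vector of word counts over the vocabulary."""
    counts = Counter(text.split())
    return [counts.get(w, 0) for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train: for each example, nudge the weights by the prediction error.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for epoch in range(200):
    for text, label in train:
        x = featurize(text)
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = label - p
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

def predict(text):
    """Return 1 (positive) or 0 (negative) for a new text."""
    x = featurize(text)
    p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
    return 1 if p >= 0.5 else 0
```

In practice the course uses library implementations rather than hand-rolled training loops, but the sketch shows the ingredients: a feature representation, a sigmoid-scored linear model, and gradient-based weight updates.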
- Describe different text analysis problems.
- Talk about the linguistic foundations.
- Distinguish between exploration and prediction approaches.
- Know which algorithm to choose for a given problem.
- Understand the trade-offs between different approaches.
- Apply their knowledge to a practical problem.
- Implement a variety of algorithms for text exploration and classification in Python.
- Face-to-face lectures
- Guest speakers' talks (in class or at a distance)
- Exercises (exercises, databases, software, etc.)
- Individual assignments
- Group assignments
- Participation in external competitions
- Each lecture features hands-on exercises in Jupyter notebooks.
- Each student completes several individual assignments to get experience in implementation details.
- Students work together in groups to solve a joint task.
- If applicable/available, students have the option to participate in external competitions such as Kaggle competitions or shared tasks in natural language processing.
- If available, guest speakers from data-science companies present their work on text and language processing.
- Individual Assignment (50%).
- Final group project (50%): graded on the performance of the system and the quality of the report.
- Lecture notes, provided on Bboard.
- D. JURAFSKY, J.H. MARTIN, Speech and language processing, 3rd ed., Pearson, 2014.
- C.D. MANNING, H. SCHÜTZE, Foundations of statistical natural language processing, MIT Press, 1999.
- S. MARSLAND, Machine learning: an algorithmic perspective, CRC press, 2015.
- F. CHOLLET, Deep learning with Python, Manning Publications Co., 2017.