### Links

- NYU Brightspace: links to **Zoom** meetings and **codes for Piazza and Gradescope**.
- Piazza: online forum used for asking/answering questions, making announcements, etc.
- Gradescope: where you upload your homework.

### Contents

This course covers the basics of optimization and computational linear algebra used in Data Science.
About two-thirds of the lectures will cover linear algebra and one-third convex optimization.
The first five lectures cover basic linear algebra. Then we will study applications: Markov chains and PageRank, PCA and dimensionality reduction, spectral clustering, and linear regression. Lastly, we will go over convex functions, optimality conditions, and gradient descent.
See the **syllabus pdf**.
**Important**: This course is "proof-based", meaning that we will prove theorems and that you will have to prove things in the homework and exams.

### Questions and feedback

Feel free to ask me any questions you may have, in class, during office hours or by email.

**Feedback:** If you have any feedback on the class (it's going too fast, too slow...), please let me know (in person or by email) or submit an anonymous comment via this Google form.

### Grading

There will be weekly homework assignments, a midterm, and a final exam. Check out the syllabus for more details.
You will find exams from past years in the Archive section.

**Grade** = 40% Homework + 25% Midterm + 35% Final. The exams are open book/notes.
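As a quick sanity check of the weighting, here is a minimal sketch of how a final grade would be computed from the three components (the scores below are made up for illustration):

```python
# Hypothetical component scores (out of 100), for illustration only.
homework, midterm, final = 90.0, 80.0, 85.0

# Weighted average using the course weights: 40% homework, 25% midterm, 35% final.
grade = 0.40 * homework + 0.25 * midterm + 0.35 * final
print(grade)  # 0.40*90 + 0.25*80 + 0.35*85 = 85.75
```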

### Books (optional, some references are also given in the notes)

We will not follow any particular book. If you are looking for practice exercises, I would recommend looking at the course's archive. However, if you are looking for further reading on linear algebra and optimization, you can refer to the following books:

- **Strang**: Introduction to Linear Algebra (there are very good lecture videos available on YouTube)
- **Boyd & Vandenberghe**: Introduction to Applied Linear Algebra (available online here)
- **Nocedal & Wright**: Numerical Optimization (should be available online via NYU here)
- **Boyd & Vandenberghe**: Convex Optimization (available online here)

There are also