Large language models (LLMs) have transformed the field of natural language processing (NLP) over the last few years. These foundation models are becoming ubiquitous across a wide range of tasks, extending beyond natural language understanding to, for example, generation tasks in computer vision. At the same time, they have given rise to new ethical and scalability challenges. This course covers contemporary LLM topics centered on pre-trained language models (BERT, GPT, T5, mixture-of-experts models, retrieval-based models), emergent capabilities (knowledge, reasoning, few-shot learning, in-context learning, computer vision), fine-tuning and adaptation, and security and ethics. We will cover each topic and discuss important papers in depth. Students will be expected to read and present research papers and to complete a research project, carried out in groups of 2-3 students, at the end of the course. This is an advanced graduate course; all students are expected to have taken a machine learning course (and preferably an NLP course) and should be familiar with deep learning models such as Transformers.