This course introduces large language models (LLMs) such as 7B and 13B parameter models, surveys their use cases, and discusses the growth of the generative AI space. It emphasizes viewing an LLM as a collection of numbers, organized as weight matrices, during fine-tuning, and highlights the capabilities of ChatGPT. It also covers the cost-effectiveness of fine-tuning a model, the fine-tuning workflow using the AutoTrain library, and the resources required to fine-tune on Google Colab. Finally, it announces an upcoming offline boot camp covering the basics of generative AI and Python.
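The "LLM as a series of numbers or matrices" idea can be made concrete with a toy sketch. This is not the course's code; it is a minimal NumPy illustration (with made-up 4-dimensional sizes) of the kind of weight matrices that fine-tuning adjusts:

```python
import numpy as np

# An LLM layer is, at bottom, weight matrices acting on vectors.
# Toy "layer": project a 4-dim token embedding through a weight
# matrix W and bias b, then apply a ReLU nonlinearity.
# Fine-tuning means nudging the numbers in W and b via gradients.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # hypothetical weight matrix
b = np.zeros(4)               # bias vector

def layer(x):
    return np.maximum(0, x @ W + b)  # ReLU(x W + b)

x = rng.normal(size=(1, 4))   # one token embedding
out = layer(x)                # output has the same shape, (1, 4)
```

Real models simply stack billions of such parameters; fine-tuning updates some or all of them on new data.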
3 chapters
12 videos