Topic 💥 | Description 📘 |
---|---|
Introduction to ML Using JAX | This tutorial introduces basic concepts in Machine Learning and their implementation using JAX. JAX is a high-performance Python library for computation, optimization and model learning. Room PC2 FACEA (2nd floor, same building as main venue) |
Let's map Chile! Intro to Geospatial Machine Learning | Satellite imagery and machine learning (ML) can help address many challenges facing the Global South, such as climate-related issues in food and water security, biodiversity, energy, and public health. To this end, this practical is designed as an introductory tutorial on geospatial machine learning for agriculture, in particular classifying farm-level crop types in Chile using Sentinel-2 satellite imagery (a minimal classification sketch follows this table). Starting with an introductory session on key concepts of geospatial ML, the tutorial delves into ML development, validation, and performance evaluation techniques. Room PC1 FACEA (2nd floor, same building as main venue) |
Foundations of LLMs | Foundations of LLMs is your gateway to the fascinating world of Large Language Models (LLMs)! In this practical, we shall dive into the core principles of transformers, the cutting-edge technology behind models like GPT, Llama and Gemini, and explore how these impressive AI systems create such realistic and engaging text (a short attention sketch follows this table). You'll also get hands-on experience training your very own language model! Room K203 Ingenieria (5 minute walk from venue, 2nd Floor) |
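As a rough illustration of the kind of pipeline the geospatial practical builds toward, here is a minimal crop-type classification sketch using scikit-learn. The band values and crop labels are synthetic placeholders, not Sentinel-2 data or the practical's actual code.

```python
# Minimal sketch (not from the practical): classifying crop types from
# per-pixel band reflectances with a random forest.
# The band values and crop labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 1_000, 10          # e.g. 10 spectral bands per pixel
X = rng.random((n_pixels, n_bands))    # placeholder reflectance features
y = rng.integers(0, 3, size=n_pixels)  # placeholder crop-type labels (3 classes)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```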
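To make "the core principles of transformers" a little more concrete, below is a minimal sketch of scaled dot-product attention, the operation at the heart of a transformer block. Shapes and values are illustrative only.

```python
# Minimal sketch of scaled dot-product attention: each query attends to all
# keys and returns a weighted sum of the corresponding values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(scaled_dot_product_attention(Q, K, V).shape)          # (4, 8)
```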
Topic 💥 | Description 📘 |
---|---|
Graph Neural Networks | In this tutorial, we will learn about Graph Neural Networks (GNNs), a topic which has exploded in popularity in both research and industry. We will start with a refresher on graph theory, then dive into how GNNs work at a high level. Next we will cover some popular GNN implementations and see how they work in practice (a one-layer message-passing sketch follows this table). Room K202 Ingenieria (5 minute walk from venue, 2nd Floor) |
Vision-Language Models | In this tutorial we'll explore how we can use image-text data to build Vision-Language Models 🚀. We'll start with an introduction to multimodal understanding that describes the main components of a Vision-Language Model and provides a brief history of how these have evolved in recent years. Then, we'll dive deep into Contrastive Language-Image Pre-training (CLIP), a model for learning general representations from image-text pairs that can be used for a wide range of downstream tasks. We'll then explore how CLIP can be used for semantic image search, followed by a showcase of its failure cases (a zero-shot CLIP similarity sketch follows this table). Finally, we'll finetune PaliGemma, a powerful 3B-parameter vision-language model, together. Room K203 Ingenieria (5 minute walk from venue, 2nd Floor) |
Socio-Cultural Evaluation of AI | In this tutorial, you will examine how AI systems reflect and reinforce socio-cultural biases. Through hands-on activities and group discussions, you will analyze biases in large language models, assess culturally situated benchmarks like SeeGULL and HESEIA from different perspectives, and experiment with bias detection tools. You will also step into the role of a crowdworker to understand how annotation decisions shape AI outcomes. No prior NLP knowledge is required. This session encourages critical reflection on bias from your own cultural and personal perspective while connecting with a diverse network of colleagues across Latin America. Room PC1 FACEA (2nd floor, same building as main venue) |
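As a rough illustration of the message-passing idea behind GNNs, here is a one-layer graph-convolution sketch in plain NumPy: each node averages its neighbours' features (plus its own) and applies a learned linear map. The graph, features and weights are made-up placeholders.

```python
# Minimal sketch of one graph-convolution layer: aggregate neighbour features,
# transform with learnable weights, apply a nonlinearity.
import numpy as np

A = np.array([[0, 1, 0],      # adjacency matrix of a 3-node path graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.default_rng(0).standard_normal((3, 4))   # 3 nodes, 4 features each
W = np.random.default_rng(1).standard_normal((4, 2))   # learnable weights

A_hat = A + np.eye(3)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalisation
H = np.maximum(D_inv @ A_hat @ X @ W, 0)   # aggregate, transform, ReLU
print(H.shape)                             # (3, 2): new node embeddings
```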
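And as a taste of using CLIP for image-text matching, below is a minimal zero-shot similarity sketch. It assumes the Hugging Face `transformers` library and the public `openai/clip-vit-base-patch32` checkpoint; the blank image is only a placeholder, not part of the practical's materials.

```python
# Minimal sketch of zero-shot image-text matching with CLIP, assuming the
# Hugging Face `transformers` library; replace the blank image with a real photo.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))            # placeholder image
texts = ["a photo of a dog", "a photo of a cat"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)   # image-text similarity as probabilities
print(dict(zip(texts, probs[0].tolist())))
```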
Topic 💥 | Description 📘 |
---|---|
Diffusion Models | This practical will walk you through the challenges involved in developing an effective generative model for the particular case of Denoising Diffusion Models (a.k.a. score-based generative models), the backbone of the popular text-to-image and text-to-video models seen online (a forward-noising sketch follows this table). Room PC2 FACEA (2nd floor, same building as main venue) |
Reinforcement Learning & Model Predictive Control | In Reinforcement Learning (RL), an agent is trained to take the best actions in an environment so as to maximize a given reward in the long run. RL has seen tremendous success on a wide range of challenging problems, such as learning to play complex video games like Atari titles and StarCraft II. In this introductory tutorial we will explore various RL and Model Predictive Control (MPC) approaches for solving the classic CartPole problem, an inverted pendulum system where an agent must learn to balance a vertical pole by displacing the cart (a CartPole environment-loop sketch follows this table). Room E10 Civil Construction (5 minute walk from venue, Ground Floor) |
Resource-efficient NLP (Spanish Data) / Resource-efficient NLP (Portuguese Data) | Low-resource NLP (Natural Language Processing) refers to the study and development of NLP models and systems for languages, tasks, or domains that have limited data and resources available. These can include languages with fewer digital text corpora, limited computational tools, or less-developed linguistic research. In this practical, we will explore data scarcity and compute-resource limitations in low-resource NLP, and introduce some ways to address these challenges with parameter-efficient finetuning of LLMs (a low-rank adapter sketch follows this table). Room PC1 FACEA (2nd floor, same building as main venue) |
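As a rough illustration of the forward (noising) process that denoising diffusion models learn to invert, here is a small NumPy sketch of sampling a noisy x_t from a clean x_0 under a linear beta schedule. The toy data and schedule are illustrative assumptions, not the practical's code.

```python
# Minimal sketch of the diffusion forward process: x_t blends the clean sample
# x_0 with Gaussian noise according to a noise schedule.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)            # linear noise schedule (illustrative)
alpha_bars = np.cumprod(1.0 - betas)          # cumulative signal retention

x0 = rng.standard_normal(16)                  # toy "clean" sample
t = 500
eps = rng.standard_normal(16)                 # Gaussian noise
x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
print(f"signal fraction at t={t}: {np.sqrt(alpha_bars[t]):.3f}")
```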
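To make the CartPole setting concrete, below is a minimal environment loop with a random policy, assuming the `gymnasium` package is installed. Both the RL agents and the MPC controllers explored in the practical would replace the random action choice with something smarter.

```python
# Minimal CartPole loop with a random policy, assuming `gymnasium` is installed.
# An RL agent or an MPC controller would replace env.action_space.sample().
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)   # obs = [cart pos, cart vel, pole angle, pole ang. vel]
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()   # push cart left (0) or right (1)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
env.close()
```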
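And as a rough sketch of the parameter-efficient finetuning idea (here a LoRA-style low-rank update), the snippet below shows how a frozen weight matrix W can be adapted by training only two small matrices A and B. Shapes and values are illustrative placeholders, not the practical's setup.

```python
# Minimal LoRA-style sketch: the pretrained weight W stays frozen and only the
# small low-rank factors A and B are trained, so the effective weight is
# W + (alpha / r) * B @ A.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 768, 768, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (starts at zero)

x = rng.standard_normal(d_in)
h = (W + (alpha / r) * B @ A) @ x           # adapted forward pass
print(h.shape)   # (768,): same output shape; A and B hold only ~2% as many parameters as W
```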
Topic 💥 | Description 📘 |
---|---|
Basics of JAX | This standalone notebook covers the fundamentals of JAX, a Python library for accelerator-oriented array computation and program transformation, designed for high-performance numerical computing and large-scale machine learning. This notebook is recommended if you are new to JAX and want to warm up before attending the practicals (a tiny jit/grad example follows this table). |
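For readers who want a quick taste before opening the warm-up notebook, here is a tiny, purely illustrative JAX example of jit-compiling a function, differentiating it, and vectorising an operation.

```python
# Tiny JAX warm-up: jit-compile a function, differentiate it, and vectorise it.
import jax
import jax.numpy as jnp

@jax.jit
def loss(w):
    return jnp.sum((w * 2.0 - 1.0) ** 2)     # a simple scalar loss

w = jnp.array([0.1, 0.5, 0.9])
print(loss(w))                   # compiled on first call
print(jax.grad(loss)(w))         # gradient of the loss w.r.t. w
print(jax.vmap(jnp.square)(w))   # vectorised elementwise square
```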
Some of the above practicals (including intro to LLMs, parameter-efficient ML, geospatial ML, diffusion, and RL) are derived from the practicals developed for Indaba 2024, with kind permission from the authors. See each notebook for specific attribution information.