By completing this course, participants will be able to:
Query and interact with deployed LLMs using NVIDIA NIM (NVIDIA Inference Microservices).
Rigorously evaluate LLM performance with standardized benchmarks (e.g., GSM8K), LLM-as-a-judge approaches, and human evaluation principles.
Leverage the NeMo Evaluator microservice for systematic, repeatable, and trackable evaluations.
Prepare custom datasets for both evaluation and fine-tuning workflows.
Quantitatively measure the impact of In-Context Learning (ICL) on model performance.
Adapt an LLM to a specific domain by fine-tuning it with Low-Rank Adaptation (LoRA) using the NeMo Customizer.
Analyze training metrics and compare the performance of a fine-tuned model against its base version.
Make data-driven decisions about which customization technique is appropriate for a given use case.
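To make the first objective concrete: NIM deployments expose an OpenAI-compatible `/v1/chat/completions` endpoint. The sketch below builds such a request; the base URL and model name are placeholder assumptions for a local deployment, not values prescribed by the course.

```python
import json

# Hypothetical values -- substitute your own deployment's URL and model.
NIM_BASE_URL = "http://localhost:8000/v1"
MODEL = "meta/llama-3.1-8b-instruct"

def build_chat_request(prompt: str, model: str = MODEL,
                       temperature: float = 0.2,
                       max_tokens: int = 256) -> tuple[str, dict]:
    """Return the (url, payload) pair for an OpenAI-compatible
    chat-completions call, the API shape NIM endpoints follow."""
    url = f"{NIM_BASE_URL}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return url, payload

url, payload = build_chat_request("What is 12 * 7?")
print(url)
print(json.dumps(payload, indent=2))

# An actual call would then be, e.g.:
#   import requests
#   resp = requests.post(url, json=payload, timeout=60)
#   print(resp.json()["choices"][0]["message"]["content"])
```

Separating request construction from the network call keeps the payload easy to inspect and log, which helps when comparing prompts across evaluation runs.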
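For the benchmark-evaluation objective, a minimal sketch of GSM8K-style exact-match scoring: GSM8K reference answers mark the final number with `#### <number>`, while free-form model output usually just ends with the answer, so a common heuristic is to fall back to the last number in the text. The parsing rules here are one illustrative choice, not the course's prescribed scorer.

```python
import re

def extract_final_number(text: str) -> str | None:
    """Pull out the final numeric answer. Prefer the GSM8K '#### <number>'
    marker; otherwise fall back to the last number in the text."""
    m = re.search(r"####\s*(-?[\d,]+(?:\.\d+)?)", text)
    if m:
        raw = m.group(1)
    else:
        nums = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
        if not nums:
            return None
        raw = nums[-1]
    return raw.replace(",", "")  # normalize "1,234" -> "1234"

def exact_match(prediction: str, reference: str) -> bool:
    """Score a single example: do the extracted answers agree?"""
    return extract_final_number(prediction) == extract_final_number(reference)

reference = "She sells 16 - 3 - 4 = 9 eggs daily at $2 each. #### 18"
prediction = "Step by step: 9 eggs at $2 each gives 9 * 2 = 18. The answer is 18."
print(exact_match(prediction, reference))  # True
```

Averaging `exact_match` over a dataset yields the accuracy number a benchmark run reports; a harness like NeMo Evaluator automates the same idea with tracked, repeatable runs.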
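On the LoRA objective, the core idea can be shown with simple arithmetic: instead of updating a full d x k weight matrix, LoRA trains two low-rank factors B (d x r) and A (r x k), cutting trainable parameters from d*k to r*(d + k). The layer dimensions below are an illustrative example, not specific to the course's model.

```python
def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int, float]:
    """For a d x k weight matrix, compare full fine-tuning (d*k trainable
    parameters) with LoRA's low-rank factors (r*(d + k) parameters)."""
    full = d * k
    lora = r * (d + k)
    return full, lora, lora / full

# Example: a 4096 x 4096 attention projection with rank r = 8.
full, lora, ratio = lora_trainable_params(4096, 4096, 8)
print(full, lora, f"{ratio:.4%}")  # 16777216 65536 0.3906%
```

This is why LoRA adapters are cheap to train and store relative to full fine-tuning, and why comparing a LoRA-adapted model against its base (as the objectives above describe) is a practical, low-cost experiment.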