Building Business Applications of LLMs and Generative Models

I believe human language has become the standard programming language. This course teaches future product managers, investors, and entrepreneurs at Foster to build with that capability.

Instructor: Zikun Ye  ·  Term: Every spring quarter, starting Spring 2026

Description

Building with AI no longer requires writing code: students describe what they want, and the model produces it. This course is grounded in that shift. In ten weeks, we go from a first API call to a working LLM-powered prototype: agents that use tools, retrieval systems built on any corpus, and fine-tuned models that produce reliable business outputs.

The course has three strands. We build hands-on every week. We demystify the techniques underneath, from attention and embeddings to RAG, fine-tuning, agents, multimodality, and physical AI. And we ground every concept in a business application: market research, content generation, advertising, and AI search.

No prior coding experience is required.

Learning Outcomes

By the end of this course, you will be able to:

  1. Explain the core building blocks of generative AI: attention, embeddings, RAG, fine-tuning, alignment, and the AI product stack from chat interfaces to agentic workflows.
  2. Build end-to-end LLM-powered applications and agentic systems that solve real business problems.
  3. Lead AI initiatives, weighing feasibility, cost, risk, and responsible deployment.

Course Materials

  • All slides, Colab notebooks, case readings, and recordings are posted on Canvas. No textbook required.
  • We use the OpenAI API for some in-class activities and assignments.
  • Use any AI coding tool of your choice. IDE-based options include Cursor and Antigravity. CLI-based options include Claude Code, Codex, and Gemini CLI. In class, I primarily demo with Claude Code and Codex.

Schedule

Week 1 Agentic Coding

Course overview and an introduction to agentic coding. A coding agent runs analytics and builds a frontend dashboard from a plain-English brief, and we explore more agent use cases together.

Week 2 LLM Foundations & API

Demystify the LLM: tokens, next-word prediction, sampling, transformers, pre-training and post-training, and deployment for inference. We then run zero-shot, few-shot, and chain-of-thought sentiment analysis on Amazon Reviews via the OpenAI API.
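The in-class exercise can be sketched in a few lines. This is a hedged sketch, not the course's actual notebook: the example reviews, labels, and model name are placeholders, and the `openai` SDK import is deferred inside `classify` so the prompt-building helper runs even without the SDK installed.

```python
# Sketch of zero-shot vs. few-shot sentiment classification on review
# text. Example reviews, labels, and the model name are illustrative.

FEW_SHOT_EXAMPLES = [
    ("Arrived broken and support never replied.", "negative"),
    ("Exactly as described, works great.", "positive"),
]

def build_messages(review: str, few_shot: bool = False) -> list:
    """Assemble the chat messages for one classification call."""
    messages = [{"role": "system",
                 "content": "Classify the review as positive or negative. "
                            "Answer with one word."}]
    if few_shot:  # prepend worked examples as prior conversation turns
        for text, label in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": text})
            messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": review})
    return messages

def classify(review: str, few_shot: bool = False) -> str:
    # Imported here so build_messages works without the SDK installed.
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model the class assigns
        messages=build_messages(review, few_shot),
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()
```

Chain-of-thought prompting only changes the instruction (e.g. "think step by step, then answer with one word"); the call structure stays the same.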

Mini Assignment 1: First API call  ·  Major Assignment 1: Batch text analysis

Week 3 Agentic AI

An agent is an LLM that decides its own next step, looping through plan, act, observe, and verify, drawing on tools, memory, skills, and MCP integrations. We dissect coding, deep-research, and shopping agents, then build our own ad-campaign generation agent.
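The loop can be sketched in plain Python. This is a minimal illustration under stated assumptions, not a framework: the `policy` function stands in for the LLM's decision step, and the toy calculator tool is hypothetical.

```python
# Minimal plan -> act -> observe loop. A real agent would ask an LLM to
# choose the next action and to verify the result; here a stub policy
# plays that role so the control flow is visible.

def run_agent(goal, tools, policy, max_steps=5):
    observations = []                              # the agent's working memory
    for _ in range(max_steps):
        action, arg = policy(goal, observations)   # plan: pick the next step
        if action == "finish":
            return arg                             # verified; return the answer
        result = tools[action](arg)                # act: call the chosen tool
        observations.append((action, result))      # observe: record the outcome
    return None                                    # step budget exhausted

# Stub policy: run the calculator once, then return its result.
def demo_policy(goal, observations):
    if not observations:
        return "calculator", goal
    return "finish", observations[-1][1]

tools = {"calculator": lambda expr: eval(expr)}  # toy tool; never eval untrusted input

print(run_agent("2 + 3", tools, demo_policy))    # prints 5
```

Swapping the stub policy for a model call (and the calculator for real tools) is exactly the step from this sketch to a production agent.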

Mini Assignment 2: Agentic AI showcase

Week 4 Embeddings & RAG

Why vanilla LLMs hallucinate and lack access to private data, and how embeddings and RAG address these limitations. We build a full pipeline on Amazon reviews, evaluate whether RAG outperforms the baseline, then discuss enterprise RAG and the trajectory of agentic RAG.
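The retrieval half of the pipeline can be sketched with a toy embedding. A sketch only: real pipelines use a learned embedding model and a vector store, while here the "embedding" is a word-count vector so the mechanics fit on one screen, and the sample reviews are made up.

```python
# Toy RAG retrieval: embed the corpus, embed the query, rank by cosine
# similarity, and stuff the top hits into the prompt as context.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())   # stand-in for a real embedding model

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

reviews = [                                # made-up sample corpus
    "battery life is excellent and lasts two days",
    "the screen cracked after one week",
    "shipping was fast and the packaging was fine",
]
context = retrieve("battery life", reviews, k=1)
prompt = ("Answer using only this context:\n" + context[0]
          + "\n\nQ: How is the battery life?")
```

Evaluating RAG against the baseline then amounts to comparing the model's answers with and without the retrieved context in the prompt.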

Mini Assignment 3: Embeddings & RAG  ·  Major Assignment 2: RAG in the wild

Week 5 GenAI Ecosystem

Map the five-layer AI stack from energy and chips up through infrastructure, models, and applications. We trace where value is captured across the stack, where competitive moats are forming, and the implications for builders and investors.

Week 6 Fine-Tuning

What fine-tuning is, when to apply it, and what it powers: safety alignment, distillation, reasoning models, brand voice, and content generation. We work through the Upworthy headline case study end to end, from data and LoRA to deployment and revenue lift.
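The arithmetic behind LoRA's appeal is easy to check. A sketch with illustrative sizes: instead of updating a full d-by-d weight matrix, LoRA freezes it and learns two thin matrices B (d-by-r) and A (r-by-d), adding their product to the frozen weights.

```python
# Parameter-count comparison: full fine-tuning vs. a LoRA update.
# The sizes below are illustrative, not taken from any specific model.
d = 1024      # hidden dimension of one weight matrix
r = 8         # LoRA rank

full_params = d * d            # weights touched by full fine-tuning
lora_params = d * r + r * d    # weights in the low-rank factors B and A

print(full_params)             # 1048576
print(lora_params)             # 16384 -- 64x fewer trainable weights
```

That ratio is why a brand-voice or headline model can be fine-tuned on modest hardware.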

Major Assignment 3: Fine-tuning

Week 7 GenAI Applications

AI beyond text: multimodal and vision models, and physical AI. We tour applications in content generation via adaptive prompting, market research via digital twins, and robotics in operations.

Week 8 Deploying AI Agents

Deploying an agent in production: evaluations, guardrails, cost, latency, and the organizational changes such deployments require.

Panel with industry experts

Week 9 AI Impact, Risks & Society

How AI is reshaping work and society, and the live debates around safety, polarization, discrimination, and copyright. We translate the governance debates into what they mean for the businesses you'll build or invest in.

Week 10 Demo Day

Each team demonstrates a working LLM-powered prototype to a non-technical audience.

Assessment

Component            Weight
Class participation  15%
Mini assignments     15%
Major assignments    30%
Final project        40%
Total                100%

The final project is the central deliverable: an end-to-end LLM-powered application that solves a real business problem. The format is flexible: a web app, a local demo, a tool built on top of an LLM API, or any other working prototype. The expectation is a functioning prototype, not a slide deck.

Course Policies

Generative AI

Students are permitted and encouraged to use generative AI tools such as ChatGPT, Claude, Gemini, and GitHub Copilot throughout this course, including assignments, projects, and in-class exercises. Students are responsible for the accuracy and quality of all submitted work, regardless of whether AI assisted in producing it. Submitting AI-generated work that the student cannot explain constitutes a violation of academic integrity.

Prerequisites

None. Prior exposure to machine learning or analytics is helpful but not required. Familiarity with AI tools such as ChatGPT through general use is sufficient preparation.

Academic Integrity, Disability & Religious Accommodations

This course follows the University of Washington Student Conduct Code. Students requiring accommodations should contact Disability Resources for Students (DRS) as early as possible. Religious accommodations follow UW's Religious Accommodations Policy and must be requested within the first two weeks of the course.