
Ever wanted your own private version of ChatGPT? LLMs offer powerful capabilities, but not everyone is comfortable sending their data over the internet to Microsoft, OpenAI, or Anthropic. Fortunately, open-source tools make it possible to set up and customize a local, secure LLM on your own terms. From installation to customization, this workshop will guide you through the process step by step, with demos to illustrate each stage. Itβs time to build your own mini-Jarvis and start getting things done, efficiently, securely, and privately.
Where: Metztli Room
When: 3:10 PM to 4:45 PM
Course Agenda
Intro/Overview
- What is AI?
- How do LLMs like ChatGPT/Copilot/Claude fit within the AI universe?
- Why is there so much hype about AI and LLMs?
Cybersecurity Concerns About AI
- Jailbreaking & Prompt Injection
- Data Leakage & Privacy Risks
- Model Bias & Poisoning
- Social Engineering Automation
- Poor API Implementation & Authentication Controls
Popular Chatbots
- Compare/contrast of Copilot, ChatGPT, Claude
- Security implications of commercial chatbots
- Strengths and limitations
Prompt Engineering
- What is a prompt?
- Prompt types
- Lab: Prompt Engineering
Ways to Enhance Your LLM
- Customize a model
- Retrieval-Augmented Generation (RAG)
- Train your own model β DEEP Rabbit Hole!
Going Local
- Why use a local LLM?
- Popular options:
- Ollama
- Hugging Face
- LM Studio
- GPT4All
- Lab: Working with Ollama
- Installation and set up
- Using Ollama to build a custom model
- Lab: Working with LM Studio (If time allows)
- Installation and set up
- RAG Implementation
Additional Resources & References
Anything Else We Have Time For!