AI Systems¶
Basic Course Information¶
- Course Name: 인공지능시스템 (AI Systems)
- English Name: AI Systems
- Course Type: Major Elective (Track Advanced)
- Credits/Hours: 3 credits / 3 hours per week (Recommended: 2 lecture + 1 lab)
- Recommended Year: 3-4
Course Overview¶
This course focuses on designing, building, and validating AI models as operational systems rather than research prototypes. It covers AIOps elements such as data pipelines, experiment reproducibility, deployment, monitoring, drift response, and governance. To automate and scale these operations, it also covers agent systems engineering in depth (agents/sub-agents, prompts/context, memory/modes/permissions, tools/skills/plugins, hooks/workflows, MCP/LSP/IDE integration). The goal is to design and implement an agent-based operational system that safely assists with operational tasks.
Educational Objectives¶
- Design the full lifecycle of AI systems (data→training→evaluation→deployment→monitoring→improvement).
- Establish reproducible experiment management and release criteria.
- Decompose operational tasks into agents/sub-agents and design permission/approval flows for safe execution.
- Automate repetitive operations with tools/skills/workflows and ensure observability.
Learning Outcomes¶
- Define operational metrics (SLO, latency, cost, quality) and build measurement systems.
- Design and document data quality validation and model release criteria.
- Implement incident response flows as agent workflows.
- Implement safety mechanisms based on mode/permissions/approvals.
- Implement operational automation including MCP or LSP/IDE integration elements.
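As a minimal sketch of the mode/permission/approval mechanisms listed above, the following Python snippet gates agent tool calls by permission level and routes destructive operations through an explicit approval callback. All names here (`ToolCall`, `gate`, the permission levels) are hypothetical and for illustration only, not part of any particular framework.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative permission levels for agent tool calls.
READ_ONLY = "read_only"
WRITE = "write"
DESTRUCTIVE = "destructive"

@dataclass
class ToolCall:
    name: str
    permission: str  # permission level the tool requires

def gate(call: ToolCall,
         granted: set,
         approver: Optional[Callable[[ToolCall], bool]] = None) -> bool:
    """Allow a tool call only if its permission is granted; additionally
    route destructive operations through a human approval hook."""
    if call.permission not in granted:
        return False
    if call.permission == DESTRUCTIVE:
        return approver is not None and approver(call)
    return True

# Usage: a rollback needs both the grant and an approval callback.
rollback = ToolCall("rollback_deployment", DESTRUCTIVE)
print(gate(rollback, {READ_ONLY, WRITE}))                       # False
print(gate(rollback, {READ_ONLY, WRITE, DESTRUCTIVE},
           approver=lambda c: True))                            # True
```

The design point is least privilege: the default path denies, and the riskiest class of operations can never execute on a grant alone.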
Prerequisites¶
- Required: Intelligent Operating Systems (or equivalent OS competency), Introduction to Machine Learning
- Recommended: Software Engineering (testing/CI), Database, Network Fundamentals
Main Content (Modules)¶
- AI system full lifecycle design: data validation, training/evaluation, release criteria
- Deployment patterns: batch/online, API design, performance measurement
- Observability: logs/metrics/traces, alerts and runbooks
- Drift/performance degradation response: detection, retraining, rollback strategies
- Agent architecture: sub-agent decomposition, result integration
- Prompts/context/memory: evidence-based responses, incident records, summary memory
- Mode/permissions/approvals: least privilege, risky operation gating
- Tools/skills/plugins: standard interfaces, reusable packaging
- Hooks/slash commands/workflows: CI and operational automation
- MCP/LSP/IDE integration: dev-ops connection (minimum feature implementation)
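The drift-response module above can be previewed with a deliberately crude detector: flag retraining when a live feature's mean drifts several reference standard deviations from the training distribution. This is an illustrative sketch only (real systems would use tests such as PSI or KS); function names and the threshold are assumptions.

```python
import statistics

def drift_score(reference: list, live: list) -> float:
    """Mean-shift drift signal: absolute difference of means,
    scaled by the reference standard deviation."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.mean(live) - ref_mean) / ref_std

def needs_retraining(reference: list, live: list, threshold: float = 3.0) -> bool:
    # Trigger the retrain/rollback path when drift exceeds the threshold.
    return drift_score(reference, live) > threshold

ref = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # training-time feature values
print(needs_retraining(ref, [1.0, 1.02, 0.98]))  # False: no drift
print(needs_retraining(ref, [2.0, 2.1, 1.9]))    # True: large mean shift
```

In practice the detection threshold, the retraining trigger, and the rollback criteria would all be documented alongside the release criteria from the first module.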
Practice and Project Examples¶
- End-to-end pipeline construction: data validation → training → model registration → deployment → monitoring
- Operational scenario-based automation: alert occurrence → triage → mitigation → automatic report generation
- Slash command-based ChatOps: implementation of /triage, /rollback, /report
- Choose and implement one MCP or IDE integration feature
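The slash command ChatOps exercise can be sketched as a small dispatcher. The command names follow the syllabus (/triage, /rollback, /report); the handler bodies and the `dispatch` interface are placeholder assumptions, not a prescribed implementation.

```python
# Minimal slash-command dispatcher for the ChatOps exercise.
COMMANDS = {}

def command(name):
    """Decorator that registers a handler under a slash-command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("/triage")
def triage(args):
    return f"triaging alert {args or '(latest)'}"

@command("/rollback")
def rollback(args):
    return f"rolling back to {args or 'previous release'}"

@command("/report")
def report(args):
    return "generating incident report"

def dispatch(message: str) -> str:
    # Split "/cmd args..." and route to the registered handler.
    name, _, args = message.partition(" ")
    handler = COMMANDS.get(name)
    return handler(args.strip()) if handler else f"unknown command: {name}"

print(dispatch("/rollback v1.4.2"))  # rolling back to v1.4.2
```

A registry-plus-decorator layout keeps each operational command independently testable, which matters once handlers call real deployment or monitoring APIs behind the permission gates covered in the modules.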
Evaluation Method (Example)¶
- Weekly Lab Assignments: 35%
- Midterm Design Review (architecture/permissions/workflow documentation + demo): 20%
- Final Team Project: 40%
- Participation/Code Review: 5%