Capstone Project

Build a complete AI agent end to end, from design to deployment.

Goal

Apply everything from the previous 13 lessons to build a complete AI agent. You will create a Personal Research Agent: an agent that researches topics, synthesizes information, and produces reports.

Architecture

┌──────────────────────────────────────────┐
│           Personal Research Agent        │
├─────────┬──────────┬───────────┬─────────┤
│ Planning│ Research │ Synthesis │ Memory  │
│ Agent   │ Agent    │ Agent     │ System  │
├─────────┴──────────┴───────────┴─────────┤
│                Tool Layer                │
│  ┌────────┐ ┌──────┐ ┌────────────────┐  │
│  │Web     │ │MCP   │ │Supabase        │  │
│  │Search  │ │Tools │ │(pgvector +     │  │
│  │        │ │      │ │ memories)      │  │
│  └────────┘ └──────┘ └────────────────┘  │
├──────────────────────────────────────────┤
│         Observability (Langfuse)         │
└──────────────────────────────────────────┘
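
The steps below build each layer of this diagram in turn. As a rough orientation, the data that flows between the layers looks something like the sketch below; these interfaces are illustrative only, and the real schemas are defined with Zod in Step 4.

// Illustrative shapes only, not part of the project files.
interface ResearchPlan {
  topic: string;
  subQuestions: string[];   // produced by the Planning agent
  approach: string;
}

interface ResearchFinding {
  question: string;         // one sub-question from the plan
  text: string;             // gathered by the Research agent via the tool layer
}

interface ResearchMemory {
  topic: string;
  content: string;          // synthesized report, stored via Supabase (Memory system)
  sources: string[];
}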

Step 1: Project Setup

mkdir research-agent && cd research-agent
npm init -y
npm install ai @ai-sdk/openai openai zod langfuse @supabase/supabase-js
npm install -D typescript @types/node tsx
npx tsc --init
// .env
OPENAI_API_KEY=sk-...
TAVILY_API_KEY=...
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=eyJ...
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
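
Note that neither Node nor tsx loads this .env file automatically. One minimal option, assuming you also install dotenv (it is not in the install command above), is to load it at the top of your entry point (src/index.ts, Step 6):

// First line of src/index.ts; assumes you ran: npm install dotenv
import "dotenv/config";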

Step 2: Memory System (Lesson 12)

// src/memory.ts
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_KEY!
);

const openaiClient = new OpenAI();

export async function getEmbedding(text: string): Promise<number[]> {
  const response = await openaiClient.embeddings.create({
    model: "text-embedding-3-small",
    input: text
  });
  return response.data[0].embedding;
}

// Embed the topic and store the note in the research_memories table
export async function saveResearch(topic: string, content: string, sources: string[]) {
  const embedding = await getEmbedding(topic);

  await supabase.from("research_memories").insert({
    topic,
    content,
    sources,
    embedding,
    created_at: new Date().toISOString()
  });
}

// Semantic search over saved notes. "match_research" is a Postgres function
// you create in Supabase (pgvector similarity search, as set up in Lesson 12).
export async function recallResearch(query: string, limit = 5) {
  const embedding = await getEmbedding(query);

  const { data } = await supabase.rpc("match_research", {
    query_embedding: embedding,
    match_count: limit
  });

  return data || [];
}
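
Before building tools on top of this, it is worth checking the round trip. The sketch below assumes the research_memories table and the match_research function from Lesson 12 already exist in your Supabase project; memory-check.ts is a throwaway script, not part of the final agent.

// src/memory-check.ts: a throwaway script, not part of the final agent
import "dotenv/config"; // assumes dotenv, as noted in Step 1
import { saveResearch, recallResearch } from "./memory";

async function check() {
  await saveResearch(
    "AI Agent frameworks",
    "Vercel AI SDK and LangChain are two common choices for building agents.",
    ["https://example.com"]
  );

  const matches = await recallResearch("agent frameworks");
  console.log(matches); // the note saved above should appear among the matches
}

check().catch(console.error);

Run it with npx tsx src/memory-check.ts and confirm the note comes back before moving on.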

Step 3: Tools (Lessons 3, 4, 10)

// src/tools.ts
import { tool } from "ai";
import { z } from "zod";
import { saveResearch, recallResearch } from "./memory";

export const searchWeb = tool({
  description: "Search the web for information about a topic",
  parameters: z.object({
    query: z.string().describe("Search keywords"),
    maxResults: z.number().default(5)
  }),
  execute: async ({ query, maxResults }) => {
    // Use a search API (SerpAPI, Tavily, etc.); adapt this request to your provider.
    const response = await fetch("https://api.tavily.com/search", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.TAVILY_API_KEY}`
      },
      body: JSON.stringify({ query, max_results: maxResults })
    });
    const data = await response.json();
    return { results: data.results };
  }
});

export const saveNote = tool({
  description: "Lưu ghi chú nghiên cứu quan trọng vào bộ nhớ dài hạn",
  parameters: z.object({
    topic: z.string(),
    content: z.string(),
    sources: z.array(z.string())
  }),
  execute: async ({ topic, content, sources }) => {
    await saveResearch(topic, content, sources);
    return { saved: true };
  }
});

export const recallNotes = tool({
  description: "Tìm kiếm ghi chú nghiên cứu đã lưu trước đó",
  parameters: z.object({
    query: z.string()
  }),
  execute: async ({ query }) => {
    const memories = await recallResearch(query);
    return { memories };
  }
});
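
The concepts table at the end credits Lesson 5 (trustworthy agents) for error handling and rate limiting, but the tool bodies above do not guard against failed requests yet. A minimal retry wrapper you could apply inside each execute is sketched here; the withRetry helper and its fixed backoff are assumptions, not code from the earlier lessons.

// src/retry.ts: illustrative helper; tune retries and backoff to your needs
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Fixed backoff between attempts, which also acts as crude rate limiting
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Usage inside a tool's execute, e.g. wrapping the fetch in searchWeb:
//   const response = await withRetry(() => fetch(url, options));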

Step 4: Agent with Planning + Metacognition (Lessons 6, 8)

// src/agent.ts
import { generateText, generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { searchWeb, recallNotes } from "./tools";
import { saveResearch } from "./memory";

const ResearchPlanSchema = z.object({
  topic: z.string(),
  subQuestions: z.array(z.string()),
  approach: z.string(),
  estimatedSteps: z.number()
});

const QualityCheckSchema = z.object({
  completeness: z.number().min(1).max(10),
  accuracy: z.number().min(1).max(10),
  missingAreas: z.array(z.string()),
  shouldContinue: z.boolean()
});

export async function researchAgent(topic: string): Promise<string> {
  // Phase 1: Planning
  console.log("📋 Phase 1: Planning...");
  const { object: plan } = await generateObject({
    model: openai("gpt-4o"),
    schema: ResearchPlanSchema,
    prompt: `Create a research plan for: "${topic}".
Break it into 3-5 sub-questions that need to be answered.`
  });

  // Phase 2: Research
  console.log("🔍 Phase 2: Researching...");
  const findings: string[] = [];

  for (const question of plan.subQuestions) {
    const result = await generateText({
      model: openai("gpt-4o"),
      tools: { searchWeb, recallNotes },
      maxSteps: 5,
      system: "Bạn là nhà nghiên cứu. Tìm thông tin chính xác và chi tiết.",
      prompt: question
    });
    findings.push(`### ${question}\n${result.text}`);
  }

  // Phase 3: Synthesis
  console.log("✍️ Phase 3: Synthesizing...");
  const report = await generateText({
    model: openai("gpt-4o"),
    prompt: `Synthesize the findings into a complete report:
Topic: ${topic}
Findings:
${findings.join("\n\n")}`
  });

  // Phase 4: Metacognition — Self-check
  console.log("🧠 Phase 4: Quality check...");
  const { object: quality } = await generateObject({
    model: openai("gpt-4o"),
    schema: QualityCheckSchema,
    prompt: `Evaluate the quality of this report:
${report.text}`
  });

  let finalReport = report.text;

  if (quality.shouldContinue && quality.completeness < 7) {
    console.log("🔄 Improving report...");
    const improved = await generateText({
      model: openai("gpt-4o"),
      tools: { searchWeb },
      maxSteps: 3,
      prompt: `Improve the report. Missing areas: ${quality.missingAreas.join(", ")}
Original: ${report.text}`
    });
    finalReport = improved.text;
  }

  // Save the final report to long-term memory
  await saveResearch(topic, finalReport, ["web-search"]);

  return finalReport;
}

Step 5: Observability (Lesson 9)

// src/tracing.ts
import { Langfuse } from "langfuse";

export const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!
});

export function createTrace(name: string, userId: string) {
  return langfuse.trace({ name, userId });
}
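
tracing.ts only creates the client; nothing in agent.ts calls it yet. A minimal sketch of how the phases from Step 4 could be wrapped in spans is shown below; the function and span names are illustrative, and in practice you would fold this into researchAgent rather than keep it as a separate function.

// Illustrative wiring; span names and structure are assumptions, not part of the code above.
import { createTrace, langfuse } from "./tracing";

export async function tracedResearch(topic: string, userId: string) {
  const trace = createTrace("research-agent", userId);

  const planning = trace.span({ name: "planning", input: { topic } });
  // ...run Phase 1 from Step 4 here...
  planning.end();

  // ...one span per remaining phase: research, synthesis, quality check...

  trace.update({ output: { status: "done" } });

  // Events are buffered; flush before the process exits so traces reach Langfuse.
  await langfuse.flushAsync();
}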

Step 6: Entry Point

// src/index.ts
import { researchAgent } from "./agent";

async function main() {
  const topic = process.argv[2] || "AI Agent trends 2025";
  console.log(`\n🚀 Researching: "${topic}"\n`);

  const report = await researchAgent(topic);

  console.log("\n" + "=".repeat(60));
  console.log("📄 RESEARCH REPORT");
  console.log("=".repeat(60));
  console.log(report);
}

main().catch(console.error);

Try It Out

npx tsx src/index.ts "Compare Vercel AI SDK vs LangChain for production agents"

Concepts Applied

Lesson | Concept       | Applied in this project
00     | Agent basics  | Agent loop, tools, environment
01     | Frameworks    | Vercel AI SDK (TypeScript)
03     | Tool Use      | searchWeb, saveNote tools
04     | Agentic RAG   | recallNotes + semantic search
05     | Trustworthy   | Error handling, rate limiting
06     | Planning      | ResearchPlanSchema, sub-questions
08     | Metacognition | QualityCheckSchema, self-improvement
09     | Production    | Langfuse tracing
11     | Context Eng.  | Scratchpad pattern (findings)
12     | Memory        | Supabase pgvector memory system

Next Steps

From here, you can extend the project:

  • Multi-agent: add a writer agent and an editor agent (Lesson 07)
  • MCP server: build an MCP server for the agent (Lesson 10)
  • Web UI: build a chat interface with Next.js + the Vercel AI SDK
  • Deploy: deploy to Vercel or Cloudflare Workers

Congratulations! You have completed the AI Agents series! 🎉