Managing Conversation History in AI Apps
AI Applications Module
Fullstack React Integration
LLM APIs (like OpenAI's GPT or Anthropic's Claude) are inherently stateless. If you want the AI to remember the user's name from two messages ago, you are entirely responsible for keeping track of the transcript and sending it with every request.
The Messages Array
Most modern chat completion APIs consume an array of objects. Each object represents a single message and requires two properties: a `role` and `content`.
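For example, a short exchange might be represented like this (the names and wording are placeholders for illustration):

```js
const messages = [
  { role: "user", content: "My name is Priya." },
  { role: "assistant", content: "Nice to meet you, Priya!" },
  { role: "user", content: "What's my name?" },
];
```

Because the earlier messages are resent alongside the final question, the model can answer "Priya" even though the API itself remembered nothing between calls.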
React State Integration
In a frontend application like React or Next.js, this array maps perfectly to a state variable. When a user submits a form, you don't just send the string; you append a new `role: "user"` object to the array, call the API, and then append the response as `role: "assistant"`.
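Below is a minimal sketch of that flow in a React component; the `/api/chat` route and the `{ reply }` response shape are illustrative assumptions, not part of any specific framework.

```js
import { useState } from "react";

export default function Chat() {
  const [messages, setMessages] = useState([
    { role: "system", content: "You are a helpful assistant." },
  ]);
  const [input, setInput] = useState("");

  async function handleSubmit(e) {
    e.preventDefault();
    // Append the user's message before calling the API
    const nextMessages = [...messages, { role: "user", content: input }];
    setMessages(nextMessages);
    setInput("");

    // Hypothetical backend route that forwards the full array to the LLM API
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: nextMessages }),
    });
    const { reply } = await res.json(); // assumed response shape: { reply: string }

    // Append the model's answer so it is included in the next request
    setMessages([...nextMessages, { role: "assistant", content: reply }]);
  }

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m, i) => (
        <p key={i}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}
```

Note that the user message is appended to a local `nextMessages` array before the request is sent, so the API call always receives the freshest history rather than relying on the asynchronous state update.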
❓ Frequently Asked Questions
What are the different roles in an AI conversation array?
- System: Usually placed at index 0. Dictates the AI's persona, formatting rules, and constraints (e.g., "You are a pirate who outputs JSON").
- User: The human's input or prompts.
- Assistant: The AI's previous responses. Crucial for establishing the context of the back-and-forth (a full array using all three roles is sketched after this list).
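Putting the three roles together, a history for the pirate example above might look like this (the content values are made up for illustration):

```js
const messages = [
  { role: "system", content: "You are a pirate who outputs JSON." },
  { role: "user", content: "What's the weather like?" },
  { role: "assistant", content: '{"reply": "Arr, sunny skies ahead, matey!"}' },
  { role: "user", content: "And tomorrow?" },
];
```

Because the system message stays at index 0 on every request, the persona and formatting rules persist across the entire conversation.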
How do I prevent exceeding the token limit?
Every model has a maximum context window (e.g., 8k or 128k tokens). Since you resend the entire history on every request, the payload grows with every turn. To stay within the limit, apply a trimming strategy before sending the payload to the API:
```js
// Keep the system prompt, then append only the 10 most recent non-system messages
const trimmed = [messages[0], ...messages.slice(1).slice(-10)];
```
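A fixed message count is the simplest cut-off. A token-budget variant is sketched below; the 4-characters-per-token estimate and the 8,000-token budget are rough assumptions, not real tokenizer counts.

```js
// Rough heuristic: ~4 characters per token; a real tokenizer would be more accurate
const estimateTokens = (msg) => Math.ceil(msg.content.length / 4);

function trimToBudget(messages, budget = 8000) {
  const [system, ...rest] = messages;
  const kept = [];
  let used = estimateTokens(system);
  // Walk backwards from the newest message, keeping as many as fit in the budget
  for (let i = rest.length - 1; i >= 0; i--) {
    used += estimateTokens(rest[i]);
    if (used > budget) break;
    kept.unshift(rest[i]);
  }
  return [system, ...kept];
}
```

Walking from the newest message backwards ensures the most recent turns survive, since they usually matter most for the model's next reply.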