Building a Generative AI Agent with Next.js and OpenAI

At Interstellar Solutions, our goal was to create a generative AI agent capable of producing high-quality, context-aware content—such as blog drafts, product descriptions, or creative writing—directly within our web application. By combining OpenAI’s powerful language models with Next.js’s robust routing and rendering capabilities, we built a scalable, user-friendly AI agent.

[Screenshot: generative AI agent interface]

To accomplish this, we set up a Next.js project, configured a secure API route for OpenAI integration, and developed a client-side interface for users to interact with the AI agent. Below, I’ll walk you through the process of building a generative AI agent with Next.js and OpenAI.

Setting Up the Next.js Project

Start by creating a new Next.js project and installing the OpenAI client library:

npx create-next-app@latest my-ai-agent-app
cd my-ai-agent-app
npm install openai

You’ll need an OpenAI API key from the OpenAI platform. Store it securely in a .env.local file:

OPENAI_API_KEY=your-openai-api-key

Server-Side API Route for OpenAI

To handle OpenAI requests securely, we created a Next.js API route (using the Pages Router) in pages/api/generate.ts. This route accepts user prompts and forwards them to OpenAI’s API for content generation:

import type { NextApiRequest, NextApiResponse } from 'next';
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  try {
    const { prompt, maxTokens = 150 } = req.body;
    if (!prompt) {
      return res.status(400).json({ error: 'Prompt is required' });
    }

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'You are a creative AI agent that generates high-quality, context-aware content.' },
        { role: 'user', content: prompt },
      ],
      max_tokens: maxTokens,
    });

    const response = completion.choices[0]?.message?.content || 'No response generated';
    res.status(200).json({ response });
  } catch (error) {
    console.error('OpenAI error:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
}

This API route accepts POST requests with a user prompt and an optional maxTokens parameter, sends them to OpenAI’s gpt-4o model, and returns the generated content.
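
With the dev server running (npm run dev), you can sanity-check the route from the command line. This is only an illustrative request; the port and example prompt are assumptions:

curl -X POST http://localhost:3000/api/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a tagline for a solar-powered backpack", "maxTokens": 60}'

A successful call returns a JSON body of the form { "response": "..." }.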

Client-Side AI Agent Interface

We built a client-side interface in pages/index.tsx using React state to manage user prompts and AI responses. The interface allows users to input a prompt and receive generated content:

import { useState } from 'react';

export default function AIAgent() {
  const [prompt, setPrompt] = useState('');
  const [response, setResponse] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const handleGenerate = async () => {
    if (!prompt.trim()) return;

    setIsLoading(true);
    setResponse('');

    try {
      const res = await fetch('/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt, maxTokens: 200 }),
      });
      const data = await res.json();
      if (data.error) {
        setResponse(`Error: ${data.error}`);
      } else {
        setResponse(data.response);
      }
    } catch (error) {
      setResponse('Error: Failed to connect to the server');
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div className="max-w-3xl mx-auto p-6">
      <h1 className="text-3xl font-bold mb-6">Generative AI Agent</h1>
      <div className="mb-4">
        <textarea
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          className="w-full p-3 border rounded-lg"
          placeholder="Enter your prompt (e.g., 'Write a product description for a futuristic gadget')"
          rows={4}
        />
      </div>
      <button
        onClick={handleGenerate}
        disabled={isLoading}
        className="p-3 bg-blue-500 text-white rounded-lg disabled:bg-gray-400"
      >
        {isLoading ? 'Generating...' : 'Generate Content'}
      </button>
      {response && (
        <div className="mt-6 p-4 bg-gray-100 rounded-lg">
          <h2 className="text-xl font-semibold mb-2">Generated Content</h2>
          <p>{response}</p>
        </div>
      )}
    </div>
  );
}

This component provides a textarea for users to input prompts, a button to trigger content generation, and a section to display the AI’s response. It communicates with the /api/generate endpoint.

Next.js Routes

The Next.js application uses the following routes:

  1. / - Main AI agent interface (defined in pages/index.tsx)
  2. /api/generate - API route for handling OpenAI content generation requests

Best Practices for Next.js and OpenAI

  1. Secure API Key: Store the OpenAI API key in environment variables to prevent client-side exposure.
  2. Input Validation: Validate user prompts on the server to ensure they meet length and content requirements (see the first sketch after this list).
  3. Rate Limiting: Implement rate limiting on the /api/generate route to prevent abuse and manage API costs (see the second sketch after this list).
  4. Styling: Use Tailwind CSS (as shown) for a clean, responsive interface.
  5. Context Management: For advanced use cases, include a history of prior prompts and responses in the OpenAI API calls to provide context-aware responses (see the final sketch after this list).
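
For input validation (practice 2), a minimal server-side check might look like the sketch below. The 2,000-character cap is an assumed limit, not a value from our production setup:

// Inside the /api/generate handler, before calling OpenAI.
const { prompt, maxTokens = 150 } = req.body;
if (typeof prompt !== 'string' || prompt.trim().length === 0) {
  return res.status(400).json({ error: 'Prompt is required' });
}
// Assumed maximum prompt length; adjust to your use case.
if (prompt.length > 2000) {
  return res.status(400).json({ error: 'Prompt must be 2,000 characters or fewer' });
}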
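
For rate limiting (practice 3), a naive in-memory counter keyed by client IP is enough to sketch the idea. The window and request limit below are assumptions, and a deployment with multiple instances would need a shared store such as Redis instead:

// pages/api/generate.ts (excerpt): naive per-IP rate limiter.
const WINDOW_MS = 60_000;   // assumed 1-minute window
const MAX_REQUESTS = 10;    // assumed per-IP limit per window
const hits = new Map<string, { count: number; windowStart: number }>();

function isRateLimited(ip: string): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_REQUESTS;
}

// In the handler, before calling OpenAI:
// const ip = (req.headers['x-forwarded-for'] as string)?.split(',')[0] ?? req.socket.remoteAddress ?? 'unknown';
// if (isRateLimited(ip)) return res.status(429).json({ error: 'Too many requests' });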
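
For context management (practice 5), the route can accept prior turns from the client and replay them in the messages array so the model sees the conversation so far. The history field is an assumed request shape, not part of the route shown above:

// pages/api/generate.ts (excerpt): pass earlier turns back to the model.
type ChatMessage = { role: 'user' | 'assistant'; content: string };

const { prompt, history = [] } = req.body as { prompt: string; history?: ChatMessage[] };

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a creative AI agent that generates high-quality, context-aware content.' },
    ...history, // earlier user/assistant turns, oldest first
    { role: 'user', content: prompt },
  ],
  max_tokens: 200,
});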