Unleash Your Creativity: Building an AI Music Generator with Next.js and Replicate
Are you ready to dive into the world of AI and music? In this blog, we will embark on an exciting journey to create an AI music generator. We'll leverage the power of Replicate for our AI model and harness the capabilities of Next.js to build a seamless application. Our project will feature a robust backend for model access and a sleek, interactive frontend designed with Tailwind CSS.
Setting Up the Next.js Project
Let's start by setting up a new Next.js project. Open your terminal and execute the following commands:
npx create-next-app ai-music-generator
✔ Would you like to use TypeScript with this project? … No
✔ Would you like to use ESLint with this project? … Yes
✔ Would you like to use Tailwind CSS with this project? … Yes
✔ Would you like to use `src/` directory with this project? … Yes
✔ Use App Router (recommended)? … No
✔ Would you like to customize the default import alias? … No
cd ai-music-generator
Installing Dependencies
npm install replicate
Now, create a new file called .env in the root of your project and add the following environment variable:
REPLICATE_API_TOKEN=<paste-your-token-here>
You can find your API token in your Replicate account settings.
Backend
To get started, we'll focus on creating the backend of our AI music generator. We'll use the Replicate API to generate music, and for that, we need a dedicated endpoint. Let's create a new file called musicgenerator.js inside the src/pages/api directory of our Next.js project.
Creating the musicgenerator.js File
First, navigate to the src/pages/api directory in your project. If the api directory doesn't exist yet, go ahead and create it. Inside this directory, create a new file named musicgenerator.js. This file will handle the interaction with the Replicate API to generate music.
import Replicate from "replicate";

export default async function handler(req, res) {
  // Only POST requests are accepted; everything else gets a 405.
  if (req.method === "POST") {
    // Authenticate with Replicate using the token from .env.
    const replicate = new Replicate({
      auth: process.env.REPLICATE_API_TOKEN,
    });

    const { prompt } = req.body;

    try {
      // Run MusicGen with the user's prompt. replicate.run waits for
      // the prediction to finish and resolves with the model's output.
      const output = await replicate.run(
        "facebookresearch/musicgen:7a76a8258b23fae65c5a22debb8841d1d7e816b75c2f24218cd2bd8573787906",
        {
          input: {
            model_version: "melody",
            prompt: prompt,
          },
        }
      );

      console.log("AI music generation complete:", output);
      res.status(200).json({ music: output });
    } catch (error) {
      console.error("AI music generation failed:", error);
      res.status(500).json({ error: "AI music generation failed" });
    }
  } else {
    res.status(405).json({ error: "Method not allowed" });
  }
}
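Before building the frontend, you can sanity-check the endpoint directly. The snippet below is a minimal smoke test, not part of the project itself: it assumes the dev server is running on the default port 3000, and the file name and prompt text are just examples.
// test-endpoint.mjs — hypothetical smoke test for the API route.
// Run with `node test-endpoint.mjs` (Node 18+) while `npm run dev` is active.
const response = await fetch("http://localhost:3000/api/musicgenerator", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "a calm lo-fi beat with soft piano" }),
});

// On success this prints { music: ... } with the model's output.
console.log(await response.json());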
Frontend
With our backend in place, it's time to create a user-friendly frontend interface. We'll use Next.js as our framework and Tailwind CSS for styling to ensure our application looks clean and modern.
Now, let's create the frontend interface. Open the src/pages/index.js file in your project and replace the existing code with the following:
import { useState } from "react";

export default function Home() {
  // music holds the generated track's URL, prompt the user's input,
  // and isLoading whether a generation request is in flight.
  const [music, setMusic] = useState("");
  const [prompt, setPrompt] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const generateMusic = async () => {
    setIsLoading(true);
    try {
      // Call the API route defined in src/pages/api/musicgenerator.js.
      const response = await fetch("/api/musicgenerator", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ prompt }),
      });
      const { music } = await response.json();
      setMusic(music);
    } catch (error) {
      console.error("Failed to generate music:", error);
    }
    setIsLoading(false);
  };
  return (
    <div className="flex flex-col items-center justify-center h-screen animate-fadeIn bg-gradient-to-r from-blue-500 via-purple-500 to-pink-500">
      <header className="bg-gradient-to-r from-blue-500 via-purple-500 to-pink-500 mt-10 py-8 shadow-lg rounded-lg transform transition duration-500 hover:scale-105">
        <h1 className="text-4xl font-bold text-white text-center">AI Music Generator</h1>
      </header>
      <div className="flex flex-col items-center justify-center h-screen space-y-6">
        <div className="w-full max-w-md relative">
          <label
            htmlFor="promptInput"
            className={`absolute left-4 transition-all duration-300 ease-in-out ${prompt ? 'text-blue-500 top-0 text-sm' : 'top-1/2 transform -translate-y-1/2'}`}
          >
            Enter a prompt
          </label>
          <input
            type="text"
            id="promptInput"
            value={prompt}
            onChange={(e) => setPrompt(e.target.value)}
            placeholder=""
            className="px-4 py-3 text-black bg-white border-2 border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent w-full transition-colors duration-300 focus:shadow-lg"
          />
        </div>
        <button
          onClick={generateMusic}
          className="px-6 py-3 bg-gradient-to-r from-blue-500 via-purple-500 to-pink-500 text-white rounded-lg hover:bg-gradient-to-l focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-opacity-50 transition-all duration-300 ease-in-out transform hover:scale-110 hover:shadow-2xl"
          disabled={isLoading}
        >
          {isLoading ? (
            <div className="flex items-center justify-center">
              <svg
                className="animate-spin h-5 w-5 mr-3 text-white"
                xmlns="http://www.w3.org/2000/svg"
                fill="none"
                viewBox="0 0 24 24"
              >
                <circle
                  className="opacity-25"
                  cx="12"
                  cy="12"
                  r="10"
                  stroke="currentColor"
                  strokeWidth="4"
                ></circle>
                <path
                  className="opacity-75"
                  fill="currentColor"
                  d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647zM12 20c3.042 0 5.824-1.135 7.938-3l-2.647-3A7.962 7.962 0 0112 16v4zm5.291-6A7.962 7.962 0 0112 20v4c4.418 0 8-3.582 8-8h-4zM16.938 3C15.824 1.135 13.042 0 10 0v4c1.79 0 3.527.684 4.826 1.938L16.937 3z"
                ></path>
              </svg>
              Generating...
            </div>
          ) : (
            "Generate Music"
          )}
        </button>
      </div>
      {music && (
        <div className="w-full max-w-md mt-8">
          <audio className="w-full rounded-lg shadow-lg transition duration-300 ease-in-out hover:shadow-2xl" controls src={music} />
        </div>
      )}
    </div>
  );
}
State Management: The code uses React's useState hook to manage three states: music, prompt, and isLoading. music stores the generated music's URL, prompt holds the user's input, and isLoading indicates whether the music generation process is ongoing.
Generating Music: The generateMusic function sends a POST request to the /api/musicgenerator endpoint with the user's prompt. It handles the request asynchronously, updates the isLoading state to true while the request is being processed, and then sets the music state with the URL of the generated music once the response is received.
User Interface: The return statement defines the JSX structure of the page. It includes an input field for the prompt, a button to trigger music generation, and an audio player to play the generated music. The UI is styled using Tailwind CSS classes for a visually appealing and responsive design.
Loading Indicator: When the music generation process is ongoing (isLoading is true), the button displays a loading spinner to inform the user that their request is being processed. Once the process is complete, the button text changes back to "Generate Music."
Styling and Animation: The component employs Tailwind CSS for styling and animations, creating a dynamic and engaging user experience. This includes gradient backgrounds, hover effects, focus rings, and smooth transitions, ensuring the interface is both attractive and functional.
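One caveat on styling: animate-fadeIn, used on the outer container, is not one of Tailwind's built-in animation utilities, so it must be defined in your config or the class will silently do nothing. A minimal sketch of the tailwind.config.js extension might look like this (the keyframes and timing are assumptions; adjust to taste):
// tailwind.config.js — extend the default theme with a fadeIn animation
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: ["./src/**/*.{js,jsx}"],
  theme: {
    extend: {
      keyframes: {
        fadeIn: {
          "0%": { opacity: "0" },
          "100%": { opacity: "1" },
        },
      },
      animation: {
        // Enables the `animate-fadeIn` utility class used above.
        fadeIn: "fadeIn 0.8s ease-in forwards",
      },
    },
  },
  plugins: [],
};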
Conclusion
Congratulations! You've successfully built an AI music generator using Next.js and Replicate. By combining the power of AI with a sleek frontend, you've created a tool that turns text prompts into music. This project not only showcases the capabilities of modern web development frameworks and AI technologies but also opens the door to countless creative possibilities. Keep experimenting with different prompts and enhancements to refine your music generator further.
Happy coding and composing!