Hey, hey! 👋
Wassap?!
I was sitting in a lecture, but it was too boring, so I left the room. Then I prompted ChatGPT to ask what I should do. LOL. And it said: “make something cool, but worst at the same time”. So, I came up with this idea. Let’s build a platform that transforms modern websites into authentic 90s-style designs while preserving core functionality. 😂 LMAO.
Interesting?
Follow along! 🚀
In this tutorial, we will build a website that transforms modern websites into authentic 90s-style designs while preserving core functionality. We will use AI/ML API as the core component to redesign the whole website. We will also use Next.js, Tailwind CSS, Clerk Auth, and Vercel to build and deploy the website. All of these tools are pretty easy to use and will help us build a powerful and scalable website in no time. 🤓
The idea and implementation are pretty simple. We will take a modern website URL as input, then prompt the user to select the page count, i.e., whether they want to transform the whole website or just a single landing page (or 2, 3, etc. pages). This helps us a lot: 1) it saves API tokens and reduces cost, and 2) it minimizes API calls. Then we will crawl the website and save the data in a JSON file, so later on we can easily pull out exactly the data we need.
After that, we create a new demo folder for the transformed website. Everything will be placed inside this folder. First, we will build the landing (main) page, then iteratively (in a loop) build the other pages that are interconnected with the main page through navigation links. We will prompt GPT-4o to redesign the website. Voila! We have a new 90s-style website. 🤩
Here’s the UI of our website:
Pretty crazy, right? 🔥
I shamelessly copied it from lovido.lol. LMAO. 😂 Especially the color palette. 🎨
--violet: #625df5;
--dark-violet: #625df580;
--bg-a: #0B0E11;
--text-a: #FFFFFF;
--text-b: #C3C4C7;
--text-c: #787B89;
--orange: #ee5d19;
Save it. The best-crafted color palette you have ever seen. By me, for you 🤝
So, let’s get started! 🚀
AI/ML API is a game-changing platform for developers and SaaS entrepreneurs looking to integrate cutting-edge AI capabilities into their products. It offers a single point of access to over 200 state-of-the-art AI models, covering everything from NLP to computer vision.
Key Features for Developers:
Get Started for FREE! 🧑🍳 Use the code IBROHIMXAIMLAPI for 1 week of FREE access.
Deep Dive into AI/ML API Documentation (very detailed, can’t agree more) 📖
Here’s a brief tutorial: Quickstart to make your first API call.
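To give you the idea, a minimal first call could look something like this. This is just a sketch, assuming the OpenAI-compatible endpoint at https://api.aimlapi.com/v1 and an AIML_API_KEY environment variable; double-check both against the docs:

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.aimlapi.com/v1", // assumed OpenAI-compatible base URL
  apiKey: process.env.AIML_API_KEY,
});

async function main() {
  // Ask any model available on the platform; gpt-4o is what we use later.
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say hello like it's 1995." }],
  });
  console.log(completion.choices[0].message.content);
}

main();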
Firecrawl turns entire websites into clean, LLM-ready markdown or structured data. Scrape, crawl and extract the web with a single API. Ideal for AI companies looking to empower their LLM applications with web data.
Key Features for Developers:
Documentation: Firecrawl
Next.js is a React framework that enables server-side rendering and static site generation for React applications. It provides a range of features that make it easier to build fast, scalable, and SEO-friendly web applications.
Documentation: Next.js
Tailwind CSS is a utility-first CSS framework that makes it easy to build custom designs without writing custom CSS. It provides a range of utility classes that can be used to style elements directly in the HTML.
Documentation: Tailwind CSS
Clerk is an authentication platform that provides a range of features for managing user authentication and authorization in web applications. It offers a range of features, including social login, multi-factor authentication, and user management.
Documentation: Clerk
Here’s a brief tutorial: How to create an account on Clerk and set up a new project.
Vercel is a cloud platform to deploy and host web applications. It offers a range of features, including serverless functions, automatic deployments, and custom domains.
Documentation: Vercel
Here’s a brief tutorial: How to Deploy Apps to Vercel with ease
Before we get started, make sure you have Node.js and npm installed on your machine.
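You can verify both from the terminal (create-next-app will tell you if your Node.js version is too old):
node -v
npm -v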
Let’s get started by creating a new Next.js project:
npx create-next-app@latest
It will ask you a few simple questions:
What is your project named? Here, you should enter your app name. For example: Retrofy (or whatever you wish 🫣). For the rest of the questions, simply hit enter:
Here’s what you’ll see:
✔ Would you like to use TypeScript? … No / Yes
✔ Would you like to use ESLint? … No / Yes
✔ Would you like to use Tailwind CSS? … No / Yes
✔ Would you like your code inside a `src/` directory? … No / Yes
✔ Would you like to use App Router? (recommended) … No / Yes
✔ Would you like to use Turbopack for `next dev`? … No / Yes
✔ Would you like to customize the import alias (`@/*` by default)? … No / Yes
Open your project with Visual Studio Code:
cd Retrofy
code .
Let’s first set up the notification component. Create a new folder utils, then create a new file notify.tsx inside it:
import React, { useEffect } from 'react';

type NotificationProps = {
  message: string;
  type: 'error' | 'success' | 'info';
  onClose: () => void;
};

const Notification: React.FC<NotificationProps> = ({ message, type, onClose }) => {
  useEffect(() => {
    const timer = setTimeout(() => {
      onClose();
    }, 3000); // Change it to your favorite number ( kidding )
    return () => clearTimeout(timer);
  }, [onClose]);

  const bgColor = type === 'error' ? 'bg-[#f84f31]' : type === 'success' ? 'bg-[#23c552]' : 'bg-[#1e90ff]';

  return (
    <div className={`fixed w-[300px] text-xs sm:text-md top-10 left-1/2 transform -translate-x-1/2 ${bgColor} text-white px-4 py-2 rounded-md shadow-lg z-50`}>
      <p>{message}</p>
    </div>
  );
};

export default Notification;
Then add the loader in a loader.tsx file, inside the same utils folder:
export const loader = () => (
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24">
<circle cx={4} cy={12} r={3} fill="currentColor">
<animate id="svgSpinners3DotsScale0" attributeName="r" begin="0;svgSpinners3DotsScale1.end-0.25s" dur="0.75s" values="3;.2;3" />
</circle>
<circle cx={12} cy={12} r={3} fill="currentColor">
<animate attributeName="r" begin="svgSpinners3DotsScale0.end-0.6s" dur="0.75s" values="3;.2;3" />
</circle>
<circle cx={20} cy={12} r={3} fill="currentColor">
<animate id="svgSpinners3DotsScale1" attributeName="r" begin="svgSpinners3DotsScale0.end-0.45s" dur="0.75s" values="3;.2;3" />
</circle>
</svg>
);
It animates like this:
https://github.com/user-attachments/assets/7b8daa18-d72a-419d-b279-4960229be7f4
Get the loader from svgbackgrounds.com.
Creating the main interface of the app is pretty simple. We need just a few things: a header text, an input field, one button for the dropdown, one button for processing, and one button for viewing the transformed website. Plus a few functions to handle the events.
Let’s integrate the notification component first. Open src/app/page.tsx and add the following code:
'use client';

import Image from 'next/image';
import React, { useEffect, useState } from 'react';
import Notification from './utils/notify';
import { loader } from './utils/loader';
import Footer from './components/Footer';

export default function Home() {
  const [notification, setNotification] = useState<{ message: string; type: 'error' | 'success' | 'info' } | null>(null); // notification message and type

  const messages = {
    crawling: 'Crawling website...',
    scraping: 'Scraping website...',
    redesigning: 'Redesigning website...',
    stillRedesigning: 'Still redesigning website...',
    crawledSuccess: 'Website crawled successfully.',
    scrapedSuccess: 'Website scraped successfully.',
    redesignSuccess: 'Website redesigned successfully.',
  };

  return (
    <div className="grid grid-rows-[20px_1fr_20px] bg-[var(--bg-a)] items-center justify-items-center min-h-screen pb-8 gap-8 p-4 font-[family-name:var(--font-geist-sans)]">
      <main className="flex flex-col gap-8 row-start-2 items-center w-full max-w-7xl">
        {notification && (
          <Notification
            message={notification.message}
            type={notification.type}
            onClose={() => setNotification(null)}
          />
        )}
      </main>
    </div>
  );
}
Next, let’s add the header. Put it right after the notification:
<div className="mb-6 mt-16 sm:mt-24 w-full max-w-2xl text-center text-xl sm:text-2xl md:text-3xl leading-9">
<h1 className="text-[var(--text-a)] font-semibold flex flex-row gap-2">
<p className="text-center mx-auto">AI-Powered Time Machine for Web Design</p>
</h1>
</div>
Let’s declare all the states now, otherwise it will get confusing later on. Add the following code:
const [webUrl, setwebUrl] = useState('');
const [loading, setLoading] = useState(false);
const [scrapedDataFilePath, setScrapedDataFilePath] = useState<string | null>(null);
const [redesignedFolderPath, setRedesignedFolderPath] = useState<string | null>(null);
const scrapeStates = {
singlePage: 'Single',
fullSite: 'Multi',
}
const [scrapeState, setScrapeState] = useState(scrapeStates.singlePage);
const [pageCount, setPageCount] = useState<number>(1);
const [showDropdown, setShowDropdown] = useState(false);
Then, add the input field:
<input
type="text"
value={webUrl}
onChange={(e) => setwebUrl(e.target.value)}
placeholder="Enter website link here..."
className="placeholder:text-[var(--text-c)] placeholder:text-sm text-sm bg-transparent focus:outline-none text-[var(--text-a)] w-full px-4 py-2 rounded-full shadow transition-colors border border-[var(--ring)] focus:border-[var(--violet)]"
disabled={loading}
/>
Next, add the dropdown button for selecting the number of pages to be scraped or crawled:
<button
disabled={loading}
onClick={() => setShowDropdown(!showDropdown)}
className={`flex items-center justify-center py-2 px-4 sm:px-8 text-sm md:text-sm rounded-full shadow transition-colors
${loading
? 'cursor-not-allowed bg-[var(--text-b)] text-[var(--bg-a)]'
: 'cursor-pointer bg-[var(--text-b)] hover:bg-[var(--text-c)] text-[var(--bg-a)]'
}`}
>
  <span className="mr-2">{scrapeState}</span>
{!loading
? (
<Image
aria-hidden
src="/line-angle-down-icon.svg"
alt="line-angle-down-icon"
width={14}
height={14}
/>
)
: loader()
}
</button>
{showDropdown && (
<div className="absolute mt-12 w-32 rounded-md shadow-lg bg-[var(--text-b)] ring-1 ring-black ring-opacity-5 z-10">
<div className="py-1" role="menu">
<button
className="block w-full text-left px-4 py-2 text-sm hover:bg-gray-100"
onClick={() => selectPages(1)}
role="menuitem"
>
1 page
</button>
<button
className="block w-full text-left px-4 py-2 text-sm hover:bg-gray-100"
onClick={() => selectPages(2)}
role="menuitem"
>
2 pages
</button>
<button
className="block w-full text-left px-4 py-2 text-sm hover:bg-gray-100"
onClick={() => selectPages(3)}
role="menuitem"
>
3 pages
</button>
<button
className="block w-full text-left px-4 py-2 text-sm hover:bg-gray-100"
onClick={() => selectPages(4)}
role="menuitem"
>
4+ pages
</button>
</div>
</div>
)}
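By the way, those four nearly identical buttons can also be written as a simple map, if you prefer less repetition. Here's an optional, behavior-identical sketch of the same dropdown:

{showDropdown && (
  <div className="absolute mt-12 w-32 rounded-md shadow-lg bg-[var(--text-b)] ring-1 ring-black ring-opacity-5 z-10">
    <div className="py-1" role="menu">
      {/* Drive the menu items from data instead of copy-pasting buttons */}
      {[
        { count: 1, label: '1 page' },
        { count: 2, label: '2 pages' },
        { count: 3, label: '3 pages' },
        { count: 4, label: '4+ pages' },
      ].map(({ count, label }) => (
        <button
          key={count}
          className="block w-full text-left px-4 py-2 text-sm hover:bg-gray-100"
          onClick={() => selectPages(count)}
          role="menuitem"
        >
          {label}
        </button>
      ))}
    </div>
  </div>
)}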
Add a function to select and set the number of pages:
const selectPages = (count: number) => {
setPageCount(count);
setScrapeState(count === 1 ? scrapeStates.singlePage : scrapeStates.fullSite);
setShowDropdown(false);
};
Then, add the process button:
<button
disabled={webUrl === '' || loading}
onClick={handleScrape}
className={`flex items-center justify-center py-2 px-4 sm:px-8 text-sm md:text-sm rounded-full shadow transition-colors
${webUrl === '' || loading
? 'cursor-not-allowed bg-[var(--ring)] text-[var(--text-a)]'
: 'cursor-pointer bg-[var(--violet)] hover:bg-[var(--ring)] text-[var(--text-a)]'
}`}
>
  <span className="mr-2">Back90s</span>
{!loading
? (
<Image
aria-hidden
src="/history-line-icon.svg"
alt="Download Icon"
width={18}
height={18}
/>
)
: loader()
}
</button>
Finally, add the view button to view the redesigned website:
{redesignedFolderPath && (
<div className="w-full max-w-3xl mx-auto flex flex-col items-center p-4 mb-8 shadow-lg gap-4 bg-[var(--bg-a)] rounded-full">
<a
href={redesignedFolderPath!}
target="_blank"
rel="noopener noreferrer"
className="flex items-center justify-center py-2 px-4 sm:px-8 text-sm md:text-sm rounded-full shadow transition-colors bg-[var(--violet)] hover:bg-[var(--ring)] text-[var(--text-a)]"
>
<span className="mr-2">View redesigned website</span>
<Image
aria-hidden
src="/arrow-top.svg"
alt="External Link Icon"
width={18}
height={18}
/>
</a>
</div>
)}
(All of the above code should be placed inside the main tag.)
Now the interesting part: implementing the functions that handle everything. Let’s start with the handleScrape function:
const handleScrape = () => {
if (pageCount === 1) {
scrapeUrl();
} else {
crawlUrl();
}
};
It picks the right function based on pageCount. If it's 1, it calls the scrapeUrl function; otherwise, it calls the crawlUrl function.
Next, let’s implement the scrapeUrl function:
const scrapeUrl = async () => {
  if (!webUrl) return;
  setLoading(true);
  setNotification({ message: messages.scraping, type: 'info' });

  try {
    const response = await fetch('/api/scrape', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: webUrl }),
    });

    const data = await response.json();

    if (response.ok) {
      setNotification({ message: messages.scrapedSuccess, type: 'success' });
      const scrapedDataMsg = data.message;
      const scrapedDataFilePath = data.filePath;
      console.log("====================================");
      console.log(scrapedDataMsg);
      console.log('Scraped data saved at:', scrapedDataFilePath);

      setScrapedDataFilePath(scrapedDataFilePath);
    } else {
      setNotification({ message: data.error || 'An unexpected error occurred.', type: 'error' });
    }
  } catch (error) {
    console.error('Error scraping website:', error);
    alert('An unexpected error occurred.');
  } finally {
    setLoading(false);
  }
};
The scrapeUrl function sends a POST request to the /api/scrape endpoint with the website URL, then displays a notification based on the server's response. From the response, it sets the scrapedDataFilePath state to the file path of the scraped data. In this case scrapedDataFilePath is always a markdown file, for example: scraped_1734452873592.md.
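The exact contents depend on the site you scrape, of course, but the saved markdown file looks roughly like this (a hypothetical sample, not real output):

# Raptors.dev

Useful resources for developers.

[Home](https://www.raptors.dev/) [Blog](https://www.raptors.dev/blog)

![hero](https://www.raptors.dev/images/hero.png)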
Next, let’s implement the crawlUrl function:
const crawlUrl = async () => {
  if (!webUrl) return;
  setLoading(true);
  setNotification({ message: messages.crawling, type: 'info' });

  try {
    const response = await fetch('/api/firecrawl', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: webUrl }),
    });

    const data = await response.json();

    if (response.ok) {
      setNotification({ message: messages.crawledSuccess, type: 'success' });
      const scrapedDataMsg = data.message;
      const scrapedDataFilePath = data.jsonFilePath;
      console.log("====================================");
      console.log(scrapedDataMsg);
      console.log('Crawled data saved at:', scrapedDataFilePath); // Crawled data saved at: /Users/abdibrokhim/VSCode/projects/retroed/files/scraped_1734447602439.json

      setScrapedDataFilePath(scrapedDataFilePath);
    } else {
      setNotification({ message: data.error || 'An unexpected error occurred.', type: 'error' });
    }
  } catch (error) {
    console.error('Error crawling website:', error);
    alert('An unexpected error occurred.');
  } finally {
    setLoading(false);
  }
};
The crawlUrl function sends a POST request to the /api/firecrawl endpoint with the website URL, then displays a notification based on the server's response. From the response, it sets the scrapedDataFilePath state to the file path of the crawled data. In this case scrapedDataFilePath is always a JSON file, for example: scraped_1734447602439.json.
Well, okay.
Why JSON? Because it's easier to work with JSON data: we can easily pull out whatever we need, such as the title, description, keywords, images, links, etc. It's pretty simple. 🤓 (just believe me. lmao)
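For reference, here's roughly the shape of the crawl output we rely on later. I'm only sketching the fields our code actually reads (data[i].markdown and data[i].metadata); the real Firecrawl response contains more:

{
  "success": true,
  "data": [
    {
      "markdown": "# Raptors.dev\n\nUseful resources for developers...",
      "metadata": {
        "url": "https://www.raptors.dev/",
        "title": "Raptors.dev",
        "description": "A collection of useful resources for developers."
      }
    }
  ]
}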
Now we need a useEffect that listens to the scrapedDataFilePath state. If it's not null, we call the redesignWebsite function and then reset scrapedDataFilePath back to null:
useEffect(() => {
if (scrapedDataFilePath) {
redesignWebsite();
setScrapedDataFilePath(null);
}
}, [scrapedDataFilePath]);
Next, let’s implement the redesignWebsite function:
const redesignWebsite = async () => {
  setLoading(true);
  setNotification({ message: messages.redesigning, type: 'info' });

  try {
    const response = await fetch('/api/redesign', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ filePath: scrapedDataFilePath, ptype: pageCount }),
    });

    const data = await response.json();

    if (response.ok) {
      setNotification({ message: messages.redesignSuccess, type: 'success' });
      const newwebsitepath = data.newwebsitepath;
      console.log("====================================");
      console.log('Website redesigned inside folder=', newwebsitepath);
      setRedesignedFolderPath(newwebsitepath);
    } else {
      setNotification({ message: data.error || 'An unexpected error occurred.', type: 'error' });
    }
  } catch (error) {
    console.error('Error redesigning website:', error);
    alert('An unexpected error occurred.');
  } finally {
    setLoading(false);
  }
};
The redesignWebsite function sends a POST request to the /api/redesign endpoint with the scraped data file path and the page count, then displays a notification based on the server's response. From the response, it sets the redesignedFolderPath state to the folder path of the redesigned website. (Spoiler: it's always the demo folder.)
Believe it or not, we are done with the UI. Congrats! 🎉
Now we come to the most interesting part: the core of the app, the API routes and the functionality behind the scenes.
Let’s implement the API routes. We need three of them: scrape, firecrawl, and redesign, plus a few helper functions. The helpers will let us properly handle the data, clean it, and set up the folders and files.
Create a new folder scrape inside the app/api/ folder. Then, create a new file route.ts inside it. Add the following code:
// app/api/scrape/route.ts
import FirecrawlApp from '@mendable/firecrawl-js';
import { NextResponse } from 'next/server';
import fs from 'fs';
import path from 'path';

export async function POST(request: Request) {
  try {
    const { url } = await request.json();

    const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

    // Scrape a website
    const scrapeResponse = await app.scrapeUrl(url, {
      formats: ['markdown'],
    });

    if (!scrapeResponse.success) {
      throw new Error(`Failed to scrape: ${scrapeResponse.error}`);
    }

    console.log("====================================");
    console.log('Scraped data:', scrapeResponse);

    // Write scraped markdown data to a new file
    const filesDir = path.join(process.cwd(), 'files');
    const fileName = `scraped_${Date.now()}.md`;
    const filePath = path.join(filesDir, fileName);

    // Ensure the "files" directory exists
    if (!fs.existsSync(filesDir)) {
      fs.mkdirSync(filesDir, { recursive: true });
    }

    // Write markdown content to the file
    fs.writeFileSync(filePath, scrapeResponse.markdown!);

    console.log(`Markdown file saved at: ${filePath}`);

    return NextResponse.json({
      message: 'Scrape successful and file saved.',
      filePath
    });
  } catch (error: any) {
    console.error('Error in /api/scrape', error);
    return NextResponse.json(
      { error: error.message || 'Internal Server Error' },
      { status: 500 }
    );
  }
}
The POST function scrapes the website using the Firecrawl API and saves the scraped data to a markdown file, then returns the file path of the saved file. Scrape: scrapes the content of a single web page and returns it in LLM-ready format. Here's the documentation on Firecrawl: Scrape API.
For example: take a look at the files/scraped_1734452873592.md file. It contains the scraped data in markdown format.
Don’t forget to grab your Firecrawl API key and set it in the .env file. Here's a tutorial on How to get API Key from Firecrawl. The setup process is a little further below, under Environment Variables.
Create a new folder firecrawl inside the app/api/ folder. Then, create a new file route.ts inside it. Add the following code:
// app/api/firecrawl/route.ts
import FirecrawlApp from '@mendable/firecrawl-js';
import { NextResponse } from 'next/server';
import fs from 'fs';
import path from 'path';

export async function POST(request: Request) {
  try {
    const { url } = await request.json();

    const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

    // Crawl a website
    const crawlResponse = await app.crawlUrl(url, {
      limit: 100,
      scrapeOptions: {
        formats: ['markdown'],
      }
    });

    if (!crawlResponse.success) {
      throw new Error(`Failed to crawl: ${crawlResponse.error}`);
    }

    console.log("====================================");
    console.log('Crawled data:', crawlResponse);

    // Ensure the "files" directory exists
    const filesDir = path.join(process.cwd(), 'files');
    if (!fs.existsSync(filesDir)) {
      fs.mkdirSync(filesDir, { recursive: true });
    }

    // Write the entire crawl response to a .json file
    const timeStamp = Date.now();
    const jsonFileName = `scraped_${timeStamp}.json`;
    const jsonFilePath = path.join(filesDir, jsonFileName);
    fs.writeFileSync(jsonFilePath, JSON.stringify(crawlResponse, null, 2), 'utf8');
    console.log(`JSON file saved at: ${jsonFilePath}`);

    return NextResponse.json({
      message: 'Scrape successful and files saved.',
      jsonFilePath
    });
  } catch (error: any) {
    console.error('Error in /api/firecrawl', error);
    return NextResponse.json(
      { error: error.message || 'Internal Server Error' },
      { status: 500 }
    );
  }
}
The POST function crawls the website using the Firecrawl API and saves the crawled data to a JSON file, then returns the file path of the saved file. Crawl: scrapes all the URLs of a website and returns their content in LLM-ready format. Here's the documentation on Firecrawl: Crawl API.
For example: take a look at the files/scraped_1734447602439.json file. It contains the crawled data of the whole website in JSON format.
Create a new folder redesign inside the app/api/ folder. Then, create a new file route.ts inside it. Add the following code:
// app/api/redesign/route.ts
import { NextResponse } from 'next/server';
import { chatCompletion, layoutGenerator } from './utils/ass';
import { buildSite } from './utils/webbuilder';

export async function POST(request: Request) {
  try {
    // we receive the scraped file path (.md or .json) and the page count
    const { filePath, ptype } = await request.json();

    if (ptype === 1) {
      const response = await chatCompletion(filePath);
      console.log("====================================");
      console.log('response:');
      console.log(response);

      const layoutPath = await layoutGenerator('src/app/api/redesign/utils/layout.txt');
      console.log("====================================");
      console.log('layoutPath:');
      console.log(layoutPath);
    } else {
      const buildResponse = await buildSite(filePath);
      const msg = buildResponse.message;
      const fdir = buildResponse.demoDir;

      console.log("====================================");
      console.log('msg: ', msg);
      console.log('fdir: ', fdir);
    }

    const newwebsitepath = "demo";

    return NextResponse.json({ newwebsitepath });
  } catch (error: any) {
    console.error('Error in /api/redesign:', error);
    return NextResponse.json(
      { error: error.message || 'Internal Server Error' },
      { status: 500 }
    );
  }
}
Here we receive two parameters: filePath and ptype. If ptype is 1, we call the chatCompletion and layoutGenerator functions and redesign the single scraped page. Otherwise, we call the buildSite function, which iterates over all the scraped pages and builds the whole website.
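For reference, the request body coming from the UI looks like this (the file path here is just an illustration):

{ "filePath": "files/scraped_1734447602439.json", "ptype": 4 }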
Create a new folder utils. Add an ass.ts file inside it, then set up the AI/ML API client and the system prompt:
import { instr } from "./instr";
import fs from 'fs';
import path from 'path';
import OpenAI from "openai";

const openai = new OpenAI({
  // Note: the OpenAI SDK option is `baseURL`, and it should point at the API base,
  // not the full /chat/completions path (the SDK appends that itself).
  baseURL: "https://api.aimlapi.com/v1",
  apiKey: process.env.AIML_API_KEY,
  dangerouslyAllowBrowser: true,
});

const systemPrompt = instr;
Implement the chatCompletion function. It reads the markdown content from the file, sends it to the model, and writes the response to a page.tsx file saved in the demo folder.
export const chatCompletion = async (filePath: string) => {
  console.log("loading chatCompletion...");
  console.log("====================================");
  console.log("systemPrompt: ");
  console.log(systemPrompt);

  try {
    console.log("====================================");
    console.log("Opening file...");

    const fileContent = fs.readFileSync(filePath, 'utf8');

    console.log("====================================");
    console.log("fileContent: ");
    console.log(fileContent);

    console.log("====================================");
    console.log("Sending request to OpenAI API...");

    const completion = await openai.chat.completions.create({
      messages: [
        {
          role: "system",
          content: systemPrompt,
        },
        {
          role: "user",
          content: "[Markdown content]:" + "\n\n" + fileContent,
        },
      ],
      model: "gpt-4o",
    });

    const responseMessages = completion.choices[0].message.content;
    console.log("====================================");
    console.log("responseMessages: ");
    console.log(responseMessages);

    // Create the demo directory if it doesn't exist
    const demoDir = path.join(process.cwd(), 'src', 'app', 'demo');
    if (!fs.existsSync(demoDir)) {
      fs.mkdirSync(demoDir, { recursive: true });
    }

    // Define the output file path
    const outputPath = path.join(demoDir, 'page.tsx');

    // Remove the first and last lines (the code fence GPT-4o usually adds)
    const processedMessages = removeFirstAndLastLines(responseMessages);

    // Write the response to the file
    fs.writeFileSync(outputPath, processedMessages!, 'utf8');
    console.log("====================================");
    console.log("File written successfully to:", outputPath);

    // Return the relative path from the project root
    return path.relative(process.cwd(), outputPath);
  } catch (error) {
    console.error("Error fetching the data:", error);
    return "An error occurred while fetching the data.";
  }
}
Remove the first and last lines from the response messages. GPT-4o usually wraps its answer in a language-tagged code fence, so we strip those two lines:
function removeFirstAndLastLines(str: string | null | undefined): string {
if (!str) {
return ""; // Or handle null/undefined differently if needed
}
const lines = str.split('\n');
if (lines.length <= 1) { // Handle short strings
return ""; // Or return the original string if desired: return str;
  }
  lines.shift();
lines.pop();
return lines.join('\n');
}
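Quick sanity check of what it does. GPT-4o typically returns something fenced, and we strip the fence lines:

// Hypothetical example, not part of the app code:
const raw = "```tsx\n'use client';\nexport default function Home() { return <></>; }\n```";
console.log(removeFirstAndLastLines(raw));
// -> 'use client';
// -> export default function Home() { return <></>; }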
Now implement the layoutGenerator function. It simply reads the already-prepared template file and writes its content to a layout.tsx file, saved in the same directory as page.tsx.
// add `layout.tsx` to the same directory as `page.tsx`.
export const layoutGenerator = async (filePath: string) => {
  // simply read the file content from `filePath` and write it to the `layout.tsx` file,
  // saved in the same directory as `page.tsx`.
  console.log("loading layoutGenerator...");

  try {
    console.log("====================================");
    console.log("Opening file...");
    const fileContent = fs.readFileSync(filePath, 'utf8');

    console.log("====================================");
    console.log("fileContent: ");
    console.log(fileContent);

    // Create the demo directory if it doesn't exist
    const demoDir = path.join(process.cwd(), 'src', 'app', 'demo');
    if (!fs.existsSync(demoDir)) {
      fs.mkdirSync(demoDir, { recursive: true });
    }

    // Define the output file path
    const outputPath = path.join(demoDir, 'layout.tsx');

    // Write the template content to the file
    fs.writeFileSync(outputPath, fileContent, 'utf8');
    console.log("====================================");
    console.log("File written successfully to:", outputPath);

    // Return the relative path from the project root
    return path.relative(process.cwd(), outputPath);
  } catch (error) {
    console.error("Error fetching the data:", error);
    return "An error occurred while fetching the data.";
  }
}
Instructions for GPT-4o. Add an instr.ts file inside the utils folder with the following code:
// instr.ts
export const instr = `
Develop a Next.js application that takes the Markdown content of a scraped modern one-page website and transforms its design to strictly reflect 90s web aesthetics with weird color schemes, fonts, and layouts.
The transformation includes modifying layouts, color schemes, fonts, and ensuring compatibility with 90s-era web technologies.

[Challenge]:
Develop a system to analyze modern web designs and convert them to 90s aesthetics. You may align them based on the Markdown content.

[Technologies Used]:
Next.js: React framework for server-side rendering and static site generation.
React: Library for building user interfaces.
TypeScript: Superset of JavaScript for static type checking.
Tailwind CSS: Utility-first CSS framework for rapid UI development.
Markdown: Format of the input content to be transformed -> "page.tsx"

[Key Tasks]:
Transform layouts to reflect 90s design patterns. Super simple, no complex layouts.
Convert modern color schemes to 90s-appropriate palettes. Make sure colors are highly compatible with 90s-era web technologies.
Replace modern fonts with period-appropriate alternatives.
Ensure compatibility with 90s-era web technologies.

[Return]:
As an output only return the full code that will be placed inside the "page.tsx" file. Return only the code, full implementation.
Never explain the code. Don't write comments. Don't write console.log().
Just return the code that will be placed inside the "page.tsx" file. The code should follow 90s web aesthetics.
Strictly keep the image URLs as they are. Don't change the image URLs.
Make sure to keep the navigation paths as they are. Don't change the navigation paths.

Always start with the following code (SUPER STRICT):

'use client';

import Image from 'next/image';
import React, { useEffect, useState } from 'react';

export default function Home() {
  return (
    <></>
  );
}`;
We are done with the single page.
The next step is to implement the buildSite function, the most comprehensive part of the tutorial. It takes the crawled pages of the modern website, pre-processes them, builds the 90s-styled website by looping over all the pages, and saves everything in the demo folder (and its subfolders) with the corresponding files.
Let’s first implement all the helper functions. They will help us to properly build the website.
Folder maker function, foldermaker.ts:
import fs from 'fs';
import path from 'path';

export function ensureFolderStructure(folderName: string) {
  const demoDir = path.join(process.cwd(), 'src', 'app', 'demo');
  if (!fs.existsSync(demoDir)) {
    fs.mkdirSync(demoDir, { recursive: true });
  }

  // create subfolder under demo
  const folderPath = path.join(demoDir, folderName);
  if (!fs.existsSync(folderPath)) {
    fs.mkdirSync(folderPath, { recursive: true });
  }

  return folderPath;
}
Layout generator function, layoutgen.ts:
import fs from 'fs';
import path from 'path';

export async function layoutGenerator(folderPath: string, title: string, description: string) {
  console.log("Generating layout for folder:", folderPath);

  const layoutTemplatePath = "src/app/api/redesign/utils/layout.txt";
  if (!fs.existsSync(layoutTemplatePath)) {
    throw new Error(`layout.txt template not found at: ${layoutTemplatePath}`);
  }

  const templateContent = fs.readFileSync(layoutTemplatePath, 'utf8');
  const replacedContent = templateContent
    .replace('{{title}}', title || 'Default Title')
    .replace('{{description}}', description || 'Default Description');

  const layoutPath = path.join(folderPath, 'layout.tsx');
  fs.writeFileSync(layoutPath, replacedContent, 'utf8');

  console.log("layout.tsx created at:", layoutPath);
}
Here, the {{title}} and {{description}} placeholders in the template are replaced with values from the scraped data.
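The layout.txt template itself isn't shown in this tutorial, so here's a minimal sketch of what it could look like. The only hard requirement from the code above is that it contains the {{title}} and {{description}} placeholders; everything else is up to you:

// layout.txt (hypothetical template; {{title}} and {{description}} are replaced at generation time)
import type { Metadata } from 'next';
import React from 'react';

export const metadata: Metadata = {
  title: '{{title}}',
  description: '{{description}}',
};

export default function DemoLayout({ children }: { children: React.ReactNode }) {
  return <section>{children}</section>;
}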
Page generator function, pagegen.ts:
import fs from 'fs';
import path from 'path';
import { removeFirstAndLastLines } from './cleaner';

export function pageGenerator(folderPath: string, pageContent: string) {
  const pagePath = path.join(folderPath, 'page.tsx');
  // Strip the code fence lines GPT-4o adds around the page content
  const processedContent = removeFirstAndLastLines(pageContent);
  fs.writeFileSync(pagePath, processedContent, 'utf8');
  console.log("page.tsx created at:", pagePath);
}
Cleaner function, cleaner.ts:
export function removeFirstAndLastLines(str: string | null | undefined): string {
  if (!str) {
    return ""; // Or handle null/undefined differently if needed
  }
  const lines = str.split('\n');
  if (lines.length <= 1) { // Handle short strings
    return ""; // Or return the original string if desired: return str;
  }
  lines.shift();
  lines.pop();
  return lines.join('\n');
}
A helper that finds an item's markdown content and returns it, helpers.ts:
export function findMarkdown(item: any): string {
return item.markdown || '';
}
GPT-4o completion function, gpt.ts:
import { instr } from "./instr";
import OpenAI from "openai";

const openai = new OpenAI({
  // Same note as before: the SDK option is `baseURL`, pointing at the API base.
  baseURL: "https://api.aimlapi.com/v1",
  apiKey: process.env.AIML_API_KEY,
  dangerouslyAllowBrowser: true,
});

export const chatCompletion = async (markdown: string) => {
  console.log("loading chatCompletion...");

  const systemPrompt = instr;
  console.log("====================================");
  console.log("systemPrompt: ");
  console.log(systemPrompt);

  try {
    console.log("====================================");
    console.log("markdown: ");
    console.log(markdown);

    console.log("====================================");
    console.log("Sending request to OpenAI API...");

    const completion = await openai.chat.completions.create({
      messages: [
        {
          role: "system",
          content: systemPrompt,
        },
        {
          role: "user",
          content: "[Markdown content]:" + "\n\n" + markdown,
        },
      ],
      model: "gpt-4o",
    });

    const responseMessages = completion.choices[0].message.content;
    console.log("====================================");
    console.log("responseMessages, i.e. the styled Next.js page as a string: ");
    console.log(responseMessages);

    return responseMessages;
  } catch (error) {
    console.error("Error fetching the data:", error);
    return "An error occurred while fetching the data.";
  }
}
We are done with the helpers. Now, let's implement the buildSite function itself.
import fs from "fs";
import path from "path";
import { layoutGenerator } from "./layoutgen";
import { pageGenerator } from "./pagegen";
import { chatCompletion } from "./gpt";
import { ensureFolderStructure } from "./foldermaker";
import { findMarkdown } from "./helpers";
export async function buildSite(filePath: string) {
// Load your scraped JSON data
let jsonFilePath = filePath;
if (!path.isAbsolute(filePath)) {
jsonFilePath = path.join(process.cwd(), filePath);
} if (!fs.existsSync(jsonFilePath)) {
throw new Error(`${filePath} not found`);
} const rawData = fs.readFileSync(jsonFilePath, 'utf8');
const jsonData = JSON.parse(rawData); // Ensure main demo folder
const demoDir = path.join(process.cwd(), 'src', 'app', 'demo');
if (!fs.existsSync(demoDir)) {
fs.mkdirSync(demoDir, { recursive: true });
} const mainPageData = findMainPageData(jsonData.data); const pageTitle = mainPageData.metadata.title || 'Raptors.dev';
const pageDescription = mainPageData.metadata.description || 'Raptors.dev is a collection of useful resources for developers.'; // Create layout.tsx in the root "demo" folder.
// If you want the root layout different, you can do it here:
await layoutGenerator(demoDir, pageTitle, pageDescription); const pageContent = findMarkdown(mainPageData); // Create page.tsx in the root "demo" folder if needed.
// Or skip if you do not need a root page.
const rootPageContent = await chatCompletion(pageContent);
pageGenerator(demoDir, rootPageContent!); // Build a map: folderName -> { title, description, markdowns: string[] }
const folderMap: Record<string, {title: string, description: string, markdowns: string[]}> = {}; for (const item of jsonData.data) {
const url: string = item.metadata.url;
// Extract folder name: everything after 'https://www.raptors.dev/'
const folderName = url.replace('https://www.raptors.dev/', '').split('?')[0];
// Remove trailing slashes if any
const cleanedFolderName = folderName.replace(/\/$/, '') || ''; // If it's the root (e.g. ""), you can skip or handle differently
if (!cleanedFolderName) {
continue;
} const title = item.metadata.title || 'Default Title';
const description = item.metadata.description || 'Default Description';
const markdownContent = findMarkdown(item); if (!folderMap[cleanedFolderName]) {
folderMap[cleanedFolderName] = { title, description, markdowns: [] };
} folderMap[cleanedFolderName].markdowns.push(markdownContent);
} // Now loop over each folder and generate layout.tsx and page.tsx
for (const [folderName, data] of Object.entries(folderMap)) {
const folderPath = ensureFolderStructure(folderName); // Generate layout.tsx per folder
await layoutGenerator(folderPath, data.title, data.description); // Combine all markdown entries for this folder
const combinedMarkdown = data.markdowns.join('\n\n'); // Call chatCompletion to transform markdown to page.tsx content
const pageContent = await chatCompletion(combinedMarkdown); // Write page.tsx in the folder
pageGenerator(folderPath, pageContent!);
} console.log('All pages and layouts generated successfully!'); return { message: 'All pages and layouts generated successfully!', demoDir };
}// scrape individual markdown content from the JSON data where the "url"=== "https://www.raptors.dev/" super strictly!
function findMainPageData(data: any) {
for (const item of data) {
if (item.metadata.url === 'https://www.raptors.dev/') {
return item;
}
} return '';
}
Brief explanation of the buildSite function: after all the helpers have been implemented, buildSite follows a detailed sequence to transform a modern website's scraped JSON data into a classic, 90s-themed Next.js directory structure:
1. Load and Parse JSON Data: read the scraped JSON file from disk and parse it.
2. Set Up the Output Directory: the demo directory is created. This serves as the root folder where all generated pages and layouts will be stored.
3. Extract Main Page Data: find the entry for the main page (https://www.raptors.dev/) in the JSON. This ensures we have a reference point for the main site's title, description, and initial content.
4. Generate the Root Layout and Page: using the layoutGenerator helper, it creates the root layout.tsx file with the site's main title and description, then calls chatCompletion to transform the main page's scraped markdown content into a page.tsx file that matches the retro styling.
5. Build a Folder Map for Sub-Pages: group every crawled sub-page by its URL-derived folder name, collecting each folder's title, description, and markdown content.
6. Iterate Over All Pages and Sub-Pages: for each folder, the folder structure is ensured (ensureFolderStructure), a layout.tsx file is generated for that folder (again using layoutGenerator), and chatCompletion is called to produce a page.tsx file representing the 90s-styled version of that page.
7. Finish Up: the full structure (layout.tsx and page.tsx files) is now complete within the demo directory.

In summary, buildSite orchestrates the entire workflow: from reading and preparing data, through generating both layout and content files, to outputting a fully structured, retro-styled Next.js site.
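After a run against a multi-page site, the generated structure looks roughly like this (the subfolder names depend entirely on the crawled URLs; these are hypothetical):

src/app/demo/
  layout.tsx      // root layout for the redesigned site
  page.tsx        // redesigned main page
  blog/
    layout.tsx
    page.tsx
  about/
    layout.tsx
    page.tsx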
OMG! We are done with the API routes. 🎉
It was super fun to implement them. Now, let's test the application locally.
But before that, I wanted to tell you something. All of these helper functions and the main builder were implemented by ChatGPT. LMAO 😂. Check src/app/instr.txt for the prompt used and src/app/daft.txt for the draft idea. I hope it helps you LEVEL UP your prompt engineering skills. 🔥
Oh, we forgot something: styling. Open the globals.css file, remove everything, and add the following code:
@tailwind base;
@tailwind components;
@tailwind utilities;

:root {
  --violet: #625df5;
  --ring: #625df580;
  --bg-a: #0B0E11;
  --text-a: #FFFFFF;
  --text-b: #C3C4C7;
  --text-c: #787B89;
  --orange: #ee5d19;
}

@media (prefers-color-scheme: dark) {
  :root {
    --violet: hsla(242, 88.4%, 66.3%, 1);
    --bg-a: #0B0E11;
    --text-a: #FFFFFF;
    --text-b: #C3C4C7;
    --text-c: #787B89;
    --orange: #ee5d19;
  }
}

body {
  /* use the palette variables defined above (the default Next.js starter
     references --foreground/--background, which we removed) */
  color: var(--text-a);
  background: var(--bg-a);
  font-family: Arial, Helvetica, sans-serif;
}

@layer utilities {
  .text-balance {
    text-wrap: balance;
  }
}

::selection {
  background-color: var(--violet);
  color: var(--text-a);
}
Save it. Once again: the best-crafted color palette you have ever seen. By ME for YOU 🎨
You can also change your app details. Just open src/app/layout.tsx and update both the title and description fields:
export const metadata: Metadata = {
title: "make your website retired. LOL",
description: "make your website retired. using AI-Powered Time Machine for Web Design. LMAO",
};
Next, let's quickly set up the environment variables and test everything locally.
Open the .env file and add the following environment variables:
FIRECRAWL_API_KEY=...
AIML_API_KEY=...
Now, you can run the application locally with the following command:
npm run dev
Open http://localhost:3000 in your browser to see the application running.
You should see something similar to this:
Here’s an example of how you can test the application. Enter the link https://www.raptors.dev/ and select 4+ pages from the dropdown. Then click the Back90s button. It will take some time to transform the website. After that, you will see another button appear below the input field. Click it, and it will take you to the transformed website. 🚀
Woohoo! Here are the Activity Logs from the Firecrawl API:
I streamed the whole process here on my Twitch channel. You can watch the recording here:
Watch on Twitch: https://www.twitch.tv/videos/2329114716
Watch on YouTube: https://youtu.be/_wTaMLL4by0?si=nvSbaOktXjk3aw7l
Here’s an example of a modern website transformed into a 90s-style design using the AI-Powered Time Machine for Web Design. Kindly check the src/app/demo folder for the transformed website. It has a bunch of folders and files. To see it, just run the app and put /demo after the URL. For example: http://localhost:3000/demo.
You can integrate Clerk Auth for user authentication and authorization; it's a great way to add auth to your application. Then deploy the application to Vercel. We covered both of these in the article Building an AI text Humanizer with AI/ML API, Next.js, Tailwind CSS and Integration with Clerk Auth and Deploying to Vercel.
In this tutorial, we learned how to use AI in the worst way possible. 😂 LMAO.
I hope you enjoyed building this project and learned something new. If you have any questions or feedback, feel free to Book a Call or DM me. I would love to help you out with any questions you may have. 🤓
All the code for this project is available on GitHub. It’s Open Source 🌟. AI-Powered Time Machine for Web Design.