Building an AI-powered chat to help you know me

I added a ChatGPT-style assistant to my portfolio so visitors could ask about me. $5 to start, markdown for context, Neon to track usage. A way to stand out and keep building while job searching.

Role: Solo builder / PM
Timeline: Day 6 after layoff news (March 2025)
Product: Personal portfolio / B2B SaaS
Stack: Anthropic API, Cursor, Nuxt 3, Vercel AI SDK, Neon, Tailwind

Company context

I found out I was getting laid off from Axure six days ago. I started this website the day I found out. I had always wanted to build my own website. Now I had a reason and a deadline: stand out while looking for the next role. I had a lot of ideas. One felt both relevant and a little bold: what if my site had its own way to "talk to me"? Not a contact form. A chatbot that knew about me and could point people to my work. Would it be super useful? Probably not. Would it be novel and would I learn a ton? Yes. So I built the chat interface today.

Mandate

I wanted the chatbot to help get me a job. Really. I wanted it to refer visitors to the right case studies and resources so that the time I put into those pieces actually got seen. If the bot could point people to the work I care about, it would have done its job. I also wanted to keep building alongside the job search; creating and shipping something would keep me sane.

  • I was about to not have a job. I was hesitant to spend money on a "feature" that might not result in anything. I almost gave up. In the end: $5 to create an Anthropic API key at console.anthropic.com. I capped spend at $1 so I wouldn't blow the budget.
  • I had to move fast. I used Cursor to get the starting pieces working with a sample page, then tweaked from there.
  • I needed the LLM to know about me without over-engineering. I wanted to keep things simple. So I used a single markdown context file the API reads from. Later I wired it to point people at case studies and surface information from there.

My contributions

Chat API and context

Backend: Nuxt server route that streams from Claude (Sonnet 4.5). System prompt tells the model it's my assistant, to keep answers short, to stick to the context file, and to suggest case studies. Context is one markdown file (about me, resume, tools, case study summaries). I kept it simple so I could ship without adding a whole new stack.
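The context injection itself can be a small pure function. A minimal sketch of the idea — the function name and prompt wording are my illustration here, not the actual code:

```typescript
// Compose the system prompt from the markdown context file. The server
// would read content/chat-context.md (or cache it) and pass the contents in.
export function buildSystemPrompt(context: string): string {
  return [
    "You are the assistant on my portfolio site.",
    "Answer questions about me using only the context below.",
    "Keep answers short. When relevant, point visitors to a case study.",
    "If the answer is not in the context, say so and suggest getting in touch.",
    "",
    "--- CONTEXT ---",
    context,
  ].join("\n");
}
```

Because the whole context rides along in the system prompt, updating what the bot knows is just editing one markdown file and redeploying.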

Hero and chat UI

Homepage hero with a textarea and send button that routes to /chat?q=... with the question prefilled. Chat page: back link, message list (user bubbles and plain assistant responses), and an input with a send button. Styled to match the rest of the site. Enter to send, Shift+Enter for a newline.
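The Enter/Shift+Enter behavior boils down to one predicate (names here are illustrative, not the actual component code):

```typescript
// Enter submits the message; Shift+Enter falls through to the textarea's
// default behavior and inserts a newline.
export function shouldSubmit(e: { key: string; shiftKey: boolean }): boolean {
  return e.key === "Enter" && !e.shiftKey;
}

// In the Vue template, something like:
// <textarea @keydown="(e) => { if (shouldSubmit(e)) { e.preventDefault(); send(); } }" />
```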

Rate limiting and cost guardrails

In-memory rate limit per IP (e.g. 20 requests per minute) so one user can't blow up the Anthropic bill. Returns 429 with a message when over limit.
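A limiter like that can be a few lines of module-scope state. A sketch assuming a fixed 20-requests-per-minute window per IP (the names and structure are my assumptions):

```typescript
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 20;

type Bucket = { start: number; count: number };
const buckets = new Map<string, Bucket>();

// Returns true when this IP has exceeded the limit for the current window.
export function isRateLimited(ip: string, now: number = Date.now()): boolean {
  const bucket = buckets.get(ip);
  if (!bucket || now - bucket.start >= WINDOW_MS) {
    // First request, or the window expired: start a fresh window.
    buckets.set(ip, { start: now, count: 1 });
    return false;
  }
  bucket.count += 1;
  return bucket.count > MAX_REQUESTS;
}

// In the server route, before any LLM call:
// if (isRateLimited(ip)) throw createError({ statusCode: 429, statusMessage: "Too many requests" });
```

Because it's in-memory, the counters reset on every deploy or cold start — fine for a guardrail whose real backstop is the $1 spend cap.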

Storing submissions in Neon

I wanted to know if anyone ever used it. Optional Neon Postgres: when NUXT_DATABASE_URL is set, each user message (and optionally the assistant response) is written to a chat_submissions table. No auth, just a log so I can see what people ask.

Problem

As a candidate, I needed a way to stand out. A static portfolio was fine. A portfolio where you could actually ask me questions and get pointed to the right case study or resource felt different. I also needed to keep creating. Job searching can be draining. Building something real kept me in "do" mode.

I didn't know if a chat feature would be worth the cost or the time. I was literally about to be unemployed. Spending money on an API felt risky. I had to decide: ship something that might not "result in anything" or play it safe. I decided $5 was survivable. Anthropic's own limits let me cap spend at $1 so I could try it without fear.

The layoff made the website urgent. It also made the "should I spend money on this?" question real. Doing it anyway and capping spend was the compromise. And I wanted to prove to myself I could still ship: curiosity and being a do-er are what I pride myself on.

Before: a static site. After: ask a question on the hero, get an answer that points you to case studies and resources.

Deciding what to build and how to pay for it

I didn't do formal research. I had a constraint (stand out, learn, stay sane) and an idea (chat that knows about me). I validated the cost in my head: $5 to start, $1 cap. I validated the learning: Cursor + Anthropic + streaming + Neon would teach me the full stack of a modern AI feature.

Challenges

  • Getting the first stream working: correct model ID, API key in runtime config, and Vercel AI SDK's toUIMessageStreamResponse so the Vue chat component could consume the stream.
  • Choosing context strategy: I wanted to keep things simple. I was tempted to go deeper into topics like RAG but it seemed overkill. So a single markdown file that the server reads and injects into the system prompt, with summaries and pointers to case studies so the bot can recommend the right pieces.
  • Storing submissions without slowing the response: insert the user message (and later the assistant response) without blocking the stream. Rate limiting had to run before any LLM call so abuse didn't burn budget.
  • The mental block around spending money. I had to tell myself: it's $5, I'll survive. Once the cap was set at $1, I could stop worrying and focus on building.
  • Deciding what the bot was for: not just "answer questions" but "get me a job" by routing people to the right work. That shaped the system prompt and the tone.

Key decisions

Use a single markdown file for context.

I wanted to keep things simple. One file (about me, resume, tools, case study blurbs) is easy to update and enough for the model to stay on script and point people to the right case studies.

Cap Anthropic spend at $1 and add in-app rate limiting.

So I could ship without fear of a runaway bill. Rate limiting per IP keeps a single user from exhausting the cap.

Store submissions in Neon when NUXT_DATABASE_URL is set.

I wanted to see if anyone used it. Optional Neon means the feature works with or without it; when configured, I get a simple log of questions (and optionally answers).

Design the bot to refer people to case studies and direct contact.

The goal was to get me a job. The bot's job is to surface the right work and suggest next steps (case studies, resume, contact), not to be a generic FAQ.

I went with a single context file: scope was just my own content, and I wanted to ship without over-engineering. I added summaries and pointers to case studies so the bot could recommend specific pieces.

The chat system uses a markdown file (chat-context.md)

What I shipped

A chat interface on my Nuxt site: hero CTA that goes to /chat with the question, a chat page with back link and message list, and a server route that streams from Claude using a markdown context file. Rate limiting and optional Neon logging round it out.

Hero: one question box, one send. Chat page: conversation plus input.

Chat API with context and streaming

POST /api/chat streams Claude (Sonnet 4.5) with a system prompt and a single markdown context file.

I needed the model to know about me without building something complex I'd have to learn from scratch.

Server reads content/chat-context.md and injects it into the system prompt. Messages are converted and sent to the API. Response is streamed back via Vercel AI SDK so the UI updates in real time.

Rate limiting and cost guardrails

Per-IP rate limit (e.g. 20/min) and Anthropic spend cap so the feature can't blow the budget.

One user or bot could send a lot of requests and burn through the API budget.

In-memory rate limit before calling the LLM; 429 when over limit. Anthropic console spend cap set to $1.

Optional Neon logging

When NUXT_DATABASE_URL is set, each user message (and optionally the response) is stored in Postgres.

I wanted to know if anyone used the chat and what they asked.

chat_submissions table: user_message, ip, created_at (and the assistant response if stored). The insert runs when the request is handled; no auth, just a log.
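To keep that insert from ever blocking or breaking the streamed response, the write can be fire-and-forget. A sketch where `insertSubmission` is a hypothetical stand-in for the actual Neon query:

```typescript
type Submission = { userMessage: string; ip: string };

// Kick off the insert without awaiting it; a failed log should never
// turn into a failed chat response.
export function logSubmission(
  insertSubmission: (s: Submission) => Promise<void>,
  s: Submission,
): void {
  insertSubmission(s).catch((err) => {
    console.error("chat_submissions insert failed:", err);
  });
}
```

Injecting the query function also makes the logging path trivial to exercise without a database.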


What I got out of it

$5

To get started (Anthropic API key)

$1

Spend cap so I could ship without worry

1 day

Time to ship

  • The chat is live. Visitors can ask about me from the hero or the chat page and get streamed answers that point to case studies and contact.
  • Submissions are logged in Neon when configured, so I can see usage and questions over time.
  • Rate limiting is in place so a single IP can't exhaust the budget.
  • I proved I could ship something end-to-end during a layoff: API, streaming, context, rate limiting, and optional persistence.
  • I got over the "should I spend money on this?" hump. $5 and a $1 cap made the decision small enough to act.
  • The bot is explicitly designed to get me a job by routing people to the right work. That clarity shaped the prompt and the product.

Retrospective: As I go through the motions of a layoff and think about who I am and what I want to do, the thing I pride myself on is curiosity and being able to figure it out. This project was a way to stand out, to learn, and to show that I'm someone with real curiosity who gets things done. It was $5, a few Cursor chats, and a decision to build instead of wait.