Job Hunter Bot
Job hunting is a full-time job. Upload your resume once — get AI-tailored applications for every position. Built as a Telegram bot because that's where people already spend their time.
Applying for jobs is broken. Read a listing, tailor your resume, write a cover letter, submit. Repeat 50 times. Most people either spray-and-pray with the same resume everywhere (gets ignored) or burn out customizing each one (doesn't scale).
This wasn't a theoretical problem; it was a personal one. After 10 years in product leadership, I found the job hunt to be the least product-thinking-friendly experience in the world. So I built the tool I wanted to use.
Product thinking applied to a personal problem.
Telegram bot, not a web app
A web app means a new URL, a new login, a new tab to manage. Job seekers are already stressed and context-switching constantly. A Telegram bot meets them where they already are — in their messaging app.
Send a PDF, get results as chat messages. No signup, no onboarding flow, no dashboard to learn. The interface IS the conversation.
Experience connection: LeadSnap (another project) validated this exact pattern — a Telegram bot that scans business cards using GPT-4o Vision. The insight: for single-purpose tools, chat interfaces have zero friction. No app store, no download, no account. Just /start. Applied the same architecture here.
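The "interface IS the conversation" idea boils down to one message handler. Here's a framework-agnostic sketch — the names (`routeMessage`, `IncomingMessage`) are illustrative, not the project's actual API; a real bot would wire this into grammY, Telegraf, or the raw Bot API:

```typescript
// Minimal sketch of chat-as-interface routing. All names here are
// hypothetical; a production bot would hook this into a Telegram framework.

interface IncomingMessage {
  text?: string;                                   // plain text or a /command
  document?: { fileName: string; mimeType: string };
}

function routeMessage(msg: IncomingMessage): string {
  // A PDF upload is the entire "onboarding": acknowledge and enqueue.
  if (msg.document?.mimeType === "application/pdf") {
    return "Got it, processing your resume...";
  }
  if (msg.text === "/start") {
    return "Send me your resume as a PDF and I'll build your master profile.";
  }
  return "Please upload your resume as a PDF to get started.";
}
```

Everything the user can do maps to either a command or a file — there is no other surface area to learn.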
Structured parsing, not text extraction
Most resume parsers just dump text. We use GPT-4o to extract structured data — name, headline, experience with sub-sections (each role broken into achievements, metrics, responsibilities), education, skills. This structured data is what makes tailoring possible.
The parsed resume becomes a "master profile" — a comprehensive database of everything you've done. When matching against a job description, the system pulls the most relevant achievements, not just keyword-matches the full resume.
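A simplified illustration of what "pull the most relevant achievements" means over the structured master profile. The types and the scoring below are assumptions made to keep the sketch self-contained — the real system has GPT-4o judge relevance rather than relying on plain keyword overlap:

```typescript
// Illustrative shape of the structured master profile. Field names are
// assumptions based on the description, not the project's actual schema.
interface Achievement { text: string; metrics?: string }
interface Role { title: string; company: string; achievements: Achievement[] }
interface MasterProfile { name: string; headline: string; roles: Role[]; skills: string[] }

// Naive stand-in for relevance selection: score each achievement by word
// overlap with the job description and surface the top k.
function topAchievements(profile: MasterProfile, jobDescription: string, k: number): Achievement[] {
  const jdWords = new Set(jobDescription.toLowerCase().split(/\W+/).filter(w => w.length > 3));
  const scored = profile.roles.flatMap(r => r.achievements).map(a => ({
    a,
    score: a.text.toLowerCase().split(/\W+/).filter(w => jdWords.has(w)).length,
  }));
  return scored.sort((x, y) => y.score - x.score).slice(0, k).map(s => s.a);
}
```

The point of the structure is that selection happens at the achievement level, not the resume level — a keyword matcher over a flat text dump couldn't make that distinction.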
Experience connection: Same data extraction pattern as LeadSnap (business card → structured CRM data) and Doc Reach (Google Places → structured website data). The principle: unstructured input → structured output → automated action. Applied it three times across three different domains — the architecture is the same, the domain changes.
Background queue, not synchronous processing
GPT-4o resume parsing takes 10-30 seconds; job matching and tailoring take even longer. Making users wait in a chat while the AI processes is terrible UX: the bot feels frozen.
Built a PostgreSQL-backed job queue. An upload enqueues a job, the bot immediately responds "Got it, processing...", and results arrive as chat messages when they're ready. Multiple jobs can be batched and processed in parallel, and the user can close Telegram and come back to completed results.
Experience connection: At ZYOD, managed IoT data from 700+ sewing machines — real-time data from hundreds of sources hitting the server simultaneously. Queue-based processing was the only architecture that scaled. Same principle at micro-scale: never block the user interface while backend processes.
Master resume as the source of truth
Before tailoring, the bot generates a comprehensive "master resume" PDF — every achievement, every metric, every skill extracted and organized. The user reviews this first: "Is this accurate? Is anything missing?"
This human-in-the-loop step is intentional. AI extracts well but sometimes misses context. The master resume becomes the verified foundation — all tailored versions derive from it. Fix it once, benefit everywhere.
Experience connection: At ZYOD, the digital twin was exactly this concept — a single source of truth that every system reads from. In manufacturing, one wrong data point cascades into wrong production decisions. Same with resumes: one wrong achievement description appears in every tailored application. Build the master right, derive everything from it.
Everything that shipped
Bot Features
- PDF resume upload + parsing
- GPT-4o structured extraction
- Master resume PDF generation
- Job description matching
- Tailored cover letters
- Customized resume highlights
- Batch processing
Architecture
- PostgreSQL job queue
- Background workers (PM2)
- Async result delivery
- PDF generation pipeline
- Structured data storage
- Error retry with backoff
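The retry-with-backoff item might look something like this — an exponential schedule with full jitter, where the base delay, cap, and attempt limit are illustrative placeholders rather than the values actually shipped:

```typescript
// Exponential backoff with full jitter for failed queue jobs.
// Constants are illustrative assumptions, not the shipped configuration.
const BASE_MS = 1_000;
const CAP_MS = 60_000;
const MAX_ATTEMPTS = 5;

// Delay before retry `attempt` (1-based); null once retries are exhausted.
function retryDelayMs(attempt: number, rand: () => number = Math.random): number | null {
  if (attempt > MAX_ATTEMPTS) return null;        // give up, mark the job failed
  const ceiling = Math.min(CAP_MS, BASE_MS * 2 ** (attempt - 1));
  return Math.floor(rand() * ceiling);            // "full jitter" variant
}
```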
Parsing Extracts
- Name + headline
- Experience (with sub-sections)
- Achievements + metrics
- Education
- Skills taxonomy
- Summary generation