Jobs
Filter our scraped LinkedIn jobs dataset by keyword, geo, company, salary, level, and date.
The Jobs surface reads from our own self-hosted LinkedIn jobs dataset — a 24/7 scraper continuously ingests public job postings and stores the full row (title, company, salary, description, geo, posted_at, etc.). No extension or signed-in session is involved; just an API key.
This is the cheapest, lowest-friction surface in LinkFetch — 1 credit per call, flat regardless of how many results come back.
Search jobs
The catch-all filter endpoint. Every parameter is optional — combine freely.
`/v1/jobs` (1 credit)

`q` is a full-text keyword filter. Use `location` (free-text) or `geo_id` (canonical, from `/v1/locations`) for place filters. `posted_within` (24h/week/month) is a shortcut; `posted_after`/`posted_before` give precise bounds (YYYY-MM-DD or ISO-8601). Results paginate via `limit` (max 50) and `offset`.
Filter ergonomics
- Geo: prefer `geo_id` (resolve via `/v1/locations/search`) for stable matching across cities. Fall back to `location` (free-text substring) when you only have a city name from a job description.
- Company: `company_id` is more reliable than `company` because it survives renames. Resolve via `/v1/companies/by-id/:id`.
- Date: `posted_within` (24h/week/month) is the easiest shortcut. Use `posted_after`/`posted_before` (YYYY-MM-DD or ISO-8601) when you need precise bounds.
- Pagination: `limit` 1–50, `offset` 0–N. The total result count is in `meta.total`, so you can plan paging without a probe call.
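Since `meta.total` arrives with the first page, the remaining pages can be planned up front instead of probing until an empty result. A minimal sketch; the `page_offsets` helper is illustrative, not part of any SDK:

```python
def page_offsets(total: int, limit: int = 50) -> list[int]:
    """Return the offset values needed to walk `total` results, `limit` at a time."""
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    return list(range(0, total, limit))

# A first call reporting meta.total == 120 with limit=50
# means three pages, at offsets 0, 50, and 100.
```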
Example — staff engineering jobs in NYC posted this week
```shell
curl "https://api.linkfetch.io/v1/jobs?\
q=staff%20engineer&\
geo_id=90000084&\
posted_within=week&\
level=mid_senior&\
limit=20" \
  -H "Authorization: Bearer sk_live_..."
```

Get job by ID
`/v1/jobs/{id}` (1 credit)
Returns the complete stored row — title, company, description (plain text + HTML), salary range, geo, industry, employment type, posted timestamps, Easy Apply flags, social meta. Single-call endpoint; no pagination.
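The same call from Python, sketched with only the standard library. The base URL and auth header follow the curl example above; `get_job` is a hypothetical helper, not an SDK function:

```python
import json
import urllib.request

API_BASE = "https://api.linkfetch.io"

def job_detail_url(job_id: str) -> str:
    """Build the detail endpoint URL for a job ID."""
    return f"{API_BASE}/v1/jobs/{job_id}"

def get_job(job_id: str, api_key: str) -> dict:
    """Fetch the full stored row for one job (1 credit)."""
    req = urllib.request.Request(
        job_detail_url(job_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```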
Get job by URL
`/v1/jobs/by-url` (1 credit)

Returns the full detail row when we've scraped the job; 404 otherwise.
Convenience over /v1/jobs/:id when you already have the link.
Accepts:
- `linkedin.com/jobs/view/<id>/`
- `linkedin.com/jobs/collections/.../currentJobId=<id>`
- a bare `urn:li:fsJobPosting:<id>`
- a bare numeric ID
If a job hasn't been scraped yet you get 404 — the scraper may not
have seen it, or LinkedIn may have closed the listing. Retry in a few
hours; the scraper's typical lag is under an hour from posting.
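Given the sub-hour scraper lag, a 404 here is often transient and worth retrying with backoff. A minimal pattern sketch; the `fetch` callable is injected so the logic stands alone, and the delays are illustrative (real schedules would be hours, not seconds):

```python
import time

def fetch_with_retry(fetch, attempts: int = 4, base_delay: float = 1.0):
    """Call `fetch()` until it returns a row, backing off exponentially on miss.

    `fetch` should return the parsed row dict, or None when the API returned 404.
    """
    for attempt in range(attempts):
        row = fetch()
        if row is not None:
            return row
        if attempt < attempts - 1:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return None
```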
Webhooks
Pro and Scale tiers can subscribe to `job.first_seen` events to skip polling — see the webhooks setup. The hook fires with the same row shape `GET /v1/jobs/:id` returns, plus a `first_seen_at` timestamp.
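A receiver only needs to read the row fields plus `first_seen_at`. Sketched below under the assumption that the event body is a JSON object shaped like the `GET /v1/jobs/:id` row; the `id` field name is an assumption, not confirmed by this page:

```python
import json

def handle_job_first_seen(raw_body: bytes) -> tuple[str, str]:
    """Parse a job.first_seen webhook body; return (job id, first_seen_at).

    Assumes the payload mirrors the GET /v1/jobs/:id row plus a
    first_seen_at timestamp, with the job ID under "id".
    """
    event = json.loads(raw_body)
    return event["id"], event["first_seen_at"]
```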
Notes
- 1 credit per call (search or detail), flat per request.
- Empty result sets return `data: []` and are not charged.
- The scraper backfills `posted_at` from the relative ("3 days ago") string LinkedIn renders, so older jobs may have a slightly fuzzy `posted_at`; `posted_at_precise` is set when the scrape captured an exact timestamp.