Jobs

Filter our scraped LinkedIn jobs dataset by keyword, geo, company, salary, level, and date.

The Jobs surface reads from our own self-hosted LinkedIn jobs dataset — a 24/7 scraper continuously ingests public job postings and stores the full row (title, company, salary, description, geo, posted_at, etc.). No extension or signed-in session is involved; just an API key.

This is the cheapest, lowest-friction surface in LinkFetch — 1 credit per call, flat regardless of how many results come back.

Search jobs

The catch-all filter endpoint. Every parameter is optional — combine freely.

Filter ergonomics

  • Geo: prefer geo_id (resolve via /v1/locations/search) for stable matching across cities. Fall back to location (free-text substring) when you only have a city name from a job description.
  • Company: company_id is more reliable than company because it survives renames. Resolve via /v1/companies/by-id/:id.
  • Date: posted_within (24h/week/month) is the easiest shortcut. Use posted_after/posted_before (YYYY-MM-DD or ISO-8601) when you need precise bounds.
  • Pagination: limit 1–50, offset 0–N. The total result count is in meta.total so you can plan paging without a probe call.
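Because `meta.total` comes back on the first page, you can precompute every remaining page up front instead of probing until an empty response. A minimal sketch (the `{"data": [...], "meta": {"total": N}}` response shape follows the fields this page documents):

```python
def plan_pages(total: int, limit: int = 50) -> list[dict]:
    """Given meta.total from the first response, return every
    limit/offset pair needed to fetch the full result set."""
    return [{"limit": limit, "offset": off} for off in range(0, total, limit)]

# e.g. meta.total == 137 at the max page size of 50 needs three calls:
pages = plan_pages(137)
# -> offsets 0, 50, 100
```

Since each search call costs 1 credit flat, `len(plan_pages(total))` is also the exact credit cost of paging through the full result set.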

Example — staff engineering jobs in NYC posted this week

curl "https://api.linkfetch.io/v1/jobs?\
q=staff%20engineer&\
geo_id=90000084&\
posted_within=week&\
level=mid_senior&\
limit=20" \
  -H "Authorization: Bearer sk_live_..."
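The same request can be composed programmatically; a minimal Python sketch using only the standard library (the API key is a placeholder, and the request is built but not sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {
    "q": "staff engineer",
    "geo_id": "90000084",
    "posted_within": "week",
    "level": "mid_senior",
    "limit": 20,
}
# urlencode escapes for you: spaces become '+', equivalent to the
# hand-written %20 in the curl example
url = "https://api.linkfetch.io/v1/jobs?" + urlencode(params)
req = Request(url, headers={"Authorization": "Bearer sk_live_..."})
# urllib.request.urlopen(req) would execute the call
```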

Get job by ID

GET /v1/jobs/{id} (1 credit)
Full job detail by numeric LinkedIn job ID.

Returns the complete stored row — title, company, description (plain + HTML), salary range, geo, industry, employment type, posted timestamps, Easy Apply flags, social meta. Single-call endpoint; no pagination.


Get job by URL

GET /v1/jobs/by-url (1 credit)
Resolve any LinkedIn job URL to our dataset row.



Convenience over /v1/jobs/:id when you already have the link. Accepts:

  • linkedin.com/jobs/view/<id>/
  • linkedin.com/jobs/collections/.../currentJobId=<id>
  • a bare urn:li:fsJobPosting:<id>
  • a bare numeric ID

If a job hasn't been scraped yet you get a 404: either the scraper hasn't seen it, or LinkedIn has closed the listing. Typical scrape lag is under an hour from posting, so retry in a few hours before concluding the listing was removed.

Webhooks

Pro and Scale tiers can subscribe to job.first_seen events to skip polling — see the webhooks setup. The hook fires with the same row shape GET /v1/jobs/:id returns, plus a first_seen_at timestamp.

Notes

  • 1 credit per call (search or detail), flat per request.
  • Empty result sets return data: [] and are not charged.
  • The scraper backfills posted_at from the relative ("3 days ago") string LinkedIn renders, so older jobs may have a slightly fuzzy posted_at; posted_at_precise is set when the scrape captured an exact timestamp.
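When both timestamp fields are present, prefer the precise one. A small sketch (assumes both fields arrive as ISO-8601 strings, per the date filters above):

```python
from datetime import datetime

def best_posted_at(job: dict) -> datetime:
    """Prefer the exact scrape-captured timestamp; fall back to the
    backfilled (fuzzy) posted_at."""
    raw = job.get("posted_at_precise") or job["posted_at"]
    # fromisoformat in older Pythons doesn't accept a trailing 'Z'
    return datetime.fromisoformat(raw.replace("Z", "+00:00"))
```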