This repository contains a LinkedIn jobs scraper written in Python that extracts job data and saves it in JSON files.
- Clone the repository to your local machine:

      git clone https://github.com/Ruy-Araujo/Linkedin-Jobs-Scraper

- Install the dependencies:

      cd Linkedin-Jobs-Scraper
      pip install -r requirements.txt
- Configure the exemple.env file: fill in the LINKEDIN_COOKIES and CSRF_TOKEN parameters with the values from the platform (see how to generate cookies and csrf-token below).

- Execute the main.py script, passing the search filters as arguments:

      python main.py --keywords 'Data Engineer' --location 'Canada' --pastdays 15
Parameters:

- keywords: Keywords to filter job listings.
- location: Location where job listings will be searched.
- pastdays: Number of days to search for job listings.
The scraper will extract job data from LinkedIn Jobs and save it in a JSON file in the project directory.
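The save step can be sketched as follows. This is a stdlib-only illustration, not the project's actual code; the record fields and the timestamped file-naming scheme are assumptions:

```python
import json
from datetime import datetime

def save_jobs(jobs, directory="."):
    """Write scraped job records (a list of dicts) to a timestamped JSON file."""
    filename = f"{directory}/jobs_{datetime.now():%Y%m%d_%H%M%S}.json"
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(jobs, f, ensure_ascii=False, indent=2)
    return filename

# Hypothetical example records:
jobs = [{"title": "Data Engineer", "company": "Acme", "location": "Canada"}]
path = save_jobs(jobs)
print(path)
```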
To generate the cookies and csrf-token:

- Access the LinkedIn Jobs website.

- Open the browser developer tools (F12) and go to the Network tab.

- In the Network tab, press CTRL+F and search for "csrf-token".

- Select any matching request; the "cookie" and "csrf-token" fields appear in the request headers.

- Rename the exemple.env file to .env and fill in the LINKEDIN_COOKIES and CSRF_TOKEN fields with the values obtained in the previous step. Example:

      LINKEDIN_COOKIES="your_cookies"
      CSRF_TOKEN=ajax:123456789
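Once the .env file is in place, those two values can be read and turned into request headers. The sketch below is stdlib-only (the project may well use a dotenv library instead) and writes a sample .env just so the example is self-contained:

```python
def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into a dict."""
    env = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env

# Write a sample .env for demonstration (normally created by renaming exemple.env):
with open(".env", "w", encoding="utf-8") as f:
    f.write('LINKEDIN_COOKIES="your_cookies"\nCSRF_TOKEN=ajax:123456789\n')

env = load_env()
# Headers to attach to requests against LinkedIn's endpoints:
headers = {
    "cookie": env["LINKEDIN_COOKIES"],
    "csrf-token": env["CSRF_TOKEN"],
}
print(headers["csrf-token"])  # → ajax:123456789
```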
The scraper uses the Scrapy framework to parse the HTML of the LinkedIn jobs page and extract information such as job title, company name, location, job description, and date of publication.
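The extraction idea can be illustrated without Scrapy using the standard library's HTML parser. The class names below are hypothetical, and LinkedIn's real markup differs; this is only a sketch of mapping page elements to job fields, not the project's Scrapy code:

```python
from html.parser import HTMLParser

class JobCardParser(HTMLParser):
    """Collect text from elements whose class matches a job field of interest."""

    # Hypothetical class-to-field mapping; LinkedIn's actual markup differs.
    FIELDS = {"job-title": "title", "company": "company", "location": "location"}

    def __init__(self):
        super().__init__()
        self.job = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        self._current = self.FIELDS.get(cls)

    def handle_data(self, data):
        if self._current:
            self.job[self._current] = data.strip()
            self._current = None

# Simplified stand-in for a job listing's HTML:
html = (
    '<div class="job-title">Data Engineer</div>'
    '<div class="company">Acme Corp</div>'
    '<div class="location">Toronto, Canada</div>'
)
parser = JobCardParser()
parser.feed(html)
print(parser.job)
```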
The raw data is available here.
If you want to contribute to this project, feel free to open an issue or send a pull request.