Web Scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
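At its core, such a script fetches markup and pulls out the structured fields the client cares about. A minimal, illustrative sketch (the markup, class names, and fields are invented for the example; a production job would typically use BeautifulSoup or Scrapy selectors rather than a regex):

```python
import re

# Invented sample markup standing in for a fetched product-listing page.
SAMPLE_HTML = """
<div class="product"><span class="name">Widget A</span><span class="price">$9.99</span></div>
<div class="product"><span class="name">Widget B</span><span class="price">$14.50</span></div>
"""

def extract_products(html: str) -> list[dict]:
    """Pull (name, price) pairs out of simple listing markup."""
    pattern = re.compile(
        r'<span class="name">(?P<name>[^<]+)</span>'
        r'<span class="price">(?P<price>[^<]+)</span>'
    )
    return [m.groupdict() for m in pattern.finditer(html)]

print(extract_products(SAMPLE_HTML))
```

The same idea scales up: replace the sample string with fetched pages and the pattern with robust selectors, and write the resulting dicts to CSV, JSON, or a database.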
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
From 361,704 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR A 40-HOUR WORK WEEK. WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-10 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY. DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK. DETAILS OF WHAT I NEED HELP WITH: I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: - Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal...
I am currently using Apify at $1.50 per 1,000 leads. I need this at scale (around 50k emails), so I need a cost-effective solution. Bid on this proposal and I shall DM you; I need to know the cost for: 1. Apollo emails 2. LinkedIn emails
I'm trying to run the attached JPNY script to get info from a website, but I can't understand why it doesn't work.
Hindi and Indonesian Safety Hardening and Safety Dataset - Annotation 1. Annotation Requirement Description This annotation task aims to construct safety datasets for Hindi and Indonesian through manual annotation. 1.1 Basic Task Information Task Summary: Annotate five types of raw data (sensitive words, text samples, image samples, "image-text" pairs, "video-text" pairs) in Hindi and Indonesian according to requirements. Deliverable Types and Formats: a. Sensitive Words: Words, phrases. Delivered in Excel and JSONL formats only. b. Text Samples: Sentences, paragraphs. Delivered in Excel and JSONL formats only. c. Image Samples: Images in JPG or PNG format, stored in folders. Deliver Excel, JSONL, and corresponding attachment folders. d. "Image-Text" Pairs...
I need a one-time, UK-wide scrape that captures every wedding-related business you can find across England, Wales, Scotland and Northern Ireland—no single directory limitations, so feel free to pull from any public site that meets the brief. Deliverable • A single Excel file containing the following columns: URL, Business Name, Full Address, Post Code, Telephone, and every email address that appears on the site (not just the first one you find). • The sheet should be neatly de-duplicated and ready for filter/sort. Business types to include • Wedding & Bridal Wear • Wedding Planners / Services • Wedding Cars, Horse & Carriages • Wedding Venues • Photographers & Videographers • Florists & Wedding Flowers •...
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to “Av...
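The core of a job like this is a polling loop that detects a status flip and fires a Telegram message. A minimal sketch, assuming a Telegram bot token and chat ID (both placeholders here) and leaving the site-specific scraping of the stock status to the implementer:

```python
import time
import urllib.parse
import urllib.request

BOT_TOKEN = "YOUR_BOT_TOKEN"   # placeholder: real Telegram bot token
CHAT_ID = "YOUR_CHAT_ID"       # placeholder: chat that receives alerts

def detect_flips(previous: dict, current: dict) -> list:
    """Return URLs whose status flipped from 'Out of Stock' to 'Available'
    between two polling runs (url -> status mappings)."""
    return [url for url, status in current.items()
            if status == "Available" and previous.get(url) == "Out of Stock"]

def notify(url: str) -> None:
    """Send a concise alert through the Telegram Bot API sendMessage endpoint."""
    text = urllib.parse.quote(f"Back in stock: {url}")
    api = (f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
           f"?chat_id={CHAT_ID}&text={text}")
    urllib.request.urlopen(api)

# Sketch of the schedule: scrape statuses, compare, alert, sleep, repeat.
# A real run would replace the hardcoded dicts with live scrape results.
```

The scheduling interval (minutes during shortages, hourly otherwise) can simply be a configurable `time.sleep` between polls, or a cron entry driving one run per invocation.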
I need a reliable script that can pull a complete database of Food & Beverage outlets in Singapore directly from Google (Maps or Search). The scope covers Restaurants, Cafes, Bars, Clubs and Bistros island-wide. For every venue scraped I must receive: • Name • Full address (street, unit, postal code) • Phone number • Type of outlet • Website • General Area. Deliverable in Excel. I need 4 tabs, one per region of Singapore; each tab covers the several districts within that region. Please ensure: • No duplicates • Accurate field separation (e.g., address split into distinct columns) • Script runs without paid APIs. Let me know your proposed method and approximate turnaround time, and feel free to highlight any previous scraping w...
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
I need a reliable scraper that monitors every basketball league listed on Bet365 (). The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull val...
I need help streamlining a small questionnaire that captures only open-ended answers. Respondents will be typing directly into a web form, and I simply want each answer stored and exported as clean, plain-text strings—no JSON, CSV, or additional metadata layers. Your task is to: • Set up the formatting logic so every submission is saved exactly as entered, preserving paragraph breaks but stripping any extra HTML or special characters the form might inject. • Provide a straightforward way for me to download or copy that text in bulk once the survey closes. If you prefer, a lightweight script or form-handler (PHP, Python, or JavaScript are all fine) that writes the responses into a flat .txt file or an equivalent plain-text store will meet the requirement. Please keep th...
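A lightweight sketch of the form-handler side, assuming the implementer wires it to whatever web form is in use: the cleaning step strips injected HTML while preserving paragraph breaks, and each submission is appended to a flat text file (the `---` separator is an illustrative choice, not part of the brief):

```python
import re

def clean_response(raw: str) -> str:
    """Strip any HTML tags the form injects, normalize line endings,
    and keep paragraph breaks exactly as the respondent typed them."""
    no_tags = re.sub(r"<[^>]+>", "", raw)
    return no_tags.replace("\r\n", "\n").strip()

def append_response(path: str, raw: str) -> None:
    """Append one cleaned submission to a flat .txt store."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(clean_response(raw) + "\n---\n")  # '---' separates submissions
```

Bulk download then reduces to handing over that single `.txt` file once the survey closes.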
I need a seasoned backend developer to design and implement a secure REST API that lets my users check award-seat availability (Avios) directly from Iberia.com. The core of the job is to automate the full search flow — login, query, filter, and return the results — while keeping the service fast and reliable. Authentication & security The service must issue and validate JWT tokens for every request beyond the public health-check route. Token refresh, revocation, and a simple role model (“user” vs. “admin”) should be built in from the start. Flight data extraction I do not have official Iberia developer access, so we will need to pull the data ourselves. I’m open to whichever tooling you are most comfortable with — BeautifulSoup, Sel...
I'm looking for an experienced freelancer to build a complete, low-maintenance web-based educational app that uses AI to suggest peptides for anti-aging and health issues (e.g., recovery, inflammation) based on public research. The app will include study-based dosage, cycle, and usage suggestions, plus an integrated cost-comparison tool similar to (aggregating prices from legal suppliers via affiliates or scraping). This is strictly for educational purposes—**no medical advice or promotion of unapproved substances**. The app must include strong disclaimers everywhere to comply with FDA regulations. **Project Goals:** - Create a freemium SaaS web app (MVP first, then scale). - Low overhead: Use no/low-code tools where possible. - Monetization: Subscriptions ($9–$29/...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
I need a small script or micro-service that calls an odds API once per day and extracts NBA player-prop markets—specifically all categories—for every nba game on the board. The job is only about player props; spreads, moneylines, and totals can be ignored. Here is what I expect: • Code (Python or Node preferred, but I’m flexible) that hits a public or paid odds endpoint, parses the daily response, and saves the three prop categories in a tidy JSON or CSV file. Excel preferably • A clear spot in the code where I can drop my own API key and set the run time (cron, Cloud Function, Lambda, etc.). • Basic logging so I can confirm the call succeeded and see any errors. • Quick README explaining setup and the output format. If the script runs co...
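The parsing half of such a micro-service is a small flattening step from the API's JSON into tidy rows. A sketch under stated assumptions: the response shape and field names below are invented, since the brief does not name a specific odds provider, and the API key / scheduling hook would sit where the comments indicate:

```python
import csv

# Invented sample standing in for one day's API response.
SAMPLE_RESPONSE = {
    "games": [{
        "home": "LAL", "away": "BOS",
        "props": [
            {"player": "Player A", "market": "points",
             "line": 25.5, "over": -110, "under": -110},
        ],
    }]
}

def flatten_props(payload: dict) -> list[dict]:
    """One row per player-prop market; spreads, moneylines and totals
    are simply absent from the 'props' list and thus ignored."""
    rows = []
    for game in payload["games"]:
        label = f'{game["away"]} @ {game["home"]}'
        for prop in game["props"]:
            rows.append({"game": label, **prop})
    return rows

def write_csv(rows: list[dict], path: str) -> None:
    """Save the daily pull; an .xlsx export could replace this step."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

# API_KEY = "..."  # clear spot for the client's own key
# Schedule the daily call via cron, Cloud Function, or Lambda.
```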
PDF to Excel Data Scraper Needed Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios) Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/Files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file. The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works. Scope of Work: Input: ~24 PDF Files (Monthly Factsheets). Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
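For a job like this the fragile part is cleaning what PDF table extraction returns: blank rows, `None` cells, stray whitespace. A sketch that separates that pure cleaning logic from the third-party extraction (the pdfplumber call requires `pip install pdfplumber` and is imported lazily so the cleaner stays usable without it; column names in the example are invented):

```python
def clean_holdings(table: list[list]) -> list[dict]:
    """Turn a raw extracted table (header row + data rows) into dicts,
    dropping the blank rows PDF extraction often produces."""
    header, *rows = table
    keys = [(h or "").strip() for h in header]
    return [
        dict(zip(keys, ((c or "").strip() for c in row)))
        for row in rows
        if any((c or "").strip() for c in row)
    ]

def extract_tables(pdf_path: str) -> list:
    """Pull every table from every page of one factsheet PDF.
    Requires `pip install pdfplumber`."""
    import pdfplumber  # third-party; imported lazily on purpose
    with pdfplumber.open(pdf_path) as pdf:
        return [t for page in pdf.pages for t in page.extract_tables()]
```

Running `extract_tables` over all ~24 factsheets and feeding each table through `clean_holdings` yields rows ready to consolidate into one Excel/CSV file.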
I’m expanding our Florida outreach list and need a reliable web-scraped data set of school, college, and university administrators who oversee Nursing or other Healthcare programs. You’ll pull the information directly from two source types only—official institution websites and reputable educational directories—so every entry must be traceable back to one of those pages. Here’s exactly what must land in the spreadsheet: • Institution name • Contact’s first and last name • Job title (Administrator, Director of Nursing, CTE Healthcare lead, etc.) • Verified email address • State (always Florida) Format & delivery – Send the file in Excel (.xlsx). – First progress drop: within 5 days so I can spot-c...
We want to do this in a consulting / facilitators / builders format in which we work with the facilitator / consultant / trainer for 3-6 hours a week for 3-6 months in order to help us collaboratively create various agents for our private equity business. The only billed time will be the time spent on the video call with our team, unless specifically approved otherwise. We want to be able to create a screen-scrape tool to average certain cost items of specific real estate projects. We also want to compare legal documents vs term sheets and Excel spreadsheets. Data sources • Company databases (SQL, flat files, Excel exports) • Dropbox (all our files are in Dropbox) • Extensive web scraping for competitor benchmarks and investment-market signals. If you have ideas for safely add...
I need a single WebExtension that runs in both Chrome and Firefox and turns our current manual workflow into a one-click process. Its core job is data collection—capturing information from pages we specify—while also handling the little chores my team repeats every day: filling forms, scraping targeted fields, and kicking off routine browser actions such as page refreshes or button clicks once certain conditions are met. The add-on must connect cleanly to three parts of our internal stack: • our CRM system (REST APIs already documented) • the project-management tool we use (webhook support available) • a central database for long-term storage (PostgreSQL) Please build with the standard WebExtension/Manifest V3 approach so we can maintain a single code...
I need a web-scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on the Indiegogo categorization system. Sub-category: The sub-category of the project based on the Indiegogo categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the target. Funding Target: The amount of funds targeted by the campaign initiator. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I am specifically looking to get the data using BS4 while bypassing the captcha. Here’s how I picture the flow: • Input: category URL(s) or product list I supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. S...
I need a seasoned Python developer to build a robust scraper that collects the required data and writes it straight to JSON—no additional cleaning or processing necessary. Once we begin I’ll provide the target URL(s) and any access details; for now, assume a standard public site with pagination and occasional anti-bot checks. Core expectations • Written in Python 3 using requests/BeautifulSoup or Scrapy; resort to Selenium only if there’s no lighter workaround. • Handles pagination, retries, and polite delays gracefully so the run can complete unattended. • Config file or clear constants for headers, cookies, and start URLs, letting me tweak targets without editing core logic. • Produces a single JSON file (or one file per page if that’s...
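The pagination / retries / polite-delays skeleton described above can be sketched with the standard library alone; the start URL, headers, and page count below are placeholder constants in exactly the "config at the top" style the brief asks for:

```python
import json
import time
import urllib.request

# Config block: tweak targets here without touching core logic.
START_URL = "https://example.com/items?page={page}"  # placeholder target
HEADERS = {"User-Agent": "polite-scraper/0.1"}
DELAY_SECONDS = 2
MAX_RETRIES = 3

def backoff_delays(retries: int = MAX_RETRIES, base: float = 1.0) -> list[float]:
    """Exponential backoff schedule between retry attempts: 1s, 2s, 4s, ..."""
    return [base * 2 ** i for i in range(retries)]

def fetch(url: str) -> str:
    """GET one page, retrying with backoff on transient network errors."""
    for delay in backoff_delays():
        try:
            req = urllib.request.Request(url, headers=HEADERS)
            return urllib.request.urlopen(req, timeout=30).read().decode("utf-8")
        except OSError:
            time.sleep(delay)
    raise RuntimeError(f"gave up on {url}")

def scrape(pages: int, out: str = "data.json") -> None:
    """Walk the paginated listing and dump raw results straight to JSON."""
    records = []
    for page in range(1, pages + 1):
        records.append({"page": page, "html": fetch(START_URL.format(page=page))})
        time.sleep(DELAY_SECONDS)  # polite delay so the run can go unattended
    with open(out, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
```

Parsing the fetched HTML into structured records would slot in where the raw `html` field is stored; Scrapy or requests/BeautifulSoup could replace the urllib plumbing without changing the shape.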
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know: • Target: a single e-commerce platform (URL supplied after award). • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Pytho...
Help wanted: daily/multi-daily comparison of supplier prices and stock levels (B2B webshop) Text: We operate a B2B webshop where business customers can place orders or commission items on request. Most of the goods are sourced directly from manufacturers. For most suppliers we have access to their stock levels and current prices; for some, no login is required, while others require login credentials. We are looking for a solution or a skilled professional who can help us retrieve supplier prices and stock levels daily — ideally multiple times per day — and compare them with our internal purchase prices so we stay up to date. No automatic syncing with our system or automatic price changes are required. It is sufficient if discrepancies between supplier prices and our system pu...
We are looking to hire an experienced freelancer for B2B contact data scraping using Apollo.io. Project Requirements Scrape contact data using Apollo filters provided by us Data must be extracted only after confirming filters are correct We will start with one state, and if the data quality is good, we will assign more states Data Fields Required Each contact must include: Full Name Job Title (Decision Makers only) Company Name Business Email (Verified) Phone Number / Mobile (where available) Company Revenue Location (City, State, Country) Company Website / LinkedIn Quality Expectations No dummy or generic emails No duplicate records Clean, structured, and fresh data Apollo-sourced data only Process We provide filters Freelancer applies filters and shares sample data ...
I need OpenClaw on my dedicated Mac with three core capabilities: Chrome automation: open websites, click elements, fill forms, extract structured snippets, and return results in WhatsApp. Coding/app workflows: generate code locally and optionally interact with web dev platforms when commanded. Deep research workflows: run multi-step web research, compare sources, and return concise findings with references. Security and reliability are mandatory: least privilege, approved-user-only WhatsApp commands, startup on boot, restart on crash, logs, and health check.
I need all the data: starting from , walk through every brand, open each handset page and capture the complete specification table exactly as shown. The end product I expect is: • A clean JSON file where every phone is an object containing every available field (model name, release date, dimensions, display, chipset, camera, battery, everything published on the spec sheet). Please make sure the scraper respects polite crawling rules, handles pagination and brand/model edge cases gracefully, and returns UTF-8 encoded text. If anything on the site blocks your way, minor waits or retries are fine. I will test the JSON data, and if it validates properly, the job is done.
I have a list of titles (the number depends on the search results; the last time I checked it was 250) currently tagged “In Production” on IMDbPro, and I need every line item turned into a clean, ready-to-filter spreadsheet. Because IMDbPro expressly forbids scraping, each record must be gathered by hand. Here is what I expect to see, each point in its own column: • Movie Title • Director(s) • Composer(s) – if any are listed • Music Supervisor(s) • Producer(s) • Producer contact details (email and/or phone whenever they appear) • Direct URL of the movie page • Cast. The workflow is straightforward: open the title, copy the details, paste them into the sheet, move on to the next film. Where information is missing on IMD...
There are around 20k reviews publicly available, so I can't scroll endlessly; I need you to scrape them for me and put them in a spreadsheet with filters from 1 star to 5 stars. The job is simple for a professional, so please be realistic with prices. Should you do this correctly and quickly, I will give you more leads to scrape. Thanks
I'm looking for a qualified freelancer to develop a bot that can navigate the Almaviva Egypt website just like a human would. The bot must be capable of completing three key tasks: - Filling out all necessary appointment-related information - Selecting the date and time of the appointment - Submitting the request for the appointment Considering the constraints of the website, I require a bot that can still function proficiently with a limited number of appointment slots. Moreover, it must be programmed to input login credentials. A crucial requirement is that it can bypass or solve captcha verifications, ensuring a smooth booking process. The essential skillset for this project comprises expertise in Python, as the bot should be developed in this language. Familiarity with web scra...
Hello, I am looking for a professional translator who can accurately and naturally translate Japanese content into English. The ideal candidate will have experience in translating business, technical, or creative content and can maintain the original tone and meaning while producing fluent, high-quality English text. Project Requirements: Translate Japanese text into clear, accurate, and natural English Maintain the original tone, style, and nuance of the Japanese content Ensure proper grammar, punctuation, and formatting Deliver translations on time and communicate proactively if there are any questions Qualifications: Native or near-native English proficiency Proven translation experience with samples or portfolio preferred Attention to detail and commitment to high-quality work Addi...
I need a clean, up-to-date mailing list focused exclusively on schools, daycares, camps, and churches located in my immediate area. After I award the project I will give you the exact city limits and surrounding ZIP codes to keep the search tight. For every entry I want the business or institution name, their direct email, a working phone number, and the mailing address. Accuracy matters more than volume—please verify that each record is current and remove any duplicates you find along the way. The finished file should arrive as an Excel or Google Sheet that I can easily sort, filter, and use to create mailing labels. If you already use tools such as LinkedIn Sales Navigator, Apollo, Hunter, or a similar scraper/validation service, let me know; anything that help...
I need a reliable way to pull data from Facebook Marketplace seller pages at scale. The target platform is Facebook; other marketplaces such as eBay, Amazon or Etsy are irrelevant for this job. Here’s what I’m after: when I paste one or many seller profile URLs into your script or small desktop app, it should crawl every public listing on those pages and export the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, ...
I am looking for an experienced developer with strong expertise in Python and web automation to build a smart system for monitoring ticket availability and event updates on the Webook platform. The system should focus on automation, notifications, and usability while following best technical and compliance practices. Scope of Work • Develop a Python-based automation system to monitor events and ticket availability. • Send real-time notifications when: • New events are published • New ticket batches become available • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to f...
For an upcoming market research study, I need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow fi...
I have a growing list of company names, and I need a small, reliable Python script that can: Automatically find each company’s career/jobs page where open positions are posted (pages may be built using HTML, JavaScript, or modern front-end frameworks) Navigate through all job listings, including: Pagination (page numbers, next/previous, etc.) “Load more” buttons Infinite scrolling Ability to fetch data from multiple pages (e.g., page 3, 4, or beyond) Apply job filters, especially location-based filtering, so that only job links for specific locations are collected Extract only individual job posting links after filters are applied Visit each job link and scrape complete job details, including: Job title Job description Location Employment type (if available) Department / ...
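Two of the trickier requirements above, location filtering and "Load more" exhaustion, can be sketched separately. The filter is pure Python; the loader is a Playwright fragment (requires `pip install playwright`, and the button selector is a hypothetical example, since every career site labels it differently):

```python
def filter_by_location(jobs: list[dict], wanted: set[str]) -> list[dict]:
    """Keep only postings whose location matches one of the wanted cities.
    Case-insensitive substring match, since sites format locations differently
    ('Bengaluru, India' vs 'Bengaluru (Hybrid)')."""
    wanted_lower = {w.lower() for w in wanted}
    return [j for j in jobs
            if any(w in j.get("location", "").lower() for w in wanted_lower)]

def load_all_jobs(page) -> None:
    """Playwright sketch: keep clicking 'Load more' until the button is gone.
    `page` is a playwright.sync_api.Page; the selector is an assumption."""
    while page.locator("button:has-text('Load more')").count() > 0:
        page.locator("button:has-text('Load more')").first.click()
        page.wait_for_load_state("networkidle")
```

Classic pagination and infinite scroll would each get an analogous loop (following "next" links, or scrolling until the listing count stops growing) before the filtered job links are visited one by one.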
I need help building my catalogue of automotive spare parts by pairing every OEM number I supply with a clean, high-resolution product photo and basic part information. The scope covers the full range of engine, suspension, and brake system components, so you'll be digging through manufacturer websites and trustworthy e-commerce listings until you find an image that is crisp, watermark-free, and matches the exact OEM reference. Once you locate a match, capture the part name exactly as it appears on the source page, copy the product-page link, download the image at its highest available resolution, and note everything in a structured Google Sheet. File naming should mirror the OEM numbers so that images and rows line up perfectly.

Deliverables
• A Google Sheet containing OEM num...
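OEM references often contain spaces or slashes that are unsafe in filenames, so the "file naming should mirror the OEM numbers" requirement usually needs a small sanitizer. A minimal sketch; the exact naming scheme here is an assumption, not something the brief specifies:

```python
import re

def oem_filename(oem: str, ext: str = "jpg") -> str:
    """Turn an OEM reference into a filesystem-safe image name so the
    downloaded file and the sheet row line up (scheme is illustrative)."""
    safe = re.sub(r"[^A-Za-z0-9._-]+", "_", oem.strip())
    return f"{safe}.{ext}"
```

Applying the same function when writing the sheet and when saving the image guarantees the two can never drift apart.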
Senior Automation Engineer for Traffic Simulation & Referrer Spoofing

I am looking for a specialized Automation/Growth Engineer to build a custom Traffic Orchestration System. The goal is to simulate "viral" traffic spikes to specific URLs to test search engine ranking signals.

The Technical Challenge: This is not a simple headless browser task. You must solve the problem of taking high-volume, raw human traffic (via Pop-Under/PPV APIs) and "cleaning" it through a Bridge/Redirect layer to spoof specific social referrers while maintaining session integrity.

Core Deliverables:
- Referrer Bridge: Build a script/server that receives raw hits and uses a "Double Meta Refresh" or similar logic to spoof (e.g., masking traffic to appear as if it's coming f...
I need a clean, freshly sourced list of 5,000–10,000 tech-startup contacts for a one-to-one outreach campaign promoting GrowthAI's free trial. Every record has to come from information that is already public (company websites, press pages, blog author bios, event directories, Crunchbase-style listings); never scraped LinkedIn data, leaked dumps, or anything that could be considered private.

What the sheet must contain
• Company name
• Website URL
• Contact name (when it's on the site)
• Public business email only (no personal Gmail/Yahoo unless the firm itself lists it as its main contact)
• Industry tag
• Country

Target profile
• Primary industry: Tech Startups
• Regions: North America, Europe, and ...
Please Read Carefully Before Applying

It does not matter whether you consider yourself a "vibe coder" or a traditional software engineer; we accept both here. What matters is whether you can make this system work reliably at scale.

We operate a production scraper that processes 500+ leaderboard sites per hour. All sites we scrape are leaderboards, but no two sites are the same. This is not a basic scraper.

What Makes This Scraper Different
The leaderboards we scrape vary heavily in structure and behavior:
• Dynamic buttons, tabs, and switchers
• JavaScript-rendered content
• Hybrid navigation (UI interaction + background API calls)
• Tables, card layouts, podium layouts, or combinations of all three
• Masked usernames and inconsistent rank formats
• Different ordering of wager / prize data ...
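When no two sites share a layout, scrapers of this kind typically separate extraction (per-site) from normalization (shared): each site contributes a small field map, and all raw rows are coerced into one canonical record. A sketch of that normalization layer, with field names and the money-cleaning rule as assumptions about what such a pipeline might standardize on:

```python
def normalize_row(raw: dict, field_map: dict) -> dict:
    """Map a site-specific row into a canonical record.

    `field_map` maps canonical field -> the site's own key, e.g.
    {"username": "user", "wager": "wagered", "prize": "won"}.
    """
    rec = {canon: raw.get(src) for canon, src in field_map.items()}
    # Coerce money strings like "$1,250" into numbers where possible
    for key in ("wager", "prize"):
        val = rec.get(key)
        if isinstance(val, str):
            cleaned = val.replace("$", "").replace(",", "")
            rec[key] = float(cleaned) if cleaned else None
    return rec
```

With this split, adding a new leaderboard means writing only its extractor and a field map; everything downstream of `normalize_row` stays untouched.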
I have an existing Flutter mobile app (Firebase backend + RevenueCat + web scraping). Most functionality is already implemented. I need an experienced Flutter developer to update and refine several features. I believe this should not take more than a few days for an experienced developer.

Scope of Work:
1. Sync Local Storage with Firestore (Offline Support)
   - Keep using local storage for offline mode
   - Sync shift data with Firebase Firestore
   - Handle offline → online auto sync
   - Prevent duplicates (unique shift ID)
   - Secure Firestore rules (user-only access)
   - Ensure cross-device sync works properly
2. Fix Email Verification (Spam Issue)
   - Configure Firebase Auth to use custom domain ()
   - Set up SPF, DKIM, DMARC
   - Improve email template
   - Ensure emails land in inbox (Gmail/Outlook)
   ...
I need a clean, well-structured extract of permit holder information from the WA State Labor & Industries online permit lookup (sometimes called the Permit Center). Whether you do a fully automated scrape or a manual pull is up to you; the key is accuracy and complete coverage.

Scope
• Visit the WA State L&I electrical permit lookup site and capture every record that appears in the public search results that:
  - Is for a generator or automatic transfer switch installation
  - Matches the license numbers that will be given to you
  - Falls within the timeframe given (5–6 years back)
• Extract only the permit holder-related fields (name, address, and any other holder-specific details that the site exposes).
• Return the ...
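Whether the pull is automated or manual, the in-scope test above is mechanical and worth encoding once so every captured record is checked the same way. An illustrative filter; the record keys are assumptions about what the lookup exposes, not its actual schema:

```python
from datetime import date

KEYWORDS = ("generator", "automatic transfer switch")

def in_scope(record: dict, today: date, years_back: int = 6) -> bool:
    """Keep only generator / transfer-switch permits inside the lookback
    window. `record` is assumed to carry a "description" string and an
    "issued" datetime.date."""
    desc = record.get("description", "").lower()
    if not any(k in desc for k in KEYWORDS):
        return False
    return (today - record["issued"]).days <= years_back * 365
```

The license-number match from the brief would be a third condition once those numbers are provided.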
The contractor is commissioned to download DRM-protected videos from an online portal to which the client has legitimate access and usage rights. The videos must be processed as follows:
- Download approximately 240 videos (about 18 hours of video material) from the portal
- The videos have an average length of approximately 5 minutes
- Original video titles must be preserved
- The videos must be organized into folders according to the portal's order/structure
- All files must be uploaded and stored on Google Drive
- The final folder structure on Google Drive must match the structure on the portal
I need a reliable specialist who can log into our dealership's backend every weekday, pull fresh customer information, and feed it straight into our call-tracking platform the same day. The only data I'm after are contact details and service records, nothing else, so the extraction script or manual process can stay laser-focused on those two fields for speed and accuracy.

Turnaround is critical. If you can set this up and have the first full export/import cycle running smoothly right away, I'm happy to add a rush bonus on top of the agreed rate. Accuracy must be spot-on, and the data has to land in the tracking system without duplicates or formatting hiccups.

Deliverables each weekday:
• Clean export of new customer contact details and service record...
I have an Excel template ready and a list of items I need populated with reliable, up-to-date product details. For every product on the list, please pull information only from official brand websites, leading eCommerce platforms, and the customer-review sections of those sites.

What I expect captured for each item:
• Current price and stock status
• Key features or technical specifications exactly as stated by the manufacturer or retailer
• Average customer rating plus any standout review insights (e.g., "4.5/5 from 230 reviews")

Accuracy matters more than speed, so cross-check conflicting figures before entering them. Add the source URL next to every data point so I can verify quickly. Once the sheet is complete, send it back in the same format...
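The cross-checking step above can be made systematic: accept a figure only when all consulted sources agree, and flag disagreements for manual review rather than guessing. A small sketch of that reconciliation rule (the return shape is an assumption, chosen for illustration):

```python
def reconcile(values: dict) -> tuple:
    """`values` maps source URL -> reported figure.

    Returns (figure, "ok") when every source agrees, otherwise
    (None, "conflict: ...") listing each source's value for review.
    """
    distinct = set(values.values())
    if len(distinct) == 1:
        return distinct.pop(), "ok"
    return None, "conflict: " + ", ".join(
        f"{src}={val}" for src, val in sorted(values.items()))
```

Flagged rows then get resolved by hand, with the winning source URL recorded next to the figure as the brief asks.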