Web Crawler Developer for Commercial and Industrial Equipment Pricing


About Us: We are seeking an experienced web crawler developer to build a system that continuously searches for and flags undervalued listings for commercial and industrial equipment relative to historical prices. The ideal candidate will have strong skills in data scraping, web crawling, and data processing, and ideally some experience with machine learning for analyzing historical pricing trends.

Responsibilities:
- Develop a web crawler that continuously searches for and updates undervalued prices for commercial and industrial equipment on websites such as Machinery Trader, Machinio, Ritchie Brothers, GovPlanet, AllSurplus, and Machinetools.com.
- Define and implement a method for determining "undervalued prices" based on both historical data and current market data.
- Compile and analyze historical pricing data for comparison with newly crawled data.
- Ensure the crawler adheres to each website's terms of service and robots.txt rules.
- Design and implement a data storage solution for the crawled data (e.g., a database, Excel, or Google Sheets; most likely a database).
- Develop a user-friendly dashboard, visualization interface, or report to present the data in a meaningful way.
- Integrate the crawler with other tools or services as required.
- Update data as close to real time as possible; daily updates are the minimum requirement.

Requirements:
- 3+ years of experience in web crawling and data scraping.
- Proficiency in programming languages such as Python or Node.js.
- Experience with web crawling libraries such as Scrapy, BeautifulSoup, or Cheerio.
- Strong understanding of data processing and analysis techniques.
- Familiarity with machine learning concepts for historical price trend analysis.
- Ability to work independently and manage project timelines.
- Excellent problem-solving skills and attention to detail.

Nice to Have:
- Experience with data visualization tools such as Tableau, Power BI, or D3.js.
- Knowledge of cloud services such as AWS or Google Cloud for hosting the crawler.
- Familiarity with CI/CD pipelines for automated deployment.

What We Offer:
- The opportunity to work on a challenging and impactful project.
- Flexible working hours and remote work options.
- A collaborative and dynamic team environment.

How to Apply: If you are passionate about web crawling and data analysis and have a proven track record, please apply.
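To illustrate the kind of logic the "undervalued prices" responsibility implies, here is a minimal Python sketch that flags a listing when its price falls a fixed percentage below the median of comparable historical sales. The function name, the median baseline, and the 20% threshold are all illustrative assumptions, not requirements stated in the posting; a real implementation would likely condition on equipment category, age, and condition.

```python
from statistics import median

def is_undervalued(current_price, historical_prices, discount=0.20):
    """Return True when `current_price` sits at least `discount`
    (default 20%) below the median of comparable historical prices.
    Both the median baseline and the 20% cutoff are illustrative
    choices, not figures from the job posting."""
    if not historical_prices:
        return False  # no baseline available, so we cannot judge
    baseline = median(historical_prices)
    return current_price <= baseline * (1 - discount)

# Hypothetical historical sale prices for a comparable machine
history = [42_000, 45_500, 39_900, 47_000, 44_250]
print(is_undervalued(33_000, history))  # prints True (well below median)
print(is_undervalued(43_000, history))  # prints False (near median)
```

In practice the threshold would be tuned per equipment category, and a model fitted on historical trends (as the posting's machine-learning requirement suggests) could replace the static median baseline.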



Google Cloud API Integration for Document Collection and Digitization

Project description: We are developing a system for collecting and processing documents online and are looking for an experienced freelance professional to integrate the Google Cloud API into our website. The goal is to enable automated digitization of documents and...

Experienced Databricks & ETL Expert Needed for Medallion Architecture Setup

We are looking for an experienced Databricks expert to help us quickly set up a Medallion architecture for our data pipeline. Our team is building a robust ETL framework and needs someone who can immediately assist with configuring Databricks, optimizing performance, an...

ERP Systems Developer

We are seeking a skilled ERP Systems Developer and Administrator with a strong foundation in Java development. This role focuses on developing and customizing ERP solutions such as Oracle-GL, Workday, or SAP to enhance business processes and user experiences. Qualifica...
