Big data is a wide topic, and clients across the world post many related jobs on Freelancer.com. Sign up today for a chance to earn extra income using the skills you possess.

Big data is a collective term for data sets so large and complex that they are difficult to process using traditional or on-hand data management applications. Big data is encountered in many areas, and this creates problems that can only be solved by professionals in data processing. If you can handle such data, you will find plenty of projects to work on at Freelancer.com.

Freelancer.com is a leading online freelance site that lets freelancers browse different big data projects and bid on those that interest them. Working on Freelancer.com gives you a range of big data projects to bid on and get paid for, a way to increase your income while putting your professional skills to use. Since many clients are searching for experts in big data processing, you will not miss a project that suits you and pays well for your services.

Hire Big Data Experts

    313 jobs found, pricing in USD

    I have a project relating to big data that is partially specified, and I want some guidance on completing the specifications. More details to be provided.

    $30 (Avg Bid)
    4 bids
    Computer Science paper work 6 days left
    VERIFIED

    Please know about Big Data and the tools/technologies that are used, such as Scala, Python, Hive, Dryad, Hadapt, Hadoop, NoSQL, MapReduce, Pig, HBase, and so on.

    $35 (Avg Bid)
    2 bids

    Breadth-first and depth-first search on graphs using Hadoop with MapReduce or Spark, and test the results by comparing them using a benchmark dataset.
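As a rough illustration of the logic this posting describes (not the required Hadoop/Spark deliverable), here is a minimal pure-Python sketch of level-synchronous breadth-first search; each frontier expansion corresponds to one MapReduce iteration or Spark superstep in the distributed version. All names are made up for the example.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand one full frontier per step.
    In a MapReduce/Spark port, each iteration of this loop would be
    one distributed pass over the current frontier."""
    dist = {source: 0}          # node -> BFS level (distance from source)
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in dist:           # first visit fixes the level
                    dist[v] = dist[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return dist
```

Comparing this against a benchmark dataset, as the posting asks, would mean running the same traversal under both engines and checking that the per-node levels match.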

    $58 (Avg Bid)
    4 bids

    Need Informatica Big Data Edition training.

    $473 (Avg Bid)
    3 bids

    A single source of truth for human talent

    $22 / hr (Avg Bid)
    8 bids

    Build your own Hadoop AMI, starting from the Amazon Linux AMI ([url removed, login to view]). You must use the latest stable Hadoop release. You are required to store this AMI in S3, and its name must include your last name. This AMI will be tested with the application built for task 2; however, if your AMI doesn't work, you are allowed to use one of the pre-built Hadoop AMIs for task 2. Write a Hadoop/YARN MapReduce application that takes as input the 50 Wikipedia web pages dedicated to the US states (we will provide these files for consistency) and: (1) computes how many times the words "education", "politics", "sports", and "agriculture" appear in each file, then outputs the number of states for which each of these words is dominant (i.e., appears more times than the other three words); (2) identifies all states that have the same ranking of these four words. For example, NY, NJ, and PA may have the ranking 1. Politics; 2. Sports; 3. Agriculture; 4. Education (meaning "politics" appears more times than "sports" in the Wikipedia file of the state, "sports" appears more times than "agriculture", etc.). INPUT FILE IS GIVEN - [url removed, login to view]
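For illustration only, here is a pure-Python sketch of the counting, dominance, and ranking logic this posting describes; the actual deliverable is a Hadoop/YARN MapReduce job, and all function names below are hypothetical.

```python
import re
from collections import Counter

KEYWORDS = ["education", "politics", "sports", "agriculture"]

def keyword_counts(text):
    """Map step: count each keyword's occurrences in one state's page."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w in KEYWORDS)
    return {k: counts.get(k, 0) for k in KEYWORDS}

def dominant_word(counts):
    """The keyword that appears more times than the other three."""
    return max(counts, key=counts.get)

def ranking(counts):
    """Keywords ordered from most to least frequent for one state."""
    return tuple(sorted(KEYWORDS, key=counts.get, reverse=True))

def group_by_ranking(pages):
    """Reduce step: group states that share the same keyword ranking.
    `pages` maps state name -> page text."""
    groups = {}
    for state, text in pages.items():
        groups.setdefault(ranking(keyword_counts(text)), []).append(state)
    return groups
```

In the real MapReduce job, `keyword_counts` would run in mappers keyed by file, and the ranking/grouping would happen in reducers.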

    $280 (Avg Bid)
    12 bids

    New: "cron job" text and image extraction from a website into a database, to be used in a search/display website. Every day [schedule], a "cron job" should run a search on 2-4 websites and categorize the information (text and images) into a database; this routine usually takes less than 30 seconds and very little processor capacity. The database is there to see, and the site as well (information via inbox). After we fix and create it, the second step will be a larger project to be released in the future. The system worked well in the past, and the freelancer needs to fix it properly. I'm looking for a programmer/professional who knows what he is doing.

    Important: this work must remain confidential at all times and cannot be used for marketing purposes or as a referral job for the freelancer to get other jobs (NDA).

    REQUIREMENTS
    - You must be proficient in English (reading, writing).
    - You will provide 24/7 support, fix bugs, and add features if needed for a certain time after the job is done.
    - If I don't get a response from you within a tolerable time, I will dispute the milestone and look for another professional to do the job.
    - I will expect you to send me samples as you progress, so that I can make sure everything is fine.
    - This is a very serious project that will be used responsibly, therefore I WILL ONLY RE-HIRE EXPERTS & RESPONSIBLE PEOPLE, SO PLEASE AVOID BIDDING IF YOU ARE NOT AN EXPERT ON THIS ISSUE.

    I will provide passwords and a sample once I get your offer. You cannot use my site as a showcase of your work for other clients; this is mandatory (NDA). The system should be easy for any other programmer to fix in the future if needed (best practices should be used). After finishing, a full backup should be provided. Thank you.

    $193 (Avg Bid)
    12 bids

    Chronicled is a leader in blockchain-based supply chain solutions. We are seeking someone who knows the ins and outs of digital marketing. Knowing a thing or two about the Chronicled platform will earn preference. Go through [url removed, login to view] to understand their business structure before applying for this job.

    $171 / hr (Avg Bid)
    17 bids

    The application should be multi-threaded, at least for the initial sync. The application must parse the **entire** bitcoin blockchain / chainstate db folder and extract all BTC addresses with balance > 0 in the fastest possible way. These addresses have to fill a List1 (BlockchainAdresses); they could be stored in a database to update later.

    A second list (MyBTCAdresses) needs to be filled. MyBTCAdresses should be read from an MSSQL or MySQL DB; take care, the table column will contain around 100 million datasets. This table column will be filled very fast from another app, so the count will keep growing.

    Once both lists are filled, they need to be compared, perhaps with [url removed, login to view] and GetHashCode, to compare as fast as possible. If BlockchainAdresses holds items from MyBTCAdresses, add these addresses to another List3. The BlockchainAdresses list and MyBTCAdresses need to be updated every few minutes. If the balance in BlockchainAdresses is 0, those addresses don't have to be in the list; if new addresses have balance > 0, add them to BlockchainAdresses. If the List3 count changes, do some dummy work with it.

    The app should be a Windows Forms app that keeps the user informed about the entire process. MSSQL, MySQL, or another DB could be used. Performance is very, very important: the app should handle 30 million BTC addresses and 100 million or more MyBTCAdresses as fast as possible and keep them up to date.

    Deliverable: a fully working Visual Studio project coded in C#. If you find something open source and can refactor it to meet the requirements, that is fine. There are quite a few different Git repos; if you use a Git project, be careful if the repo hasn't been committed to in a long time. I need the application to parse the whole blockchain, including any places where the structure/block size/etc. may have changed. If you have questions, let me know. I might be missing requirements or may have failed to address something. Happy to discuss higher-level questions or technical specifics.
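The posting asks for C#; purely as an illustration of the hashed-set comparison idea it mentions (the HashSet/GetHashCode approach), here is a Python sketch with hypothetical names.

```python
def addresses_to_watch(blockchain_addresses, my_addresses):
    """Hash-set membership makes the comparison O(n + m) on average,
    instead of O(n * m) for a naive nested-loop comparison."""
    funded = set(blockchain_addresses)   # addresses with balance > 0
    return [a for a in my_addresses if a in funded]

def apply_balance_updates(funded, delta):
    """Periodic refresh: drop addresses whose balance went to 0,
    add newly funded ones (balance > 0)."""
    for addr, balance in delta.items():
        if balance > 0:
            funded.add(addr)
        else:
            funded.discard(addr)
    return funded
```

In C#, the same shape would use a `HashSet<string>` for the funded set, which is what makes the "compare every few minutes" requirement feasible at 30M+ addresses.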

    $1049 (Avg Bid)
    15 bids

    We have CSV report files that need to be uploaded to BigQuery. All reports have the same layout, and each report's file name is the client name. They are automatically updated weekly. We need a script to automatically load the data into BigQuery whenever a CSV file changes. We would prefer to set it up using a script like [url removed, login to view], but are open to any comparable solution.
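As a sketch of the change-detection half of this task (the BigQuery load itself is stubbed out as a callback; names are illustrative), a cron-driven script could track file modification times:

```python
import os

def changed_files(paths, last_seen):
    """Return files whose modification time differs from what we last saw.
    `last_seen` maps path -> mtime and is updated in place; a cron job can
    call this each run and hand the changed files to the BigQuery loader."""
    changed = []
    for path in paths:
        mtime = os.path.getmtime(path)
        if last_seen.get(path) != mtime:
            last_seen[path] = mtime
            changed.append(path)
    return changed
```

Each changed file would then be passed to a load step, e.g. the google-cloud-bigquery client's CSV load job, with the table name derived from the file (client) name.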

    $233 (Avg Bid)
    12 bids
    Algorithms for big data work 2 days left
    VERIFIED

    I need you to help me with writing algorithms for big data work.

    $36 (Avg Bid)
    9 bids

    Need help building the foundational DB infrastructure (Hadoop?) for collecting millions of tweets and being able to query them for some analytics.

    $829 (Avg Bid)
    18 bids

    Looking for freelancers who have ready-made lists of US IT firms and consultancies where we can promote our online training courses through email marketing. We are looking for genuine freelancers who have a genuine list of US IT firms, consultancies, or recruitment firms that will be interested in online training in different technologies like Java, Salesforce, AWS, DevOps, BA, Hadoop, etc.

    $128 (Avg Bid)
    2 bids

    I need a solution architect to make a few slides on a Big Data design for telecom network operation data such as faults, events, trouble tickets (TT), performance, and so on, and on how to apply artificial intelligence to this data.

    $40 (Avg Bid)
    5 bids

    Video training on Big Data Hadoop. It will be a screen recording with voice-over. The recording will be approx. 8 hrs. It must cover Hadoop, MapReduce, HDFS, Spark, Pig, Hive, HBase, MongoDB, Cassandra, and Flume.

    $215 (Avg Bid)
    5 bids

    Hi Pradeep B., I noticed that you have COBOL in your profile. Would you be available to discuss a potential project?

    $271 (Avg Bid)
    5 bids

    Key Responsibilities
    As (Senior) Big Data Engineer / Developer you will be working closely with IT architects to elicit requirements, optimize system performance, and advance its technological foundation.
    - Manage a very large-scale, multi-tenant, secure, and highly available Hadoop infrastructure supporting rapid data growth for a wide spectrum of innovative internal customers
    - Provide architectural guidance, plan and estimate cluster capacity, and create roadmaps for Hadoop cluster deployment
    - Install Hadoop distributions, updates, patches, and version upgrades
    - Design, implement, and maintain enterprise-level security (Kerberos, LDAP/AD, Sentry, etc.)
    - Develop business-relevant applications in Spark, Spark Streaming, and Kafka using functional programming methods in Scala
    - Implement statistical methods and machine learning algorithms to be executed in Spark applications, automatically scheduled and running on top of the Big Data platform
    - Identify new components, functions, and features, and drive them from exploration to implementation
    - Create runbooks for troubleshooting, cluster recovery, and routine cluster maintenance
    - Troubleshoot Hadoop-related applications, components, and infrastructure issues at large scale
    - Design, configure, and manage the strategy and execution for backup and disaster recovery of big data
    - Provide 3rd-level support (DevOps) for business-critical applications and use cases
    - Evaluate and propose new tools and technologies to meet the needs of the global organization
    - Work closely with infrastructure, network, database, application, business intelligence, and data science units.
    Key Requirements, Skills and Experience
    - University degree in computer science, mathematics, business informatics, or another technical field of study
    - Deep expertise in distributed computing and the factors determining and affecting distributed system performance
    - Experience implementing Hadoop clusters in a large-scale environment, preferably including multi-tenancy and security with Kerberos
    - Excellent hands-on working experience with the Hadoop ecosystem for at least 2 years, including Apache Spark, Spark Streaming, Kafka, ZooKeeper, Job Tracker, HDFS, MapReduce, Impala, Hive, Oozie, Flume, and Sentry, but also with Oracle, MySQL, PSQL
    - Strong expertise in functional programming, object-oriented programming, and scripting, i.e. in Scala, Java, Ruby, Groovy, Python, R
    - Proficiency with IDEs (IntelliJ IDEA, Eclipse, etc.), build automation (Maven, etc.), and continuous integration tools (Jenkins, etc.)
    - Strong Linux skills; hands-on experience with enterprise-level Linux deployments as well as shell scripting (bash, tcsh, zsh)
    - Well versed in installing, upgrading, and managing distributions of Hadoop (CDH5x), Cloudera Manager, MapR, etc.
    - Hadoop cluster design, cluster configuration, server requirements, capacity scheduling, installation of services: name node, data node, ZooKeeper, job tracker, YARN, etc.
    - Hands-on experience with automation, virtualization, provisioning, configuration, and deployment technologies (Chef, Puppet, Ansible, OpenStack, VMware, Docker, etc.)
    - Experience working in an agile and international environment; excellent time-management skills
    - Excellent communication skills and a high level of motivation (self-starter)
    - Strong sense of ownership to independently drive a topic to resolution
    - Ability and willingness to go the extra mile and support the overall team
    - Business-fluent English in speech and writing; German is a plus.

    $941 - $1881
    0 bids

    Build a web scraper for [url removed, login to view]. Collect the predetermined available fields, and store or update the collected results in a SQL database.

    Input parameters: you can create/edit/manage multiple searches, and for each search you can specify (a) Make, (b) Model, (c) Starting Year, and (d) Ending Year.

    Specs: no scripting language restrictions.

    Scraping: search for Make/Model/Starting-to-Ending Year on the platform. The scraping script should take these 'search parameters' as input that I can change later; for example, I may want to run a scrape for all Toyota cars now but later change it to Honda cars. The scraper should use delayed and random timing for its URL calls (not too quick, not too predictable), or else the website may blacklist us.

    Data collection: collect 15+ common fields (confirm with me the list of fields you will be scraping before finalizing the script; I need to approve the field list). Data collection should run periodically (at a configurable interval, say once every day, every 6 hours, or whatever I configure). Each search result should be saved in a MySQL table. With every subsequent run, the same data should not be updated in the database.

    Database: store the collected results in an authenticated SQL database.
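The delayed-and-random timing requirement above can be sketched in a few lines of Python (illustrative only; `fetch` stands in for the real HTTP call, e.g. requests.get):

```python
import random
import time

def polite_get(url, fetch, min_delay=2.0, max_delay=8.0):
    """Sleep a random interval before each request so call timing is
    neither too quick nor predictable, which helps avoid blacklisting.
    `fetch` is the actual HTTP call, injected so the delay policy can
    be tested separately from network I/O."""
    time.sleep(random.uniform(min_delay, max_delay))
    return fetch(url)
```

Jittered delays like this (rather than a fixed sleep) are the standard way to make periodic scraping traffic look less mechanical.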

    $218 (Avg Bid)
    35 bids

    I need you to read a few references and a book about databases, and use them as references to write a 10-page research proposal.

    $135 (Avg Bid)
    51 bids

    Hi Herlon N., I noticed your profile and would like to offer you my project. We can discuss any details over chat.

    $10 / hr (Avg Bid)
    3 bids
