AWS project related to data
$8-15 USD / hour
1) Perform ETL with S3 (HDFS file) as source and Redshift as target
Technical environment: EC2, EMR, PySpark & Hive
2) Perform ETL across S3 and an AWS NoSQL database using Glue
3) One sample Python AWS Lambda function to run AWS Redshift SQL scripts
4) One sample of real-time streaming (preferably Kafka (producer) + PySpark (consumer))
5) Executing bash commands (jobs) via Airflow (or an AWS equivalent)
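Task 1 above could be sketched roughly as follows, assuming a CSV file on S3 and a JDBC write into Redshift from an EMR PySpark job. The bucket, cluster endpoint, table, and credentials are placeholders, not details from the posting:

```python
# Sketch: copy a file from S3 into Redshift with PySpark (run on EMR).
# All bucket/cluster/table names below are placeholders.

def redshift_jdbc_url(host: str, port: int, database: str) -> str:
    """Build the JDBC URL Redshift expects."""
    return f"jdbc:redshift://{host}:{port}/{database}"

def run_etl():
    # Imported here so the helper above stays usable without PySpark installed.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("s3-to-redshift")
             .enableHiveSupport()  # lets the job also read Hive tables on EMR
             .getOrCreate())

    # Source: a file on S3 (EMR exposes S3 through s3:// URIs).
    df = spark.read.option("header", "true").csv("s3://my-bucket/input/data.csv")

    # Minimal transform: drop rows with a null key.
    df = df.dropna(subset=["id"])

    # Target: Redshift over JDBC (Redshift JDBC driver must be on the classpath).
    (df.write
       .format("jdbc")
       .option("url", redshift_jdbc_url("my-cluster.example.redshift.amazonaws.com", 5439, "dev"))
       .option("dbtable", "public.target_table")
       .option("user", "awsuser")
       .option("password", "...")  # use Secrets Manager in practice
       .option("driver", "com.amazon.redshift.jdbc42.Driver")
       .mode("append")
       .save())

if __name__ == "__main__":
    run_etl()
```

In practice the dedicated spark-redshift connector (which stages data through S3) is usually faster than plain JDBC for large loads.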
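For task 2, a Glue job script moving JSON from S3 into DynamoDB might look like the sketch below. The S3 path and DynamoDB table name are placeholder assumptions:

```python
# Sketch of a Glue job script: S3 (JSON) -> DynamoDB. Names are placeholders.

def dynamodb_sink_options(table_name: str, write_percent: str = "0.5") -> dict:
    """Connection options for Glue's DynamoDB writer."""
    return {
        "dynamodb.output.tableName": table_name,
        "dynamodb.throughput.write.percent": write_percent,
    }

def main():
    # Glue-only imports kept inside main() so this file parses anywhere.
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw JSON from S3 as a DynamicFrame.
    source = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://my-bucket/raw/"]},
        format="json",
    )

    # Write the frame straight into DynamoDB.
    glue_context.write_dynamic_frame.from_options(
        frame=source,
        connection_type="dynamodb",
        connection_options=dynamodb_sink_options("my_ddb_table"),
    )
    job.commit()

if __name__ == "__main__":
    main()
```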
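Task 3 maps naturally onto the Redshift Data API, which lets a Lambda function submit SQL without a JDBC driver or persistent connection. Cluster, database, and user names below are placeholders:

```python
# Sketch of a Lambda handler that runs SQL on Redshift via the redshift-data API.
# ClusterIdentifier/Database/DbUser values are placeholders.

def statement_params(sql: str, cluster: str, database: str, db_user: str) -> dict:
    """Arguments for the redshift-data execute_statement call."""
    return {
        "ClusterIdentifier": cluster,
        "Database": database,
        "DbUser": db_user,
        "Sql": sql,
    }

def lambda_handler(event, context):
    import boto3  # bundled in the AWS Lambda Python runtime

    client = boto3.client("redshift-data")
    # execute_statement is asynchronous; poll describe_statement for completion.
    resp = client.execute_statement(
        **statement_params(
            sql=event.get("sql", "SELECT 1"),
            cluster="my-redshift-cluster",
            database="dev",
            db_user="awsuser",
        )
    )
    return {"statement_id": resp["Id"]}
```

The function's IAM role needs `redshift-data:ExecuteStatement` plus permission to get temporary cluster credentials for the given `DbUser`.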
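Task 4 has two halves; a minimal sketch using kafka-python for the producer and Spark Structured Streaming for the consumer is shown below. The topic name and broker address are placeholders, and the consumer assumes the spark-sql-kafka package is on the classpath:

```python
# Sketch: Kafka producer + PySpark Structured Streaming consumer.
# Topic ("events") and broker ("localhost:9092") are placeholders.

import json

def encode_event(event: dict) -> bytes:
    """Serialize one event for Kafka."""
    return json.dumps(event).encode("utf-8")

def produce():
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(bootstrap_servers="localhost:9092",
                             value_serializer=encode_event)
    producer.send("events", {"user": "u1", "action": "click"})
    producer.flush()

def consume():
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # Launch with --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<version>
    spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()
    stream = (spark.readStream
                   .format("kafka")
                   .option("kafka.bootstrap.servers", "localhost:9092")
                   .option("subscribe", "events")
                   .load())
    # Kafka values arrive as bytes; cast to string and print each micro-batch.
    query = (stream.select(col("value").cast("string"))
                   .writeStream
                   .format("console")
                   .start())
    query.awaitTermination()
```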
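For task 5, a small Airflow DAG can chain bash jobs; on AWS, MWAA runs the same DAG code, and Step Functions is the rough managed equivalent. The script paths are placeholders:

```python
# Sketch of an Airflow DAG that runs two bash jobs in sequence.
# The /opt/jobs/*.sh paths are placeholders.

def bash_jobs() -> dict:
    """task_id -> shell command (order matters: extract before load)."""
    return {
        "extract": "bash /opt/jobs/extract.sh",
        "load": "bash /opt/jobs/load.sh",
    }

def build_dag():
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(dag_id="bash_jobs",
             start_date=datetime(2021, 1, 1),
             schedule_interval="@daily",
             catchup=False) as dag:
        tasks = [BashOperator(task_id=task_id, bash_command=cmd)
                 for task_id, cmd in bash_jobs().items()]
        tasks[0] >> tasks[1]  # extract >> load
    return dag

# Airflow discovers DAG objects at module level; the guard keeps this file
# importable in environments where Airflow is not installed.
try:
    dag = build_dag()
except ImportError:
    pass
```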
Need a walkthrough of the above on an urgent basis.
Even if you can solve only some of the points, that is fine.
Project ID: #27922225
About the project
11 freelancers are bidding on average $14/hour for this job
Hello, I am a professional with 15 years of hands-on experience working with AWS. I am interested in working on your task. Let's discuss.
Hi, I have mainly worked with AWS, S3, Hadoop, PySpark, Hive, and shell scripting. I have limited knowledge of NoSQL and Kafka, but I can work on these technology pieces. I would like to work on your project. Please reply ba…
I am a certified AWS data engineer. AWS services: AWS Lambda, AWS Glue, AWS Step Functions, AWS SageMaker, AWS Lake Formation, AWS Athena, AWS Redshift, AWS DynamoDB. Programming languages: Python, PySpark, SQL. I can surely h…
Hi, I'm an AWS data engineer with 2+ years of experience. I have mainly been working with AWS Glue for performing ETL, with S3 as a data source. I have also worked with the following services: EMR, EC2, Lamb…
I have over 2 years of solid experience designing and developing ETL solutions using AWS services such as DMS, Glue, Lambda, Athena, and Redshift.
Hello, I am a professional AWS cloud and Python big-data developer with 6+ years of experience. I have worked on multiple AWS services so far and have good hands-on as well as architectural design knowledge of A…
I am a big-data engineer with extensive experience in tools such as Spark (PySpark), Hadoop, YARN, Hive, Kafka, Python, and shell scripting. I have good experience designing and implementing ETL pipelines using PySpark wit…