Posted: 25 Oct
Company: Recro
Location: Anand
Job Title: AWS Data Engineer
Work Mode: Remote
Experience: 3+ years
We are seeking an AWS Data Engineer with strong hands-on experience building data pipelines and data assets. The ideal candidate has deep expertise in PySpark, Python, SQL, and AWS services (Glue, S3, Redshift). The role involves code optimization, architecture design, and upholding best practices for scalability and performance.
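To make the stack concrete, here is a minimal sketch of the kind of pipeline this role describes: extract raw data from S3, transform it with PySpark, and write curated output back to S3. All bucket names, columns, and paths are hypothetical, and a Redshift load would typically follow via the Glue Redshift connector or a COPY from S3.

from pyspark.sql import SparkSession, functions as F

# Minimal sketch; bucket names, columns, and paths are hypothetical.
# Assumes the cluster has IAM-based access to S3.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Extract: raw CSV files landed in S3
orders = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: cast the amount column, keep completed orders,
# and aggregate revenue per day
daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write curated Parquet back to S3; loading into Redshift
# would typically go through the Glue connector or a COPY command
daily_revenue.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)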
Key Responsibilities:
● Design, develop, and optimize data pipelines and data assets using PySpark and Python.
● Perform code optimization using Spark SQL and PySpark for enhanced performance.
● Work closely with AWS services, especially S3, Glue, Redshift, Lambda, and EC2, and explain the benefits of each.
● Refactor legacy codebases for better readability and maintainability.
● Apply unit testing and Test-Driven Development (TDD) practices to ensure code quality.
● Debug and fix complex bugs, addressing performance, concurrency, or logic issues.
Mandatory Skills:
● Strong hands-on experience with PySpark and Python.
● Strong expertise in SQL and experience in Spark SQL for data optimization.
● AWS services knowledge, particularly in Glue, S3, Redshift, and Lambda.
● Proficiency in code versioning tools (Git) and artifact repositories like JFrog Artifactory.
● Code refactoring and modernization skills for maintaining code quality.
● Experience with Unit Testing and TDD.
Nice to Have:
● Familiarity with other AWS services like EC2 and CloudFormation.
● Experience with Boto3 for AWS automation and integration (see the sketch after this list).
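For the Boto3 item above, a minimal sketch of the kind of automation meant: starting an AWS Glue job run and checking its state. The job name and region are hypothetical, and AWS credentials are assumed to be configured in the environment.

import boto3

# Hypothetical job name and region; assumes AWS credentials are
# already configured (environment variables, profile, or IAM role)
glue = boto3.client("glue", region_name="us-east-1")

# Start a Glue job run and fetch its current state once
run = glue.start_job_run(JobName="example-etl-job")
status = glue.get_job_run(JobName="example-etl-job", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED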
Note: We are looking for candidates who can join immediately or are currently serving their notice period.