The Rapid and Rural Logistics (R2L) org is seeking an exemplary Data Engineer with broad technical skills to develop, own, and support pipelines that back business analytics, to build tools that facilitate orchestration, and to produce automation scripts that optimize infrastructure. We look for candidates who are excellent communicators, self-motivated, flexible, hardworking, and who like to have fun.

This role is on a large analytical team that supports a wide range of businesses, including Sub-Same-Day and Rural Super Rural. It offers exposure to a broad scope that can help shape the future of operational fulfillment, and it promotes career progression.

We are looking for someone with experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, and Lambda; experience working independently on and completing end-to-end projects; and knowledge of professional software engineering best practices across the full software development life cycle, including coding standards, software architecture, code reviews, source control management, continuous deployment, testing, and operational excellence.

Key job responsibilities
Main responsibilities of this role include but are not limited to:

- Manage and grow database infrastructure
- Develop automation solutions using scripting and programming languages
- Support analytical research and provide recommendations for business challenges
- Use best practices for data modeling, ETL/ELT procedures, SQL, Redshift, and OLAP technologies to implement data structures
- Apply at least one modern scripting or programming language, such as Python, TypeScript, or Scala
- Work with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Collect and convert functional and business requirements into solutions that are operable, scalable, and well-suited to the overall data architecture
- Determine best practices for creating data lineage from a range of data sources by analyzing source data systems
- Engage in all phases of the development life cycle, including design, implementation, testing, delivery, documentation, support, and maintenance
- Generate complete, reusable metadata and dataset documentation

BASIC QUALIFICATIONS

- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Experience mentoring team members on best practices

PREFERRED QUALIFICATIONS

- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $139,100/year in our lowest geographic market up to $240,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.