AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them.

As the Software Development Manager for the Tools Team, you will be responsible for leading a talented team of engineers to develop and maintain high-performance monitoring and profiling tools for machine learning applications and AI accelerators. You will oversee the design, development, and deployment of the Neuron Profiler and other Neuron Tools. The profiler plays a crucial role for internal and external customers in optimizing AI workloads across hardware platforms such as Trainium and Inferentia devices by providing deep insights into performance bottlenecks and system behavior.

In this role, you will manage the full development life cycle of the Neuron Profiler/Tools toolchain, ensuring scalability, reliability, and usability. You will collaborate with cross-functional teams to ensure that our C++ compiler and runtime generate the key information customers need to understand and optimize the performance of our custom hardware. Additionally, you will drive innovations that allow the profiler to support multiple frameworks, such as PyTorch, TensorFlow, and XLA.

A successful candidate will have an established background in building AI/ML and performance analysis tools. Experience with ML-specific profiling tools (such as PyTorch Profiler or TensorFlow Profiler) is highly desirable, along with direct customer-facing experience and a strong motivation to achieve results.


A day in the life
You will work with executive leadership and other senior management and technical leaders to define product directions and deliver them to customers. We build massive-scale distributed training and inference solutions. This organization builds the full stack of software, servers, and chips to accelerate machine learning at the highest scale.

About the team
Inclusive Team Culture

Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance

Our team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.

BASIC QUALIFICATIONS

- 3+ years of engineering team management experience
- 7+ years of working directly within engineering teams experience
- 3+ years of designing or architecting (design patterns, reliability and scaling) of new and existing systems experience
- Experience partnering with product or program management teams
- Experience in C++, Go, and Python

PREFERRED QUALIFICATIONS

- 2+ years of experience leading teams in machine learning development, including building and training large models with PyTorch and/or TensorFlow using large distributed fleets of GPUs or other accelerated systems
- Experience with Linux distributions such as Ubuntu or CentOS, kernel development, and tooling such as perf and gdb
- Experience with performance profiling, tracing, and analysis of AI training/inference applications
- Experience with large-scale, distributed AI training/inference applications, including libfabric, MPI, Slurm, and EKS
- Experience with fleet monitoring, debugging, and reliability
- Knowledge of AI-powered optimization suggestions for profiling would be an advantage for this position

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $166,400/year in our lowest geographic market up to $287,700/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.