What Autodesk Has to Offer:
Autodesk makes the software and tools that help people imagine, design, and make a better world. If you've ever driven a high-performance car, admired a towering skyscraper, used a smartphone, or watched a great film, chances are you've experienced what millions of Autodesk customers are doing with its software. Autodesk offers its employees benefits like:
Job details
Job Requisition ID #
Position Overview
The Applied AI team in Autodesk’s Data and Process Management (DPM) organization ships cloud-native services that power AI agents and AI-driven workflows, making our Product Data Management (PDM) and Product Lifecycle Management (PLM) processes smarter and easier.
We’re hiring a QA/SDET who is automation-first and production-minded. You’ll start by owning quality for cloud services and AI-integrated features, then grow into owning our AI evaluation pipelines (built on internal frameworks and Opik) over the next 2–3 quarters.
You don’t need to be an ML expert on day one—but you do need strong software engineering fundamentals, comfort working with distributed systems, and curiosity to learn AI-specific quality evaluation patterns and tools.
Responsibilities
• Build and maintain automated tests for cloud-native services: API/contract tests and end-to-end workflow tests
• Validate non-functional requirements: performance, resiliency/failure modes, multi-tenant behavior, and observability-driven debugging (logs/metrics/traces)
• Partner with engineers and PMs to define acceptance criteria and quality gates for releases
• Develop and maintain scenario-based regression suites for AI-integrated workflows (multi-step tasks, tool calls, retrieval-backed behaviors)
• Build and operationalize evaluation pipelines using internal frameworks and evaluation tools like Opik
• Curate and maintain “golden” datasets (test cases, expected behaviors, labels/metadata)
• Automate agent evaluation runs (CI, scheduled runs, and/or sampled runtime evaluation)
• Publish results to dashboards and establish alerting for failures/regressions
• Security-aware testing for AI surfaces: include abuse cases (e.g., prompt-injection style attempts, unsafe tool execution paths, sensitive-data leakage checks) and verify guardrails/controls
• AI-assisted delivery: Use AI coding agents to accelerate delivery of tests and automations
Minimum Qualifications
• Bachelor’s or Master’s degree in Computer Science, Software Engineering, or equivalent practical experience
• 4+ years of experience as a QA Engineer, SDET, or Software Engineer with substantial test automation ownership
• Strong programming skills in Python and/or TypeScript/Java; you write maintainable automation code, not just scripts
• Experience testing cloud-native distributed systems (REST/GraphQL APIs, async workflows, service-to-service integrations)
• Proven verification habits: test design, CI hygiene, disciplined incremental delivery, and strong debugging skills
• Comfort operating production-like systems: reading telemetry, reproducing issues, triaging failures, and driving fixes with engineers
• Strong communication: you can document test strategy, influence quality gates, and collaborate cross-functionally
• Experience with testing AI-integrated systems in production (any of):
• LLM feature regression testing, prompt/version change validation
• RAG-style workflows (retrieval quality checks, grounding/citation checks, data freshness)
• Tool-use / agentic workflows (validating tool-call sequences and failure recovery paths)
• Demonstrated experience using AI coding tools to develop tests for production systems, and the engineering judgment to verify and correct AI output (code review rigor, debugging skill, ownership of correctness)
Preferred Qualifications
• Familiarity with evaluation tooling (Opik, Langfuse, or similar), dataset versioning practices, and automated evaluation runs
• Experience with performance testing and resiliency patterns (rate limiting, retries/idempotency validation, chaos/fault testing)
• Security-minded testing experience, especially for systems that integrate external tools/data sources
#LI-KS2
Learn More
About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
We take great pride in our culture here at Autodesk – it’s at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
When you’re an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
Salary transparency
Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging
Are you an existing contractor or consultant with Autodesk?
Please search for open jobs and apply internally (not on this external site).