ABOUT ASTRUS

📍 Location: Toronto or Waterloo, Canada


At Astrus, we are using AI to automate microchip design, starting with the biggest bottleneck: analog layout. Our mission is to radically improve global computation and empower chip designers to create the world's most advanced microchips with AI. Astrus is backed by top-tier VC firms: Khosla Ventures, HOF Capital, and 1517 Fund.

THE MISSION

Astrus is building a physics-aware foundation model for analog chip design, and this role sits exactly where research turns into real production systems.

As a Research Engineer, you won’t just support research — you’ll help shape it. You’ll work side-by-side with AI researchers to turn promising ideas into systems that actually run: fast, reliable, scalable, and deployable inside a real product.

This is a role for engineers who like operating in the messy middle — where models are evolving, assumptions are fluid, and the work is to turn something experimental into something that works every time.

You do not need prior chip design experience. The domain is complex, but you’ll learn it as you go. What matters is your ability to build strong systems, reason about performance, and collaborate closely with researchers to make ideas real.

You’ll be working across AI systems, distributed infrastructure, and production engineering, building the bridge between experimentation and deployment.

If you’re excited by taking cutting-edge ideas and turning them into robust systems that ship, this role is for you.

WHAT YOU'LL OWN

  • The bridge from research to production. Work directly with AI researchers to turn evolving ideas into reliable, production-ready systems.

  • AI system design and iteration. Contribute to how models are structured, executed, and scaled — not just how they’re implemented.

  • Performance and scalability. Identify and resolve bottlenecks across GPU and distributed workloads, improving throughput, latency, and cost.

  • Productionizing research code. Take experimental systems and make them robust: testable, maintainable, observable, and dependable.

  • Training and inference pipelines. Build and evolve systems for data generation, training, evaluation, and deployment.

  • Deployment and packaging. Design workflows that turn models into hardened, production-ready inference containers integrated into the Astrus product.

  • Engineering standards for AI systems. Raise the bar on how research systems are built by introducing better tooling, abstractions, and practices.

WHAT MAKES THIS ROLE DIFFERENT

  • You shape the work, not just implement it. You’re part of the thinking process — challenging ideas, proposing alternatives, and improving system design.

  • You operate across the full lifecycle. From early experiments to production deployment, you own the journey.

  • You’ll work on real performance problems. Multi-GPU systems, distributed execution, and large-scale workloads create non-trivial engineering challenges.

  • You’re not expected to know the domain upfront. Strong systems and engineering instincts matter more than prior semiconductor experience.

WHAT YOU BRING

Core Experience

  • Strong software engineering fundamentals and experience building production-grade systems.

  • Proven experience with Python in AI, distributed, or performance-sensitive environments.

  • Hands-on experience with modern ML frameworks such as JAX, PyTorch, or TensorFlow, and distributed tooling such as Ray.

  • Experience working closely with researchers or applied scientists to move ideas into production systems.

  • Familiarity with at least some of the techniques we use (e.g., neural networks, search, reinforcement learning).

  • A track record of improving performance, reliability, and maintainability in AI or data-intensive systems.

  • Experience working with AWS and infrastructure as code (Terraform in our stack).

  • Practical experience with GPU-based systems, including multi-GPU environments and debugging performance or memory issues.

  • Strong judgment and initiative — you identify problems, propose better approaches, and drive improvements forward.

Bonus

  • Experience packaging and deploying containerized inference workloads

  • Experience with observability, experiment tooling, orchestration, or evaluation systems

  • Experience optimizing distributed workloads for throughput, latency, and cost

  • Familiarity with Rust or C++ for performance-critical paths

  • Interest in complex technical systems or computational design problems

WHY THIS ROLE IS EXCITING

You’ll be working at the point where ideas become real systems — taking cutting-edge AI work and turning it into something that can run reliably at scale inside a product.

This is a role for engineers who care about:

  • Bridging research and production

  • Solving hard performance and scaling problems

  • Improving how AI systems are built, not just using them

  • Working on technically deep systems without needing prior domain expertise

If you want to build the systems that make advanced AI actually work in practice — we should talk.

Email Talent@Astrus.ai

TRANSPARENCY

  • AI Disclosure: Astrus uses AI to assist with parts of the screening and assessment process. Final hiring decisions are made by humans.

Ready to radically improve global computation? 🚀📈🌎🤖

Reach out to Talent@Astrus.ai for more details
