Upcoming Webinar: "AI Inside Embedded Development"

Practical use cases from embedded teams, and where performance engineering fits

May 8, 2026 · Events

Webinar Overview

Embedded software teams want to bring generative AI into how they build software, but standard cloud AI services rarely fit: source code, design documents, and customer IP often can't leave the company network. At the same time, AI models that hit accuracy targets in the cloud frequently miss latency, power, or memory budgets once they run on the target SoC—the model works, just not on the hardware it has to ship on.

In this 40-minute webinar, we walk through how embedded teams are addressing the first of those problems—bringing AI into the development process in a way that fits their constraints—and where performance engineering on the target hardware fits in the broader picture. Drawing on engagements with automotive and industrial customers including Toyota and Renesas, the session covers in-house LLM environments, real use cases from embedded development workflows, and how teams decide where to invest first.

This is a focused look at how AI actually fits into embedded development today, and the engineering approaches that make it work in practice.

Speaker

Aki Asahara, Ph.D.
Product Manager/CMO
Aki began his career at the summit of Maunakea, Hawaii, where he developed gamma-ray detectors for the Subaru Telescope and dedicated his 20s to pulsar research. After earning a Ph.D. in Astrophysics, he transitioned to IT, specializing in high-performance computing and accelerator technologies such as GPUs and FPGAs. With deep expertise in bridging business and technology, he has led numerous software projects for global enterprises and research institutions. At Fixstars, he drives the company's AI-powered embedded software development services, helping automotive and industrial clients integrate modern AI tooling into performance-critical C/C++ workflows.

Agenda

  1. The two AI problems embedded teams are facing
    • The development gap: AI in your engineering workflow when code and design data can't leave the network
    • The deployment gap: AI models that work in the cloud but miss latency, power, or memory budgets on the target SoC
    • Why these are different problems requiring different investments
  2. In-house AI for embedded development
    • Why standard cloud AI services don't fit embedded development environments
    • Open LLMs and how they differ from closed cloud models
    • Deployment patterns: on-premises, private cloud, and self-hosted options
    • What to consider when standing one up
  3. Use cases from real engagements
    • Internal knowledge search across design documents and codebases
    • Coding assistance and coding agents inside the development workflow
    • Linking issues, design documents, and test artifacts
    • Examples from embedded processor development environments
    • Performance engineering for AI on the target hardware: when the bottleneck moves from the development process to the silicon
  4. Q&A

Total: 40 minutes

* Q&A included throughout.
* Recording will be made available to registrants.
* Schedule and content are subject to change without notice.

Date and time

Wednesday, May 27, 2026
12:00 PM - 12:40 PM PDT

Location

Zoom

Target Audience

  • VPs of Engineering, engineering directors, and platform leads at embedded software organizations exploring how to bring generative AI into their development workflow
  • DX or AI transformation leads in automotive, industrial machinery, factory automation (FA), and robotics companies looking at internal AI deployment
  • ML engineering leaders working on AI products that need to run on embedded silicon
  • Embedded systems and platform leads working with NVIDIA Jetson, Qualcomm, Renesas R-Car, NXP, and custom NPUs
  • Anyone trying to figure out where AI actually fits in embedded development today—whether in the development process or on the target hardware

Participation fee

Free
