AI in Embedded Development – The Job Is Becoming Less Coding, More Reviewing

I’ve been writing embedded firmware for more than twenty-five years. I’ve watched assembly give way to C, bare metal give way to RTOSes, and the whole field drift, first slowly, then all at once, from proprietary systems toward open source.
So, when the AI coding wave hit a few years back, I tried the tools. Honestly, I wanted them to work.
They didn’t. Not for embedded. Not really.
The first time I asked a chatbot to write an I²C driver for a part I was working with, it gave me something that looked right, compiled, and silently used a register address that didn’t exist on that chip. That was the moment I stopped trusting generated code at face value.
For a long stretch, the story was the same: useful for a weekend web project, not useful for the actual day job.
That’s changed in the last two years, and it’s changed fast. Not steady-improvement fast. Exponentially fast.
I’m not here to evangelize. I want to highlight that this shift is real, and that it’s quietly reshaping what the job of an embedded engineer is.
Why embedded was the last holdout
Web developers have been “vibe-coding” for a while now, describing what they want in natural language and shipping what comes back. Embedded engineers have been sitting this one out, and it wasn’t stubbornness. It was physics.
Language models are pattern matchers trained on text. Embedded work demands precision about things that aren’t in the text or are buried in a 2,000-page datasheet that the model never fully absorbed. The failure modes are distinct and nasty.
- Hallucinated register addresses and bit fields are the most common embedded-AI horror story: code that compiles, looks textbook-correct, and points at the wrong peripheral base address (see the sketch after this list). I wasted half a day on one of these once.
- Timing invisibility. An ISR that takes 200 µs is a different bug than one that takes 2 µs. Language models can’t feel the difference.
- Memory constraints. “Just use a vector” doesn’t work on a chip with 16 KB of RAM.
- Proprietary, fast-churning SDKs. Vendor HALs rename APIs between releases. Models trained on last year’s examples confidently produce this year’s deprecated calls.
- Safety-critical standards like ISO 26262 (automotive), DO-178C (avionics), or IEC 62304 (medical devices). AI-generated code doesn’t come with a certification trail.
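That first failure mode deserves a concrete picture. Here is a simplified, hypothetical sketch (the part and the addresses are invented for illustration, but the shape is exactly what bit me with that I²C driver):

```c
#include <stdint.h>

/* Hypothetical generated driver fragment. Everything here compiles and
 * reads like it came straight out of a datasheet. The part and the
 * addresses are invented for this illustration. */

#define I2C1_BASE  0x40012400u  /* plausible-looking, and on our imaginary
                                   part, wrong: the datasheet puts I2C1
                                   at 0x40012000 */
#define I2C_CR1    (*(volatile uint32_t *)(I2C1_BASE + 0x00u))
#define I2C_CR1_PE (1u << 0)    /* peripheral enable bit */

void i2c_enable(void)
{
    /* Silently pokes some other peripheral's register space. No compiler,
     * linker, or runtime error will ever flag it. */
    I2C_CR1 |= I2C_CR1_PE;
}
```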
It’s not that AI is useless for embedded. It’s that the failure modes are worse when they hit, and often silent when they don’t. A bricked dev board is the best case. A bricked production fleet is the nightmare case.
Last year, the tools stopped wasting my time
I noticed it first with boilerplate. The kind of peripheral initialization work I’d been writing for decades started drafting itself correctly in seconds. Then it was HAL navigation: instead of digging through reference manuals to understand what HAL_UART_Transmit_IT actually does, I could ask and get a useful answer in plain English. Then test scaffolding. Then explanations of legacy code.
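To make “boilerplate” concrete, this is the flavor of work I mean. A minimal sketch, assuming an STM32F4 target with ST’s HAL; the kind of init code today’s tools draft correctly on the first pass:

```c
#include "stm32f4xx_hal.h"  /* assuming an STM32F4 target with ST's HAL */

UART_HandleTypeDef huart2;

/* Classic peripheral init boilerplate: a dozen fields, all of them
 * documented, none of them interesting to type by hand. */
void uart2_init(void)
{
    huart2.Instance          = USART2;
    huart2.Init.BaudRate     = 115200;
    huart2.Init.WordLength   = UART_WORDLENGTH_8B;
    huart2.Init.StopBits     = UART_STOPBITS_1;
    huart2.Init.Parity       = UART_PARITY_NONE;
    huart2.Init.Mode         = UART_MODE_TX_RX;
    huart2.Init.HwFlowCtl    = UART_HWCONTROL_NONE;
    huart2.Init.OverSampling = UART_OVERSAMPLING_16;
    HAL_UART_Init(&huart2);
}

/* And the call mentioned above: queue the bytes, return immediately,
 * get a completion callback when the interrupt-driven transfer finishes. */
void send_hello(void)
{
    static const uint8_t msg[] = "hello\r\n";
    HAL_UART_Transmit_IT(&huart2, (uint8_t *)msg, sizeof msg - 1);
}
```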
At some point, the pattern clicked. AI shines on text-manipulation tasks where the hardware is abstracted away, and stumbles when register-level precision matters.
That limitation doesn’t go away, but it’s narrower than I initially thought. A large part of embedded work sits above the register layer: configuration files, build scripts, device tree overlays, documentation, cross-vendor ports, and unit tests for pure logic.
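That layer above the registers is also where host-side testing lives. A hypothetical example (the function and test are invented for illustration): pure logic that builds and runs on a PC, with nothing for the model to hallucinate about the hardware.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pure logic: an XOR checksum over a sensor frame. Hypothetical function,
 * invented for this example. No registers, no timing, no vendor SDK. */
static uint8_t frame_checksum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum ^= buf[i];
    }
    return sum;
}

/* Host-side test scaffolding, the kind AI drafts well: it runs on the
 * build machine, no dev board required. */
int main(void)
{
    const uint8_t frame[] = { 0xA5, 0x01, 0x02, 0x03 };
    assert(frame_checksum(frame, sizeof frame) == (0xA5 ^ 0x01 ^ 0x02 ^ 0x03));
    assert(frame_checksum(frame, 0) == 0);
    return 0;
}
```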
That’s where the gains are real. Industry surveys now put embedded AI adoption above 80%, with most teams using it for code generation, testing, or documentation. Reported productivity gains land in the 30–50% range for prototyping.
That matches my experience.
The real story: the rate of change
Here’s what genuinely surprised me, and it’s the part I want embedded engineers to sit with: the rate of improvement.
Embedded tooling usually moves slowly. You pick a toolchain and expect to live with it for the lifetime of a product. Major IDE releases every few years. Compiler updates on a predictable cadence. HALs that hold their shape for a decade.
AI tooling is not on that clock.
The assistants of 2023 have already been replaced by embedded-specific tools in 2025, and whatever comes next will likely make today’s generation feel primitive. In embedded terms, we’re used to tools that last ten years. Here, you’re looking at tools that meaningfully change every six months.
For someone who’s spent decades with tooling that evolves carefully and deliberately, that’s jarring. And once you get past the jarring part, it’s genuinely exciting.
AI that actually understands the hardware
The most interesting development isn’t generic AI getting better. It’s the emergence of tools built specifically for embedded work: tools that ingest datasheets, schematics, and reference manuals, and ground their answers in a specific chip.
That changes the nature of the output.
Instead of averaging across the internet, the model retrieves from an authoritative source and then writes. Register addresses, bit fields, and timing values can be traced back to a specific page in a datasheet. For the first time, you’re not asking an AI what’s generally true; you’re asking it about your hardware.
That directly addresses the biggest trust issue embedded engineers have had.
The parallel development is even more interesting: agent-style tools that close the loop with real hardware. Compile, flash, read serial output, inspect debugger state, analyze waveforms, iterate. Not autocomplete. An AI that can run the experiment and interpret the result.
That’s where this is going.
The debugging superpower
Code generation gets the headlines. Debugging is where AI has become most useful in my day-to-day work.
Paste in a linker error, and you get a diagnosis. Describe strange behavior, and you get a set of hypotheses worth checking. Feed in logic analyzer output, and you get pattern recognition. Ask what could cause a register to hold a certain value after reset, and you get a line of reasoning that might otherwise take an hour of datasheet diving to reconstruct.
Embedded debugging has always been pattern matching against accumulated experience. That’s exactly what language models are structurally good at.
Some of my best AI-assisted sessions haven’t produced a single line of code: they’ve produced a good question to ask, or a forgotten peripheral quirk to check.
What this means for learning embedded
There’s a fork in the road for the next generation of embedded engineers.
AI lowers the barrier to entry. That part is real. A student who would have given up at “the toolchain won’t build” now has a patient tutor available at any hour. Hobbyists ship IoT projects faster. Firmware becomes less of a black art and more of an accessible discipline.
The risk is just as real: people shipping firmware they don’t fully understand, into systems where failure has real-world consequences. When a web app crashes, you refresh. When firmware crashes in a medical device, an industrial controller, or a car, the stakes are different.
What we end up with is likely the middle path. AI as a force multiplier for engineers who still understand what’s underneath, and a technology that demands more caution as the stakes get higher.
Less coding, more reviewing
This is the shift that matters. And I didn’t fully see it coming.
The day-to-day work of an embedded engineer is becoming less about typing code and more about reading it.
AI drafts the driver. I review it. I ask whether the DMA setup will actually survive a worst-case interrupt storm. I check the register writes against the reference manual. I spot that the model used a deprecated HAL call from three SDK versions ago. I decide whether the whole thing fits the system’s timing budget.
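Checking a timing budget means measuring, not guessing. On a Cortex-M3/M4/M7 there is a cheap way to do that; a minimal sketch, assuming a CMSIS device header is available:

```c
#include "stm32f4xx.h"  /* assuming an STM32F4; any CMSIS device header works */

/* Enable the DWT cycle counter once at startup (Cortex-M3/M4/M7). */
void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace unit  */
    DWT->CYCCNT       = 0;
    DWT->CTRL        |= DWT_CTRL_CYCCNTENA_Msk;     /* start the counter */
}

/* Wrap the code under review and get back cycles spent; unsigned
 * subtraction handles counter wraparound correctly. */
uint32_t cycles_spent(void (*fn)(void))
{
    uint32_t start = DWT->CYCCNT;
    fn();
    return DWT->CYCCNT - start;
}
```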
This is not a smaller job. It’s arguably a harder one.
Writing code from scratch forces you through every decision. Reviewing AI-generated code means spotting the decisions the model didn’t know it was making: the defaults it inherited, the edge cases it didn’t consider, the hardware quirks it couldn’t have known about.
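Here’s a concrete instance of an inherited default. A hypothetical, simplified draft in the style of generated code, not output from any specific tool:

```c
#include <stdbool.h>

/* An ISR signals the main loop through a shared flag. It compiles,
 * it looks textbook-correct, and it works fine at -O0. */

static bool rx_complete = false;  /* the inherited default: not volatile */

void uart_rx_isr(void)  /* hypothetical ISR name */
{
    rx_complete = true;
}

void wait_for_frame(void)
{
    /* At -O2 the compiler may cache rx_complete in a register, and this
     * loop can spin forever without ever seeing the ISR's write. The fix
     * the review catches is one keyword:
     *     static volatile bool rx_complete = false;                     */
    while (!rx_complete) {
        /* spin */
    }
    rx_complete = false;
}
```

Nothing in a compile-and-run smoke test on a debug build will surface that one.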
That demands more hardware intuition, not less. More systems thinking, not less. More willingness to say “this compiles and looks reasonable, but it’s wrong.”
The skills that matter have quietly inverted from what the “AI replaces everyone” narrative suggests. The things AI is taking over – HAL API memorization, boilerplate, first-draft drivers – were the easiest parts of the job to learn and the most tedious to do. The things AI can’t do – reading a waveform and trusting your eyes, knowing the real bug is a ground loop, deciding that a clever optimization isn’t worth the certification headache, and being accountable when the device ships – are exactly the parts that separate a good embedded engineer from a mediocre one.
What still matters
I’ve lived through a few “this changes everything” moments in embedded: smart IDEs, the open-source HAL era, Git replacing SVN. This one is bigger. But the same rule still applies.
The tools change. The judgment doesn’t.
A team that uses AI well ships faster. A team that uses AI without judgment ships broken systems faster. The difference between those outcomes is still engineering. Skilled engineers still win.
After twenty-five years, I’m not nostalgic for writing peripheral initialization code by hand. If AI takes that, I won’t miss it.
What I hope doesn’t go with it is the mindset that makes a good embedded engineer: skepticism, hardware intuition, and the willingness to read the datasheet when the model is confident and wrong.
The tools are improving exponentially. The silicon doesn’t care.
The engineers who thrive won’t be the ones who trust AI the most or the least. They’ll be the ones who review it best.