A year ago, if you told me an AI tool would become one of the most useful things in my embedded development workflow, I would have been skeptical. Embedded work is registers, timing, memory constraints, and hardware quirks. It felt like the last place AI would be useful.
Turns out I was wrong. These tools have changed how I work day to day, and I want to talk about what that actually looks like when you are writing C for ARM Linux devices and debugging hardware in the field.
What I Use
I work on an ARM64 Linux-based telematics device — C code, modem control, cellular connectivity, the whole stack from hardware abstraction up through application-level data collection. On the AI side, I primarily use Claude Code (Anthropic’s CLI tool) and Gemini. Nothing fancy. They just live in my terminal alongside everything else.
These are not side projects. This is production firmware running on large fleets. The stakes are real, which is part of why I find it worth writing about.
The Most Underrated Part: Connecting Your Information Sources
Before I get into specific use cases, I want to talk about something that does not get enough attention. The single most important thing you can do to get value out of AI tooling is to connect it to the information sources you already use.
In embedded work, the knowledge you need to do your job is scattered everywhere. The bug you are investigating has context in a JIRA ticket. The code change that might have caused it is in a GitLab merge request. The hardware behavior is documented in a spec sheet or datasheet. The device logs are in one place, the fleet data is in another.
The real power of AI tooling is not that it writes code for you. It is that it can pull context from all of these sources and synthesize it. When I can point Claude Code at a JIRA ticket, have it pull the relevant merge request, cross-reference that with device logs, and then look at the code — that is when it goes from a neat trick to a genuine workflow change. Instead of spending 30 minutes just gathering context before I can even start thinking about a problem, the AI does that legwork.
Getting these integrations set up properly is not glamorous work. It is configuring MCP servers, setting up API tokens, making sure the tool can actually reach your JIRA instance and your GitLab repos and your BigQuery tables. But it is the foundation that makes everything else work. If your AI tool cannot see the same information you see, it is working with one hand tied behind its back.
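To make that concrete: Claude Code can load project-scoped MCP servers from a `.mcp.json` file at the repo root. The shape below is a sketch of what that configuration looks like — the server package names, commands, and environment variable names here are placeholders for whatever JIRA or GitLab MCP server you actually use, not real packages:

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "your-jira-mcp-server"],
      "env": {
        "JIRA_BASE_URL": "https://your-org.atlassian.net",
        "JIRA_API_TOKEN": "${JIRA_API_TOKEN}"
      }
    },
    "gitlab": {
      "command": "npx",
      "args": ["-y", "your-gitlab-mcp-server"],
      "env": {
        "GITLAB_TOKEN": "${GITLAB_TOKEN}"
      }
    }
  }
}
```

Once something like this is in place, "pull the MR linked from that ticket" becomes a single prompt instead of five browser tabs.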
Debugging and Log Analysis
This is where AI has had the biggest impact on my productivity, hands down.
Embedded devices generate a ton of log data — syslogs, multilogs, modem traces, network manager output. When a device in the field starts acting up, I used to spend a long time manually going through logs, correlating timestamps, trying to build a picture of what happened.
Now I feed those logs into Claude Code. What took me an hour of reading often takes minutes. It is good at the thing I find most tedious: correlating events across different subsystems. A modem disconnect event that lines up with a network manager state transition that lines up with an application retry — the AI spots those connections fast.
More than that, I can have a back-and-forth about it. “Why would ModemManager transition to this state here?” or “What would cause this sequence?” Questions that used to mean going and reading source code or documentation. Now I get a useful starting point in seconds and can verify it against the code.
I want to be clear: I still verify everything. This is embedded software. Incorrect assumptions mean bricked devices in the field. But the debugging cycle has gotten dramatically shorter.
Writing and Modifying C Code
This is where I expected AI to fall flat, and where it has surprised me the most.
Embedded C does not leave much room for error. Fixed-size buffers, hardware registers, strict memory constraints, code running on devices you cannot easily get your hands on once they are deployed. I figured AI-generated C would be sloppy and full of the kind of subtle issues that bite you three months later.
In practice, the quality is much higher than I expected. Not perfect — I catch issues regularly, and I review everything — but the baseline is solid. Where it really shines:
Boilerplate. Setting up new modules, writing init and cleanup functions, defining structures with their helper functions. This stuff has to be correct but is not intellectually interesting. AI handles it well and saves me real time.
Refactoring. Restructuring code, changing interfaces, updating a data structure that gets used in 40 places. AI makes consistent changes across a codebase without the kind of “I updated it in 39 of 40 places” bugs that happen when I do it manually.
Pattern translation. “Take this polling-based implementation and make it event-driven with callbacks.” I know how to do that transformation, but it is tedious and error-prone by hand. AI does it fast and consistently.
The important thing is that AI does not replace the thinking. I still design the architecture, make the trade-offs, pick the approach. It just makes implementing those decisions a lot faster.
Data Analysis
My work involves looking at telemetry data from large device fleets — BigQuery SQL against tables with millions of rows, tracking firmware rollout health, diagnosing fleet-wide issues.
AI has made me noticeably faster here. I describe what I want in plain language — “top 10 devices by connection failure rate this week, broken down by firmware version” — and get a working query to start from. For someone who writes SQL regularly but would not call themselves a SQL expert, that is a big deal.
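For that example prompt, the query the AI hands back looks something like the following BigQuery SQL — the table and column names here are invented for illustration, but the structure (a filtered aggregate with `COUNTIF`, grouped and sorted) is representative:

```sql
-- Hypothetical schema: telemetry.connection_events(device_id,
-- firmware_version, event, event_time)
SELECT
  device_id,
  firmware_version,
  COUNTIF(event = 'connect_failed') / COUNT(*) AS failure_rate
FROM `telemetry.connection_events`
WHERE event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY device_id, firmware_version
ORDER BY failure_rate DESC
LIMIT 10;
```

It is rarely perfect on the first try, but fixing a nearly-right query is much faster than writing one from a blank editor.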
It also helps me ask better questions. When I am exploring data, it suggests angles I had not thought of. “Does this correlate with the carrier?” or “This looks time-of-day dependent, here is a query to check.” Having that kind of back-and-forth when you are trying to understand a dataset is genuinely useful.
Where It Falls Short
I do not want to oversell this. There are real limitations.
Hardware-specific knowledge is still thin. When I am dealing with a specific modem chipset’s AT command set or debugging a timing-sensitive interaction between two components, the AI often lacks the depth of domain knowledge required. It can help me organize my thinking, but the actual work still requires reading datasheets and understanding the hardware.
Real-time and safety-critical work is where I stay conservative. Code where timing matters at the microsecond level, or where failures have physical consequences — I am much more careful about AI-generated code there. The tools are getting better, but the stakes are too high.
Build tooling is hit or miss. Yocto recipes, cross-compilation problems, linker scripts — the AI sometimes produces plausible-looking but wrong answers. My guess is the training data just does not have enough of these specific configurations.
Where This Is Headed
What I find most interesting is how unremarkable it has become. I do not think about “AI-assisted development” as a separate thing. It is just how I work. Claude Code is in one terminal pane, my editor is in another, and I move between them the same way I move between code and documentation.
If you work in embedded and have not tried these tools, start with log analysis. Take a debug session that would normally eat an hour and throw the logs at an AI. When it finds the root cause in a couple of minutes, you will see the appeal.
Embedded has always been conservative about new tools, and for good reason. Our mistakes are expensive. But AI tooling has gotten past the point of being a novelty. The developers who figure out how to use it effectively — and critically, who put in the work to connect it to their actual information sources — are going to have a real edge.
The tools will keep getting better. But even right now, today, they are making me faster at my job. That is not a prediction. It is just what I see every day.