New from Texas Instruments (TI) are two microcontroller (MCU) families with edge artificial intelligence (AI) capabilities, supporting the company’s commitment to enabling edge AI across its entire embedded processing portfolio.
The MSPM0G5187 and AM13Ex MCUs integrate TI’s TinyEngine neural processing unit (NPU), a dedicated hardware accelerator for MCUs that optimizes deep learning inference operations to reduce latency and improve energy efficiency when processing at the edge.
TI’s embedded processing portfolio is supported by a comprehensive development ecosystem, including the Code Composer Studio (CCStudio) integrated development environment (IDE). Its generative AI features let engineers use plain language to accelerate code development, system configuration and debugging through industry-standard agents and models paired with TI data. Altogether, TI is accelerating the adoption of edge AI in electronic devices of every kind, from real-time monitoring in wearable health monitors and home circuit breakers to physical AI in humanoid robots. These end-to-end innovations are featured in TI’s booth at embedded world 2026, March 10-12, in Nuremberg, Germany.
“TI invented the digital signal processor almost 50 years ago, laying the groundwork for today’s edge AI processing,” said Amichai Ron, senior vice president, Embedded Processing and DLP Products at TI. “Now TI is leading the next phase of innovation by integrating the TinyEngine NPU across our entire microcontroller portfolio, including general-purpose and high-performance, real-time MCUs. By enabling AI across our software, tools, devices and ecosystem, we are making edge AI accessible and easy to use for every customer and every application.”
“While much of the world has been focused on AI acceleration and NPUs in bigger SoCs, it turns out some of the more interesting and far-reaching applications of AI can be enabled inside smaller chips like microcontrollers,” said Bob O’Donnell, President and Chief Analyst at TECHnalysis Research. “Edge-based applications of AI acceleration can make consumer devices more intelligent and industrial devices more efficient. Plus, if you can combine these chips with software development tools that themselves leverage AI to help build AI features, you bring the power of AI acceleration to a significantly wider audience of engineers and device designers.”
Computing locally, the TinyEngine NPU executes the operations required by neural networks in parallel with the primary CPU, which remains free to run application code. Compared to similar MCUs without an accelerator, this hardware acceleration:
- Minimizes the flash memory footprint.
- Reduces latency by up to 90 times per AI inference.
- Reduces energy consumption by more than 120 times per AI inference.
Such levels of efficiency allow resource-constrained devices – including portable, battery-powered products – to process AI workloads. At under US$1 in 1,000-unit quantities, the MSPM0G5187 MCU reduces system and operating costs by offering an affordable alternative to other MCU or processor architectures.
For more information, see ti.com/edgeAI, ti.com/MSPM0G5187 and ti.com/AM13E23019.
Learn more at TI.com.
