Can you imagine a future where autonomous vehicles see road conditions and remember what they saw—just like the human brain? Or hospitals equipped with AI systems that automatically highlight abnormal regions in X-rays and CT scans to assist doctors in their diagnosis?
A research team led by Distinguished Professor Po-Tsun Liu from the Department of Photonics at National Yang Ming Chiao Tung University (NYCU) has made a significant leap toward that future. The team has successfully developed a novel all-metal oxide heterojunction photonic synaptic transistor—a device that mimics the memory and learning functions of human neurons. Their findings, titled “All‐Metal‐Oxide Heterojunction Optoelectronic Synapses with Multilevel Memory for Artificial Visual Perception Applications,” were recently published in Small.

A Breakthrough in Neuromorphic Vision and Sensing
This next-generation transistor, based on a heterojunction formed between tungsten oxide (WO₃) and indium tungsten zinc oxide (InWZnO), demonstrates not only high sensitivity to visible light (red, green, and blue wavelengths of 650, 525, and 460 nanometers) but also the ability to emulate synaptic plasticity—the brain’s mechanism for learning and memory.
According to Prof. Liu, the device exhibits both short-term and long-term memory behaviors under optical pulse stimulation and gate-voltage modulation. The resulting synaptic behavior is highly dynamic, stable, and reproducible, significantly outperforming similar devices reported in the literature.
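To give a rough sense of what short-term versus long-term synaptic behavior means, the toy model below treats each optical pulse as adding a fast-decaying short-term current, part of which is consolidated into a persistent long-term weight scaled by a gate-voltage factor. This is a generic exponential-decay sketch for illustration only; the time constants, gain, and update rule are assumptions, not the team's device physics.

```python
# Conceptual sketch (not the authors' model): a photonic synapse whose
# photocurrent has a fast-decaying short-term component and a slowly built
# long-term component. All constants below are illustrative assumptions.
import numpy as np

DT = 0.01         # time step (s), hypothetical
TAU_STM = 0.5     # short-term decay constant (s), hypothetical
LTM_GAIN = 0.02   # fraction of each pulse consolidated into long-term weight

def simulate_synapse(pulse_times, t_end=10.0, gate_factor=1.0):
    """Return time axis, short-term current, and long-term weight traces.

    gate_factor stands in for gate-voltage modulation: larger values
    consolidate more of each optical pulse into the long-term component.
    """
    t = np.arange(0.0, t_end, DT)
    stm = np.zeros_like(t)
    ltm = np.zeros_like(t)
    for i in range(1, len(t)):
        stm[i] = stm[i - 1] * np.exp(-DT / TAU_STM)    # fast decay
        ltm[i] = ltm[i - 1]                            # persists between pulses
        if any(abs(t[i] - tp) < DT / 2 for tp in pulse_times):
            stm[i] += 1.0                              # optical pulse response
            ltm[i] += LTM_GAIN * gate_factor * stm[i]  # partial consolidation
    return t, stm, ltm

# A short burst of pulses leaves a lasting long-term weight after the
# short-term response has decayed away.
t, stm, ltm = simulate_synapse(pulse_times=[1.0, 1.2, 1.4, 1.6, 1.8])
print(f"short-term current at end of run: {stm[-1]:.4f}")
print(f"long-term weight at end of run:   {ltm[-1]:.4f}")
```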
More importantly, the team engineered a 2 × 2 photonic synapse array module based on the device that can process RGB (red, green, blue) signals in real time. The array mimics the human retina’s layered mechanisms for perceiving and storing image intensity and color.
Through simulated cycles of learning and forgetting, the device showed robust, non-volatile memory, retaining data even after the optical stimuli were removed. This feature lays crucial groundwork for the development of brain-inspired visual memory chips.
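The sketch below illustrates the idea behind such an array in code: each of the four pixels accumulates separate red, green, and blue weights from optical pulse trains, and most of what was written remains after the light is removed. The 2 × 2 layout matches the published module, but the update rule, learning rate, and retention factor are illustrative assumptions rather than the device model.

```python
# Illustrative sketch only: a 2 x 2 array of photonic synapses, each storing a
# per-channel (R, G, B) weight built up by optical pulse trains at 650, 525,
# and 460 nm. Learning and retention constants are assumed for illustration.
import numpy as np

CHANNELS = ("R", "G", "B")     # 650 nm, 525 nm, 460 nm
LEARN_RATE = 0.1               # weight gained per optical pulse (assumed)
RETENTION = 0.98               # fraction retained per idle step (assumed)

weights = np.zeros((2, 2, 3))  # 2 x 2 pixels, 3 colour channels each

def stimulate(pixel, channel, n_pulses):
    """Apply a train of optical pulses to one pixel on one colour channel."""
    r, c = pixel
    weights[r, c, CHANNELS.index(channel)] += LEARN_RATE * n_pulses

def idle(n_steps):
    """Let the array sit in the dark; stored weights decay only slightly."""
    global weights
    weights = weights * (RETENTION ** n_steps)

# "Learning": write a red-dominated pattern into the top row and a
# blue-dominated pattern into the bottom row.
stimulate((0, 0), "R", 10); stimulate((0, 1), "R", 8)
stimulate((1, 0), "B", 10); stimulate((1, 1), "B", 8)

# "Forgetting": remove the light and check what the array still remembers.
idle(50)
print(np.round(weights[..., CHANNELS.index("R")], 2))  # red map largely retained
print(np.round(weights[..., CHANNELS.index("B")], 2))  # blue map largely retained
```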
High Accuracy in Challenging AI Tasks
The team integrated the device into an artificial neural network (ANN) simulation platform to explore its real-world potential. They tested it on tasks such as handwritten digit recognition and image segmentation. The system maintained high recognition accuracy even under simulated noisy conditions (Gaussian and striped noise).
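As a rough illustration of this kind of noise-robustness test, the sketch below trains a small classifier on the scikit-learn digits set and evaluates it on test images corrupted with Gaussian and striped noise. The dataset, network size, and noise levels are stand-ins chosen for a self-contained example, not the paper's actual simulation setup.

```python
# Sketch of a noise-robustness check on handwritten digits (assumed setup).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def add_gaussian_noise(images, sigma=2.0):
    """Additive Gaussian pixel noise, clipped to the 0-16 pixel range."""
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 16.0)

def add_stripe_noise(images, stripe_value=16.0, period=3):
    """Overwrite every `period`-th column of each 8x8 digit with a bright stripe."""
    noisy = images.reshape(-1, 8, 8).copy()
    noisy[:, :, ::period] = stripe_value
    return noisy.reshape(-1, 64)

digits = load_digits()
x_train, x_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(x_train, y_train)

print("clean accuracy:   ", clf.score(x_test, y_test))
print("gaussian accuracy:", clf.score(add_gaussian_noise(x_test), y_test))
print("striped accuracy: ", clf.score(add_stripe_noise(x_test), y_test))
```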
When applied to image segmentation with the U-Net architecture, the device-enabled system achieved near-ideal results, demonstrating outstanding stability, robustness, and learning ability in visual processing applications.
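For readers unfamiliar with U-Net, the sketch below shows a minimal encoder-decoder of that style in PyTorch, with skip connections between matching resolutions. The channel counts and depth are arbitrary; this is only a structural illustration, not the network or weights used in the study.

```python
# Minimal U-Net-style encoder-decoder, for illustration of the architecture only.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)          # 32 (skip) + 32 (upsampled)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)          # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.pool(e1))            # 1/2 resolution
        b = self.bottleneck(self.pool(e2))       # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                     # (batch, n_classes, H, W)

# Shape check on a dummy RGB image (H and W must be divisible by 4 here).
model = TinyUNet()
print(model(torch.randn(1, 3, 64, 64)).shape)    # torch.Size([1, 2, 64, 64])
```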
Towards Smarter Machines
This breakthrough technology opens up exciting possibilities for applications in innovative medical diagnostics, autonomous driving vision modules, wearable sensory devices, and biomimetic robotics—paving the way for deeper integration of artificial intelligence and advanced sensing systems.