Sign-to-speech translation using machine-learning-assisted stretchable sensor arrays

Zhihao Zhou 1,2, Kyle Chen 2, Xiaoshi Li 1, Songlin Zhang 2, Yufen Wu 3, Yihao Zhou 2, Keyu Meng 1, Chenchen Sun 1, Qiang He 1, Wenjing Fan 1, Endong Fan 1, Zhiwei Lin 1, Xulong Tan 1, Weili Deng 2, Jin Yang 1 ✉ and Jun Chen 2 ✉

1 Key Laboratory of Optoelectronic Technology & Systems, Ministry of Education, Department of Optoelectronic Engineering, Chongqing University, Chongqing, China. 2 Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, USA. 3 College of Physics and Electronic Engineering, Chongqing Normal University, Chongqing, China. ✉ e-mail: yangjin@cqu.edu.cn; jun.chen@ucla.edu

https://doi.org/10.1038/s41928-020-0428-6

Signed languages are not as pervasive a conversational medium as spoken languages due to the history of institutional suppression of the former and the linguistic hegemony of the latter. This has led to a communication barrier between signers and non-signers that could be mitigated by technology-mediated approaches. Here, we show that a wearable sign-to-speech translation system, assisted by machine learning, can accurately translate the hand gestures of American Sign Language into speech. The wearable sign-to-speech translation system is composed of yarn-based stretchable sensor arrays and a wireless printed circuit board, and offers high sensitivity and a fast response time, allowing signs to be translated into spoken words in real time. By analysing 660 acquired sign language hand gesture recognition patterns, we demonstrate a recognition rate of up to 98.63% and a recognition time of less than 1 s.

Signed languages are conveyed by the hands, face and body, and are primarily perceived visually [1]. In its signed form, the language is most accessible through the visual sense. However, without prior knowledge of sign language, it is difficult for non-signers to receive and understand this conversational medium, which creates a communication barrier between signers and non-signers [2]. Wearable electronics [3–17] have a number of attractive features, including their light weight, low cost, high flexibility and conformability, and could offer a technological solution to this barrier in the form of wearable sign language translation devices. Sign language translation devices have been developed that are based on electromyography [18–20], the piezoresistive effect [21–24], ionic conduction [25] and the capacitive effect [26], as well as photography and image processing [27–30]. However, the large-scale production and widespread use of these techniques are limited by a number of issues, including structural complexity [22,30], the need for high-quality fabrication materials [25], poor chemical stability [21,25], unsuitability for long-term wear [18,20–22], vulnerability to external environmental interference [29,30] and cumbersome practical use [18,22]. For example, vision-based sign language translation systems have strict lighting requirements: poor lighting compromises the visual quality of the signing motion captured by a camera and consequently degrades the recognition results [27–30]. Likewise, sign language translation systems based on surface electromyography have strict requirements on the positions of the worn sensors, which can affect translation accuracy and reliability [18–20]. The cost of devices based on these technologies is also high, which limits their widespread use.

In this Article, we report a wearable sign-to-speech translation system for real-time translation of sign language into audio speech. Analog signals generated through triboelectrification and electrostatic induction [31–38] by sign language components, including hand configurations, hand motions and facial expressions, are converted to the digital domain by the wearable sign-to-speech translation system to implement sign-to-speech translation. Our system offers good mechanical and chemical durability, high sensitivity, a quick response time and excellent stretchability. To illustrate the capabilities of the wearable sign-to-speech translation system, a total of 660 sign language hand gestures based on American Sign Language (ASL) were acquired and successfully analysed with the assistance of a machine-learning algorithm. The system achieves a high recognition rate of 98.63% and a short recognition time of less than 1 s.
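This excerpt reports the recognition results but not the classifier itself. The sketch below illustrates one way such a gesture-classification pipeline could look: per-channel amplitude features extracted from fixed-length signal windows, fed to a support-vector classifier. The feature set, window length, channel count, class count and model choice are all illustrative assumptions rather than the authors' implementation, and the random data stands in for real sensor recordings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one gesture window of shape (samples, channels) with
    per-channel peak, mean, RMS and peak-to-peak amplitude."""
    peak = window.max(axis=0)
    mean = window.mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    ptp = np.ptp(window, axis=0)
    return np.concatenate([peak, mean, rms, ptp])

# Hypothetical stand-in dataset: 660 recordings, each a 1 s multi-channel
# voltage window (one channel per finger sensor is assumed here).
rng = np.random.default_rng(0)
n_gestures, n_samples, n_channels = 660, 1000, 5
recordings = rng.normal(size=(n_gestures, n_samples, n_channels))
labels = rng.integers(0, 11, size=n_gestures)  # assumed 11 gesture classes

X = np.stack([extract_features(r) for r in recordings])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0, stratify=labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.4f}")
```

On real data, the same structure applies: the synthetic `recordings` array would be replaced by segmented triboelectric voltage windows and their gesture labels.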
Wearable sign-to-speech translation system

The wearable sign-to-speech translation system consists of yarn-based stretchable sensor arrays (YSSAs) and a wireless printed circuit board (PCB; Fig. 1a,b). Owing to its unique structural design and the use of soft materials, the YSSA conforms to the skin of a human finger in both the released and stretched states. Hand gesture movements are converted by the sensor arrays into electrical signals. The wireless PCB worn on the wrist integrates multiple functions, such as signal conditioning, processing and wireless transmission, using available integrated circuit components, as depicted in Fig. 1c. The white dashed boxes indicate the locations of the integrated circuit components that correspond to the numbers in parentheses in the system-level block diagram of the wearable sign-to-speech translation system in Fig. 1d. Figure 1d provides an overview of the process flow of both hardware and software, beginning with analog signal acquisition, followed by conditioning and processing, and ending with wireless transmission to a customized mobile application, which embeds a machine-learning algorithm for robust translation of sign language hand gestures to speech. The signal conditioning path for each sensor is implemented in relation to the corresponding transduced signals with an analog
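To make the Fig. 1d process flow concrete, here is a minimal sketch of the acquire-condition-segment-transmit path. The PCB's actual firmware, filter design, detection logic and radio protocol are not given in this excerpt, so the sampling rate, cutoff frequency, threshold and 16-bit packet format below are all hypothetical choices for illustration.

```python
from typing import Optional

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000     # assumed sampling rate (Hz)
THRESH = 0.5  # assumed detection threshold on the conditioned signal (V)

def condition(raw: np.ndarray) -> np.ndarray:
    """Low-pass filter one sensor channel to suppress high-frequency noise."""
    b, a = butter(4, 20 / (FS / 2))  # assumed 4th-order filter, 20 Hz cutoff
    return filtfilt(b, a, raw)

def detect_segment(sig: np.ndarray, win: int = FS) -> Optional[np.ndarray]:
    """Return a fixed-length window around the first supra-threshold sample."""
    above = np.abs(sig) > THRESH
    if not above.any():
        return None  # no gesture in this buffer
    idx = int(np.argmax(above))
    start = max(idx - win // 4, 0)
    return sig[start:start + win]

def to_packet(segment: np.ndarray) -> bytes:
    """Quantize a segment to little-endian 16-bit integers for transmission."""
    scaled = np.clip(segment / np.abs(segment).max(), -1.0, 1.0)
    return (scaled * 32767).astype("<i2").tobytes()

# Example pass through the chain with a synthetic voltage transient.
t = np.arange(2 * FS) / FS
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
raw = 0.8 * np.exp(-((t - 1.0) ** 2) / 0.01) + noise
segment = detect_segment(condition(raw))
if segment is not None:
    packet = to_packet(segment)
    print(f"segment of {segment.size} samples -> {len(packet)} bytes")
```

In the real system each sensor channel would feed such a path continuously, with the packetized segments streamed wirelessly to the mobile application for classification.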