
Actionwire
Background

This project was developed specifically for Lin Pei-Yao's solo exhibition *Who is the speaker?* (2025). The exhibition required real-time speech recognition of selected keywords, each triggering specific actions such as smart-light control and video playhead control. Recognition runs locally on a Raspberry Pi. For example, on the command "Drink Tea", the system blinks one set of lights, seeks the video to a specific time (00:25), and jumps back to the original position after 10 seconds. Different voice commands trigger different actions, and some of them depend on each other. To keep the concurrent events manageable, I used the Reactive Programming design pattern via RxPY.

Structure

The program is divided into three parts: Events, Commands, and Actions. Events are the inputs to the system, including microphone and WebSocket inputs. They are transformed into an Observable stream. ...
