<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Projects on WANcatServer</title>
    <link>https://wancat.cc/en/projects/</link>
    <description>Recent content in Projects on WANcatServer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 09 Apr 2026 17:11:38 +1000</lastBuildDate>
    <atom:link href="https://wancat.cc/en/projects/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Linux Odyssey</title>
      <link>https://wancat.cc/en/projects/linuxodyssey/</link>
      <pubDate>Sat, 01 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://wancat.cc/en/projects/linuxodyssey/</guid>
      <description>Interactive online Linux command teaching website with gamification experience</description>
      <content:encoded><![CDATA[<p><img alt="Linux Odyssey screenshot" loading="lazy" src="/projects/linuxodyssey/linuxodyssey.png"></p>
<p>My graduation project. An interactive terminal learning platform that provides guided courses, a visual file tree, and error-message guidance.
Each course spins up a server-side container for the user to interact with.</p>
<p>Stack: TypeScript, Vue, Express, WebSocket, Docker in Docker<br>
License: GPL</p>
<p><a href="https://github.com/linux-odyssey/linux-odyssey">GitHub</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Synchan</title>
      <link>https://wancat.cc/en/projects/synchan/</link>
      <pubDate>Thu, 01 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://wancat.cc/en/projects/synchan/</guid>
      <description>Cross-device multichannel video sync engine with automatic latency measurement and playhead alignment</description>
      <content:encoded><![CDATA[<h2 id="synchan">Synchan</h2>
<p>2024 - 2025</p>
<p>Multichannel video synchronization tool that works across devices and platforms, with mobile support.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/gh-UvZkEhOs?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p><a href="https://peiyao.run/2024-the-dual-double-channel/">Video: Lin Pei-Yao. Triangular Relationship / Three-Channel Loop Video / 03'39&quot; / 2021</a></p>
<h2 id="background">Background</h2>
<p>Multichannel video playback is a common need in modern art exhibitions.
The usual implementation connects multiple monitors to a single computer
and drives them with a proprietary video player.</p>
<p>However, this method requires the monitors to be physically connected,
and it doesn&rsquo;t allow using mobile devices as displays.</p>
<p>Therefore, I developed a Web-based system that enables multichannel video synchronisation across devices.</p>
<h2 id="structure">Structure</h2>
<p>The system consists of two parts: a server and clients.</p>
<ul>
<li><strong>Server</strong>
<ul>
<li>NodeJS, TypeScript, Electron, tRPC, Express, Socket.io</li>
<li>Store the video files</li>
<li>Manage the current playhead</li>
<li>Calculate the latency to each connected client</li>
<li>Periodically update current playhead to every client via WebSocket</li>
<li>Provide a tRPC and RESTful interface for control</li>
<li>Wrapped inside an Electron desktop application to simplify start-up</li>
</ul>
</li>
<li><strong>Clients</strong>
<ul>
<li>TypeScript, ReactJS, Vite, tRPC, Socket.io, Redux</li>
<li>Connect to the server instance and load available videos</li>
<li>Dynamically sync the current playhead with the server</li>
<li>Provide an Admin interface when opened inside Electron</li>
</ul>
</li>
</ul>
<h2 id="easy-to-use-interface">Easy-to-Use Interface</h2>
<p><img alt="Admin Interface of Synchan" loading="lazy" src="/projects/synchan/synchan-admin.webp">
<em>Admin Interface of Synchan</em></p>
<p>In an art exhibition environment, the exhibition managers usually do not have a technical background,
so the whole system must offer an easy-to-use way to start.
Also, the installations usually don&rsquo;t have a mouse or keyboard connected once the set-up is complete.
Thus, unattended start-up is important to minimise the effort for the exhibition managers.</p>
<p>A purely Web-based application faces several limitations here:
modern browsers require a user interaction before playing audio, and automatic full-screen is also blocked.</p>
<p>Therefore, I wrapped the backend and the admin interface as an Electron application.
The server is spawned once the application starts, requiring no terminal or background service configuration.
Furthermore, Electron can be configured to allow autoplay and open-to-fullscreen.
The application also remembers the last played video and automatically plays it on start-up.</p>
<p>As a result, exhibition managers can just turn on the computer, and the whole installation is up and running.</p>
<h2 id="real-time-sync-engine">Real-Time Sync Engine</h2>
<p>Latency between devices is the biggest problem for cross-device synchronisation; when multiple channels carry audio tracks, even a small offset becomes noticeable.</p>
<p>The latency mainly comes from the network path between the server and clients, and each client&rsquo;s latency differs.
In the time code synchronisation protocol, I implemented a <strong>round-trip latency measurement</strong>.
Every time the server sends a new time code, it starts a stopwatch and waits for the client&rsquo;s ping back.
The server then uses a <strong>moving-window algorithm</strong> to take the median of recent round trips, so that network spikes are filtered out.
Finally, the server includes the calculated latency in the next time code packet sent to the client.</p>
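<p>The measurement can be sketched as follows. This is an illustrative Python sketch, not the production code (the real server is written in TypeScript); the class name, window size, and sample values are assumptions.</p>

```python
from collections import deque
from statistics import median

class LatencyEstimator:
    """Round-trip latency smoothed with a moving-window median."""

    def __init__(self, window: int = 10) -> None:
        # Only the most recent samples are kept.
        self.samples: deque[float] = deque(maxlen=window)

    def record_round_trip(self, rtt_ms: float) -> None:
        # Approximate one-way latency as half the round trip.
        self.samples.append(rtt_ms / 2)

    def latency_ms(self) -> float:
        # The median ignores occasional network spikes.
        return median(self.samples) if self.samples else 0.0

est = LatencyEstimator()
for rtt in [20, 22, 18, 200, 21]:  # one 200 ms spike
    est.record_round_trip(rtt)
print(est.latency_ms())  # -> 10.5; the spike barely moves the estimate
```

A plain average would be dragged upward by the 200 ms spike; the median stays close to the typical round trip.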
<p>Once the client receives the time code with latency, it can calculate the real time code.
But then the next problem arises:</p>
<p>The video seek operation in the browser takes a long time (&gt;0.5 s) on low-end devices like the Raspberry Pi,
so precise control via seeking is impossible, and seeking also breaks the continuity of playback.</p>
<p>Therefore, I use micro speed adjustment to align with the target playhead,
applying a linear speed control between 0.95x and 1.05x.
This range balances adjustment speed against user experience:
the change is unnoticeable to viewers, and it neither interrupts playback nor lags the browser.</p>
<p>After the fix, latency stays under 5 ms, which is indistinguishable to the human ear.</p>
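<p>A minimal sketch of such a linear speed control, in Python for illustration (the real client runs in the browser); the gain value and function name are assumptions.</p>

```python
def playback_rate(target_s: float, current_s: float,
                  gain: float = 0.5, max_dev: float = 0.05) -> float:
    """Map the playhead error to a playback rate within [0.95, 1.05].

    When the client is behind the target, play slightly faster;
    when ahead, slightly slower. The clamp keeps the change
    unnoticeable to viewers.
    """
    error = target_s - current_s  # positive: we are behind
    return 1.0 + max(-max_dev, min(max_dev, gain * error))

print(playback_rate(10.2, 10.0))   # behind by 0.2 s -> clamped to 1.05
print(playback_rate(10.0, 10.02))  # slightly ahead -> just under 1.0
print(playback_rate(10.0, 10.0))   # aligned -> 1.0
```

Because the rate converges back to 1.0 as the error shrinks, the playhead drifts smoothly into alignment instead of jumping.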
<h2 id="performance-optimisation-via-preloading">Performance Optimisation via Preloading</h2>
<p>Networking may not be stable in an exhibition environment.
On low-end devices like the Raspberry Pi,
mid-playback buffering often causes brief stutters.
Besides, exhibition installations usually play the same videos repeatedly,
and set-up happens before the exhibition opens, so the preload time is negligible.</p>
<p>Therefore, preloading the whole video before playback has clear benefits over streaming.</p>
<p>I implemented a cache layer in the video player, which downloads the whole file and saves it into IndexedDB.
This method works in every modern browser out of the box.</p>
<p>Preloading eliminates the mid-playback buffering issue and significantly improves stability.</p>
<h2 id="cross-platform-clients">Cross-Platform Clients</h2>
<p>Synchan itself ships a Web-based client, which runs on every platform with a modern browser,
including both desktop and mobile devices.
Its server-client structure also allows other kinds of clients to connect.</p>
<p>On low-end devices like the Raspberry Pi, running a Chromium instance to play video may be too heavy.
Thus, I developed a headless client, <strong>VLChan</strong>, using the VLC Python bindings.
It connects to a Synchan server and plays local files with a synchronised playhead, giving the best performance on a Raspberry Pi.</p>
<h2 id="show-cases">Show Cases</h2>
<h3 id="lin-pei-yao-solo-exhibition-the-dual-double-channel-2024"><a href="https://yao-bite.github.io/exhibitions/2024-the-dual-double-channel/#gaze-triangle">Lin Pei-Yao Solo Exhibition: The Dual Double-Channel (2024)</a></h3>
<p>Used in the work <strong>Gaze Triangle</strong></p>
<ul>
<li>3 video channels + 2 audio channels</li>
<li>Server: Raspberry Pi 4</li>
<li>Clients
<ul>
<li>Raspberry Pi 4 (same machine, connected monitor)</li>
<li>Mac Mini (projector + Bluetooth headset)</li>
<li>Android Phone (play audio by built-in speaker)</li>
</ul>
</li>
</ul>
<p><img loading="lazy" src="/projects/synchan/synchan.jpg">
<img loading="lazy" src="/projects/synchan/synchan-3.webp">
<img loading="lazy" src="/projects/synchan/synchan-4.webp"></p>
<h3 id="lin-pei-yao-solo-exhibition-who-is-the-speaker-2025"><a href="https://yao-bite.github.io/exhibitions/2025-who-is-the-speaker/#inter-view-with-a-philosopher">Lin Pei-Yao Solo Exhibition: Who is the speaker? (2025)</a></h3>
<p>Used in the work <strong>Inter-view with a Philosopher</strong></p>
<ul>
<li>1 video + 2 audio channels</li>
<li>Server: Mac Mini</li>
<li>Clients
<ul>
<li>Mac Mini (same machine): Video + audio</li>
<li>Raspberry Pi 4 (audio only VLC client)</li>
</ul>
</li>
<li>Extra time-code control with <a href="/en/projects/actionwire">Actionwire</a></li>
</ul>
<p><img loading="lazy" src="/projects/synchan/who-is-the-speaker.webp"></p>
<h2 id="want-to-try">Want to Try?</h2>
<p>Currently available by invitation only. For inquiries, please contact <a href="mailto:wancat@wancat.cc">wancat@wancat.cc</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Actionwire</title>
      <link>https://wancat.cc/en/projects/actionwire/</link>
      <pubDate>Thu, 09 Apr 2026 17:11:38 +1000</pubDate>
      <guid>https://wancat.cc/en/projects/actionwire/</guid>
      <description>Reactive automation system linking offline speech recognition, smart lighting, and video control for live installations.</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>This project was developed specifically for <a href="https://yao-bite.github.io/exhibitions/2025-who-is-the-speaker/#inter-view-with-a-philosopher">Lin Pei-Yao Solo Exhibition: Who is the speaker? (2025)</a>.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/mKmAC1MVB6E?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>The exhibition required recognising selected spoken keywords and performing specific actions in response, including smart light control and video playhead control.
The speech recognition runs in real time, deployed locally on a Raspberry Pi.</p>
<p>For example, the command <em>&ldquo;Drink Tea&rdquo;</em> blinks one set of lights, seeks the video to a specific time (00:25), and jumps back to the original position after 10 seconds.</p>
<p>Different voice commands have different actions, and some of them may depend on each other.</p>
<p>To make the concurrent events manageable, I used <a href="https://en.wikipedia.org/wiki/Reactive_programming">Reactive Programming</a> design pattern via <a href="https://rxpy.readthedocs.io/en/latest/">RxPy</a>.</p>
<h2 id="structure">Structure</h2>
<p>The program is divided into three parts: Events, Commands, and Actions.</p>
<p>Events are the inputs to the system, including the microphone and WebSocket messages.
Each event source is transformed into an Observable stream.</p>
<p>Actions are the output behaviours,
including light control and video playhead control.</p>
<p>Commands are the business logic: they freely connect, compose, and mix the inputs to produce an output,
and can easily be customised to the user&rsquo;s needs.</p>
<ul>
<li>Events (inputs)
<ul>
<li>Microphone -&gt; Vosk -&gt; Keyword extraction</li>
<li>WebSocket -&gt; Current timecode</li>
</ul>
</li>
<li>Commands
<ul>
<li>Define the pipeline logic for every command</li>
<li>Written in Reactive Programming styles</li>
<li>No hidden state management. Easy to update</li>
</ul>
</li>
<li>Actions (outputs)
<ul>
<li>Light control -&gt; LIFX LAN API</li>
<li>Video playhead control -&gt; HTTP request</li>
</ul>
</li>
</ul>
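<p>The wiring above can be illustrated with a tiny stdlib-only Python sketch. It mimics the Observable pattern without pulling in RxPy, so all names, the keyword, and the action stub are illustrative, not the production code.</p>

```python
from typing import Any, Callable

class Stream:
    """Minimal stand-in for an Rx Observable: subscribe and filter."""

    def __init__(self) -> None:
        self._subs: list[Callable[[Any], None]] = []

    def subscribe(self, fn: Callable[[Any], None]) -> None:
        self._subs.append(fn)

    def emit(self, value: Any) -> None:
        for fn in self._subs:
            fn(value)

    def filter(self, pred: Callable[[Any], bool]) -> "Stream":
        # Derived stream that only re-emits matching values.
        out = Stream()
        self.subscribe(lambda v: out.emit(v) if pred(v) else None)
        return out

# Event: keywords recognised from the microphone.
keywords = Stream()

# Action: light-control stub (the real system drives LIFX over the LAN).
actions: list[str] = []
def blink_lights(_: Any) -> None:
    actions.append("blink")

# Command: connect the "drink tea" keyword to the light action.
keywords.filter(lambda kw: kw == "drink tea").subscribe(blink_lights)

keywords.emit("hello")      # no matching command, nothing happens
keywords.emit("drink tea")  # triggers the light action
print(actions)  # -> ['blink']
```

The command holds no hidden state of its own: it is just a declared pipeline from an event stream to an action, which is what makes the commands easy to add or change independently.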
<h2 id="keywords-recognition">Keywords Recognition</h2>
<p>I used <a href="https://alphacephei.com/vosk/">Vosk</a> as the offline speech recognition model, because it is small enough to run on a Raspberry Pi.</p>
<p>Out of the box, the model&rsquo;s accuracy was not good enough for this task, since it is designed as a general speech-to-text model, not for recognising specific keywords.
I customised the vocabulary list so the model only selects tokens that appear in the keyword list.
It&rsquo;s also important to include <code>[unk]</code> in the list, so the model maps unknown words to it instead of forcing them onto a keyword.</p>
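<p>In the Vosk Python API, such a restricted vocabulary is passed to the recogniser as a JSON-encoded list of phrases. The sketch below only builds that grammar string (the keyword list is illustrative); loading a model is omitted.</p>

```python
import json

# Keywords the installation reacts to, plus [unk] so any other speech
# is reported as unknown instead of being forced onto a keyword.
KEYWORDS = ["drink tea", "philosopher"]  # illustrative list
grammar = json.dumps(KEYWORDS + ["[unk]"])

# The grammar is then given to the recogniser when constructing it, e.g.:
#   rec = vosk.KaldiRecognizer(model, 16000, grammar)
print(grammar)  # -> ["drink tea", "philosopher", "[unk]"]
```
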
<h2 id="synchan-integration">Synchan Integration</h2>
<p>The video playback system is <a href="/en/projects/synchan">Synchan</a>, a multichannel, multidevice synchronised video player.
It allows control via HTTP requests, and it broadcasts the current time code to every client via WebSocket.
The time code is parsed as an Observable stream and used to perform actions keyed to the video&rsquo;s time code.</p>
<p>For example, at the beginning of the video, the system turns on a light in the exhibition at the same moment a light is turned on in the video.
And for the &ldquo;Drink Tea&rdquo; command, it seeks the video back to 00:25, where the performer asks &ldquo;Would you like some tea?&rdquo;, then returns to the original playhead after 10 seconds.</p>
<h2 id="tech-stack">Tech Stack</h2>
<ul>
<li>Python</li>
<li><a href="https://rxpy.readthedocs.io/en/latest/">RxPy</a></li>
<li><a href="https://alphacephei.com/vosk/">Vosk</a></li>
<li><a href="https://python-socketio.readthedocs.io/en/latest/index.html">Socket.IO</a></li>
<li><a href="https://github.com/mclarkk/lifxlan">lifxlan</a>: Smart Light Control in LAN</li>
</ul>
<h2 id="gallery">Gallery</h2>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-1.webp"></p>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-11.webp"></p>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-14-1994.webp"></p>
<h2 id="want-to-try">Want to Try?</h2>
<p>Currently available by invitation only. For inquiries, please contact <a href="mailto:wancat@wancat.cc">wancat@wancat.cc</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Divisignal</title>
      <link>https://wancat.cc/en/projects/divisignal/</link>
      <pubDate>Sat, 01 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://wancat.cc/en/projects/divisignal/</guid>
      <description>&lt;h2 id=&#34;divisignal-stock-traffic-light&#34;&gt;&lt;a href=&#34;https://divi-signal.pages.dev/&#34;&gt;DiviSignal Stock Traffic Light&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;2025&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DiviSignal screenshot&#34; loading=&#34;lazy&#34; src=&#34;../../projects/divisignal/divisignal.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Stock traffic light analysis tool. Supports dividend yield calculations, import/export of watchlists, and stock filtering by yield.&lt;/p&gt;
&lt;p&gt;Stack: TypeScript, React, Redux, Cheerio, Cloudflare Worker&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="divisignal-stock-traffic-light"><a href="https://divi-signal.pages.dev/">DiviSignal Stock Traffic Light</a></h2>
<p>2025</p>
<p><img alt="DiviSignal screenshot" loading="lazy" src="/projects/divisignal/divisignal.png"></p>
<p>Stock traffic light analysis tool. Supports dividend yield calculations, import/export of watchlists, and stock filtering by yield.</p>
<p>Stack: TypeScript, React, Redux, Cheerio, Cloudflare Worker</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
