<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>My Projects on WANcatServer</title>
    <link>https://wancat.cc/projects/</link>
    <description>Recent content in My Projects on WANcatServer</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en</language>
    <lastBuildDate>Thu, 09 Apr 2026 17:11:38 +1000</lastBuildDate>
    <atom:link href="https://wancat.cc/projects/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Linux Odyssey</title>
      <link>https://wancat.cc/projects/linuxodyssey/</link>
      <pubDate>Sat, 01 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://wancat.cc/projects/linuxodyssey/</guid>
      <description>&lt;h2 id=&#34;linux-odyssey&#34;&gt;&lt;a href=&#34;https://linuxodyssey.xyz&#34;&gt;Linux Odyssey&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;2023 - 2024&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Linux Odyssey screenshot&#34; loading=&#34;lazy&#34; src=&#34;../projects/linuxodyssey/linuxodyssey.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;My graduation project: an interactive terminal tutorial website offering guided lessons, a visual file tree, and error-message hints.
Each lesson spins up a container on the server for the user to work in.&lt;/p&gt;
&lt;p&gt;Stack: TypeScript, Vue, Express, WebSocket, Docker in Docker&lt;br&gt;
License: GPL&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/linux-odyssey/linux-odyssey&#34;&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Read about our development journey: &lt;a href=&#34;../post/linux-odyssey/&#34;&gt;Linux Odyssey: Our Journey&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="linux-odyssey"><a href="https://linuxodyssey.xyz">Linux Odyssey</a></h2>
<p>2023 - 2024</p>
<p><img alt="Linux Odyssey screenshot" loading="lazy" src="/projects/linuxodyssey/linuxodyssey.png"></p>
<p>My graduation project: an interactive terminal tutorial website offering guided lessons, a visual file tree, and error-message hints.
Each lesson spins up a container on the server for the user to work in.</p>
<p>Stack: TypeScript, Vue, Express, WebSocket, Docker in Docker<br>
License: GPL</p>
<p><a href="https://github.com/linux-odyssey/linux-odyssey">GitHub</a></p>
<p>Read about our development journey: <a href="/post/linux-odyssey/">Linux Odyssey: Our Journey</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Synchan: Multi-Channel Cross-Device Synchronised Video Playback System</title>
      <link>https://wancat.cc/projects/synchan/</link>
      <pubDate>Tue, 07 Apr 2026 17:26:00 +1000</pubDate>
      <guid>https://wancat.cc/projects/synchan/</guid>
      <description>&lt;p&gt;2024 - 2025&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;../projects/synchan/synchan.jpg&#34;&gt;
&lt;a href=&#34;https://peiyao.run/2024-the-dual-double-channel/&#34;&gt;Image: Lin Pei-Yao, 三角關係 (Triangle Relationship) / three-channel looped video / 03’39” / 2021&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A multi-channel synchronised audio/video playback tool that works across devices and platforms, with mobile support.&lt;/p&gt;
&lt;p&gt;Used in &lt;a href=&#34;https://peiyao.run/2024-the-dual-double-channel/&#34;&gt;Lin Pei-Yao’s solo exhibition 雙頻道 The Dual Double-Channel (2024)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Stack: TypeScript, React, tRPC, WebSocket&lt;/p&gt;
&lt;p&gt;Currently available by invitation only. For inquiries, please email &lt;a href=&#34;mailto:wancat@wancat.cc&#34;&gt;wancat@wancat.cc&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>2024 - 2025</p>
<p><img loading="lazy" src="/projects/synchan/synchan.jpg">
<a href="https://peiyao.run/2024-the-dual-double-channel/">Image: Lin Pei-Yao, 三角關係 (Triangle Relationship) / three-channel looped video / 03’39” / 2021</a></p>
<p>A multi-channel synchronised audio/video playback tool that works across devices and platforms, with mobile support.</p>
<p>Used in <a href="https://peiyao.run/2024-the-dual-double-channel/">Lin Pei-Yao’s solo exhibition 雙頻道 The Dual Double-Channel (2024)</a></p>
<p>Stack: TypeScript, React, tRPC, WebSocket</p>
<p>Currently available by invitation only. For inquiries, please email <a href="mailto:wancat@wancat.cc">wancat@wancat.cc</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Actionwire</title>
      <link>https://wancat.cc/projects/actionwire/</link>
      <pubDate>Thu, 09 Apr 2026 17:11:38 +1000</pubDate>
      <guid>https://wancat.cc/projects/actionwire/</guid>
      <description>Reactive automation system linking offline speech recognition, smart lighting, and video control for live installations.</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>This project was developed specifically for <a href="https://yao-bite.github.io/exhibitions/2025-who-is-the-speaker/#inter-view-with-a-philosopher">Lin Pei-Yao Solo Exhibition: Who is the speaker? (2025)</a>.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/mKmAC1MVB6E?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>The exhibition required recognising selected spoken keywords and performing specific actions in response, including smart-light control and video playhead control.
Speech recognition runs in real time, deployed locally on a Raspberry Pi.</p>
<p>For example, the command <em>&ldquo;Drink Tea&rdquo;</em> blinks one set of lights, seeks the video to a specific time (00:25), and jumps back to the original position after 10 seconds.</p>
<p>Different voice commands trigger different actions, and some of them may depend on each other.</p>
<p>To keep the concurrent events manageable, I used the <a href="https://en.wikipedia.org/wiki/Reactive_programming">Reactive Programming</a> design pattern via <a href="https://rxpy.readthedocs.io/en/latest/">RxPY</a>.</p>
<h2 id="structure">Structure</h2>
<p>The program is divided into three parts: Events, Commands, and Actions.</p>
<p>Events are the inputs to the system, including the microphone and WebSocket input.
Each is transformed into an Observable stream.</p>
<p>Actions are the output behaviours, including light control and video playhead control.</p>
<p>Commands are the business logic: they freely connect, compose, and mix the inputs to produce one output, and can easily be customised to user needs. A sketch of this wiring follows the list below.</p>
<ul>
<li>Events (inputs)
<ul>
<li>Microphone -&gt; Vosk -&gt; Keyword extraction</li>
<li>WebSocket -&gt; Current timecode</li>
</ul>
</li>
<li>Commands
<ul>
<li>Define the pipeline logic for every command</li>
<li>Written in Reactive Programming styles</li>
<li>No hidden state management. Easy to update</li>
</ul>
</li>
<li>Actions (outputs)
<ul>
<li>Light control -&gt; LIFX LAN API</li>
<li>Video playhead control -&gt; HTTP request</li>
</ul>
</li>
</ul>
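<p>As a minimal sketch of that wiring, assuming a keyword <code>Subject</code> fed by the recogniser and RxPY v4 imports; the names, the light behaviour, and the five-second throttle are illustrative, not the production code:</p>
<pre><code class="language-python"># Event -> Command -> Action, sketched in RxPY v4 style.
from reactivex import operators as ops
from reactivex.subject import Subject
from lifxlan import LifxLAN

keywords = Subject()             # Event: keywords emitted by the recogniser
lights = LifxLAN().get_lights()  # Action target: LIFX bulbs discovered on the LAN

def blink(_):
    # Action: power on every discovered light (a stand-in for the real blink).
    for light in lights:
        light.set_power(True)

# Command: react only to "drink tea", ignoring repeats within 5 seconds.
keywords.pipe(
    ops.filter(lambda word: word == "drink tea"),
    ops.throttle_first(5.0),
).subscribe(blink)

keywords.on_next("drink tea")    # simulate a recognised keyword
</code></pre>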
<h2 id="keywords-recognition">Keywords Recognition</h2>
<p>I used <a href="https://alphacephei.com/vosk/">Vosk</a> as the offline speech recognition model, because it is small enough to run on a Raspberry Pi.</p>
<p>Out of the box the model&rsquo;s accuracy was poor: it is designed as a general speech-to-text model, not for recognising specific keywords.
I customised the vocabulary list so the model only selects tokens that appear in the keyword list.
It&rsquo;s also important to include <code>[unk]</code> in the list, so that out-of-vocabulary speech is labelled <code>[unk]</code> instead of being forced onto a keyword.</p>
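<p>A sketch of that vocabulary trick: Vosk&rsquo;s <code>KaldiRecognizer</code> accepts an optional JSON phrase list as its third argument. The model path and keyword list here are illustrative:</p>
<pre><code class="language-python">import json
from vosk import Model, KaldiRecognizer

model = Model("model")            # a small model that fits on a Raspberry Pi
phrases = ["drink tea", "[unk]"]  # [unk] absorbs everything out of vocabulary

# Passing a JSON phrase list restricts recognition to these tokens only.
rec = KaldiRecognizer(model, 16000, json.dumps(phrases))

def on_audio_chunk(chunk: bytes):
    # Feed 16 kHz mono PCM chunks from the microphone.
    if rec.AcceptWaveform(chunk):
        text = json.loads(rec.Result()).get("text", "")
        if text and text != "[unk]":
            print("keyword:", text)  # push into the keyword event stream
</code></pre>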
<h2 id="synchan-integration">Synchan Integration</h2>
<p>The video playback system is <a href="/en/projects/synchan">Synchan</a>, a multi-channel, multi-device synchronised video playback system.
It can be controlled via HTTP requests, and it pushes the current timecode to every client via WebSocket.
The timecode is parsed into an Observable stream and used to trigger actions at specific points in the video.</p>
<p>For example, at the beginning of the video, the lights in the exhibition turn on at the same moment a light is turned on in the video.
And for the command &ldquo;Drink Tea&rdquo;, it seeks the video back to 00:25, where the performer asks &ldquo;Would you like some tea?&rdquo;, then returns to the original playhead after 10 seconds.</p>
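<p>A sketch of the &ldquo;Drink Tea&rdquo; behaviour under those constraints: the latest timecode is held in a <code>BehaviorSubject</code>, and seeking is an HTTP request. The endpoint URL and event names are illustrative, not Synchan&rsquo;s actual API:</p>
<pre><code class="language-python">import requests
from reactivex import operators as ops, timer
from reactivex.subject import BehaviorSubject, Subject

keywords = Subject()              # keyword stream from the recogniser (above)
timecodes = BehaviorSubject(0.0)  # latest playhead pushed over WebSocket

def seek(seconds: float):
    # Illustrative endpoint; Synchan is controlled via HTTP requests.
    requests.post("http://localhost:3000/seek", json={"time": seconds})

def drink_tea(_):
    origin = timecodes.value      # remember the current playhead
    seek(25.0)                    # jump to 00:25
    timer(10.0).subscribe(lambda _: seek(origin))  # return after 10 seconds

keywords.pipe(ops.filter(lambda w: w == "drink tea")).subscribe(drink_tea)
</code></pre>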
<h2 id="tech-stack">Tech Stack</h2>
<ul>
<li>Python</li>
<li><a href="https://rxpy.readthedocs.io/en/latest/">RxPY</a></li>
<li><a href="https://alphacephei.com/vosk/">Vosk</a></li>
<li><a href="https://python-socketio.readthedocs.io/en/latest/index.html">Socket.IO</a></li>
<li><a href="https://github.com/mclarkk/lifxlan">lifxlan</a>: smart light control over the LAN</li>
</ul>
<h2 id="gallery">Gallery</h2>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-1.webp"></p>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-11.webp"></p>
<p><img loading="lazy" src="/projects/actionwire/2025-ZoneArt-14-1994.webp"></p>
<h2 id="want-to-try">Want to Try?</h2>
<p>Currently available by invitation only. For inquiries, please contact <a href="mailto:wancat@wancat.cc">wancat@wancat.cc</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Divisignal</title>
      <link>https://wancat.cc/projects/divisignal/</link>
      <pubDate>Tue, 07 Apr 2026 17:28:49 +1000</pubDate>
      <guid>https://wancat.cc/projects/divisignal/</guid>
      <description>&lt;h2 id=&#34;divisignal-股票紅綠燈&#34;&gt;&lt;a href=&#34;https://divi-signal.pages.dev/&#34;&gt;DiviSignal Stock Traffic Light&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;2025&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DiviSignal screenshot&#34; loading=&#34;lazy&#34; src=&#34;../projects/divisignal/divisignal.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;A stock traffic-light analysis tool. It supports dividend-return calculation, importing and exporting watchlists, and screening stocks by dividend yield.&lt;/p&gt;
&lt;p&gt;Stack: TypeScript, React, Redux, Cheerio, Cloudflare Worker&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="divisignal-股票紅綠燈"><a href="https://divi-signal.pages.dev/">DiviSignal 股票紅綠燈</a></h2>
<p>2025</p>
<p><img alt="DiviSignal screenshot" loading="lazy" src="/projects/divisignal/divisignal.png"></p>
<p>A stock traffic-light analysis tool. It supports dividend-return calculation, importing and exporting watchlists, and screening stocks by dividend yield.</p>
<p>Stack: TypeScript, React, Redux, Cheerio, Cloudflare Worker</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
