Google Summer of Code Internship
Working with scikit-learn. This is my development log.
National Tsing Hua University: Bachelor's Degree
Major in Computer Science.
I am currently a final-year Computer Science undergraduate at NTHU, working with Min Sun. My research focuses on Deep Learning and its applications in Computer Vision. In particular, I am interested in recent advances in Deep Reinforcement Learning.
In addition to my research experience, I build interesting side projects such as DeepLearningFlappyBird, which has been featured on Hacker News. I am also an iOS developer and received an Apple WWDC 2015 scholarship.
See my CV
for more details.
Tactics for Adversarial Attack on Deep Reinforcement Learning Agents
We introduce two tactics to attack agents trained by deep reinforcement learning algorithms using adversarial examples:
Strategically-timed attack: the adversary aims to minimize the agent's reward by attacking the agent at only a small subset of time steps in an episode.
Enchanting attack: the adversary aims at luring the agent to a designated target state.
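The timing idea behind the first tactic can be illustrated with a toy sketch. This is not the paper's implementation: the policy, the preference-gap criterion, and the threshold value below are all hypothetical, chosen only to show how an adversary might restrict attacks to a few decisive time steps.

```python
# Toy sketch of a strategically-timed attack schedule (hypothetical
# policy and threshold; not the method from the paper).

def act_probs(obs):
    # Hypothetical agent policy over two actions: the agent has a
    # strong preference on odd time steps, a weak one on even steps.
    p = 0.5 + 0.4 * (obs % 2)
    return [p, 1 - p]

def should_attack(probs, threshold=0.25):
    # Attack only when the action-preference gap is large, i.e. at
    # time steps where perturbing the observation matters most.
    return max(probs) - min(probs) > threshold

# Only a small subset of the 10-step episode gets attacked.
attacked = [t for t in range(10) if should_attack(act_probs(t))]
print(attacked)  # [1, 3, 5, 7, 9]
```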
Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, Min Sun
Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos
Watching a 360° sports video requires a viewer to continuously
select a viewing angle, either through a sequence of mouse clicks or head movements. To relieve the viewer
from this “360 piloting” task, we propose “deep 360 pilot”
– a deep learning-based agent for piloting through 360°
sports videos automatically. At each frame, the agent observes
a panoramic image and has knowledge of previously
selected viewing angles. The task of the agent is to shift the current viewing angle
(i.e., action) to the next preferred one (i.e., goal).
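The angle-shifting task can be sketched with a toy loop. This is not the paper's learned model; `shift_angle` and the 30°-per-frame speed limit are illustrative assumptions, showing only the wrap-around geometry of steering a 360° viewing angle toward a goal.

```python
# Toy sketch (not the paper's deep agent): shift the current viewing
# angle toward a preferred target, wrapping around at 360 degrees.

def shift_angle(current, target, max_step=30.0):
    # Signed shortest angular difference in degrees, in (-180, 180].
    diff = (target - current + 180.0) % 360.0 - 180.0
    # Clamp the per-frame action to a hypothetical rotation speed.
    step = max(-max_step, min(max_step, diff))
    return (current + step) % 360.0

angle = 350.0
for _ in range(3):          # three frames of piloting
    angle = shift_angle(angle, 40.0)
print(angle)  # 40.0 — crosses the 360°/0° boundary, not the long way
```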
(* indicates equal contribution)
Hou-Ning Hu*, Yen-Chen Lin*, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, Min Sun
CVPR 2017 (Oral)
Tell Me Where to Look: Investigating Ways for Assisting Focus in 360° Video
One challenge of watching 360° videos is continuously focusing and re-focusing on intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot and Visual Guidance. We conducted an experiment comparing users' video-watching experience and sickness under these techniques, and collected their qualitative feedback. Our results provide design implications for better 360° video focus assistance.
Yen-Chen Lin, Yung-Ju Chang, Hou-Ning Hu, Hsien-Tzu Cheng, Chi-Wen Huang, Min Sun
Semantic Highlight Retrieval
Finding highlights relevant to a text query in unedited videos has become increasingly important due to their unprecedented growth. We refer to this task as semantic highlight retrieval and propose a query-dependent video representation for retrieving a variety of highlights. Our method consists of two parts: (1) “viralets”, a mid-level representation bridging the visual and semantic spaces; (2) a novel Semantic Modulation (SM) procedure that makes viralets query-dependent (referred to as SM viralets).
Kuo-Hao Zeng, Yen-Chen Lin, Ali Farhadi, Min Sun
A curated list of awesome watchOS frameworks, libraries, and sample apps, created to help people get the hang of Apple Watch programming.
Google Summer of Code
In 2016, I participated in the Google Summer of Code program and worked with the scikit-learn community. My mission was to make the implementations of several algorithms, such as Stochastic Gradient Descent and Coordinate Descent, support Cython fused types and therefore reduce memory waste.
Real Forest is a feature I developed for the well-known iOS app Forest: Stay focused, be present
when I was an intern there. This feature lets users plant a real tree using the coins they earn in the app. So far, it has helped plant 8,000+ real trees in India and Zambia.