Volunteer Experience
Mentor for Technology Management Program
UC Santa Barbara · 2 years
Mentor for startup teams in the Technology Management Program and New Venture Competition.
Mentor
Startup Weekend
Present · 13 years 5 months
Economic Empowerment
Mentor for startup teams, giving advice on technology and pitching at Startup Weekend Santa Barbara.
Volunteer
Santa Barbara County Animal Shelter: Dogs
Present · 12 years
Animal Welfare
Publications
Calibrating a Wide-Area Camera Network with Non-Overlapping Views using Mobile Devices
ACM Transactions on Sensor Networks
In a wide-area camera network, cameras are often placed such that their views do not overlap. Collaborative tasks such as tracking and activity analysis still require discovering the network topology including the extrinsic calibration of the cameras. This work addresses the problem of calibrating a fixed camera in a wide-area camera network in a global coordinate system so that the results can be shared across calibrations. We achieve this by using commonly available mobile devices such as smartphones. At least one mobile device takes images that overlap with a fixed camera's view and records the GPS position and 3D orientation of the device when an image is captured. These sensor measurements (including the image, GPS position, and device orientation) are fused in order to calibrate the fixed camera. This article derives a novel maximum likelihood estimation formulation for finding the most probable location and orientation of a fixed camera. This formulation is solved in a distributed manner using a consensus algorithm. We evaluate the efficacy of the proposed methodology with several simulated and real-world datasets.
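The distributed consensus step can be pictured with a minimal sketch (an invented toy, not the article's maximum likelihood formulation): each node holds a noisy estimate of the fixed camera's position and repeatedly averages with its graph neighbors until all nodes agree. The estimates, topology, and step size below are made up.

```python
import numpy as np

def consensus_average(estimates, neighbors, steps=50, eps=0.2):
    """Average consensus: each node nudges its state toward its
    neighbors' states; for a connected graph and a small enough
    step size, all states converge to the global mean."""
    x = np.array(estimates, dtype=float)
    for _ in range(steps):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# Three devices with noisy 2-D estimates of the camera position,
# connected in a line topology: 0 -- 1 -- 2.
est = [[10.2, 5.1], [9.8, 4.9], [10.0, 5.0]]
nbr = {0: [1], 1: [0, 2], 2: [1]}
fused = consensus_average(est, nbr)  # every row approaches the mean (10.0, 5.0)
```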
Camera Alignment using Trajectory Intersections in Unsynchronized Videos
IEEE International Conference on Computer Vision
This paper addresses the novel and challenging problem of aligning camera views that are unsynchronized by low and/or variable frame rates using object trajectories. Unlike existing trajectory-based alignment methods, our method does not require frame-to-frame synchronization. Instead, we propose using the intersections of corresponding object trajectories to match views. To find these intersections, we introduce a novel trajectory matching algorithm based on matching Spatio-Temporal Context Graphs (STCGs). These graphs represent the distances between trajectories in time and space within a view, and are matched to an STCG from another view to find the corresponding trajectories. To the best of our knowledge, this is one of the first attempts to align views that are unsynchronized with variable frame rates. The results on simulated and real-world datasets show trajectory intersections are a viable feature for camera alignment, and that the trajectory matching method performs well in real-world scenarios.
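The intersection feature itself is easy to picture: a point where two object paths cross is a landmark that both views can detect independently. A toy sketch (not the paper's STCG matching; coordinates invented) that finds crossings of two 2-D polyline trajectories:

```python
def seg_intersect(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel or degenerate
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def trajectory_intersections(traj_a, traj_b):
    """All crossing points between two polyline trajectories."""
    pts = []
    for a1, a2 in zip(traj_a, traj_a[1:]):
        for b1, b2 in zip(traj_b, traj_b[1:]):
            p = seg_intersect(a1, a2, b1, b2)
            if p is not None:
                pts.append(p)
    return pts
```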
Design and Calibration of Wide-Area Camera Networks
University of California, Santa Barbara
PhD Thesis: This dissertation addresses the challenges of designing a wide-area camera network and finding the spatial and temporal relationships between the cameras. In particular, we examine the challenges of calibrating non-overlapping cameras, aligning unsynchronized cameras with variable frame rates, and synchronizing cameras with variable frame rates.
Spatial-Temporal Understanding of Urban Scenes through Large Camera Network
ACM International Conference on Multimedia, Workshop on Multimodal Pervasive Video Analysis
Outdoor surveillance cameras have become prevalent as part of the urban infrastructure and provide a rich data source for studying urban dynamics. In this work, we provide a spatial-temporal analysis of 8 weeks of video data collected from the large outdoor camera network on the UCSB campus, which consists of 27 cameras. We first apply a simple vision algorithm to extract crowdedness information from the scene. We then explore the relationship between the traffic patterns observed from the cameras and activities in the nearby area using additional knowledge such as the campus class schedule. Finally, we investigate the potential of discovering aggregated human movement patterns by assuming a simple probabilistic model. Experiments show promising results for the proposed method.
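One crude stand-in for the "simple vision algorithm" (the paper's exact method is not reproduced here) is background differencing: crowdedness approximated as the fraction of pixels that deviate from an empty-scene reference. The frames below are synthetic:

```python
import numpy as np

def crowdedness(frame, background, thresh=25):
    """Fraction of pixels differing from the background by more than
    `thresh` grey levels -- a crude proxy for scene crowdedness."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return float((diff > thresh).mean())

# Synthetic frames: an empty scene vs. one with a bright 20x20 blob.
bg = np.zeros((100, 100), dtype=np.uint8)
busy = bg.copy()
busy[40:60, 40:60] = 200  # a stand-in for a person in view
```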
Design and Implementation of a Wide Area, Large-Scale Camera Network
IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Workshop on Camera Networks
We describe a wide-area camera network in a campus setting, SCALLOPSNet (Scalable Large Optical Sensor Network). Its roughly 100 stationary cameras cover an expansive area that can be divided into three distinct regions: inside a building, along urban paths, and in a remote natural reserve. Some of these regions lack connections for power and communications and therefore necessitate wireless, battery-powered camera nodes. In our exploration of available solutions, we found existing smart cameras to be insufficient for this task, and instead designed our own battery-powered camera nodes that communicate using 802.11b. The camera network uses the Internet Protocol on either wired or wireless networks to communicate with our central cluster, which runs cluster and cloud computing infrastructure. Frameworks like Apache Hadoop are well suited for large distributed and parallel tasks such as many computer vision algorithms. We discuss the design and implementation details of this network, together with the challenges faced in deploying such a large-scale network on a research campus. We plan to make the datasets available to researchers in the computer vision community in the near future.
A Lightweight Multiview Tracked Person Descriptor for Camera Sensor Networks
Proc. IEEE International Conference on Image Processing
We present a simple multiple-view 3D model for object tracking and identification in camera networks. Our model is composed of 8 distinct views in the interval [0, 7*PI/4]. Each of the 8 parts describes the person's appearance from that particular viewpoint. The model contains both color and structure information for each view; these are assembled into a single entity, meant as a simple, lightweight object representation for use in camera sensor networks. It is versatile in that it can be gradually assembled on-line while a person is tracked. The model's ease of use and effectiveness for identification in surveillance video are demonstrated.
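The 8-view layout can be sketched as a simple angle-binning structure (an illustrative guess at the bookkeeping, not the published descriptor; in practice the appearance payloads would be color and structure features):

```python
import math

N_VIEWS = 8  # viewpoints at 0, pi/4, ..., 7*pi/4

def view_index(angle):
    """Map a viewing angle in radians to the nearest of the 8 bins."""
    return int(round(angle / (math.pi / 4))) % N_VIEWS

class MultiviewDescriptor:
    """One appearance slot per view, filled in gradually as the
    tracked person is seen from new angles."""
    def __init__(self):
        self.views = [None] * N_VIEWS

    def update(self, angle, appearance):
        self.views[view_index(angle)] = appearance

    def coverage(self):
        """Fraction of the 8 views observed so far."""
        return sum(v is not None for v in self.views) / N_VIEWS

d = MultiviewDescriptor()
d.update(0.0, "front-view features")
d.update(math.pi, "back-view features")
```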
VISNET: A Distributed Vision Testbed
ACM/IEEE International Conference on Distributed Smart Cameras, 2008
We introduce UCSB's Visual Sensor Network (VISNET) and discuss current research being conducted with the system. VISNET is a ten-node experimental camera network at UCSB used for various vision-related research. The mission of VISNET is to provide an easy-to-use multi-node camera network to the vision research community at UCSB. This paper briefly discusses design and setup considerations before discussing current research. Current research includes operation visualization, camera network calibration, tracked object modeling, and multiple-object / multiple-camera tracking.
Feature fusion and redundancy pruning for rush video summarization
International Workshop on TRECVID Video Summarization
This paper presents a video summarization technique for rushes that employs high-level feature fusion to identify segments for inclusion. It aims to capture distinct video events using a variety of features: k-means based weighting, speech, camera motion, significant differences in HSV colorspace, and a dynamic time warping (DTW) based feature that suppresses repeated scenes. The feature functions are used to drive a weighted k-means based clustering to identify visually distinct, important segments that constitute the final summary. The optimal weights corresponding to the individual features are obtained using a gradient descent algorithm that maximizes the recall of ground-truth events from representative training videos. Analysis reveals a lengthy computation time but high-quality results (60% average recall over 42 test videos) based on manually judged inclusion of distinct shots. The summaries were judged relatively easy to view and had an average amount of redundancy.
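The weighted clustering idea can be sketched in miniature (1-D features and unit weights here, whereas the real system clusters high-dimensional video-segment features with learned weights; all values below are invented):

```python
import random

def weighted_kmeans(points, weights, k, iters=20, seed=0):
    """k-means on 1-D features where each point's pull on its
    centroid is scaled by a per-point importance weight."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p, w in zip(points, weights):
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append((p, w))
        # Move each centroid to the weighted mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                total = sum(w for _, w in cl)
                centroids[i] = sum(p * w for p, w in cl) / total
    return sorted(centroids)

# Per-segment feature scores falling into two obvious groups.
segs = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
cents = weighted_kmeans(segs, [1.0] * 6, k=2)
```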
Courses
Udacity: How to Build a Startup, The Lean LaunchPad
EP 245
Projects
IV Adopt-A-Block Tracker
Project from the Google I/O Extended Hackathon at Citrix. Collaborated with a randomly formed team to build a service for IV Adopt-A-Block that tracks streets cleaned by volunteers through an iOS and Android app and collects statistics through an admin dashboard interface. Wrote the iOS app in Swift.
Won First Place at the Hackathon.
Triangulator
Android app to triangulate a GPS position by pointing multiple smartphones at it, with an additional option that uses computer vision (object re-identification based on color histograms) to improve the result.
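The geometry behind the app can be sketched as a least-squares intersection of bearing rays (a simplified flat-earth 2-D model, not the app's actual implementation; the positions and bearings below are invented):

```python
import math

def triangulate(observers):
    """Least-squares intersection of bearing rays.

    observers: list of ((x, y), bearing) pairs, bearing in radians
    from the +x axis. Minimises the summed squared perpendicular
    distance from the solution point to every ray's line.
    """
    # Each ray contributes the constraint n . p = n . o, where n is
    # the unit normal to the ray direction and o the observer position.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (ox, oy), th in observers:
        nx, ny = -math.sin(th), math.cos(th)  # normal to (cos th, sin th)
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        d = nx * ox + ny * oy
        b1 += nx * d; b2 += ny * d
    det = a11 * a22 - a12 * a12  # singular if all rays are parallel
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two phones at known positions, both sighting the point (5, 5).
obs = [((0.0, 0.0), math.atan2(5, 5)),    # bearing 45 degrees
       ((10.0, 0.0), math.atan2(5, -5))]  # bearing 135 degrees
x, y = triangulate(obs)
```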
Honors & Awards
First Place
Google I/O Extended Hackathon at Citrix
Award given for best pitch and demonstration at the Hackathon.
Built a service for IV Adopt-A-Block, a non-profit that picked up 83,133 pounds of trash in 2013-14. The service tracks streets cleaned by volunteers through an iOS and Android app and collects statistics through an admin dashboard interface.
DEMO God
DEMO Conference
One of the best pitches at the DEMO Conference out of 75 companies.
Pitch for Birdeez, a friendly mobile app for bird watchers.
Best Mobile App and Second Place Team
Startup Weekend Santa Barbara
Award for Best Pitch and Demonstration for a Mobile App.
Pitch for Artful, a mobile app to turn the world into an art gallery.
Best Market Pull
UCSB New Venture Competition
Best Pitch in Market Pull category.
Pitch for Birdeez, a friendly mobile app for bird watchers.
Languages
English
Native or bilingual proficiency
Chinese
Limited working proficiency