Samuel J. Yang joined Google Research in 2016. Prior to that, he completed a Ph.D. in Electrical Engineering at Stanford University, where his research in the labs of Karl Deisseroth and Gordon Wetzstein focused on computational imaging and display: the co-design and optimization of optics hardware and data processing algorithms. He was supported by an NSF Graduate Research Fellowship and an NDSEG Graduate Fellowship.
In 2015, at Google Research, I applied deep learning methods to images as a Software Engineering Intern.
In 2014, at Google [x], I worked with optical physicists to design and implement imaging instrumentation hardware as an intern.
In 2013, I received an M.S. in Electrical Engineering from Stanford, studying machine learning, image processing, and computer vision.
I volunteer for Science Olympiad, having learned to program and built robots myself long ago. I also enjoy photography, and was a teaching assistant for Stanford's Digital Photography class.
I have also been involved with several interesting team efforts, including designing and building an autonomous robot (second place at Robogames 2009), developing a functional license-plate-reading iPhone app during a 24-hour hackathon, designing and constructing a net-zero solar-powered smart home for the Solar Decathlon competition, and winning the $10,000 first place prize in an early-stage technology commercialization plan competition.
Contact: samuely (at) alumni (dot) stanford (dot) edu
March 2018: Updated with blog post and 4 recent publications.
February 21, 2016: Added two computer vision/machine learning projects, real-time tail/eye tracking for zebrafish virtual reality and depth-assisted portrait perspective correction.
December 2015: Our light sheet microscopy paper is out at Cell.
December 2015: My paper is out at Optics Express.