Arrays of microlenses allow us to encode orientation through different appearances.


Using microlens arrays and lenticular arrays, we can create fiducial markers and calibration objects that explicitly encode orientation through different appearances. These arrays are plastic sheets made up of many small 2D and 1D lenses. The lenses magnify small printed patterns on the back of the array to create an appearance seen from the front of the array. By attaching different textures and patterns under the arrays, we can therefore design structured appearances. As an example, below is a lenticular array that encodes 60 degrees of rotation at 1-degree resolvable precision. Play with the slider to see the different appearances:
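As a rough sketch of the encoding idea, one could assign each resolvable viewing angle its own hue on the backplane, so that an observed color inverts back to an angle. This is a hypothetical mapping for illustration, not the exact backplane pattern used in our prototypes:

```python
import colorsys

# Hypothetical encoding: 60 degrees of viewing angle at 1-degree steps,
# each step mapped to a distinct hue around the color wheel.
ANGULAR_RANGE_DEG = 60
STEP_DEG = 1
NUM_STEPS = ANGULAR_RANGE_DEG // STEP_DEG

def angle_to_color(angle_deg):
    """Map a viewing angle (0-59 degrees) to the RGB color the array shows."""
    step = int(angle_deg // STEP_DEG) % NUM_STEPS
    hue = step / NUM_STEPS  # spread the steps around the hue circle
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

def color_to_angle(rgb):
    """Invert the encoding: recover the viewing angle from an observed color."""
    hue, _, _ = colorsys.rgb_to_hsv(*rgb)
    return round(hue * NUM_STEPS) * STEP_DEG % ANGULAR_RANGE_DEG
```

In practice the backplane pattern must also survive printing gamut limits and lighting variation, which is part of what the papers below address.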



Using this idea, we have derived geometric constraints for inference and shown applications in pose estimation and camera calibration.

Please explore our research by clicking on the conference publications below:


  [pdf] [Citation] [Bibtex]

Appearing in CVPR 2016, this work describes how to use lenticular arrays for single-image focal length estimation.

Single Image Camera Calibration with Lenticular Arrays for Augmented Reality

Ian Schillebeeckx and Robert Pless


Top left, counter-clockwise: 1) A calibration object composed of 3 lenticular arrays is used to estimate the pose and focal length of a camera. 2) For single frames of a video, we can then estimate the correct perspective under variable zoom, instead of 3) the poor perspective that results from assuming a static focal length. 4) Augmented reality applications are therefore more visually accurate.





Abstract

We consider the problem of camera pose estimation for a scenario where the camera may have continuous and unknown changes in its focal length. Understanding frame by frame changes in camera focal length is vital to accurately estimating camera pose and vital to accurately rendering virtual objects in a scene with the correct perspective. However, most approaches to camera calibration require geometric constraints from many frames or the observation of a 3D calibration object -- both of which may not be feasible in augmented reality settings. This paper introduces a calibration object based on a flat lenticular array that creates a color coded light-field whose observed color changes depending on the angle from which it is viewed. We derive an approach to estimate the focal length of the camera and the relative pose of an object from a single image. We characterize the performance of camera calibration across various focal lengths and camera models, and we demonstrate the advantages of the focal length estimation in rendering a virtual object in a video with constant zooming.
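The intuition for why a lenticular observation constrains focal length can be sketched in one dimension. This is a toy pinhole-model illustration, not the paper's full joint pose-and-focal-length optimization:

```python
import math

def focal_from_angle(u_px, theta_rad):
    """Toy 1-D illustration: under a pinhole model, a ray entering at angle
    theta from the optical axis lands at pixel offset u = f * tan(theta)
    from the principal point.  If a lenticular color observation reveals
    theta, then a single pixel measurement fixes the focal length f."""
    return u_px / math.tan(theta_rad)
```

The actual method must also account for the unknown pose of the calibration object, which is why the paper estimates pose and focal length together.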


  [pdf] [Citation] [Bibtex]

Appearing in 3DV 2015, this work describes how to use lenticular arrays to create fiducial markers for pose estimation.

The Geometry of Colorful, Lenticular Fiducial Markers

Ian Schillebeeckx, Joshua Little, Brendan Kelly, and Robert Pless


By attaching two small lenticular arrays, called chromo-coded markers, onto an object that is otherwise difficult to track, such as a pair of forceps, we can determine an accurate pose for the object from a single image.

Abstract

Understanding the pose of an object is fundamental to a variety of visual tasks, from trajectory estimation of UAVs to object tracking for augmented reality. Fiducial markers are visual targets designed to simplify this process by being easy to detect, recognize, and track. They are often based on features that are partially invariant to lighting, pose and scale. Here we explore the opposite approach and design passive calibration patterns that explicitly change appearance as a function of pose. We propose a new, simple fiducial marker made with a small lenticular array, which changes its apparent color based on the angle at which it is viewed. This allows full six degree-of-freedom pose estimation with just two markers and an optimization that fully decouples the estimate of rotation from translation. We derive the geometric constraints that these fiducial markers provide, and show improved pose estimation performance over standard markers through experiments with a physical prototype for form factors that are not well supported by standard markers (such as long skinny objects). In addition, we experimentally evaluate heuristics and optimizations that give robustness to real-world lighting variations.
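To illustrate the decoupling the abstract mentions, here is a sketch using standard projective geometry (not necessarily the paper's exact optimization): once rotation has been estimated, each observed marker center gives linear constraints on the translation, which can be solved by least squares:

```python
import numpy as np

def estimate_translation(K, R, points_3d, pixels):
    """Given intrinsics K and an already-estimated rotation R (e.g. from
    the lenticular color observations), each observed point with known
    object coordinates p satisfies [x]_x K (R p + t) = 0, where [x]_x is
    the skew-symmetric cross-product matrix of the homogeneous pixel x.
    This is linear in t, so we stack the constraints and solve."""
    A, b = [], []
    for p, (u, v) in zip(points_3d, pixels):
        x = np.array([u, v, 1.0])
        C = np.array([[0.0, -x[2], x[1]],
                      [x[2], 0.0, -x[0]],
                      [-x[1], x[0], 0.0]])  # cross-product matrix [x]_x
        A.append(C @ K)
        b.append(-C @ K @ R @ np.asarray(p))
    t, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return t
```

Because rotation is fixed before this step, translation estimation reduces to a small linear system rather than a joint nonlinear search over all six degrees of freedom.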


  [pdf] [Citation] [Bibtex]

Appearing in ICCP 2015, this work describes how to use lenticular arrays to create light fields that encode orientation by color for correspondence free rotation estimation.

Structured Light Field Design for Correspondence Free Rotation Estimation

Ian Schillebeeckx and Robert Pless


(Left) We create light fields using lenticular arrays which encode direction by color. (Right) With these chromo-coded light fields, we are able to estimate rotation without point correspondences.

Abstract

Many vision and augmented reality applications require knowing the rotation of the camera relative to an object or scene. In this paper we propose to create a structured light field designed explicitly to simplify the estimation of camera rotation. The light field is created using a lenticular sheet with a color coded backplane pattern, creating a light field where the observed color depends on the direction of the light. We show that a picture taken within such a light field gives linear constraints on the K^-1 R matrix that defines the camera calibration and rotation. In this work we derive an optimization that uses these constraints to rapidly estimate rotation, demonstrate a physical prototype, and characterize its sensitivity to errors in the camera focal length and camera color sensitivity.
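As a sketch of how such constraints can be used (a standard orthogonal Procrustes step, not necessarily the paper's exact solver): if the observed color at a pixel identifies the world-frame direction of the incoming light, and the back-projected ray K^-1 x gives the same direction in the camera frame, then the rotation aligning the two sets of directions follows from an SVD, with no point correspondences required:

```python
import numpy as np

def estimate_rotation(world_dirs, camera_dirs):
    """Estimate the rotation R that best aligns world-frame light
    directions (decoded from the observed colors of the chromo-coded
    light field) with camera-frame ray directions (K^-1 x for each
    pixel), via the orthogonal Procrustes / Kabsch solution."""
    world = np.asarray(world_dirs)
    camera = np.asarray(camera_dirs)
    A = camera.T @ world            # 3x3 cross-covariance of direction pairs
    U, _, Vt = np.linalg.svd(A)
    # Force a proper rotation (det = +1) rather than a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

Because each pixel's color labels its own direction, the pairing between world and camera directions comes for free, which is what makes the estimate correspondence free.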