NOVA scienceNOW
Profile: Brothers Chudnovsky
Moving Images

When team Chudnovsky—brothers Gregory and David Chudnovsky and I, their graduate student—took on the project of piecing together some 30 digital scans of three-by-three-foot sections of the Metropolitan Museum of Art's unicorn tapestry, we ran into a problem. The tapestry's millions of fibers had shifted randomly throughout its photo op, and piecing the images back together resulted in a frustrating collection of images that lined up in some places and were misaligned in others. Correcting one misaligned area only threw another area out of whack.

What exactly was the problem? And how did we fix it?

What was the problem?

To answer the first question, suppose you took two pictures from two different positions of an unmoving, unchanged object. If you identified matching feature points—in the case of the unicorn tapestry, say, its horn, the top of its tail, etc.—in each of the two images, a series of so-called "perspective transformations" would put the matching points in exact correspondence. After transformation, the points from one image would lie exactly on top of the points from the other.

Perspective transformations work like this: If you photograph a scene, the visible points in the 3-D scene are mapped to points on the 2-D film. This is a perspective projection. Depending on the camera's location, the rotation of the camera left, right, up, down, and around its center, the focal length of the camera, and other factors, you will get varying perspective projections—points in the scene will appear at different points on the film. Perspective transformations are maps between the points as seen by a camera as these properties are varied.
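For readers who like to see the arithmetic, a perspective transformation can be written as a 3-by-3 matrix acting on image points in "homogeneous" coordinates. The sketch below is a generic illustration of that idea, not code from our project; the matrix values are invented for the example.

```python
def apply_homography(H, point):
    """Map a 2-D point through the 3x3 perspective matrix H."""
    x, y = point
    # Lift (x, y) to homogeneous coordinates (x, y, 1), multiply by H,
    # then divide by the third coordinate to get back to the image plane.
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# An invented matrix: close to the identity, with a small perspective
# term (bottom-left entry). Straight lines stay straight, but parallel
# lines can converge, as in a photograph taken with a tilted camera.
H = [[1.0,   0.1, 5.0],
     [0.0,   1.1, 2.0],
     [0.001, 0.0, 1.0]]

corner = apply_homography(H, (100.0, 50.0))
```

The division by `w` is what makes the map "perspective" rather than a simple stretch or shift: points farther into the scene get scaled differently than nearby ones.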

You can try this out for yourself with an erasable marker and a sheet of stiff, clear plastic from an art store. Hold the sheet steady and outline something you see through it (a chair, table, or flower, say); then twist, tilt, or move the sheet nearer to or farther from your eye and outline the same object again. You will see how different the two shapes can be. These two outlines are related by a perspective transformation.

The "Shifting Feature Points" interactive you can launch on this page shows the overlapping area of two tapestry image sections. One layer is the image taken from the left, the other layer is the image taken from the right. We put matching feature points from the two overlapping images into best possible correspondence by means of perspective transformations. As you can see, the tapestry moved—the fabric twisted and changed position—by small amounts as the images were made, so the matching feature points don't lie exactly on top of each other. This was the crux of the problem of creating an image of the unicorn tapestry.
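The leftover misalignment can be made concrete with a toy calculation. Here we fit only the simplest possible "camera move," a translation, to a handful of invented matched points by least squares, then look at what remains. Any disagreement left over cannot be blamed on the camera; it comes from the subject itself moving.

```python
def best_translation(left_pts, right_pts):
    """Least-squares translation aligning left points onto right points
    (this is just the mean displacement)."""
    n = len(left_pts)
    tx = sum(x1 - x0 for (x0, _), (x1, _) in zip(left_pts, right_pts)) / n
    ty = sum(y1 - y0 for (_, y0), (_, y1) in zip(left_pts, right_pts)) / n
    return tx, ty

def residuals(left_pts, right_pts, tx, ty):
    """Misalignment remaining after applying the fitted translation."""
    return [(x1 - (x0 + tx), y1 - (y0 + ty))
            for (x0, y0), (x1, y1) in zip(left_pts, right_pts)]

# Invented feature positions in two overlapping images.
left  = [(0.0, 0.0), (2.0, 0.0)]
right = [(1.0, 0.0), (3.0, 2.0)]
tx, ty = best_translation(left, right)
leftover = residuals(left, right, tx, ty)
```

The actual project fit full perspective transformations rather than translations, but the logic is the same: no single rigid transformation could zero out all the residuals at once.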

How did we fix it?

To solve this problem, for each feature point in the overlapping images we placed the tail end of an arrow at the feature's position as seen in one image and the point end at its position as seen in the other. (See the "Vector Field Map" interactive you can launch on this page.) The pattern of these so-called "displacement vector" arrows—varying in length (which you can't perceive here) and pointing in varying directions—shows the complex fabric motions we had to allow for to make a perfect match of the overlapping tapestry images. By tracking the directions of 15,000 vector arrows, we saw that the tapestry was a moving photographic subject. After centuries of hanging on a wall, the tapestry had been cleaned and laid on the floor, and its 500-year-old threads began to relax, shifting randomly as it was photographed.
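The arrow construction described above is straightforward to express in code. This is a minimal sketch with made-up point positions, not the project's software: each arrow runs from a feature's position in the left image to the same feature's position in the right image.

```python
import math

def displacement_vectors(left_pts, right_pts):
    """For each matched pair of feature points, return the arrow
    (dx, dy) from left position to right position, plus its length."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(left_pts, right_pts):
        dx, dy = x1 - x0, y1 - y0
        vectors.append((dx, dy, math.hypot(dx, dy)))
    return vectors

# Invented matched feature positions in the overlap region.
left  = [(10.0, 10.0), (40.0, 12.0), (25.0, 30.0)]
right = [(10.5, 10.2), (39.8, 12.6), (25.1, 29.4)]
field = displacement_vectors(left, right)
# Arrows of differing lengths and directions, as in the Vector Field
# Map interactive, signal that the fabric itself moved between shots.
```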

The crucial step in locating the feature points was to divide the image into many, many postage-stamp-size rectangles and measure the colors, and the rate of change in intensity of each color channel, within each small rectangle. We then slid the rectangles around the area where the images overlapped, approximately 7.7 quadrillion calculations in all, until we found the best match of left rectangles to right ones. Only then were we able to digitally weave each thread back together, creating a complete, intact image.
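The rectangle-sliding step can be illustrated with a toy version of template matching. The sketch below scores each candidate offset by the sum of squared differences (SSD) between pixel values; the real computation measured multiple color channels and their rates of change over millions of patches, and the images and patch here are invented single-channel examples.

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-size patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_offset(left_patch, right_image, max_shift):
    """Slide left_patch over right_image and return the (dx, dy)
    offset with the lowest SSD score, i.e., the best match."""
    h, w = len(left_patch), len(left_patch[0])
    best = None
    for dy in range(max_shift + 1):
        for dx in range(max_shift + 1):
            window = [row[dx:dx + w] for row in right_image[dy:dy + h]]
            score = ssd(left_patch, window)
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]

# Tiny example: the patch appears in the right image shifted by (1, 1).
right = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 6, 0],
         [0, 0, 0, 0]]
patch = [[9, 8],
         [7, 6]]
```

Doing this exhaustively for every postage-stamp rectangle in every overlap region is what drives the calculation count into the quadrillions.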

Photos: (all, except finished tapestry) Courtesy of the Chudnovsky lab, Brooklyn Polytechnic University, (finished tapestry) © Metropolitan Museum of Art

Tom Morgan is a doctoral candidate in mathematics, studying under the Chudnovskys at Brooklyn Polytechnic University.
