CS 180 Programming Project 3

Defining Correspondences

The first part is to label key points on the face, including the contours around the jaw and eyes. In addition, we add the four corners of the image, which allows us to warp the entire image instead of only the face region. As suggested in the specification, we use Delaunay triangulation to create the mesh over the points.
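A minimal sketch of this step, assuming a hypothetical (N, 2) array of labeled (x, y) points and scipy.spatial.Delaunay; the same routine can be run on the averaged points described below:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_triangulation(points, img_shape):
    """points: (N, 2) labeled (x, y) key points; img_shape: (H, W, ...)."""
    h, w = img_shape[:2]
    # Append the four image corners so the warp covers the whole image,
    # not only the face region.
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
    all_points = np.vstack([points, corners])
    tri = Delaunay(all_points)          # tri.simplices: (T, 3) vertex indices
    return all_points, tri.simplices
```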

To get better results, we average the corresponding points of the source and target images and run the triangulation on these averaged points. This provides better stability and better visual results.

Target Image (red dots are key points, blue lines show the triangulation):

Image

Source Image (myself):

Image

Computing the Mid-Way Face

To compute the mid-way face, we first average the corresponding key points of the source and target faces to obtain the mid-way shape. Then, we warp each triangle of the mesh with an affine transformation. To compute each affine transformation, we set up the coordinate-transformation equations and solve them with linear algebra.
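A minimal sketch of the per-triangle affine computation, assuming each triangle is given as a (3, 2) array of (x, y) vertices (helper name is illustrative, not the exact project code):

```python
import numpy as np

def compute_affine(tri_src, tri_dst):
    """Solve for the 2x3 affine matrix M with M @ [x, y, 1]^T = destination point."""
    A = np.hstack([tri_src, np.ones((3, 1))])   # source vertices in homogeneous coords, (3, 3)
    M = np.linalg.solve(A, tri_dst).T           # A @ M.T = tri_dst  ->  M is (2, 3)
    return M
```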

The essential idea is to fill in each pixel of the new morphed image by sampling the color from both the source and the target image. We also use scipy.ndimage.map_coordinates to accelerate the sampling process. Based on our experiments, it is 10-20x faster than traversing the pixels one by one.
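A sketch of the vectorized sampling step, assuming src_coords is a (2, N) array of fractional (row, col) source positions produced by the inverse warp (names are illustrative, not the exact project code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_channel(channel, src_coords):
    """channel: (H, W) image plane; src_coords: (2, N) fractional (row, col) positions."""
    # map_coordinates interpolates all positions in one vectorized call,
    # avoiding a slow per-pixel Python loop.
    return map_coordinates(channel, src_coords, order=1, mode='nearest')
```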

Mid-way face:

Image

Source:

Image

Target:

Image

The Morph Sequence

To calculate the morph sequence, we simply run a for-loop that morphs the image pair frame by frame, increasing the warp and cross-dissolve fractions simultaneously. The resulting transition is smooth and stable.
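A minimal sketch of the loop, assuming a hypothetical morph(...) function that warps both images toward the intermediate shape and cross-dissolves them:

```python
import numpy as np

def morph_sequence(im_src, im_dst, pts_src, pts_dst, triangles, n_frames=45):
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Increase the warp and cross-dissolve fractions together.
        frames.append(morph(im_src, im_dst, pts_src, pts_dst, triangles,
                            warp_frac=t, dissolve_frac=t))
    return frames
```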

Image

The "Mean face" of a population

We choose the FEI Face Database, which is available here (https://fei.edu.br/~cet/facedatabase.html). We use the frontal face images dataset, which is already annotated. The image size is 360x260 pixels, and we use the first 200 images (part 1).

Mean of the population:

Image

To get the mean of the population, we first average the key points across all of the images to obtain the mean shape. Then, we warp each original image to this new shape; in this case, the dissolve fraction is 0 (because we don't want to change the colors) and the warp fraction is 1 (because we want to fully transform into the new geometry). Finally, once all of the images have been warped, we simply compute their pixel-wise mean.
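A sketch of this procedure, reusing the hypothetical morph(...) helper from above; images and points are assumed to be lists of float images and matching key-point arrays:

```python
import numpy as np

def mean_face(images, points, triangles):
    # Average the key points across the population to get the mean shape.
    mean_pts = np.mean(np.stack(points), axis=0)
    warped = []
    for im, pts in zip(images, points):
        # warp_frac=1: fully warp to the mean shape; dissolve_frac=0: keep the original colors.
        warped.append(morph(im, im, pts, mean_pts, triangles,
                            warp_frac=1.0, dissolve_frac=0.0))
    # Average the warped images pixel-wise to obtain the mean face.
    return np.mean(np.stack(warped), axis=0)
```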

Here are a few examples:

(image 1a) Original:

Image

Morph:

Image

(image 2b) Original:

Image

Morph:

Image

My face warped into the average geometry:

Image

The average face warped into my geometry (the face shape becomes slightly wider):

Image

Caricatures: Extrapolating from the mean
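The caricature is produced by extrapolating the key points instead of interpolating them. A minimal sketch under that assumption (hypothetical helper; with my points as the base, the mean shape as the target, and alpha = 1.3 we step 30% past the average geometry, matching the caption below):

```python
def extrapolate_points(pts_base, pts_target, alpha=1.3):
    # alpha = 0 keeps the base shape, alpha = 1 reaches the target shape,
    # and alpha > 1 extrapolates past it.
    return pts_base + alpha * (pts_target - pts_base)
```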

My face warped into the average geometry (extrapolating 1.3x):

Image

Bells and Whistles

Change age:

Average Old People's Image:

Image

My Photo:

Image

Combined:

Image

Although the result is not perfectly accurate due to imprecise data annotation, we can observe that the direction of the muscle striations changes from my photo toward the average of the older group, which clearly highlights the aging transformation.