Note: based on the course materials at ML-for-Creative-Coding/02-landmarks (shiffman/ML-for-Creative-Coding on GitHub).

Homework

Advances in artificial intelligence and computer vision have opened up exciting possibilities. My recent exploration of facial recognition technology was sparked by an unexpected source: Chinese face-changing (变脸, biànliǎn).

In traditional Chinese opera, face-changing is a captivating art where performers change their masks in the blink of an eye, reflecting different emotions, personas, and plot twists. This dynamic transformation fascinated me, and I couldn’t help but imagine how technology might allow us to replicate such instantaneous changes—digitally, using the power of AI and computer vision.

[Image: 截屏2025-02-07 01.27.27.png]

The Spark of Inspiration: The speed of these mask changes suggested a direct digital analogue: if a model can track a face in real time, a virtual "mask" could be swapped onto it just as instantly. Applying real-time face recognition to change masks digitally seemed like a perfect fit.

Process

Step 1: Setting Up Face Recognition

I started with ml5.js's FaceMesh model, which detects a dense mesh of facial keypoints in real time. Here's the basic face-tracking setup:

```javascript
let video;
let faceMesh;
let faces = [];
let triangles;
let uvCoords;

function preload() {
  // Load the FaceMesh model, tracking a single face
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480, WEBGL);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();

  // The mesh topology never changes, so fetch it once here
  // instead of on every frame
  triangles = faceMesh.getTriangles();
  uvCoords = faceMesh.getUVCoords();

  // Start detecting faces; gotFaces runs on each new result
  faceMesh.detectStart(video, gotFaces);
}

function gotFaces(results) {
  faces = results;
}

function draw() {
  background(0);
  orbitControl();

  // In WEBGL mode the origin is the canvas center, so shift it
  // to the top-left corner before drawing in pixel coordinates
  translate(-width / 2, -height / 2);

  // Draw the video as the background
  image(video, 0, 0, width, height);

  if (faces.length > 0) {
    let face = faces[0];
    // Further code will go here to map textures onto the face mesh
  }
}
```

This basic setup initializes the webcam and detects faces using the ml5.faceMesh model. In the next step, I'll add texture mapping to simulate the face-changing effect.
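Before moving on, it helps to know what `gotFaces` actually receives. Per the ml5.js FaceMesh documentation, `results` is an array with one object per detected face, and each face carries a `keypoints` array of `{x, y, z}` positions (among other fields). A quick mock in plain JavaScript, with invented sample values, shows how the sketch reads it:

```javascript
// Mock of the results array passed to gotFaces(): one entry per
// detected face, each with a keypoints array. Values are made up.
const results = [
  {
    keypoints: [
      { x: 320, y: 240, z: 0 },  // keypoint 0
      { x: 310, y: 250, z: -5 }, // keypoint 1
    ],
  },
];

let faces = [];
function gotFaces(r) {
  faces = r; // same callback as the sketch: just store the latest results
}

gotFaces(results);
const face = faces[0];
console.log(face.keypoints.length); // 2 keypoints in this mock
console.log(face.keypoints[0].x);   // 320
```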

Step 2: Mapping Textures to the Face Mesh

Next, I mapped textures onto the detected face mesh. For simplicity, let's assume we have two mask images, mask1.jpg and mask2.jpg, and switch between them at random.
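The switching logic itself is independent of p5.js, so it can be sketched in plain JavaScript first. The `pickDifferentMask` helper below is hypothetical (not part of ml5.js); it picks a random mask index that differs from the current one, so every "face change" is guaranteed to be visible:

```javascript
// Hypothetical helper: choose a random mask index other than the
// current one, so a switch always produces a visible change.
function pickDifferentMask(currentIndex, maskCount) {
  if (maskCount < 2) return currentIndex; // nothing to switch to
  let next = currentIndex;
  while (next === currentIndex) {
    next = Math.floor(Math.random() * maskCount);
  }
  return next;
}

// With exactly two masks, switching always flips the index:
let index = 0;
index = pickDifferentMask(index, 2);
console.log(index); // 1
```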

Here’s how I applied the texture:

```javascript
let images = []; // Array of mask textures
let currentTexture;

function preload() {
  // Load the two mask images
  images.push(loadImage("mask1.jpg"));
  images.push(loadImage("mask2.jpg"));

  // Randomly choose an initial mask
  currentTexture = random(images);
}

function mousePressed() {
  // Swap to a random mask on click: the digital "face change"
  currentTexture = random(images);
}

function draw() {
  background(0);
  orbitControl();

  // Shift the WEBGL origin to the top-left corner, as before
  translate(-width / 2, -height / 2);

  // Draw video as background
  image(video, 0, 0, width, height);

  if (faces.length > 0) {
    let face = faces[0];

    // Draw the face mesh with the current mask as its texture
    texture(currentTexture);
    textureMode(NORMAL);
    noStroke();
    beginShape(TRIANGLES);

    for (let i = 0; i < triangles.length; i++) {
      let [a, b, c] = triangles[i];

      // 3D face keypoints for this triangle
      let pointA = face.keypoints[a];
      let pointB = face.keypoints[b];
      let pointC = face.keypoints[c];

      // Matching UV texture coordinates
      let uvA = uvCoords[a];
      let uvB = uvCoords[b];
      let uvC = uvCoords[c];

      // Emit each vertex with its position and UV coordinate
      vertex(pointA.x, pointA.y, pointA.z, uvA[0], uvA[1]);
      vertex(pointB.x, pointB.y, pointB.z, uvB[0], uvB[1]);
      vertex(pointC.x, pointC.y, pointC.z, uvC[0], uvC[1]);
    }

    endShape();
  }
}
```

In this code, each triangle of the detected face mesh is drawn with the mask image as its texture: the keypoints supply the 3D vertex positions, and the UV coordinates pin each vertex to a fixed spot on the mask image, so the texture stretches and moves with the face.
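The indexing pattern in that loop can be exercised outside p5.js: each entry of `triangles` holds three indices into the `keypoints` and `uvCoords` arrays, and gathering the vertex data is plain array lookup. A minimal sketch with made-up data (a single triangle over three points):

```javascript
// Invented data to exercise the triangle/UV lookup from draw()
const keypoints = [
  { x: 0, y: 0, z: 0 },
  { x: 10, y: 0, z: 0 },
  { x: 0, y: 10, z: 0 },
];
const uvCoords = [
  [0.0, 0.0],
  [1.0, 0.0],
  [0.0, 1.0],
];
const triangles = [[0, 1, 2]]; // one triangle over the three points

const vertices = [];
for (const [a, b, c] of triangles) {
  for (const i of [a, b, c]) {
    const p = keypoints[i];
    const [u, v] = uvCoords[i];
    // In p5.js this would be: vertex(p.x, p.y, p.z, u, v);
    vertices.push([p.x, p.y, p.z, u, v]);
  }
}
console.log(vertices.length); // 3 vertices for the single triangle
```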