Dr. Rebecca Fiebrink's presentation explored how machine learning (ML) can be a tool for creative expression, not just automation or prediction. She emphasized how ML expands creative possibilities by allowing artists, musicians, and designers to interact with technology in new ways.
Machine learning as a tool for expressive interaction. ML can be used to build responsive, interactive systems that adapt to human input, not just to improve efficiency. For example, musicians train ML models to recognize gestures or sounds for real-time improvisation. ML can act as a co-creator, helping to generate ideas and interactions that would otherwise be impossible. Finally, ML enhances human creativity by making tools more responsive, adaptable, and personalized.
This can be used for virtual reality, live streaming, or interactive performances, allowing digitized avatars or real-time filters to reflect emotions in an artistic way.



What is the input?
The model will receive live video from a webcam and extract facial keypoints from each frame.
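A minimal sketch of the feature-extraction step that would follow keypoint detection. The landmark values below are synthetic stand-ins for one frame; in a real pipeline a detector such as MediaPipe Face Mesh would supply (x, y) keypoints per webcam frame, and the normalization shown here is one common, illustrative choice, not a prescribed method:

```python
# Sketch: turn raw facial keypoints into a normalized feature vector.
# The landmark coordinates below are synthetic; a real system would
# receive them from a face-landmark detector running on webcam frames.

def normalize_landmarks(landmarks):
    """Center keypoints on their mean and scale by face size so the
    features are invariant to where the face sits in the frame."""
    xs = [x for x, y in landmarks]
    ys = [y for x, y in landmarks]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # Use the larger of the face's width/height as the scale factor.
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in landmarks]

# Synthetic stand-in for one frame's detected keypoints (pixel coords).
frame_landmarks = [(100, 120), (140, 118), (120, 150), (120, 180)]
features = normalize_landmarks(frame_landmarks)
```

Normalizing per frame means the classifier sees face shape rather than face position, which matters when the performer moves around in front of the camera.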
What is the output?
The model will generate artistic makeup overlays that match or enhance the detected emotions. Example:
Happy → radiant, gold tones
Anger → sharp, red streaks
Sadness → soft, cool colors
Surprise → electric neon effects
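The emotion-to-overlay mapping above can be sketched as a simple lookup table; the palette values and style parameter names here are illustrative choices, not part of any particular rendering API:

```python
# Sketch: map a predicted emotion label to overlay style parameters.
# Palette entries and parameter names are illustrative.
OVERLAY_STYLES = {
    "happy":    {"palette": ["gold"], "texture": "radiant glow"},
    "anger":    {"palette": ["red"], "texture": "sharp streaks"},
    "sadness":  {"palette": ["blue", "teal"], "texture": "soft wash"},
    "surprise": {"palette": ["neon pink", "neon green"], "texture": "electric neon"},
}

def style_for(emotion):
    # Fall back to a neutral overlay for unrecognized labels so the
    # renderer always has something valid to draw.
    return OVERLAY_STYLES.get(emotion, {"palette": ["clear"], "texture": "none"})
```

Keeping the mapping in data rather than code makes it easy for an artist to retune the look of each emotion without touching the model.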
What kind of learning task is this?
This would be a classification problem, as the model needs to sort emotions into predefined labels (happy, sad, angry, surprised, etc.). A secondary generative component could then apply style transfer to create expressive visuals.
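To illustrate the classification step, here is a minimal nearest-centroid classifier over toy 2-D features (e.g. mouth-corner lift and eyebrow raise). The centroid values are invented for the sketch; a trained model would learn them, or a richer decision boundary, from labeled frames:

```python
import math

# Sketch: nearest-centroid emotion classification over 2-D toy features.
# Centroid positions are invented for illustration; a real system would
# estimate them (or train a stronger classifier) from labeled data.
CENTROIDS = {
    "happy":    (0.8, 0.2),
    "sadness":  (-0.6, -0.3),
    "anger":    (-0.2, 0.7),
    "surprise": (0.1, 0.9),
}

def classify(features):
    """Return the emotion whose centroid is closest to the feature vector."""
    return min(CENTROIDS, key=lambda e: math.dist(features, CENTROIDS[e]))
```

Nearest-centroid is only a stand-in here; its appeal for a live performance setting is that it is fast enough to run on every webcam frame.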