Research
Through a study abroad program offered by my university, and with a lot of help from the wonderful CRCC Asia, who helped me land this position while I was still in the US, I was able to do an internship this past summer at Cross Labs, getting an introduction to the fields of AI and neuroengineering. Beyond the work itself, I had an amazing experience in the Cross Labs office working with Alyssa, Luc, and Akira, and getting exposed to Japanese business culture through various Cross Labs clients and guests. Working alongside and making lifelong friends with the other interns was an experience I will truly treasure. With the cozy work environment and everyone there being more than willing to help, I’m beyond grateful to Cross Labs and its employees for what I consider some of the best months of my life.
In this day and age, music is a constant companion. Feeling sad? Play your favorite song to feel better. Just got a promotion? Put on a celebratory playlist. Need to concentrate at work? Listen to some classical music. Music accompanies our every action as well as the emotions associated with them. We feel happier, sadder, calmer, and more alive when we listen to music. It has a direct effect on our mood and how we feel in the moment, and that effect is exactly what this project seeks to explore and test.
The purpose of this project is to use AI-generated music and EEG feedback to guide the listener into feeling a specific emotion. Specifically, the system is a feedback loop with three parts: (1) reading emotions from an EEG headset, (2) converting those emotions into an input for the AI music generator, and (3) generating new music from that input. The system’s design is illustrated in Figure 1.
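To make the loop concrete, here is a minimal sketch of that three-part structure in Python. The function names and bodies are illustrative placeholders rather than our actual implementation: each pass reads the current emotion, compares it to the target, and requests the next stretch of music.

```python
import numpy as np

TARGET = np.array([0.8, 0.4, 0.6])  # desired (valence, arousal, dominance)

def read_emotion_from_eeg():
    # (1) Placeholder: in the real system this would classify the latest
    # EEG window from the headset into a valence/arousal/dominance vector.
    return np.random.uniform(-1, 1, size=3)

def emotion_to_prompt(current, target):
    # (2) Placeholder: translate the gap between the current and target
    # emotion into a text instruction for the music generator.
    delta = target - current
    return f"shift the music: valence {delta[0]:+.2f}, arousal {delta[1]:+.2f}"

def generate_next_segment(prompt, previous_audio):
    # (3) Placeholder: condition the music model on the running audio and
    # the text prompt to produce the next few seconds of sound.
    return previous_audio

audio = None
for _ in range(10):  # one pass per musical segment
    current = read_emotion_from_eeg()
    prompt = emotion_to_prompt(current, TARGET)
    audio = generate_next_segment(prompt, audio)
```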
This summer, we used the OpenBCI Ultracortex Mark IV headset to collect our EEG data. After a lot of debugging, we were able to stream data from the headset into a computer using the BrainFlow biosensor library. For classifying emotions, we adopted the dimensional theory of emotion, which describes emotions on three scales: valence (how positive the emotion is), arousal (how intense it is), and dominance (how in control one feels). This lets us represent each emotional state as a three-dimensional vector, which makes the feedback loop much easier to implement.
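For anyone curious what the streaming setup looks like, below is a minimal BrainFlow sketch, assuming the Ultracortex is driven by an OpenBCI Cyton board; the serial port is a placeholder for your own dongle.

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# The Ultracortex Mark IV is typically paired with an OpenBCI Cyton board;
# replace the serial port with the one your USB dongle shows up on.
params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"

board = BoardShim(BoardIds.CYTON_BOARD.value, params)
board.prepare_session()
board.start_stream()

time.sleep(5)                   # let a few seconds of EEG accumulate
data = board.get_board_data()   # channels x samples numpy array

# Keep only the EEG rows; the board also streams accelerometer, timestamps, etc.
eeg_channels = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD.value)
eeg = data[eeg_channels, :]

board.stop_stream()
board.release_session()
```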
Converting the emotions into an appropriate input for the AI music generator is the most challenging part of this project. The ideal program would take in the classified emotion vector, compare it to the target emotion vector, and adjust aspects of the current music, such as tempo, melody, instrumentation, and genre, to minimize the difference between the two vectors. The current stage of the project is focused primarily on this step, as it integrates a diverse range of complex fields: affective science, machine learning, neuroscience, and music theory.
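As one illustration of what this step could look like, the sketch below maps the gap between the classified and target vectors to a text prompt. The thresholds and musical vocabulary here are hypothetical choices for demonstration, not the project's actual mapping.

```python
import numpy as np

def emotion_error_to_prompt(current: np.ndarray, target: np.ndarray) -> str:
    """Turn a valence/arousal/dominance error into a text hint for the generator."""
    delta = target - current  # positive = the music needs more of that dimension
    hints = []
    if delta[0] > 0.2:
        hints.append("brighter major-key melody")
    elif delta[0] < -0.2:
        hints.append("darker minor-key melody")
    if delta[1] > 0.2:
        hints.append("faster tempo with driving percussion")
    elif delta[1] < -0.2:
        hints.append("slower tempo with soft sustained pads")
    if delta[2] > 0.2:
        hints.append("bold, confident brass and strings")
    return ", ".join(hints) if hints else "keep the current mood"

print(emotion_error_to_prompt(np.array([0.1, 0.7, 0.3]),
                              np.array([0.8, 0.3, 0.5])))
# -> "brighter major-key melody, slower tempo with soft sustained pads"
```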
Creating personalized music specifically crafted to influence the emotions of the listener is only feasible with generative AI, and we found this to be one of the most exciting parts of the project, since recent advances in AI are only now making it possible. The generative music AI we used this summer was AudioCraft, developed by Meta. It offers models that accept both text and audio samples as input, making it a great fit for our project: it can take in the music currently playing along with text suggesting changes to that music. This allows us to gradually tune the music toward the individual and the desired emotion without abrupt changes in the overall sound.
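As a rough sketch of how a single generation step might look with AudioCraft's melody-conditioned MusicGen model (the file names and prompt text below are placeholders, not our exact setup):

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the melody-conditioned MusicGen model (a GPU is strongly recommended).
model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=8)  # seconds of music per segment

# Condition on the music currently playing plus a text hint from the
# emotion-to-prompt step; 'current_segment.wav' is a placeholder file name.
melody, sr = torchaudio.load('current_segment.wav')
prompt = ['brighter major-key melody, slightly faster tempo']
wav = model.generate_with_chroma(prompt, melody[None], sr)

# Write the new segment to disk so it can be queued up for playback.
audio_write('next_segment', wav[0].cpu(), model.sample_rate, strategy='loudness')
```

In the full loop, the conditioning audio would be the segment that just finished playing and the prompt would come from the emotion-conversion step described above, so each new segment nudges the music rather than restarting it.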
In conclusion, whether used for movies, emotional therapy, or even just to pass the time, music constantly influences how we feel. Emotion-driven, AI-powered music holds enormous potential to make this effect even more prominent. By leveraging the power of advancing AI models and emotion detection, we can help empower people to feel however they want through music tailored to emotionally resonate with them individually.