It has 19 columns, including name, album, artist, id, release_date, popularity, length, danceability, acousticness, energy, instrumentalness, liveness, valence, loudness, speechiness, tempo, key, time_signature, and mood. The metadata and the features of the songs were retrieved using Spotipy, a Python library for the Spotify API. The moods of the songs were classified manually by the author of the former research, into four categories (0=calm, 1=happy, 2=energetic, 3=sad). In most existing methods of music mood classification, the moods of songs are divided according to psychologist Robert Thayer's traditional model of mood. The model divides songs along the axes of energy and stress, from calm to energetic and from happy to sad, respectively.

Classification of music by moods

After retrieving the data, we needed to clean it. First, we dropped some columns of excess data, including type, key, mode, and time_signature. Then we imported train_test_split from sklearn.model_selection to help split the dataset into training and testing sets; we decided it would be efficient to keep individual sets for the moods as well as the features (80% training, 20% testing). When splitting the dataset, we used the MinMaxScaler from sklearn.preprocessing to scale the values into the range of 0 to 1 while preserving the shape of the original distribution; the MinMaxScaler first fits the data and then transforms it. Finally, we used the LabelEncoder from sklearn.preprocessing to encode the labels. Since we need to recommend songs to our users, we also needed to retrieve more songs according to the moods classified by the face model, so we used the same Spotify API to retrieve data as the former researcher.
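The drop/split/scale/encode steps above could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the toy DataFrame, its column subset, and the random seed are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, LabelEncoder

# Toy stand-in for the Spotify-derived dataset (columns are a small
# illustrative subset of the 19 described above)
df = pd.DataFrame({
    "name": ["a", "b", "c", "d"],
    "danceability": [0.5, 0.8, 0.3, 0.9],
    "energy": [0.4, 0.9, 0.2, 0.7],
    "valence": [0.6, 0.95, 0.1, 0.8],
    "mood": ["calm", "happy", "sad", "energetic"],
})

# Drop metadata columns that are not useful as model features
features = df.drop(columns=["name", "mood"])

# Encode the four mood labels as integers
labels = LabelEncoder().fit_transform(df["mood"])

# Scale every feature into [0, 1]; fit then transform, preserving
# the shape of each column's distribution
X = MinMaxScaler().fit_transform(features)

# 80/20 train/test split of features and moods
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42
)
```

Note that LabelEncoder assigns integers in alphabetical order of the class names, so the resulting codes may differ from the 0=calm, 1=happy, 2=energetic, 3=sad convention above.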
Components of the Project

Dataset Collection

For the facial detection task, the FER2013 dataset was collected from Kaggle; it consists of 48x48-pixel, well-centered grayscale face images. There are 7 categories in total (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).

A data provider class was written to process the input data retrieved from FER2013 into a dataset compatible with the model. In this class, the pandas API was used to read the CSV of the FER2013 emotion classification dataset. Using the NumPy API, the data in the "pixels" column were reshaped into 2D arrays. With the help of the OpenCV API, the arrays were resized to size=(48, 48). NumPy's expand_dims function enabled us to add a channel dimension to the 2D arrays. The arrays were then normalized into the range [0, 1] so that the pixels have a similar data distribution, which contributes to faster convergence during model training, and pandas' get_dummies function helped retrieve one-hot labels for the dataset (happy, angry, …). Thus, by using the functions created in the data provider class, images were converted to a shape of (64, 64, 1) that can later be fed into our model. With the newly parsed datasets containing both labels and data, we also split the sets into training and testing datasets.

A labeled dataset with a size of 686 songs was collected from previous research about predicting the music mood of a song with deep learning.
The Music Mood Lifter is a software system, empowered by machine learning algorithms, that can detect facial expressions from input faces. Based on the predicted mood, the system can list out appropriate songs for users to choose from. The inputs are images uploaded by users, and the outputs include Spotify links that will direct users to the recommended songs. This application will be implemented through a webpage. Xception is used as the base model for recognizing facial emotion, and a handwritten multiclass neural network will be used to classify songs by mood.

Project components:

- Datasets were collected to train the face emotion model and the music mood classifier model.
- The face emotion model and the music mood model were written and trained on the datasets.
- An OpenCV function was created to process input images, and a music list classified by moods was created to be used later by the music recommender.
- A music recommending system was created.
- A Django project was created to develop a web application that deploys our ML models and related functions.

Figure 1.
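The end-to-end flow above, from a predicted facial emotion to a list of mood-matched songs, could be sketched as follows. Everything here is an assumption for illustration: the emotion-to-mood mapping, the toy song list, and the `recommend` helper are hypothetical, not the project's actual code.

```python
import pandas as pd

# Hypothetical mapping from the 7 FER2013 emotions to the 4 song moods;
# the project's actual mapping may differ
EMOTION_TO_MOOD = {
    "Angry": "energetic", "Disgust": "calm", "Fear": "calm",
    "Happy": "happy", "Sad": "sad", "Surprise": "energetic",
    "Neutral": "calm",
}

# Toy mood-labelled song list standing in for the Spotify-derived dataset
songs = pd.DataFrame({
    "name": ["track1", "track2", "track3"],
    "mood": ["happy", "sad", "happy"],
    "link": ["spotify:1", "spotify:2", "spotify:3"],
})

def recommend(emotion: str, songs: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Return up to n songs whose mood matches the detected emotion."""
    mood = EMOTION_TO_MOOD[emotion]
    return songs[songs["mood"] == mood].head(n)

picks = recommend("Happy", songs)
```

In the deployed system, the Django view would call the face model to get the emotion and render the matching songs' Spotify links instead of returning a DataFrame.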