Using Machine Learning for composition and sound design
The Fluid Corpus Manipulation project (FluCoMa) provides novel machine learning tools for use in the digital composition process. Its MLPRegressor is a multilayer-perceptron neural network that performs regression.
What is Regression?
In machine learning, regression can be thought of as a mapping from one space to another, where each space can have any number of dimensions. You provide paired input and output examples as DataSets, and the neural network is trained by supervised learning to predict output data points from input data points. This opens up a vast array of creative possibilities for composition, sound design and performance.
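As a rough, non-FluCoMa illustration of the idea, the sketch below uses scikit-learn's MLPRegressor, chosen only for its similarity in spirit to fluid.mlpregressor~, to learn a mapping from a 2-dimensional input space (for example an XY-pad position) to a 3-dimensional output space of hypothetical synthesis parameters. The data and the parameter names in the comments are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical paired data: 200 two-dimensional inputs (e.g. XY-pad positions),
# each paired with a three-dimensional output (made-up synth parameters).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.column_stack([
    110.0 + 440.0 * X[:, 0],     # e.g. oscillator frequency in Hz
    X[:, 1] ** 2,                # e.g. filter resonance, 0..1
    0.5 * (X[:, 0] + X[:, 1]),   # e.g. amplitude, 0..1
])

# Supervised learning: fit the network on the paired examples,
# then predict an output for an input point it has never seen.
net = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                   max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[0.25, 0.75]]))   # one predicted 3-D output point
```

In the FluCoMa workflow the same idea applies, but the paired examples live in DataSets and the mapping is learned by fluid.mlpregressor~ inside Max.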
What to expect in this workshop?
In this workshop, Ted Moore from the FluCoMa project guides you through an exploration of some of the creative possibilities that neural networks offer via the FluCoMa Max package. Basic experience with FluCoMa is advised before joining this workshop; for example, it is recommended that you have taken the free on-demand workshop Using Machine Learning Creatively via FluCoMa In Max.
What you'll learn
Train neural networks to perform musical tasks
Use training and testing data to validate trained models (see the sketch after this list)
Troubleshoot the training process by adjusting neural network parameters
Combine different types of input and output data
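To make the validation point above concrete, here is a minimal sketch of train/test validation, again using scikit-learn's MLPRegressor as a stand-in rather than the FluCoMa objects themselves, with invented descriptor-like data: a model that scores well on its training data but poorly on held-out test data has likely overfit.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Invented data: 300 analysis frames of 4 audio-descriptor-like values,
# each paired with 2 synthesis-parameter-like target values.
rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))
y = np.sin(X @ rng.uniform(size=(4, 2)))

# Hold out 20% of the pairs; the network never trains on them.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1)
net.fit(X_train, y_train)

# R^2 scores: a large gap between the two suggests overfitting.
print("train:", net.score(X_train, y_train))
print("test: ", net.score(X_test, y_test))
```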
Who is this course for?
Course content
Presentation slides (63 pages)
FluCoMa patches
1. Introduction / What is FluCoMa?
2. Plan / Outline
3. Classification
4. Multilayer-Perceptron
5. A Musical Motivation for Classification
6. Supervised vs. Unsupervised Learning
7. Training a Classifier
8. Feed-forward and Back-propagation
9. Classification Patch
10. The "error" / Training fluid.mlpclassifier~
11. Making Predictions with fluid.mlpclassifier~
12. Validation with Training & Testing Data
13. Saving a Trained Neural Network for Later Use
14. Doing Classification with fluid.mlpregressor~
15. Artistic Use of Classification
17. Automated Dataset Creation and Validation
18. Neural Network Parameters (Object Attributes; see the sketch after this list)
19. @hiddenlayers
20. @activation and @outputactivation
21. @learnrate
22. @maxiter
23. @batchsize
24. @validation
25. Overfitting
26. @momentum
27. Q&A on Parameters
28. Neural Network Regression with Audio Descriptors
29. Musical Example
30. Training fluid.mlpregressor~
31. Wavetable Autoencoder
32. @tapin and @tapout
33. Final Q&A
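The parameter topics in items 18 to 26 above have rough analogues outside FluCoMa as well. The sketch below maps them onto scikit-learn's MLPRegressor purely to illustrate what each knob controls; the correspondences noted in the comments are approximate and are not a statement about the FluCoMa API.

```python
from sklearn.neural_network import MLPRegressor

# An approximate, non-FluCoMa analogue of the attributes covered above.
net = MLPRegressor(
    hidden_layer_sizes=(12, 12),  # number and size of hidden layers (cf. hiddenlayers)
    activation="tanh",            # hidden-layer activation (cf. activation);
                                  #   sklearn fixes the output activation to identity,
                                  #   so there is no direct outputactivation analogue
    solver="sgd",                 # plain stochastic gradient descent, so momentum applies
    learning_rate_init=0.01,      # step size per update (cf. learnrate)
    max_iter=1000,                # training epochs per fit call (cf. maxiter)
    batch_size=32,                # examples per gradient update (cf. batchsize)
    momentum=0.9,                 # smooths successive updates (cf. momentum)
    early_stopping=True,          # hold back part of the data during training and
    validation_fraction=0.1,      #   stop when it stops improving (cf. validation)
    random_state=0,
)
```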
Requirements
A computer and internet connection
Access to a copy of Max 8 (trial or full license)
Installation of the free FluCoMa Max package
Course schedule
Meet your instructor
Ted Moore (he/him) is a composer, improviser, and intermedia artist. He holds a PhD in Music Composition from the University of Chicago and recently served as a Research Fellow in Creative Coding at the University of Huddersfield, investigating the creative affordances of machine learning and data science algorithms as part of the FluCoMa project. His work focuses on fusing the sonic, visual, physical, and acoustic aspects of performance and sound, often through the integration of technology.