Composition
Revision as of 22:32, 28 August 2021
AI Deep Learning Composition
- Automatic Music Generation
- Death Metal
- Techno
DAW Composition
- Rapid Composer
- Kords
- Magenta
Generative Composition
- Koan
- Generative Music
Generative AI and DeepComposer
- Explore the AWS DeepComposer service.
- Train a model: get hands-on experience by training your own models and begin to grasp how they work. A GAN has two contesting neural networks, one generative and one discriminative; the generator attempts to produce data that maps to a desired data distribution, while the discriminator tries to distinguish generated data from real data (see the training sketch after this list).
- Understand your model: examine how the generator and discriminator losses changed while training, see how certain musical metrics changed, and visualise the generated music output for a fixed input at every iteration.
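The adversarial loop behind these two bullets can be shown with a toy example. The sketch below is a minimal GAN in PyTorch (an assumption; it is not DeepComposer's actual training code), with a 1-D toy distribution standing in for a music dataset. It prints the generator and discriminator losses and the output for a fixed input as training runs, which are the quantities described above.

```python
# Minimal GAN sketch (PyTorch): a generator learns to map random noise to a
# target data distribution while a discriminator learns to tell real from fake.
# The toy "data" here is a 1-D Gaussian; in DeepComposer the data would be
# piano-roll representations of music, but the adversarial loop is the same idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

NOISE_DIM, DATA_DIM = 8, 1
BATCH = 64

# Generator: maps noise vectors to samples meant to resemble the real data.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)

# Discriminator: outputs the probability that a sample came from the real data.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

fixed_noise = torch.randn(4, NOISE_DIM)   # fixed input to inspect progress

for step in range(2000):
    # Real samples from the target distribution (stand-in for a music dataset).
    real = torch.randn(BATCH, DATA_DIM) + 3.0
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator update: push real samples toward 1 and fakes toward 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        # Losses and the fixed-input output are what you would plot while training.
        sample = [round(x, 2) for x in generator(fixed_noise).detach().flatten().tolist()]
        print(f"step {step}: d_loss={d_loss.item():.3f} "
              f"g_loss={g_loss.item():.3f} fixed-input output={sample}")
```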
- Music Studio: Music Studio gives you a chance to play music and use a GAN. First, record melodies or choose default melodies, then use a pre-trained or a custom model to generate original AI music compositions (a conceptual sketch of this flow follows these lists).
- Creating a composition:
  1. Record a melody: using the AWS DeepComposer keyboard or your computer keyboard, record a short melody as input.
  2. Generate composition: when you are satisfied with your input melody, choose a model and then choose Generate composition.
  3. AWS DeepComposer generates accompanying tracks: it takes your input melody and generates up to four accompaniment tracks.
- Training a model:
  1. Choose an algorithm: choose a generative algorithm to train a model.
  2. Choose a dataset: choose a genre of music as your dataset.
  3. Tweak hyperparameters: choose how to train your model.
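The Music Studio flow can also be sketched conceptually. Everything below is hypothetical: the class and function names, the piano-roll dimensions, and the placeholder model are not the AWS DeepComposer API; the sketch only illustrates the record melody, choose model, generate accompaniment sequence with a stand-in generator.

```python
# Conceptual sketch of the Music Studio flow, NOT the AWS DeepComposer API.
# An input melody (as a piano roll) goes into a trained generator, which
# returns up to four accompaniment tracks.
import numpy as np

TIME_STEPS, PITCHES, TRACKS = 32, 128, 4   # assumed piano-roll dimensions

def record_melody() -> np.ndarray:
    """Stand-in for recording on the keyboard: a random monophonic melody."""
    roll = np.zeros((TIME_STEPS, PITCHES), dtype=np.float32)
    rng = np.random.default_rng(0)
    for t in range(TIME_STEPS):
        roll[t, rng.integers(48, 72)] = 1.0    # one note per time step
    return roll

class PianoRollGenerator:
    """Placeholder for a trained generator (e.g. the GAN sketched above)."""
    def generate(self, melody: np.ndarray) -> np.ndarray:
        rng = np.random.default_rng(1)
        # A real model would condition on the melody; here we just return
        # sparse random piano rolls of the right shape for each track.
        return (rng.random((TRACKS, *melody.shape)) > 0.97).astype(np.float32)

melody = record_melody()                # 1. record a melody
model = PianoRollGenerator()            # 2. choose a pre-trained or custom model
accompaniment = model.generate(melody)  # 3. generate accompanying tracks
print("melody shape:", melody.shape, "accompaniment shape:", accompaniment.shape)
```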
SSEYO Koan: Brian Eno used it. Where is it now?
https://intermorphic.com/sseyo/koan/
Dynamic Composition
FMOD

Game Music