Composition
Revision as of 01:16, 26 August 2021
DAW Composition
- Rapid Composer
- Kords
- Magenta
Generative Composition
- Koan
- Generative Music

Generative AI and DeepComposer
- Explore the AWS DeepComposer service.
- Train a model: Get hands-on experience by training your own models and begin to grasp how models work. GANs use two contesting neural networks: one model is generative and the other is discriminative. The generator attempts to generate data that maps to a desired data distribution, while the discriminator tries to tell generated data apart from real data (a minimal sketch of this adversarial loop follows this list).
- Understand your model: Examine how the generator and discriminator losses changed while training, understand how certain musical metrics changed while training, and visualise the generated music output for a fixed input at every iteration (the sketches after this list record and plot these losses per iteration).
- Music Studio: Music Studio lets you play music and use a GAN. First, record melodies or choose default melodies, then use a pre-trained or a custom model to generate original AI music compositions.
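The "Train a model" bullet above describes the GAN idea only in words. The following is a minimal sketch of that adversarial loop, assuming PyTorch and a toy one-dimensional data distribution in place of music; the network sizes, the make_real_batch helper and every hyperparameter are illustrative assumptions, not DeepComposer's actual implementation.

# Minimal GAN sketch: the generator maps noise to the target distribution,
# the discriminator learns to tell real samples from generated ones.
# Toy 1-D data stands in for music; all names and sizes are assumptions.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def make_real_batch(n):
    # Hypothetical stand-in for real training data (e.g. bars of music).
    return torch.randn(n, 1) * 0.5 + 2.0

g_losses, d_losses = [], []
for step in range(2000):
    real = make_real_batch(64)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: push real samples towards 1, generated towards 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Record both losses so training can be examined afterwards.
    d_losses.append(d_loss.item())
    g_losses.append(g_loss.item())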
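To "understand your model" as described above, those recorded losses can simply be plotted against the training iteration. A matplotlib follow-up, assuming the g_losses and d_losses lists from the sketch above:

# Plot generator and discriminator losses per training iteration
# (assumes g_losses / d_losses from the GAN sketch above).
import matplotlib.pyplot as plt

plt.plot(d_losses, label="discriminator loss")
plt.plot(g_losses, label="generator loss")
plt.xlabel("training iteration")
plt.ylabel("loss")
plt.legend()
plt.title("GAN training progress")
plt.show()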
Create a composition
1. Record a melody: Using the AWS DeepComposer keyboard or your computer keyboard, record a short melody for input.
2. Generate composition: When you're satisfied with your input melody, choose a model and then choose Generate composition.
3. AWS DeepComposer generates accompanying tracks: AWS DeepComposer takes your input melody and generates up to four accompaniment tracks (a hypothetical sketch of this data flow follows these steps).
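As a hedged illustration of that melody-in, tracks-out flow (not the DeepComposer API), the sketch below loads a recorded melody from a MIDI file and appends accompaniment tracks produced by a stand-in model; generate_accompaniment and the file names are hypothetical, and the pretty_midi library is assumed to be available.

# Hypothetical data flow for "Create a composition": load an input melody,
# ask a trained model for up to four accompaniment tracks, write the result.
# generate_accompaniment and the file names are illustrative assumptions.
import pretty_midi

def generate_accompaniment(melody_notes, n_tracks=4):
    # Placeholder for a trained generator (such as the GAN sketched earlier);
    # here it simply echoes the melody an octave lower on each track.
    return [[(note.pitch - 12, note.start, note.end) for note in melody_notes]
            for _ in range(n_tracks)]

midi_in = pretty_midi.PrettyMIDI("melody.mid")           # the recorded input melody
melody_notes = midi_in.instruments[0].notes

composition = pretty_midi.PrettyMIDI()
composition.instruments.append(midi_in.instruments[0])   # keep the original melody

for track in generate_accompaniment(melody_notes):
    accompaniment = pretty_midi.Instrument(program=32)   # e.g. acoustic bass
    for pitch, start, end in track:
        accompaniment.notes.append(
            pretty_midi.Note(velocity=80, pitch=pitch, start=start, end=end))
    composition.instruments.append(accompaniment)

composition.write("composition.mid")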
Train a model
1. Choose an algorithm: Choose a generative algorithm to train a model.
2. Choose a dataset: Choose a genre of music as your dataset.
3. Tweak hyperparameters: Choose how to train your model (an illustrative configuration sketch follows these steps).
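Those three choices (algorithm, dataset genre, hyperparameters) amount to a small training configuration. A sketch of what such a configuration could look like; every field name and value here is an illustrative assumption rather than DeepComposer's actual option set.

# Illustrative training configuration covering the three choices above.
# Field names and values are assumptions, not DeepComposer's real options.
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    algorithm: str         # generative algorithm, e.g. "gan"
    dataset_genre: str     # genre of music used as the training dataset
    epochs: int            # hyperparameters controlling how training runs
    batch_size: int
    learning_rate: float

config = TrainingConfig(
    algorithm="gan",
    dataset_genre="jazz",
    epochs=500,
    batch_size=64,
    learning_rate=1e-3,
)
print(config)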
SSEYO Koan: Brian Eno used it. Where is it now?
https://intermorphic.com/sseyo/koan/
Dynamic Composition
FMOD
AI Deep Learning Composition
- Automatic Music Generation
- Death Metal
- Techno