Main Page and Composition

From EMC23 - Satellite Of Love
<strong>Electronic Music Coders Amsterdam</strong>

== DAW Composition ==
* Rapid Composer
* Kords
* Magenta


https://www.meetup.com/Electronic-Music-Coding
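Chord tools such as Rapid Composer and Kords automate progression building; as a rough illustration only (not any tool's actual API — all names here are invented), diatonic triads can be derived from scale degrees as MIDI note numbers:

```python
# Sketch: derive MIDI chords from scale degrees, roughly the kind of
# work DAW chord tools automate. Illustrative only, not a real tool's API.

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def triad(root_midi, degree):
    """Build a diatonic triad on a 1-based scale degree (stacked thirds)."""
    idx = degree - 1
    return [root_midi + MAJOR_SCALE[(idx + step) % 7] + 12 * ((idx + step) // 7)
            for step in (0, 2, 4)]

def progression(root_midi, degrees):
    return [triad(root_midi, d) for d in degrees]

# I-V-vi-IV in C major (C4 = MIDI 60)
chords = progression(60, [1, 5, 6, 4])
print(chords)  # [[60, 64, 67], [67, 71, 74], [69, 72, 76], [65, 69, 72]]
```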
== Generative Composition ==
* Koan
* [[Generative Music]]

=== Generative AI and DeepComposer ===
Explore the AWS DeepComposer service.

* Train a model: Get hands-on experience by training your own models and begin to grasp how models work. A GAN has two contesting neural networks: one model is generative and the other is discriminative. The generator attempts to generate data that maps to a desired data distribution.
* Understand your model: Examine how the generator and discriminator losses changed while training, how certain musical metrics changed while training, and visualise the generated music output for a fixed input at every iteration.
* Create a composition in Music Studio: Music Studio gives you a chance to play music and use a GAN. First, record melodies or choose default melodies; then use a pre-trained or a custom model to generate original AI music compositions.

Creating a composition:
# Record a melody: Using the AWS DeepComposer keyboard or your computer keyboard, record a short melody for input.
# Generate a composition: When you are satisfied with your input melody, choose a model and then choose Generate composition.
# AWS DeepComposer generates accompanying tracks: it takes your input melody and generates up to four accompaniment tracks.

Training a model:
# Choose an algorithm: Choose a generative algorithm to train a model.
# Choose a dataset: Choose a genre of music as your dataset.
# Tweak hyperparameters: Choose how to train your model.


== The Group ==

* [https://www.meetup.com/Electronic-Music-Coding/ Electronic Music Coding Amsterdam]
* [http://www.emc23.com The Blog]
* [http://wiki.emc23.com This Wiki]
* [https://twitter.com/emc23dotcom/ Twitter Account]
* [https://github.com/EMC23 Github Account]


== The Topics ==

=== [[Coding]] ===
=== [[Livestream]] ===
=== [[Studio Technology]] ===
=== [[Performance]] ===
=== [[Installations]] ===
=== [[Modular]] ===
=== [[DIY]] ===
=== [[Composition]] ===
=== [[Digital Instruments]] ===
=== [[Sound Design]] ===
=== [[Deep Learning]] ===
=== [[DSP]] ===
=== [[Graphic Interfaces]] ===


== The Goal ==
Meet, Share Knowledge, Network.


== The Projects ==

=== Daily - Audio/Visual Streaming ===
[[OBS]], [[MSDP]], and Ableton [[Max4Live]]: electronic audio/visual [[Livestream]].

=== Weekly - Friday Jamming Session ===
Ableton Link and/or MIDI sync-up. Every genre investigated. Participants take turns in groups of varying size.

=== Evening Study Classes ===
Tues, Wed, Thurs: one-on-one audio programming workshops.
* [[Javascript]] ([[Web Audio]])
* [[Pure Data]] or [[Max]]/MSP (student's choice)
* [[C++]] (JUCE or Vult): [[VCVRack]] or [[VST]] audio plugin
* [[Python]] machine learning
* Hardware: [[Bela]] or [[Arduino]] (student's choice)
* [[Livestreaming]]

=== Monthly - Open Session ===
Bandcamp Friday Livestream

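The GAN notes above (a generative and a discriminative network, with both losses watched during training) can be made concrete with a toy sketch. This is not DeepComposer's actual architecture — just a one-dimensional GAN in plain Python where the "data" is numbers drawn from N(3, 1), so you can see the generator drift toward the real distribution:

```python
import math, random

random.seed(0)

def sig(u):
    # numerically safe sigmoid
    if u >= 0:
        return 1.0 / (1.0 + math.exp(-u))
    eu = math.exp(u)
    return eu / (1.0 + eu)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, fed z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for _ in range(2000):
    zs    = [random.gauss(0, 1) for _ in range(batch)]
    reals = [3.0 + random.gauss(0, 1) for _ in range(batch)]
    fakes = [a * z + b for z in zs]

    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    gw = sum(-(1 - sig(w * x + c)) * x for x in reals) / batch \
       + sum(sig(w * f + c) * f for f in fakes) / batch
    gc = sum(-(1 - sig(w * x + c)) for x in reals) / batch \
       + sum(sig(w * f + c) for f in fakes) / batch
    w -= lr * gw
    c -= lr * gc

    # generator step: push D(fake) toward 1 (non-saturating loss)
    ga = sum(-(1 - sig(w * f + c)) * w * z for z, f in zip(zs, fakes)) / batch
    gb = sum(-(1 - sig(w * f + c)) * w for f in fakes) / batch
    a -= lr * ga
    b -= lr * gb

print(f"generator offset b = {b:.2f}")  # should drift toward the real mean
```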

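Generative music in the Koan tradition is rule-driven rather than learned. As a minimal illustration — the notes and transition weights below are invented for this sketch, not taken from any real tool — a first-order Markov chain over scale degrees already produces endless non-repeating melodies:

```python
import random

random.seed(42)

# Toy first-order Markov chain: each note picks its successor from
# weighted options, the kind of rule system generative tools build on.
# (The weights are made up for illustration.)
NEXT = {
    "C": [("D", 3), ("E", 3), ("G", 2), ("C", 1)],
    "D": [("C", 2), ("E", 4), ("F", 2)],
    "E": [("D", 2), ("F", 3), ("G", 3)],
    "F": [("E", 3), ("G", 3), ("D", 1)],
    "G": [("C", 4), ("E", 2), ("A", 2)],
    "A": [("G", 4), ("F", 2)],
}

def melody(start="C", length=16):
    notes = [start]
    while len(notes) < length:
        options, weights = zip(*NEXT[notes[-1]])
        notes.append(random.choices(options, weights=weights)[0])
    return notes

print(" ".join(melody()))
```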
=== Workshops ===

* Connect a MIDI controller to the browser
* [[Build a synth in C++]]
* [[Build a synth in Chrome Browser]] (Javascript)
* [[Build a synth in VCVRack]]
* [[Build a synth in Reaktor]]
* [[Build a synth in Reaktor Blocks]]
* [[Build a synth in Pure Data]]
* [[Build a synth in Max For Live]]


== SSEYO Koan: Brian Eno used it. Where is it now? ==

https://gizmodo.com/a-beginners-guide-to-the-synth-1736978695
https://intermorphic.com/sseyo/koan/


== Dynamic Composition ==
FMOD

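Whatever the platform, the build-a-synth workshops above start from the same primitives: an oscillator shaped by an envelope. A minimal stdlib-only Python sketch (an assumed starting point, not any workshop's actual material) that renders one decaying sawtooth note to a WAV file:

```python
import math, struct, wave

# Minimal "synth": a sawtooth oscillator with a linear decay envelope,
# rendered to a mono 16-bit WAV using only the standard library.
RATE = 44100

def saw(freq, t):
    """Naive sawtooth in [-1, 1) at time t seconds."""
    phase = (t * freq) % 1.0
    return 2.0 * phase - 1.0

def render(freq=220.0, seconds=1.0, path="note.wav"):
    n = int(RATE * seconds)
    frames = bytearray()
    for i in range(n):
        t = i / RATE
        env = 1.0 - (i / n)                 # linear decay envelope
        sample = 0.5 * env * saw(freq, t)   # 0.5 leaves headroom
        frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(bytes(frames))
    return path

render()
```

Swapping `saw` for a sine or square, or the linear decay for an ADSR, is the natural next exercise.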

== AI [[Deep Learning]] Composition ==
* [[Automatic Music Generation]]
* Death Metal
* Techno

== [[Game Music]] ==


== Minimum Requirements ==
* Laptop
* [[Asio]]-compliant soundcard
* [[Midi Controller]] (will be supplied for the first few sessions)
* Bring headphones if you have them

== Cost ==
* Evening sessions: 300 Euro per quarter
 
== The Place ==
The Satellite Of Love

Panamalaan 6d

1019 NE Amsterdam

Revision as of 02:17, 26 August 2021
