Saturday – March 16, 2024 – Five Minute Videos – 10:30am – Martin 125 (Digital Media Production Screening Room)

Five-Minute Videos follow the idea of “Show us your research, quickly!” and are intended to be fast introductions to research areas with a follow-up discussion.

  • Jason Palamara: I turned my class into a reality show…here’s how it went
  • Jeff Kaiser: Electroacoustic-Music-Informed-Acoustic-Improvisation
  • Jeff Albert: Improvising Trombone and Computer Rig: Off the shelf or roll your own?
  • Mark Zanter: Use of Markov Chains in Ex Machina
  • Mike Frengel: Soundbug
  • Rodrigo Cádiz: Creativity in Generative Musical Neural Networks
  • Timothy Crist: Some Thoughts on Light as Modulation Sources in Eurorack
  • Dmitri Volkov: Pivotuner
  • Greg Cerveny: VCV Rack as Modern Radiophonic Lab

Jason Palamara – I turned my class into a reality show…here’s how it went

In the Spring of 2023, I began a process of documenting every meeting of my undergraduate performing ensemble (DISEnsemble), in the hopes of turning each meeting into a little vlog that would probably only interest the students appearing in the videos. I quickly discovered this to be an eye-opening look at my own pedagogy, and the behind-the-scenes lives of the students taking the class. In the video I discuss how this process could be useful as an assessment tool for other educators in similar situations and also outline the difficulties I encountered along the way.

Jason Palamara specializes in the development of AI and machine learning-enabled performance technologies for music. He directs the Indianapolis-based Machine Musician Lab, which pursues multiple AI music initiatives. He maintains a long-term creative partnership with percussionist-composer Scott Deal, with whom he developed the AVATAR software. AVATAR is an autonomous music system that uses machine learning to play along with live improvisation. He is the founder/director of IUI’s 30+ member DISEnsemble (Diverse Instrument Symphonic Ensemble), which counts among its performers an embodied autonomous digital performer named the Emergency Musical Hologram (EMH). The EMH appears as a ghostly human performer playing multiple instruments. You can catch many of Dr. Palamara’s exploits on his YouTube channels (@disensemble, @machinemusicianlab), which post videos weekly.

Jeff Kaiser – Electroacoustic-Music-Informed-Acoustic-Improvisation

A recent recording session in a reverb-rich environment triggered a meditation on how electronics influenced my acoustic playing—and vice versa.

Jeff Kaiser is the director of operations for MOXsonic and rumor is this is one of his favorite events of the festival.

Jeff Albert – Improvising Trombone and Computer Rig: Off the shelf or roll your own?

This video describes two approaches to software systems for my trombone and computer performance setup. One setup is built in Max and has remained largely unchanged for years, while the other is built in Ableton Live and is often tweaked for each performance. Flexibility and consistency can be competing priorities; I explore the advantages and disadvantages of each and make a brief argument for an all-analog pedal version as well.

Jeff Albert’s areas of research and creative practice include improvisation, jazz performance, performance paradigms for live computer music, and audio production. He has performed in concerts and festivals in the U.S. and throughout Europe, and contributed as a performer, producer, or engineer on over 60 recordings, including the 2017 Grammy winner for Best Traditional Blues Album. He has been named a Rising Star in the DownBeat Critics Poll, and his album Unanimous Sources was named a Top 10 album of 2020 by Jon Garelick in the Boston Globe.

Albert received his B.M. from Loyola University New Orleans, and his M.M. from the University of New Orleans. In May of 2013, he became the first graduate of the PhD program in Experimental Music and Digital Media at Louisiana State University, where he was a founding member of the Laptop Orchestra of Louisiana (LOLs).

Mark Zanter – Use of Markov Chains in Ex Machina

Ex Machina is a multi-disciplinary work combining composed and algorithmically generated music, processed video, and modern dance. The work is a concert-length performance: a spectacle of sound, light, and movement. Inspired by themes in novels by Philip K. Dick, William Gibson, and others, it integrates ensemble performance with computer-generated music, video, and dance, generating interplay between real and virtual worlds; between the grittiness and immediacy of daily life and cyber fantasy. The work is a musical, visual experience embracing technology and celebrating performance.

Several movements of Ex Machina are generated by Max/MSP patches that produce chord progressions or melodic sequences using Markov chains. The Markov chains generate variations of musical content, ensuring that it is similar but never the same. Video elements are mixed live by a VJ; they incorporate text from Donna Haraway’s A Cyborg Manifesto and Baudrillard’s Simulacra and Simulation and The Conspiracy of Art, along with videos of biomorphic shapes and lines. Time-based processing (e.g., interpolation) is used in conjunction with mixes of multiple video channels. The opening of Ex Machina signals possibilities suggested by Haraway: “By the late twentieth century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs. The cyborg is our ontology; it gives us our politics.” (Haraway, Donna: A Cyborg Manifesto, 1985)
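The idea of chord generation by Markov chain can be sketched briefly. This is not Zanter’s actual Max/MSP patch, only an illustrative first-order chain in Python with a hypothetical transition table; the weights and chord symbols are invented for the example.

```python
import random

# Hypothetical transition table: each chord maps to (next chord, probability)
# pairs. A real patch would derive these weights from composed material.
TRANSITIONS = {
    "Cmaj7": [("Am7", 0.5), ("Fmaj7", 0.3), ("G7", 0.2)],
    "Am7":   [("Dm7", 0.6), ("Fmaj7", 0.4)],
    "Dm7":   [("G7", 0.7), ("Am7", 0.3)],
    "G7":    [("Cmaj7", 0.8), ("Am7", 0.2)],
    "Fmaj7": [("G7", 0.5), ("Dm7", 0.5)],
}

def next_chord(current):
    """Draw the next chord according to the weighted transitions."""
    chords, weights = zip(*TRANSITIONS[current])
    return random.choices(chords, weights=weights)[0]

def progression(start="Cmaj7", length=8):
    """Walk the chain to build a progression of the given length."""
    out = [start]
    for _ in range(length - 1):
        out.append(next_chord(out[-1]))
    return out

print(progression())
```

Each run yields a different but stylistically consistent progression, which is the “similar, but never the same” property the abstract describes.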

Dr. Mark Zanter is Professor of Music, Interim Director of the School of Art & Design, and Distinguished Artist and Scholar at Marshall University. He has appeared on NPR’s Live at the Landmark, WILL, IPR, and WVPN’s In Touch With The Arts, and is published by Les Productions d’OZ, European American, and MJIC. Festival performances include MUSIC X, June in Buffalo, Soundscape, NYCEMF, Echofluxx, SEAMUS, ACA, Generative Art International, Seensound, MIUC Melbourne, and SPLICE. Zanter has received awards from ASCAP, AMC, ACF, Meet the Composer, the WV Division of Culture, and WVMTA, and Lament and dream received special distinction for the ASCAP Rudolph Nissim Prize. He records for the Ablaze, Navona, and innova imprints.

Mike Frengel – Soundbug

Rodrigo Cádiz – Creativity in Generative Musical Neural Networks

Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most relevant research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has not been alien to the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, movies, or music. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a robot can now create music that an expert cannot distinguish from music composed by a human, or create novel musical entities that were not known at training time, or exhibit conceptual leaps, does it mean that the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational auto-encoder network trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts, despite the fact that it was trained with images and not musical data. We discuss and assess these generative models in terms of their creativity, show that they are in practice capable of learning musical concepts that are not obvious from the training data, and hypothesize that these deep models, based on our current understanding of creativity in robots and machines, can in fact be considered creative.

Rodrigo F. Cádiz is a composer, researcher, and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and obtained his Ph.D. in Music Technology from Northwestern University. His compositions, comprising approximately 60 works, have been presented at venues and festivals around the world. His catalogue includes works for solo instruments, chamber music, symphonic and robot orchestras, visual music, computers, and new interfaces for musical expression. He has received several composition prizes and artistic grants in both Chile and the US. He has authored around 60 scientific publications in peer-reviewed journals and international conferences. His areas of expertise include sonification, sound synthesis, digital audio processing, computer music, composition, new interfaces for musical expression, and the musical applications of complex systems. He has obtained research funds from Chilean governmental agencies, such as ANID and CNCA. He received a Google Latin American Research Award (LARA) in the field of auditory graphs. In 2018, Rodrigo was a composer in residence with the Stanford Laptop Orchestra (SLOrk) at the Center for Computer Research in Music and Acoustics (CCRMA), and a Tinker Visiting Professor at the Center for Latin American Studies, Stanford University. In 2019, he received the Prize for Excellence in Artistic Creation at UC. He was the chair of the 2021 edition of the International Computer Music Conference. He is currently a professor at both the Music Institute and the Electrical Engineering Department at UC.

Timothy Crist – Some Thoughts on Light as Modulation Sources in Eurorack

My presentation will describe tools available for the conversion of light, shadows, and color into modulation sources for Eurorack performance. These tools include infrared X-Y motion sensors, light-dependent resistors (LDRs), and all-light sensors that monitor the RGB and IR spectra. Lighting techniques will also be discussed. Lastly, the presentation will include the use of neural networks that receive light-converted voltages as stimuli and respond by passing signals to control destinations.

Timothy Crist is a Professor of Music at Arkansas State University where he teaches composition, electronic music, theory, and classical guitar. He is the recipient of the National Endowment for the Arts 2003 Individual Artist Fellowship for Music Composition. He has composed over 300 musical works for various media including solo, chamber, orchestral, band, and electronic media. Crist received a D.M.A. from the University of Georgia, Athens. Crist’s music is performed throughout the United States and has been featured on National Public Radio, SEAMUS conferences, Southeastern Composers League, the Electronic Music Midwest Festival, the Florida Electroacoustic Music Festival, the Society of Composers, Inc. conferences, New York City Electronic Music Festival, and at the Missouri Experimental Sonic Arts Festival. In recent years, he has turned to modular synthesis where his music has been subjected to duress through indeterminate procedures, Chua attractors, generative processes, mysterious algorithms, and a number of chaotic modulators. His recent research involves embracing STEM education and revising music classroom instruction to focus heavily on new methods of music composition including modern hardware and software tools.

Dmitri Volkov – Pivotuner

Pivotuner is a VST3/AU MIDI effect plugin that automatically tunes note data in adaptive pure intonation, in real time. Where pure intonation was previously out of reach for most musicians due to its difficulty and impracticality, Pivotuner makes it easy and straightforward through novel yet simple algorithms. This may open the exploration of pure intonation to a larger and more diverse crowd of musicians!

This video serves as a brief overview of what pure intonation is, and how it can be used to achieve microtonal modulation. The mechanisms shown are directly achievable with Pivotuner, and Pivotuner was used to create the video. Pivotuner itself was created with help from Ben Bloomberg, and this video was created with input from Barak Schmool.
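The arithmetic behind pure intonation and pivot-style microtonal modulation can be shown in a few lines. This sketch is not Pivotuner’s actual algorithm, only an illustration of why adaptive tuning is needed: pure interval ratios disagree slightly with equal temperament, so retuning each chord from a pivot note makes the overall pitch level drift.

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# Pure intonation uses small whole-number ratios: major third 5/4, fifth 3/2.
# Equal temperament (ET) approximates them with powers of 2**(1/12).
et_third = 2 ** (4 / 12)   # 400 cents
et_fifth = 2 ** (7 / 12)   # 700 cents

print(f"ET third minus pure third: {cents(et_third) - cents(5/4):.1f} cents")
print(f"ET fifth minus pure fifth: {cents(et_fifth) - cents(3/2):.1f} cents")

# Pivot modulation: treat the old chord's pure third (5/4) as the new tonic,
# then stack another pure third on it. Two pure thirds (25/16) fall short of
# the ET augmented fifth (800 cents), so the pitch level has drifted.
new_tonic = 5 / 4
drift = cents(new_tonic * 5 / 4) - 800
print(f"after one pivot, drift from ET: {drift:.1f} cents")
```

The pure third comes out about 13.7 cents narrower than its ET counterpart, and one pivot already shifts the pitch level by roughly 27 cents; an adaptive tuner has to track (or musically exploit) exactly this kind of drift in real time.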

Dmitri Volkov primarily operates within the realms of music and computer science, both of which he currently studies at IU Bloomington, within the Jacobs School of Music and the Luddy School of Informatics, Computing, and Engineering. Musically, Dmitri is an award-winning composer who seeks to find the technical and academic boundaries to which music can be pushed while maintaining a rich emotional response; he is also an accomplished multi-instrumentalist who has performed alongside highly accomplished musicians, including members of world-famous orchestras such as the Baltimore Symphony, Dallas Symphony, and Metropolitan Opera Orchestras. With computer science, Dmitri is self-taught in C++/JUCE and Python, and has published several apps to the App and Google Play stores; he is currently developing Pivotuner, an audio plugin which enables adaptive pure intonation on keyboard instruments. Dmitri also wants you to know that he does not usually write in the third person.

Greg Cerveny – VCV Rack as Modern Radiophonic Lab

Early synthesis and radio broadcast studios are intertwined with the history of electronic music. From influencing the design of modular synthesizers to the development of recorded media as sonic material, we have much to learn from radio. In this lecture we explore modular synthesis as a platform for interactive performance that references both historic radiophonic practices and contemporary radio art techniques. Examples will be provided in VCV Rack, an open-source platform that enables experimentation with modular synthesis.

Greg Cerveny is a programmer, synthesist, composer, and independent media artist. He is the founder of Experimental Modular Radio, an online broadcast that encourages collaboration between modular synthesis practitioners via shared creative prompts. His practice extends beyond materials and process by embracing values of community, education, and inclusion as central points of his work.
