In a nutshell, the Internet of Things (often abbreviated to IoT) is the scenario of everyday objects and devices being connected to the Internet, sharing and receiving data. These ‘things’ are items that you traditionally wouldn’t expect to have an Internet connection—your washing machine, your car, or the street light outside your house. These devices could share data about their surroundings via sensors, or they could be remotely controlled by their users via smartphone apps. Examples of current IoT consumer products include the WeMo Home Automation devices, which allow monitoring and control of home appliances from anywhere in the world, and Hexoskin—a biometric shirt that monitors a user’s health data.
Currently, applications of IoT are mostly found within environmental monitoring, energy monitoring, home automation, healthcare, and transportation. However, it’s slowly starting to be applied to the music technology industry too. For example, between 2012 and 2013 Google ran The Universal Orchestra—an interactive exhibition at the London Science Museum that allowed physical musical instruments to be played by anyone, anywhere, via the Google Chrome web browser. MIT also built something very similar called Patchwerk—a massive modular synthesizer that could likewise be controlled and monitored online. In regard to consumer products, the littleBits cloudBit can be used to connect your littleBits/KORG modular synth to the Internet, and the Modal Electronics hardware synthesizers all have Internet connectivity for various uses.
However, these examples only begin to scratch the surface of how IoT could be applied to the music technology industry compared to the way it has transformed other fields such as home automation. In this article, I’m going to discuss the different ways that IoT could be applied to music composition, production, and performance in the near future, and how it could be an overall benefit to the industry.
A remote music performance, also known as a networked music performance, is the process of real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. There is a lot of research and activity going on in this area, such as that of the SoundWIRE research group at CCRMA, Stanford University, which has developed JackTrip—a software application for streaming live multi-channel, uncompressed audio over the Internet.
However, most activity and research in this area focuses on audio streaming rather than the transmission of instrument performance data, which could be very useful and interesting when used as part of a traditional live stage performance. Imagine a solo musician who needs a backing band of particular people for a tour, but none of them are available to travel. IoT could be applied here by attaching sensors to the backing band’s instruments to record performance data (e.g., the strike of drums, the presses of piano keys), which is then streamed over the Internet to the stage of the main performer to control the playing of the same type of instruments. This is very similar to Google’s Universal Orchestra, except that the web browser has been replaced by actual musical instruments, offering greater playability and control over the remotely-controlled instruments. This kind of setup would allow for a fascinating remote performance that goes beyond just streaming live audio or computer-sequenced music.
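To make the idea concrete, here is a minimal sketch of how such sensor-captured performance events might be sent across the Internet. The event format, field names, and use of UDP are all assumptions for illustration—a real system would also need clock synchronisation and a jitter buffer on the receiving side.

```python
import json
import socket
import time

def encode_event(instrument, note, velocity):
    """Pack one performance event (e.g. a drum strike) as a compact JSON datagram."""
    return json.dumps({
        "instrument": instrument,   # which sensor fired
        "note": note,               # MIDI-style note number
        "velocity": velocity,       # how hard it was played, 0-127
        "timestamp": time.time(),   # sender-side capture time
    }).encode("utf-8")

def send_event(sock, address, event_bytes):
    """Fire-and-forget over UDP, favouring low latency over guaranteed delivery."""
    sock.sendto(event_bytes, address)

# Send a single snare hit to a (placeholder) on-stage receiver.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_event(sock, ("127.0.0.1", 9000), encode_event("snare", 38, 112))
sock.close()
```

On stage, a matching receiver would decode each datagram and drive a mechanical striker or actuator on the corresponding instrument.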
Remote recording, or remote collaboration, is another process that is nothing new, but an activity that could be enhanced by the application of IoT. There are many examples of producers recording bands whilst both are in different recording studios, using videoconference software to monitor and communicate with each other. There are also dedicated music production applications and tools, such as Ohm Studio, that allow musicians to collaboratively record and produce music over the Internet.
Collaboration features of Ohm Studio.
However, IoT could take remote recording a step further. Imagine a particular musical instrument in a specific environment, such as a pipe organ in a famous cathedral, which a producer wants to use in a track. They’re not able to visit the location to record it, but a number of techniques similar to those of The Universal Orchestra could be applied to allow remote recording to take place. Firstly, MIDI note data could be streamed in real time from the producer’s computer over the Internet to the cathedral to control mechanisms that physically play the organ. The created sound could then be recorded via microphones and streamed back to the producer’s computer. Further control could be provided by allowing the microphone placements to be adjusted via remote-controlled moving platforms, and mechanisms could be attached to the organ stops to allow control of the organ’s timbre, much like how the dials on the Patchwerk synthesizer could be remotely controlled. This kind of setup could provide a level of accuracy and control that goes beyond what can currently be done with audio sampling libraries and convolution software.
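The note and stop messages such a system might exchange can be sketched with standard MIDI byte layouts. The note-on message below follows the MIDI specification; the choice of control-change numbers for the organ stops is purely a hypothetical mapping for illustration.

```python
NOTE_ON = 0x90         # MIDI channel voice message: note on
CONTROL_CHANGE = 0xB0  # MIDI channel voice message: control change

def note_on(channel, note, velocity):
    """Standard 3-byte MIDI note-on message for the given channel (0-15)."""
    return bytes([NOTE_ON | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def set_stop(channel, stop_cc, engaged):
    """Map an organ stop to a hypothetical MIDI CC: value 127 = drawn, 0 = closed."""
    return bytes([CONTROL_CHANGE | (channel & 0x0F), stop_cc & 0x7F,
                  127 if engaged else 0])

# Play middle C at full velocity and draw an assumed stop on CC 20;
# these bytes would be streamed over the Internet to the organ's mechanism.
msg = note_on(0, 60, 127) + set_stop(0, 20, True)
```

The same framing could carry the microphone-platform commands, keeping the entire remote session inside one simple control protocol.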
Remote live mixing would be the process of live music being mixed in real time by a mixing engineer in a different location. The market already contains a number of networked audio mixers—there’s the Roland VR-3, which is designed for live AV web streaming; the Behringer X32 Core, which can process remote audio inputs and outputs, as well as be controlled locally via iOS apps; and the Shure SCM820, which uses networking to stream audio between devices on the same local network, provides control of the mixer via a web browser, and offers Skype integration for teleconference events.
However, none of these mixers allow remote control via the Internet, which could be a very useful feature. Talented mixing engineers are highly sought after, yet currently they can only provide their services to a single show or venue at any time. This limit could be overcome by making audio mixers Internet-connectable, allowing them to be controlled via a web browser from anywhere in the world. A mixing engineer could then sit in their studio with a live stream of a show in a different location, mixing the audio in real time remotely. Once that show has finished, they could quickly and easily connect to a show elsewhere and do the same all over again. Remote mixing could mean artists and venues get the live mix they want, potentially improving the quality of live music.
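A sketch of what the browser-to-mixer control channel might carry: the engineer’s browser sends fader moves as small messages, and the mixer applies them to its internal state. The message schema, channel numbering, and dB range here are assumptions for illustration, not any real mixer’s protocol.

```python
import json

def fader_message(channel, level_db):
    """Encode one fader move, clamped to a typical -90..+10 dB fader range."""
    level_db = max(-90.0, min(10.0, level_db))
    return json.dumps({"type": "fader", "channel": channel, "level_db": level_db})

class MixerState:
    """Minimal model of the mixer side, applying incoming control messages."""

    def __init__(self, channels=32):
        # All faders start fully down.
        self.levels = {ch: -90.0 for ch in range(1, channels + 1)}

    def apply(self, message):
        msg = json.loads(message)
        if msg["type"] == "fader":
            self.levels[msg["channel"]] = msg["level_db"]

mixer = MixerState()
mixer.apply(fader_message(1, -6.0))   # bring the lead vocal up
mixer.apply(fader_message(12, 20.0))  # out-of-range request, clamped to +10 dB
```

In practice these messages would travel over a persistent connection such as a WebSocket, with the venue streaming monitor audio back to the engineer.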
Generative Music is a term popularized by Brian Eno to describe music that is ever-changing and different, created by a system. Algorithmic composition is the technique of using algorithms or arbitrary data to create music, and the two approaches are often combined in practice. For example, John Cage’s composition Reunion was performed on a specially made photoreceptor-equipped chessboard, where sounds were triggered by moves made on the board, making the piece different each time it was performed.
IoT could have a huge impact on generative music and algorithmic composition due to the large amount of data that IoT devices can share. Whether it is traffic information, weather data, or a person’s health statistics, all this data could potentially be shared and accessed via the Internet and then used by music software as a control input, mapping different data to musical parameters. For example, real-time environmental data of a city could be fed into a generative music application, mapping temperature to musical scale type, wind speed to tempo, pollution level to chord density, and so on. In an era where musicians and producers are constantly on the lookout for new ways to make music in order to stand out above the rest, using IoT alongside generative music and algorithmic composition could be a way of doing so.
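The city-data example above can be sketched as a simple mapping function. The input fields, value ranges, and the particular aesthetic choices (warm weather mapping to a major scale, and so on) are all assumptions; real readings would arrive from an IoT data feed.

```python
def scale_value(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi], clamping the input."""
    x = max(in_lo, min(in_hi, x))
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def map_city_data(reading):
    """Turn one environmental reading into generative-music parameters."""
    return {
        # Warmer than 15 C -> major scale, colder -> minor (arbitrary choice).
        "scale": "major" if reading["temperature_c"] >= 15 else "minor",
        # 0-100 km/h wind speed -> 60-180 BPM tempo.
        "tempo_bpm": round(scale_value(reading["wind_kmh"], 0, 100, 60, 180)),
        # Air-quality index 0-300 -> 1-5 notes per chord.
        "chord_density": round(scale_value(reading["aqi"], 0, 300, 1, 5)),
    }

# A mild, breezy, moderately polluted day:
params = map_city_data({"temperature_c": 21.0, "wind_kmh": 25.0, "aqi": 90})
# -> {'scale': 'major', 'tempo_bpm': 90, 'chord_density': 2}
```

A generative engine would poll the feed periodically and re-run the mapping, letting the music drift as the city’s conditions change.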
The Internet of Things is a fascinating and useful field that will no doubt become more prominent in all parts of life in the near future, including within the music technology industry. We are already beginning to see both DIY and consumer examples of music-related IoT devices and platforms, and as the underlying technology rapidly improves and falls in cost, the number of these devices will quickly increase, potentially changing the way we compose, produce, and perform music.