It's easy to imagine what a point or a line might sound like. Each has characteristics that can be translated into musical ones, but what about the sound of 3-dimensional objects, 4-dimensional objects (objects changing in time), and 5-dimensional objects?
I’m currently working on a project that uses Grasshopper sequences (patterns made of points, lines, polylines, splines, surfaces, and objects) to create sounds. Typically this process runs in reverse: music is visualized. That has been done for some time, from the simple stereo responder showing treble and bass levels to more complex means like the example below (which uses Processing to visualize music):
Yet, as I said, I want to do the opposite: create the visual which then creates the music. I’m not exactly sure how to do this, but I have a start with the logic, in which a graph is made in Grasshopper. The location of an object on that graph will control its pitch, volume, and note, and the graph will be comprised of points (beats), lines (notes), splines (sweeping notes), surfaces (harmonies), and objects (which I’m leaving open-ended because I’m not sure what they will sound like). I’m not sure whether they would exist on the same graph or on multiple graphs, or whether they could be pulled apart in order to examine the space between the notes (if that exists).
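To make the graph idea concrete, here is a minimal sketch of what the mapping might look like, written outside Grasshopper in plain Python. Everything here is an assumption, not part of the project: I'm supposing the x-axis is time in beats, y (normalized 0–1) is scaled to a MIDI pitch, and z (0–1) is volume, so a point becomes a single beat and a line becomes a sampled sweep of notes.

```python
# Hypothetical sketch of the graph-to-sound mapping described above.
# Assumed axes (not settled in the project): x = time in beats,
# y in [0, 1] = pitch, z in [0, 1] = volume.

def point_to_note(point, pitch_range=(36, 84)):
    """Map a 3D point on the graph to a (time, midi_pitch, volume) event."""
    x, y, z = point
    lo, hi = pitch_range
    pitch = round(lo + y * (hi - lo))        # scale y into a MIDI pitch range
    return (x, pitch, max(0.0, min(1.0, z))) # clamp volume to [0, 1]

def line_to_notes(start, end, steps=4):
    """Sample a line into evenly spaced note events (a note, or a glide)."""
    sx, sy, sz = start
    ex, ey, ez = end
    return [
        point_to_note((sx + (ex - sx) * t / steps,
                       sy + (ey - sy) * t / steps,
                       sz + (ez - sz) * t / steps))
        for t in range(steps + 1)
    ]

if __name__ == "__main__":
    print(point_to_note((0.0, 0.5, 0.8)))               # one beat
    print(line_to_notes((0, 0.0, 1.0), (4, 1.0, 1.0)))  # rising sweep
```

A spline or surface would just be a denser sampling of the same idea: more parameters per location, so more of the sound (harmony, timbre) controlled at once.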
Right now it’s just a hypothetical, but the research has begun.