How does everything work?
How was MyFi built?

How were the videos made?

Cookie Tree shot all the footage for the videos on her iPhone. She glitched the videos in real time with an analog video glitch processing pipeline. The pipeline is displayed in the diagram below.

[Block diagram of the MyFi video glitch processing pipeline:
Cookie Tree's computer → HDMI digital video signal → digital-to-analog video converter → analog video signal → Fritz Decontroller glitch processor, built by Big Pauper (BPMC) → glitched analog video signal → time base corrector (TBC) → time-corrected, glitched analog video signal → analog-to-digital video converter → time-corrected, glitched digital video signal → Elgato HD60 S+ capture card → back to Cookie Tree's computer for recording.]

Cookie Tree produced all footage in the collection using the pipeline above. She made 232 video performances for MyFi, ranging from one minute to over two hours and totaling roughly 106 hours of video art. Each of these videos was glitched in real time, and during many of her glitch performances she listened to Dr. Slurp's music from the collection.

Once the glitched videos were recorded, Dr. Slurp wrote a program that split each video into 36.45569-second clips, matching the length of each recorded audio track. This number is not arbitrary: it is the amount of time that passes over 12 measures of 4/4 time at 79 beats per minute. The set of 232 performances was split into 10,602 clips.
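The clip length follows directly from the tempo arithmetic. A quick sketch (not Dr. Slurp's actual code) that derives it:

```java
public class ClipLength {
    public static void main(String[] args) {
        int measures = 12;
        int beatsPerMeasure = 4;  // 4/4 time
        double bpm = 79.0;

        double beats = measures * beatsPerMeasure;  // 48 beats
        double seconds = beats * 60.0 / bpm;        // 2880 / 79 ≈ 36.45569 s

        System.out.println("Clip length: " + seconds + " seconds");
    }
}
```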

How was the music made?

Dr. Slurp recorded 105 audio layers at his studio in the humid swamp of Miami, Florida. There are 19 types of layers, including bass (with a homemade digital wah), drum machine (Teenage Engineering PO-12), kalimba, steel drum, cabasa, bells from Cambodia, and analog synthesizer (Moog Matriarch). Each layer is the length of 12 measures of 4/4 at 79 beats per minute. Each part was recorded with either a Sennheiser e 602 II or, for some of the percussion parts, a fun condenser pencil mic. The interface was a Steinberg UR-RT4, with Neve transformers.

With the source audio in hand, Dr. Slurp wrote a beautiful/crazy Java program that combinatorially generates arrangements of the audio layers for each of the 232 video performances. The arranger was programmed to create continuity across adjacent videos in a performance: it knows which layers belong to the melodic and percussion sections, and it uses this information when choosing the layers for each video in a performance.
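Dr. Slurp's arranger code is not public, but the continuity idea can be sketched as follows. This is purely illustrative (the layer names, carry-over rule, and section constraints are assumptions, not his actual logic): each clip carries over some of the previous clip's layers and always keeps at least one melodic and one percussion layer.

```java
import java.util.*;

// Illustrative sketch only: carries roughly half of the previous clip's
// layers forward for continuity, and guarantees each clip has at least
// one melodic and one percussion layer.
public class ArrangerSketch {
    static final List<String> MELODIC = List.of("bass", "kalimba", "steelDrum", "synth");
    static final List<String> PERCUSSION = List.of("drumMachine", "cabasa", "bells");

    public static List<List<String>> arrange(int clips, long seed) {
        Random rng = new Random(seed);
        List<List<String>> plan = new ArrayList<>();
        List<String> previous = List.of();
        for (int i = 0; i < clips; i++) {
            List<String> chosen = new ArrayList<>();
            // Carry over about half of the previous clip's layers.
            for (String layer : previous) {
                if (rng.nextBoolean()) chosen.add(layer);
            }
            // Ensure at least one layer from each section.
            if (Collections.disjoint(chosen, MELODIC))
                chosen.add(MELODIC.get(rng.nextInt(MELODIC.size())));
            if (Collections.disjoint(chosen, PERCUSSION))
                chosen.add(PERCUSSION.get(rng.nextInt(PERCUSSION.size())));
            plan.add(chosen);
            previous = chosen;
        }
        return plan;
    }
}
```

Because each clip inherits layers from its neighbor, adjacent clips in a performance share material rather than changing abruptly.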

Once the arranger decided the arrangements, Dr. Slurp used another program he wrote to programmatically mix the individual audio layers into songs for each of the 10,602 videos. He built his own stereo .wav file mixer in Java, since nothing else available could handle the task. After all the songs were mixed, they were layered onto their corresponding videos.
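The core of any such mixer is summing PCM samples. A minimal sketch (not Dr. Slurp's mixer) of that step, assuming the layers are already decoded to interleaved 16-bit stereo PCM arrays of equal length:

```java
// Sums corresponding samples across layers, hard-clipping to the
// 16-bit range so loud sums don't wrap around and distort.
public class MixSketch {
    public static short[] mix(short[][] layers) {
        int n = layers[0].length;
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int sum = 0;
            for (short[] layer : layers) sum += layer[i];
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```

Summing in `int` before clipping is the key detail: adding many 16-bit samples directly as `short` would silently overflow.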