Generative Video Editor 
Artist Statement

This project has two outcomes: a branching-narrative video engine and an infinite set of generated videos of a snowshoeing journey in the Okanagan.

1 Introduction
Traditional cinema presents video in a linear format that keeps the audience focused on the screen. Precise, fast-paced editing keeps people engaged, while the music immerses audiences further. Interactivity is limited to the aftermath of viewing, and critical thinking and reflection shift from a collective social activity to an individual experience. As the pandemic continues, cinema-goers are moving to customized online entertainment. Would audiences enjoy a video work more if they had control over its sequence and music? And what is the significance of generative art if the user does not know it is generative?

2 Background  
As a student artist, I have worked on video pieces, installations and extended reality experiences. The commonality is that I always have difficulty communicating the creative process to the audience or users, and why executive decisions were made. For this project, I am interested in showing the behind-the-scenes work to the public, and thus empowering users with knowledge of, and comfort in seeing, the creative process.

In the readings, Rokeby argues that interactive technology gives "empowerment" to users, much as video games amplify the illusion of power (147). With that in mind, I wanted to create a work whose interface becomes part of the content the user interacts with. However, Campbell warns that "if a work is responding predictably and the viewers become aware of the correlation … then they will feel that they are in control, and the possibility of dialogue is lost" (133). It took trial and error to strike a fine balance of predictability across the four sets of videos.

This project also dives into the domain of generative work through algorithms. The advantages of doing so are explained in section 3.1, Software Design. Yet the unfamiliarity of letting the computer do most of the work was frightening to me. Generative art is a new domain for me, and I had to understand it before creating a work that explains it to the public. My biggest takeaway is that humans partner with code to create infinite possibilities in a shorter time. Access to creative coding frees us to focus on aesthetic rules and experience design. The fact that the computer assigns the sequence of clips does not have to be highlighted to the audience in my work.

3 Experience Design
“A labyrinth is designed to be disorienting, but because it provides a single route, the wanderer will never be truly lost.” (Lupton 28)

When designing the user’s journey, I adopted a labyrinth design: a controlled environment in which the user follows a fixed path from beginning to end. Through the guided paths, whether the work is shown in an exhibition or as a virtual experience, I remain their guide and creator.

(a labyrinth example)

3.1 Software Design
The software is written in the Max/MSP/Jitter programming environment. The language uses node-based (dataflow) programming and handles both multimedia playback and algorithmic logic. It works well for outputting live video, but it is less suited to direct user interaction within the software itself.

A Markov chain, an algorithm in which the next state is chosen from probabilities conditioned only on the current state, is used so that there can be more than one version of the final work. The benefits of this property include: the artist can control the order where required, and videos in earlier states are not affected by later choices. One caveat, as Rokeby notes, is that an algorithm is not programmed to interpret motivation; it only reflects what it sees (149). From the labyrinth to the Markov chain, the goal of giving the user the best experience within a controlled environment stays aligned.
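Outside of Max, the same idea can be sketched in a few lines of Python. This is my own illustration, not the actual Max patch; the clip names and transition probabilities below are hypothetical placeholders for one scene. Each step chooses the next clip using only the current clip, which is the Markov property:

```python
import random

# Hypothetical transition table for one scene: each clip maps to a list of
# (next_clip, probability) pairs. The "end" state triggers the fade-out.
TRANSITIONS = {
    "start":    [("arrive_1", 0.5), ("arrive_2", 0.5)],
    "arrive_1": [("ready_1", 0.7), ("arrive_2", 0.3)],
    "arrive_2": [("ready_1", 1.0)],
    "ready_1":  [("walk_1", 0.6), ("walk_2", 0.4)],
    "walk_1":   [("walk_2", 0.5), ("end", 0.5)],
    "walk_2":   [("walk_1", 0.5), ("end", 0.5)],
}

def next_clip(current, rng=random):
    """Choose the next clip using only the current state (Markov property)."""
    clips, weights = zip(*TRANSITIONS[current])
    return rng.choices(clips, weights=weights, k=1)[0]

def generate_sequence(rng=random):
    """Walk the chain from 'start' until the ending clip is reached."""
    clip, sequence = "start", []
    while clip != "end":
        clip = next_clip(clip, rng)
        if clip != "end":
            sequence.append(clip)
    return sequence

if __name__ == "__main__":
    print(generate_sequence())  # a different order on (almost) every run
```

Because every clip eventually reaches the ending state, each run terminates, yet the order differs between runs, which is exactly what makes the number of versions effectively infinite.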

(left: first draft of the Markov chain for scene 1; right: second draft)

An audience-side user interface was created in the Max program, aiming to be intuitive, simple and modern. It lives in the Max patch’s presentation mode, where the user finds buttons, text, speech bubbles and a player window. The yellow “start” button is the first thing the user clicks. After clicking start, song 1 plays with vocals. Underneath “start”, the “instrumental” and “vocal” buttons appear in purple. Clicking the opposite button, “instrumental”, immediately switches the background music to a different version of the same length. If the user would like to switch to song 2, they simply click “song 2” and then “start”. I debated showing the video database to users, to suggest that the clips jump in an unpredictable order, but removed it because it made the interface redundant.
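The vocal/instrumental toggle works because both versions of each song have the same length: switching only changes which track is audible, not the playback position. A minimal sketch of that behaviour outside Max (my own illustration; the class and its names are hypothetical, not part of the patch):

```python
# Sketch of the vocal/instrumental toggle: both versions share one length,
# so switching keeps the current playback position and only swaps which
# version of the track is audible.
class MusicToggle:
    def __init__(self, length_seconds):
        self.length = length_seconds
        self.position = 0.0
        self.active = "vocal"  # song starts with the vocal version

    def advance(self, seconds):
        """Simulate playback moving forward, clamped to the song length."""
        self.position = min(self.position + seconds, self.length)

    def switch(self):
        """Swap the audible version; playback position is untouched."""
        self.active = "instrumental" if self.active == "vocal" else "vocal"
        return self.active, self.position

player = MusicToggle(length_seconds=180.0)
player.advance(42.5)
print(player.switch())  # ('instrumental', 42.5)
```

In the actual patch the same effect is achieved by playing both versions in sync and routing only one to the output, so the swap is seamless.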


(Max/MSP first draft of the interface, with the video database on the right-hand side)


3.2 Video
The videos follow a group of twenty-somethings exploring the Okanagan, BC in the winter of 2021 on a scenic snowshoeing adventure. Camera techniques including movement, shot size, colour grading and drone footage are used to create four scenes. Each scene is assigned four to six short clips of a few seconds each. Within each scene, the clips have no assigned order and can be viewed separately or together. Although I am the editor, I do not have full control of order and rhythm, so I chose to colour grade each scene with a themed tone to keep it coherent.

After cleaning up the videos, I categorized them in the hope of an order that is both logical and varied. When the change of order between versions is noticeable, I assume the result is more pleasurable for the audience. I first arranged the clips by shot size, hoping for a good mix, but this arrangement did not make sense in the time frame. So I switched to activity-based categories: arrival, getting ready, snowshoeing and sunset. Drone videos introduce the environment and invite the audience into the video space, and a designated ending clip triggers the fade-out of the final clip and music. The main difficulty is scene 3, where the actors snowshoe in the snow. It is problematic when the actors walk in the same direction from clip to clip, which I did not notice while shooting. Of the six clips used, only two do not show them walking towards the camera. I assigned those two clips to the middle of the order so that, after seeing at most two repetitive movements, the audience sees a change of direction.
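The scene-3 constraint can be expressed as a small ordering rule: shuffle within each category, but pin the two direction-change clips so that no more than two towards-camera clips play in a row. A sketch, assuming exactly four towards-camera clips and two direction-change clips (the clip names are hypothetical):

```python
import random

def order_scene_three(toward_camera, direction_change, rng=random):
    """Order scene-3 clips so the two direction-change clips break up
    the towards-camera clips, limiting same-direction runs to two.
    Assumes four towards-camera clips and two direction-change clips."""
    toward = toward_camera[:]
    change = direction_change[:]
    rng.shuffle(toward)   # variety within the category...
    rng.shuffle(change)
    # ...but a fixed pattern across categories:
    # two towards-camera clips, a change clip, twice over.
    return toward[0:2] + change[0:1] + toward[2:4] + change[1:2]

clips_toward = ["walk_a", "walk_b", "walk_c", "walk_d"]  # hypothetical names
clips_change = ["turn_a", "turn_b"]
print(order_scene_three(clips_toward, clips_change))
```

This keeps the generative variety (the shuffle) while enforcing the editorial rule (the interleaving), which mirrors the balance between algorithmic order and artistic control described above.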

(example of 3 screens side by side) 

3.3 Music
The audience is given four options: vocal and instrumental versions of two songs. They are encouraged to change the music to experience the power of audio in video. Although both songs are performed by Josh Woodward, song 1, “Words Fall Apart”, has a moody atmosphere, while song 2, “Same Boat”, is more upbeat with a longer instrumental opening.
The idea of giving the user options came to mind because which song best fits a video is an ongoing debate for editors. In that never-ending search, I want to let the audience experience the choosing by handing them the options.

4 Potential Set Up
Since an infinite system has been built, the potential setups are also infinite. An exhibition of one or three videos projected on a large gallery wall, or a website serving a local host of the software, would both be manageable. When inviting a guest into the gallery space, the focus would be the ever-changing sequence of videos. The differences in order are endless and invite guests to stay longer than one whole sequence. If three videos play at the same time, the audience can see the differences in sequence right away. When users navigate the system online, I imagine they would like to try the user interface themselves and watch the sequence change each time the algorithm starts. They would have access to presentation mode in Max and could click the buttons to customize. However, as of now, only a recording of three runs and an explanation video are published on the internet.

5 Conclusion 
Creating the generative video editor and the snowshoeing videos was a satisfying experience for me as an editor. By giving users the choice to be involved in the editing process, they can witness and customize the snowshoeing experience. On the other hand, with the limited clips and music options in the editor, running the experience once might not be enough to understand the meaning behind it. As in my undergraduate academic journey, I have learned to trust the process and enjoy collaborative work. A visual experience might not be understood by everyone, but creating with good intentions means we can be the change the world needs.



Works Cited
Campbell, Jim. “Delusions of Dialogue: Control and Choice in Interactive Art.” Leonardo, vol. 33, no. 2, 2000, pp. 133–136. 
Lupton, Ellen. Design Is Storytelling. Cooper Hewitt, Smithsonian Design Museum, 2017.
Rokeby, David. “Transforming Mirrors: Subjectivity and Control in Interactive Media.” Critical Issues in Electronic Media, edited by Simon Penny, State University of New York Press, 1995.
