Instructions for Humans is my next major piece of work centred around an exhibition at Birmingham Open Media from October 2017 to January 2018.
Funded by Arts Council England, the work will be developed from May to September and be available for touring from 2018.
Research and development materials and code will be archived at this GitHub repository.
A blog, instructionsforhumans.tumblr.com, is collecting examples of humans performing for the benefit of machines. Further examples are welcome.
Development blog posts:
Adapted from the exhibition proposal and correct as of April 2017. Changes will occur as development proceeds.
Instructions for Humans aims to explore how machine gesture informs the human creative process and in turn how human gesture might inform mechanical representations. By employing recent developments in artificial intelligence and machine learning I will be asking what it means for a computer to “see”, how society can be influenced by opinions derived from the perceptions of machines, and how interrogating mechanical systems can help us to question the biases of our own sense-based cognition.
I will be producing the work during the upheavals of Brexit and Trump, upsets said to be informed by algorithmic distribution of “fake news” and data-mining. In this climate the work will address how machine learning affects concepts of filters, truth, objectivity and the fungibility of facts, looking into how we might develop tactics to understand and deal with this media environment.
The work will balance serious themes with an explorative and educational approach, encouraging audiences to think about these processes and systems in modern society and question the place of cameras and other sensors in an era of massive data processing by governments and corporations. My end goal is to develop work which reveals the man behind the curtain but also dispels the confused fear and despair that often inform discussion of these issues.
The work will be developed through the summer and culminate as an evolving performative exhibition directed by Machine Learning systems, which are in turn directed by the artist.
Instructions for Humans will comprise digital, sculptural, and performance artworks generated by Machine Learning programmes. The work will be centred in a gallery and comprise the following parts.
The Black Box - A sealed box containing the computer running the Machine Learning system, which receives information about the city from a variety of sources and processes it into “models” used to generate instructions and parameters for the artworks. This represents proprietary, closed systems, such as those of Google and Facebook, which can only be partially understood by interrogating them.
Accompanying this will be an educational resource, “Inside The Black Box”, which will explain the workings of Machine Learning systems and their social implications to the layperson.
Portrait of the City - a digital visual/sculptural artwork involving screens, projections and digitally fabricated objects which forms the centrepiece of the exhibition. Within broad outlines set by the artist, the work will be continuously generated from data produced by The Black Box throughout the duration of the show. The work will comprise:
- Visualisations of information about the city on screens showing useful and absurd conclusions.
- Images generated from the corpus of information in the system, attempting to show how the machine sees the city.
- Objects fabricated to specifications determined by the output of the system using laser cutting and 3D printing. This collection of objects will grow during the exhibition with iterations in design showing the changes in the information corpus.
- Kinetic sculptures which move according to the data sent to them by the system, creating a machine-gesture performance informed by the city.
The arrangement of the objects and screens will be determined by the system which will produce instructions for gallery staff each day along with a generated gallery guide of dubious use.
Instructions for Humans - a series of performance and public participatory works scheduled during the exhibition, where directions are generated by the system immediately prior to the event.
Performance artists will use generated outputs from the system as a “score” for movement. These outputs will range from explicit instructions to interpreting shapes, sounds and colours. We shall workshop this during the summer and performances will take place in the gallery and around the city during the exhibition. Documentation will be exhibited in the gallery and fed back into the system.
A number of guided tours will take place during the exhibition where participants will be invited to perceive their city as a machine. Routes and instructions for how to think and behave will be algorithmically generated and issued to participants. Any materials gathered during these tours, such as photographs or written observations, will be fed back into the system.
The system will generate instructions to be disseminated via social media, such as Twitter bots asking for photos of specific objects or surveys about the city. Instructions will also be available on the website.
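As a rough illustration of how such instruction generation might work, the sketch below fills text templates with parameters drawn from a pool of city-derived data. All names here (the templates, the `city_data` fields) are hypothetical stand-ins, not the actual output of The Black Box:

```python
import random

# Hypothetical instruction templates; a real system would derive its
# parameters from the Machine Learning models rather than a fixed dict.
TEMPLATES = [
    "Photograph a {colour} {object} near {place}.",
    "Walk for {minutes} minutes and record every {object} you pass.",
    "Describe {place} as if you were a {colour} camera.",
]

def generate_instruction(data, seed=None):
    """Fill a randomly chosen template with parameters from the data pool."""
    rng = random.Random(seed)  # seedable so an instruction can be reproduced
    template = rng.choice(TEMPLATES)
    return template.format(
        colour=rng.choice(data["colours"]),
        object=rng.choice(data["objects"]),
        place=rng.choice(data["places"]),
        minutes=rng.choice(data["minutes"]),
    )

# Illustrative data pool only - not real exhibition data.
city_data = {
    "colours": ["red", "grey", "blue"],
    "objects": ["door", "lamp post", "bicycle"],
    "places": ["the canal", "New Street", "the market"],
    "minutes": [5, 10, 15],
}
print(generate_instruction(city_data, seed=1))
```

Seeding the generator means a given instruction can be regenerated later, which would let documentation fed back into the system be matched to the instruction that produced it.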
The work will culminate in a publication titled Instructions for Humans containing the instructions along with guidance on using them to inspire creative practice, echoing Brian Eno’s Oblique Strategies.
Why Machine Learning?
Machine Learning systems, commonly called “algorithms” and “Artificial Intelligence”, are complex statistical programmes which use vast quantities of data to predict a likely outcome. A simple example is Predictive Text or Autocorrect, which notices the phrases you commonly type into your phone and suggests them to you. A more controversial case is predictive policing, which uses historical crime data to suggest where police resources would best be deployed. In all cases, the algorithm is “trained” on a corpus of data and all its results are constrained by the quantity and quality of this information.
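The training principle can be shown with a toy version of predictive text: count which word follows which in a small corpus, then suggest the most frequent follower. This is a minimal sketch of the idea, nothing like a real phone keyboard, and the corpus here is invented:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word):
    """Suggest the most frequent follower, or None if the word is unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train("see you soon . see you later . see you soon")
print(suggest(model, "see"))  # "you" - the only word ever seen after "see"
print(suggest(model, "you"))  # "soon" - seen twice, versus "later" once
```

Note how the suggestions are entirely constrained by the training corpus: a word the model has never seen yields nothing, and rarer phrasings lose out to common ones, which is the same limitation that shapes far larger systems.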
A significant part of Machine Learning in recent years has involved image analysis and computer vision, for use in areas such as the development of self-driving cars and next-generation surveillance. This has caught the attention of both visual artists and privacy campaigners and will form the basis for my work.
There will be a series of public events during the exhibition itself culminating in an online resource.
Workshops on Machine Learning in the Arts, using simple implementations that can run on laptop computers or in the browser. These will be inspired by:
- Gene Kogan’s Machine Learning for Artists book and online course.
- Rebecca Fiebrink’s Machine Learning for Musicians and Artists online course.
A symposium for Machine Learning in the Arts, in collaboration with the CreativeAI events in London. I would hope to gather local, national and international experts and practitioners, in person and via Skype, with the aim of inspiring future activity in the West Midlands and developing a community to support my practice.