Announcing the 2021 AMI Research Award Recipients

The Team at AMI
Artists + Machine Intelligence
6 min read · Dec 2, 2021


Landsat 8 image of the Amazon rainforest seen in shortwave infrared, revealing details invisible to the human eye. Image courtesy of Jennifer Chen, Southern California Institute of Architecture (SCI-Arc)

We’re happy to announce a new cohort of AMI Research Award recipients, selected through our annual open call for proposals on creative applications of machine learning (ML) and cultural research related to ML.

Each award provides faculty with $20,000 USD in unrestricted project funding and the opportunity to partner with Google researchers on their research initiatives, building new and constructive long-term relationships.

The following projects and faculty will receive 2021 AMI Research Awards:

Rethinking AI and Automation in Architecture — Daniel Cardoso Llach, Ph.D., Carnegie Mellon University, in collaboration with Dr. Jean Oh, Carnegie Mellon University

This project brings architecture, AI, and sociotechnical research methods together to imagine and realize humane scenarios for robotically supported cooperative construction. We’re interested in “robot in the loop” systems that adaptively support — rather than automate, replace, or surveil — the work of construction workers on building sites. Our reflective technology design process comprises ethnographic research and qualitative engagements with construction actors and sites, simulations, technical research combining robotics and reinforcement learning, and the development of a proof-of-concept system to be demonstrated on site. Striving for dynamic and safe robotically supported construction environments, our project will help foster humane and sustainable practices in the architecture, engineering, and construction (AEC) industry, as well as new forms of expertise at the intersection of AI, robotics, the building trades, and architecture.

Daniel Cardoso Llach, Ph.D., is an architect and interdisciplinary scholar interested in issues of automation in design, interdisciplinary creativity, human-machine interaction, and critical technical practices in architecture and design. He is Associate Professor at the School of Architecture at Carnegie Mellon University, where he also chairs the Master of Science in Computational Design and co-directs CodeLab, an interdisciplinary graduate research and learning laboratory rethinking the role of computation in design and the built environment.

Jean Oh, Ph.D., is Associate Research Professor at the Robotics Institute at Carnegie Mellon University and director of the interdisciplinary Bot Intelligence Group. Her research focuses on persistent robots that can coexist with humans in shared environments, and she has led intelligence tasks on several government and industry projects in problem domains including self-driving cars, disaster response, eldercare, and the arts.

Views of Planet City: Pale Blue Dot Mk2 — Jennifer Chen, Southern California Institute of Architecture (SCI-Arc)

Views of Planet City is a multi-year research project underway at SCI-Arc that critically investigates the possible implications of E.O. Wilson’s “Half-Earth” proposition to confine and concentrate human inhabitation of the planet in order to heal the global ecosystem. Pale Blue Dot Mk2 is one of five segments of Views of Planet City, exploring what the Earth would look like from space in the epoch of Planet City. The project’s objective is to simulate the passage of time represented in satellite images through predictive networks. Notably, the project treats remote sensing technology as a speculative medium, one that not only shows us direct correlations between the causes and effects of our collapsing climate, but that can be reinterpreted and synthesized using deep learning to imagine the reversal of planetary sprawl.

Jennifer Chen is an architect and designer working across buildings, installation, film, and performance. At SCI-Arc she leads Design Studio, Liberal Arts, and Visual Studies. She is currently developing her research interests as a designer and educator focusing on speculative futures and post-climate environments.

Movement-Centric Calligraphy and Graffiti Generation — Frederic Fol Leymarie, Goldsmiths, University of London, in collaboration with Dr. Daniel Berio and Xiaobo Fu, Goldsmiths, University of London

The aim of this research is to computationally generate handwritten art forms such as calligraphy and graffiti by combining sequence modeling methods with primitive-based representations of movement. We plan to go beyond the state of the art in handwriting generation, which largely operates on dense point sequences as inputs and outputs, by building on our previous research across machine learning, motor control, visual perception, graphonomics (the experimental study of handwriting and related skills), and art practice. We hope to demonstrate that movement primitives form a fruitful data representation that can significantly improve the performance and robustness of today’s sequence-based deep learning approaches to such generative tasks.

Frederic Fol Leymarie works on creativity and AI systems, with a focus on computer vision and graphics, including robots that can perform with artistic skills similar to those of expert humans. He joined Goldsmiths in 2004, where he launched a Master of Science in Arts Computing and a Master of Science in Games Programming. He is also an entrepreneur, having launched London Geometry (2011) and DynAikon (2015). Understanding human creative activities, such as in the visual arts, directly feeds into his AI research agenda.

Daniel Berio is a researcher and artist working between computer graphics, robotics, and graffiti art. He recently completed a PhD in Computing at Goldsmiths, where he developed methods for the computer-aided design and procedural generation of graffiti and calligraphy, with applications in robotics. As an artist with a background in graffiti writing, Daniel explores the intersections of this art form with techniques from computer graphics and robotics.

Xiaobo Fu is a visual artist and PhD student in Computer Art at Goldsmiths. Xiaobo’s research focuses on computationally generating Chinese calligraphy, a challenging task due to its enormous character count and variety of artistic styles. As a lover and practitioner of calligraphy, Xiaobo explores new forms of expression for this thousand-year-old art with cutting-edge machine learning technology, in order to understand and expand the application of AI in creative tasks.

Explorisk: Visualizing Risk-Mitigation Scenarios — Tegan Maharaj, University of Toronto, in collaboration with the Cambridge Centre for the Study of Existential Risk

Predictive risk models can help us understand how population-level risks translate to individual-level risks and how different risks interact, so that we can examine actions and strategies for mitigating them. This project proposes a collaboration with artists, graphic designers, and user-experience experts to develop an intuitive visualization tool for exploring different risk-mitigation scenarios. The goal of this tool is to empower policy-makers, researchers, and other individuals to better understand and act upon risks. The core of the proposed research is to develop an intuitive visual language for expressing complex predicted scenarios — potentially short subtitled movies, augmented-reality ‘pictures’, flow-chart-style diagrams, or something else entirely — and an interface that allows researchers to choose between different visualization options.

Tegan Maharaj is Assistant Professor in the Faculty of Information at the University of Toronto, and an affiliate of both the Schwartz-Reisman and Vector institutes. Her current research interests include using deep models for policy analysis and risk mitigation, and designing data or unit test environments to empirically evaluate learning behavior or simulate deployment of an AI system.

Digital Performers Using AI — Michael Rau, Stanford University

This project applies computer vision and machine learning technologies to a 2,500-year-old art form — theater — to improve virtual live performances and to discover new aesthetics of performance. The project approaches these technologies from a poetic and creative standpoint. What creative affordances appear when we apply artificial intelligence to the representation of characters within a live performance? Can we improve streamed theatrical performances using machine learning? The project will undertake research in a creative field by exploring the aesthetics of a “digital performer” alongside a rigorous scientific analysis of the tools and technology, to create new software tools, new dramaturgical constructions, and new performance techniques.

Michael Rau is a live performance director specializing in new plays, opera, and digital media projects. He has created work in New York City at Lincoln Center, The Public Theater, PS122, HERE Arts Center, Ars Nova, and The Bushwick Starr; in Boston at ART; and recently made his debut at the Humana Festival, as well as internationally in the UK, Germany, Brazil, and the Czech Republic. He is a New York Theater Workshop Usual Suspect and a professor of directing and devising at Stanford University.

Artists + Machine Intelligence (AMI) is a program at Google that invites artists to work together with engineers and researchers in the design of intelligent systems.

AMI Research Award applications are currently closed. Please check back in February 2022 for details on future application cycles, or follow us on Twitter for updates.
