AI & Architecture

An Experimental Perspective

GAN-enabled Space Layout under Morphing Footprint | Source: Author

Artificial Intelligence, as a discipline, has been permeating countless fields, bringing new means and methods to previously unresolved challenges across industries. The advent of AI in Architecture, described in a previous article, is still in its early days but already offers promising results. More than a mere opportunity, this potential represents a major step forward, one about to reshape the architectural discipline.

Our work sets out to evidence this promise when applied to the built environment. Specifically, we apply AI to floor plan analysis and generation.

Our methodology follows two main intuitions: (1) the creation of building plans is a non-trivial technical challenge, even though it encompasses standard optimization techniques, and (2) the design of space is a sequential process, requiring successive design steps across scales (urban scale, building scale, unit scale). To harness these two realities, we have chosen nested Generative Adversarial Neural Networks, or GANs. Such models enable us to capture the complexity found across floor plans and to break it down by tackling the problem in successive steps. Since each step corresponds to a given model, specifically trained for that particular task, the process also makes room for back and forth between humans and machines.

In a nutshell, the machine, once the extension of our pencil, can today be leveraged to map architectural knowledge, and trained to assist us in creating viable design options.


I. Framework

Our work finds itself at the intersection of Architecture and Artificial Intelligence. The former is the topic, the latter the method. Both have been simplified into clear & actionable categories.

Architecture is here understood as the intersection of Style and Organization. On one hand, we consider buildings as vectors of cultural significance, expressing a certain style through their geometry, taxonomy, typology, and decoration. Baroque, Roman, Gothic, Modern, Contemporary: each of these architectural styles can be identified through a careful study of floor plans. On the other hand, buildings are the product of engineering and science, answering to strict frameworks and rules (building codes, ergonomics, energy efficiency, egress, program, etc.) that can also be read in a floor plan. This organizational imperative completes our definition of Architecture and drives our investigation.

Artificial Intelligence will be employed as an investigative tool, drawing on two of its main fields of investigation: analytics and Generative Adversarial Networks. Using GANs, we offer to educate our own AI systems in architectural design. We postulate that their use can enhance the practice of the architectural discipline. This field is as recent as it is experimental, and to this day it yields surprising results. Our hope is to train these systems to draw actual building floor plans.

At the crossroads of Style & Organization, we lay down a framework that organizes the encounter of Architecture & AI.


II. Introduction to GANs

The design of architectural floor plans is at the core of architectural practice. Its mastery stands as the gold standard of the discipline. It is an exercise that practitioners have, over time, relentlessly tried to improve through technology. In this first part, we dive into the potential of AI applied to floor plan generation, as a means to push the envelope even further.

Using our framework to tackle floor plans' style and organization, we lay down in the following chapter the potential of AI-enabled space planning. Our objective is to offer a set of reliable and robust tools that both evidence the potential of such an approach and let us test our assumptions.

The challenge is here threefold: (1) choosing the right toolset, (2) isolating the right phenomena to be shown to the machine and (3) ensuring that the machine “learns” properly.

AI & Generative Adversarial Neural Networks

Generative Adversarial Neural Networks, or GANs, are here our weapon of choice. Within the field of AI, neural networks stand as a key area of investigation. The creative ability of such models has recently been evidenced through the advent of Generative Adversarial Neural Networks. Like any machine-learning model, GANs learn statistically significant phenomena from the data presented to them. Their structure, however, represents a breakthrough: made of two key models, the Generator and the Discriminator, GANs leverage a feedback loop between the two to refine their ability to generate relevant images. The Discriminator is trained to recognize images from a dataset. Properly trained, this model can distinguish a real example, taken from the dataset, from a "fake" image, foreign to the dataset. The Generator, for its part, is trained to create images resembling those of the same dataset. As the Generator creates images, the Discriminator provides it with feedback about the quality of its output. In response, the Generator adapts, producing ever more realistic images. Through this feedback loop, a GAN slowly builds up its ability to create relevant synthetic images, factoring in the phenomena found in the observed data.

Generative Adversarial Neural Network’s Architecture | Image Source
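As a concrete illustration of this feedback loop, the snippet below sketches one training step of a vanilla GAN in PyTorch. The tiny fully connected networks, the 28x28 image size, and the noise dimension are placeholder assumptions chosen for readability, not the architecture used in this work.

```python
import torch
import torch.nn as nn

IMG = 28 * 28   # flattened toy image size (placeholder)
NOISE = 100     # dimension of the random seed fed to the Generator

# Toy Generator and Discriminator, standing in for the real convolutional models.
G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    """One round of the feedback loop, given a batch of real plan images."""
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. The Discriminator learns to separate real plans from generated ones.
    fake = G(torch.randn(batch, NOISE))
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. The Generator uses the Discriminator's feedback to look "more real".
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```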

Representation & Learning

If GANs represent a tremendous opportunity for us, knowing what to show them is crucial. We have here the opportunity to let the model learn directly from floor plan images. By formatting these images, we can control the type of information the model will learn. As an example, showing our model only the shape of a parcel and the associated building footprint will yield a model able to create typical building footprints for a given parcel shape. To ensure the quality of the outputs, we use our own architectural "sense" to curate the content of our training sets: a model will only be as good as the data we, as architects, give it.
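To illustrate this formatting, here is a minimal sketch of how a parcel mask and its matching footprint mask could be combined into one side-by-side training pair, following the common pix2pix data convention. The file names and the 256-pixel resolution are hypothetical.

```python
from PIL import Image

SIZE = 256  # resolution shown to the model (assumption)

def make_pair(parcel_path: str, footprint_path: str, out_path: str) -> None:
    """Compose one paired training image: input on the left, target on the right."""
    parcel = Image.open(parcel_path).convert("RGB").resize((SIZE, SIZE))
    footprint = Image.open(footprint_path).convert("RGB").resize((SIZE, SIZE))

    pair = Image.new("RGB", (2 * SIZE, SIZE), "white")
    pair.paste(parcel, (0, 0))        # left: what the model is given
    pair.paste(footprint, (SIZE, 0))  # right: what it should learn to draw
    pair.save(out_path)

# Hypothetical file names, for illustration only.
make_pair("parcel_001.png", "footprint_001.png", "train/pair_001.png")
```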

Precedents

If GANs' application to architectural design is still in its infancy, a handful of precedents inspired our work and drove our intuition. Hao Zheng and Weixin Huang offered a first publication at the ACADIA conference in 2018, demonstrating the potential of GANs for floor plan recognition and furniture layout generation. Using patches of color, their model would draw the infill of rooms, based on each room's program and the position of its openings. The same year, Nathan Peters, in his thesis at the Harvard GSD, proposed to use GANs (pix2pix) to tackle program repartition in single-family modular homes, based on the house footprint.

Regarding GANs as design assistants, Hao Zheng's work (Drawing with Bots: Human-computer Collaborative Drawing Experiments, 2018) and Nono Martinez's thesis at the GSD (2017) inspired our investigations. Both authors explored the idea of a loop between the machine and the designer to refine the very notion of "design process".

Our work expands on these precedents and nests three models (footprint, program, and furnishing) to create a full "generation stack", improving output quality at each step. By automating multi-unit processing, our work then scales to the generation of entire buildings and master plan layouts. We further offer an array of models dealing with style transfer. Finally, our contribution adds a rigorous framework to parse and classify the resulting outputs, enabling users to "browse" consistently through generated options.


III. Style Transfer

Modern-to-Baroque Floor Plan Translation | Source: Author

Within a floor plan, "Style" can be observed by studying the geometry and figure plane of its walls. A typical Baroque church will display bulky columns with multiple round indents; a modern villa by Mies van der Rohe will show thin, flat walls. This "crenellation" of the wall surface is a feature that a GAN can appreciate. By showing it pairs of images, one being the wireframe of a plan and the other the actual wall structure, we can build a certain amount of machine intuition with regard to architectural style.

This section shows the results of a model trained to learn the Baroque style. We then proceed to a style transfer, where a given floor plan is stripped of its wall thickness (A) and dressed back with a new wall style (B).

Style Transfer Results: Apartment Units Modern-to-Baroque Style Transfer | Source: Author
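The sketch below illustrates how such a two-step transfer could be chained at inference time, assuming two trained image-to-image generators: one stripping a plan down to its wireframe (A), the other re-drawing the walls in the Baroque style (B). The model files and helper names are hypothetical placeholders, not our exact implementation.

```python
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
to_image = transforms.ToPILImage()

# Hypothetical exported generators.
to_wireframe = torch.jit.load("model_a_wireframe.pt").eval()
to_baroque = torch.jit.load("model_b_baroque.pt").eval()

plan = to_tensor(Image.open("modern_plan.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    wireframe = to_wireframe(plan)   # (A) the plan stripped to its wireframe
    baroque = to_baroque(wireframe)  # (B) walls re-drawn in the Baroque style

to_image(baroque.squeeze(0).clamp(0, 1)).save("baroque_plan.png")
```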

IV. Layout Assistant

Layout Assistant, a Step by Step Pipeline | Source: Author

In this section, we offer a multi-step pipeline integrating all the necessary steps to draw a floor plan. Jumping across scales, it emulates the process followed by an architect and encapsulates each step in one specific model, trained to perform a given operation. From the parcel to the building footprint (I), from the footprint to a room split with walls & fenestration (II), from a fenestrated floor plan to a furnished one (III): each step has been carefully engineered, trained, and tested.

Generation Pipeline (Models I to III) | Source: Author

At the same time, by dividing the pipeline into discrete steps, the system allows for the user's intervention between each model. By selecting a model's output and editing it before handing it to the next model, the user stays in control of the design process. The user's input shapes the decisions made by the machine, thereby achieving the human-machine interaction we expect.
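A minimal sketch of this human-in-the-loop pipeline is given below. The three model objects and the `user_edit` round trip through a file are hypothetical stand-ins for the actual interface; the point is that each stage is a separate model and the designer may amend its output before the next stage runs.

```python
from PIL import Image

def generate(model, image):
    """Run one image-to-image model on a plan image (placeholder call)."""
    return model(image)

def user_edit(image: Image.Image) -> Image.Image:
    """Hand the intermediate result to the designer for optional edits."""
    image.save("review_me.png")         # e.g. open in a drawing tool
    return Image.open("review_me.png")  # reload the (possibly edited) file

def layout_assistant(parcel, footprint_model, roomsplit_model, furnish_model):
    footprint = user_edit(generate(footprint_model, parcel))   # step I
    rooms = user_edit(generate(roomsplit_model, footprint))    # step II
    furnished = generate(furnish_model, rooms)                 # step III
    return furnished
```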

1. Footprint

Context | Parcel (input) | Generated Footprint (output) | Source: Author

The first step in our pipeline tackles the challenge of creating an appropriate building footprint for a given parcel geometry. To train this model, we used an extensive database of Boston's building footprints and were able to create an array of models, each tailored to a specific property type: commercial, residential (house), residential (condo), industrial, etc.

Each model is able, for a given parcel, to create a set of relevant footprints, resembling in dimension and style the type it was trained for. Nine examples, using the residential (house) model, are shown below.

Results: Generated Footprints (housing) | Source: Author
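For readers curious about how several options can be produced from a single parcel, the sketch below assumes a pix2pix-style generator whose variability comes from keeping its dropout layers active at inference time, a common trick in image-to-image GANs; this is an illustrative assumption, not a description of our exact implementation.

```python
import torch

def sample_footprints(generator, parcel_tensor, n=9):
    """Draw n candidate footprints for one parcel image tensor."""
    generator.train()      # keep dropout layers stochastic at inference
    options = []
    with torch.no_grad():
        for _ in range(n):
            options.append(generator(parcel_tensor))
    return options         # n candidates for the designer to choose from
```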

2. Room Split & Fenestration

Footprint | Openings & Balcony (input) | Program & Fenestration (output) | Source: Author

The layout of rooms across a building footprint is the natural next step. Splitting a given floor plan while respecting meaningful adjacencies, typical room dimensions, and proper fenestration is a challenging process that GANs tackle with surprising results.

Using a dataset of more than 700 annotated floor plans, we were able to train a broad array of models. Each is geared toward a specific room count and yields surprisingly relevant results when used on empty building footprints. We display some typical results below.


Results: Generated Program & Fenestration | Source: Author
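Since one model is trained per room count, dispatching a footprint to the right generator can be as simple as the hypothetical lookup sketched below (the checkpoint paths and the room-count range are placeholders).

```python
import torch

# Hypothetical checkpoint paths, one generator per room count.
ROOM_SPLIT_MODELS = {n: f"roomsplit_{n}_rooms.pt" for n in range(1, 6)}

def split_rooms(footprint_tensor, room_count: int):
    """Return the program & fenestration image for a given room count."""
    generator = torch.jit.load(ROOM_SPLIT_MODELS[room_count]).eval()
    with torch.no_grad():
        return generator(footprint_tensor)
```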

3. Furnishing

Program (input, option 1) | Furniture Position (input, option 2) | Furnished Unit (output) | Source: Author

This last step brings the principle of generation to its most granular level: the addition of furniture across space. To that end, we first trained a model to furnish the entire apartment all at once.

The network was able to learn, based on each room's program, the relative disposition of furniture across space and the dimensions of each element. The results are displayed below.


Results: Furnished Units | Source: Author

If these results give a rough idea of potential furniture layouts, the quality of the resulting drawings is still too fuzzy. To further refine the output quality, we have trained an array of additional models, one for each room type (living room, bedroom, kitchen, etc.). Each model is only in charge of translating a color patch added onto the plan into a properly drawn piece of furniture. Furniture types are encoded using a color code. We display the results of each model below.

Results of Room Furnishing Models | Bathroom / Kitchen / Livingroom / Bedroom | Source: Author
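To make the encoding concrete, the sketch below shows one way the color code and the per-room-type models could be wired together: a color patch is stamped where a piece of furniture is wanted, and the annotated room is then sent to its dedicated refinement model. Colors, checkpoint paths, and function names are hypothetical placeholders.

```python
import torch
from PIL import Image, ImageDraw

FURNITURE_COLORS = {          # furniture type -> RGB patch color (assumed)
    "bed": (255, 0, 0),
    "sofa": (0, 0, 255),
    "table": (0, 255, 0),
    "sink": (255, 255, 0),
}

ROOM_MODELS = {               # one refinement model per room type (assumed paths)
    "bedroom": "furnish_bedroom.pt",
    "kitchen": "furnish_kitchen.pt",
    "livingroom": "furnish_livingroom.pt",
    "bathroom": "furnish_bathroom.pt",
}

def place_patch(plan: Image.Image, furniture: str, box: tuple) -> Image.Image:
    """Stamp a color patch where a piece of furniture is wanted."""
    annotated = plan.copy()
    ImageDraw.Draw(annotated).rectangle(box, fill=FURNITURE_COLORS[furniture])
    return annotated

def refine_room(room_type: str, annotated_tensor):
    """Translate the color patches of one room into drawn furniture."""
    model = torch.jit.load(ROOM_MODELS[room_type]).eval()
    with torch.no_grad():
        return model(annotated_tensor)
```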

4. Going Further

If generating standard apartments can be achieved with our technique, pushing the boundaries of our models is the natural next step. GANs in fact offer remarkable flexibility for solving seemingly highly constrained problems. In the case of floor plan layouts, as the footprint changes in dimension and shape, partitioning and furnishing the space by hand can be a challenging process. Our models prove quite "smart" in their ability to adapt to changing constraints, as evidenced below.

GAN-enabled Space Layout under Morphing Footprint | Source: Author

Our ability to control the position of each unit's entrance door and windows, coupled with the flexibility of our models, allows us to tackle space planning at a larger scale, beyond the logic of a single unit. In the examples below, we scale our technique to entire buildings.

Experimental GAN-generated Masterplans | Source: Author

V. Conclusion

AI will soon massively empower architects in their day-to-day practice. As this potential is about to be demonstrated, our work contributes to the proof of concept, while our framework offers a springboard for discussion, inviting architects to start engaging with AI, and data scientists to consider Architecture as a field of investigation. Today, our manifesto can be summarized in four major points.

Conceptually first, our belief is that a statistical approach to design conception shapes AI's potential for Architecture. Its less deterministic and more holistic character is undoubtedly an opportunity for our field. Rather than using machines to optimize a set of variables, relying on them to extract significant qualities and to mimic them throughout the design process is a paradigm shift.

Second, we are convinced that our ability to design the right pipeline will condition AI's success as a new architectural toolset. Our choice of the "Grey Boxing" approach, as introduced by Prof. Andrew Witt in Log, will likely secure the best results. This method contrasts with the "black box" approach, which only allows users to input information upfront and to retrieve finished design options at the end of the process, without any control over the successive generation steps. On the contrary, by breaking the pipeline into discrete steps, "Grey Boxing" lets the user intervene all along the way. This tight control over the machine is the ultimate guarantee of the quality of the design process.

Third, we believe on technical grounds that the sequential nature of the application will facilitate its manageability and foster its development. The ability to intervene throughout the generation process is a fundamental dimension: as each step of the pipeline represents a distinct portion of architectural expertise, each model can be trained independently, opening the way to significant improvements and experimentation in the near future. Indeed, improving the entire pipeline end-to-end could be a long and cumbersome task, while amending it step by step remains a manageable process, within the reach of most architects and engineers in the industry.

Finally, we hope our framework will help address the endless breadth and complexity of the models to be trained and used in any generation pipeline. Tackling parcels, footprints, room splits, and so on, as we do, is one possible approach among what we believe is a large set of options. To encapsulate the necessary steps of space planning, the key is the principle more than the method. And with the growing availability of architectural data, we encourage further work and open-minded experimentation.

Far from thinking about AI as the new dogma in Architecture, we conceive this field as a new challenge, full of potential and promise. We see here the possibility of rich results that will complement our practice and address some blind spots of our discipline.


Bibliography

  • Digital Architecture Beyond Computers, Roberto Botazzi, Bloomsbury
  • Data-Driven Design & Construction, Randy Deutsch, Wiley
  • Architectural Intelligence, How Designers and Architects Created the Digital Landscape, Molly Wright Steenson, MIT Press
  • Architectural Google, Beyond the Grid — Architecture & Information Technology pp. 226–229, Ludger Hovestadt, Birkhauser
  • Algorithmic Complexity: Out of Nowhere, Complexity, Design Strategy & World View pp. 75–86, Andrea Gleiniger & Georg Vrachliotis, Birkhauser
  • Code & Machine, Code, Between Operation & Narration pp. 41–53, Andrea Gleiniger & Georg Vrachliotis, Birkhauser
  • Gropius’ Question or On Revealing And Concealing Code in Architecture And Art, Code, Between Operation & Narration pp. 75–89, Andrea Gleiniger & Georg Vrachliotis, Birkhauser
  • Soft Architecture Machines, Nicholas Negroponte, MIT Press.
  • The Architecture Machine, Nicholas Negroponte, MIT Press.
  • A Pattern Language, Notes on the Synthesis of Form, Christopher Alexander
  • Cartogramic Metamorphologies; or Enter the RoweBot, Andrew Witt, Log #36
  • Grey Boxing, Andrew Witt, Log #43
  • Suggestive Drawing Among Human and Artificial Intelligences, Nono Martinez, Harvard GSD Thesis, 2016
  • Enabling Alternative Architectures: Collaborative Frameworks for Participatory Design, Nathan Peters, Harvard GSD Thesis, 2017
  • Architectural Drawings Recognition and Generation through Machine Learning, Hao Zheng (University of Pennsylvania), Weixin Huang (Tsinghua University), ACADIA 2018 [Paper]
  • DANIEL: A Deep Architecture for Automatic Analysis and Retrieval of Building Floor Plans, Divya Sharma, Nitin Gupta, Chiranjoy Chattopadhyay, Sameep Mehta, 2017, IBM Research, IIT Jodhpur
  • Automatic Room Detection and Room Labeling from Architectural Floor Plans, Sheraz Ahmed, Marcus Liwicki, Markus Weber, Andreas Dengel, 2012, University of Kaiserslautern
  • Automatic Interpretation of Floor Plans Using Spatial Indexing, Hanan Samet, Aya Soffer, 1994, University of Maryland
  • Parsing Floor Plan Images, Samuel Dodge, Jiu Xu, Bjorn Stenger, 2016, Arizona State University, Rakuten Institute of Technology
  • Project Discover: An Application of Generative Design for Architectural Space Planning, Danil Nagy, Damon Lau, John Locke, Jim Stoddart, Lorenzo Villaggi, Ray Wang, Dale Zhao and David Benjamin, 2016, The Living, Autodesk Studio
  • Raster-to-Vector: Revisiting Floor Plan Transformation, Chen Liu, Jiajun Wu, Pushmeet Kohli, Yasutaka Furukawa, 2017, Washington University, Deep Mind, MIT
  • Relational Models for Visual Understanding of Graphical Documents. Application to Architectural Drawings, Lluís-Pere de las Heras, 2014, Universitat Autonoma de Barcelona
  • Shape matching and modeling using skeletal context, Jun Xie, Pheng-Ann Heng, Mubarak Shah, 2007, University of Central Florida, Chinese University of Hong Kong
  • Statistical segmentation and structural recognition for floor plan interpretation, Lluís-Pere de las Heras, Sheraz Ahmed, Marcus Liwicki, Ernest Valveny, Gemma Sánchez, 2013, Computer Vision Center, Barcelona, Spain
  • Unsupervised and Notation-Independent Wall Segmentation in Floor Plans Using a Combination of Statistical and Structural Strategies, Lluís-Pere de las Heras, Ernest Valveny, and Gemma Sánchez, 2014, Computer Vision Center, Barcelona, Spain
  • Path Planning in Support of Smart Mobility Applications using Generative Adversarial Networks, Mohammadi, Mehdi, Ala Al-Fuqaha, and Jun-Seok Oh. , 2018
  • Automatic Real-Time Generation of Floor Plans Based on Squarified Treemaps Algorithm, Fernando Marson and Soraia Raupp Musse, 2010, PUCRS
  • Procedural Modeling of Buildings, Pascal Muller, Peter Wonka, Simon Haegler, Andreas Ulmer, Luc Van Gool, 2015, ETH Zurich, Arizona State University
  • Generative Design for Architectural Space Planning, Lorenzo Villaggi, and Danil Nagy, 2017, Autodesk Research
