Computational Design Q&A: The A.I. Revolution

This is the first in a series of posts covering topics raised during the Q&A session following my recent talk on Computational Design at the Institution of Structural Engineers.  As the talk was some while ago, my recollection of the questions may be inexact, and the answers may have been improved significantly by hindsight and additional thinking time.  I therefore cannot guarantee an exact transcript, but I have tried to remain true to the spirit of both the question and the response I gave on the night.

The first topic is something that I get asked about a lot – the role that emerging A.I. techniques will play in the industry:

“There is a lot of talk at the moment about Artificial Intelligence and it seems that its use is going to revolutionise a lot of industries.  Do you think we will see an A.I. revolution in Structural Engineering?”

‘Artificial Intelligence’ is one of those terms that has a lot more utility as a marketing label than it does as a technical description.  It is applied to algorithms that give the outward appearance of mimicking certain aspects of human intelligence, but beyond that there isn’t really all that much that sets them apart from any other algorithm, and quite where you choose to draw the line between ‘A.I.’ and ‘non-A.I.’ algorithms is rather fuzzy.

So, there are two questions worth considering.  Firstly: will something called ‘artificial intelligence’ revolutionise structural engineering?  In the long run almost certainly yes, if only because the term is used so broadly.  We can speculate about what that might look like, but because this technology hasn’t necessarily been invented yet we risk straying into the world of science fiction.  So the second, more interesting, question is perhaps: will any of the artificial intelligence techniques we currently have access to revolutionise structural engineering?  The answer to this is a bit more complicated.

Certain older ‘A.I.’ techniques have already made their way to being standard parts of the computational designer’s toolkit – most notably genetic algorithms and other optimisation methods.  In RCD we utilise such techniques regularly and find them very useful (though we are, admittedly, somewhat atypical in this).
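As a flavour of how such techniques operate, here is a toy genetic algorithm in Python.  It evolves a population of bit-strings through selection, crossover and mutation – shown here against the trivial ‘one-max’ benchmark (maximise the number of ones), with all parameter values chosen purely for illustration rather than reflecting any production tooling:

```python
import random

def genetic_optimise(fitness, n_genes=16, pop_size=30, generations=60,
                     mutation_rate=0.05, seed=42):
    """Toy genetic algorithm: evolve bit-strings to maximise `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)        # single-point crossover
            child = a[:cut] + b[cut:]
            child = [gene ^ 1 if rng.random() < mutation_rate else gene
                     for gene in child]            # occasional bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# 'One-max': the fittest individual is the one with the most ones
best = genetic_optimise(fitness=sum)
```

A real application simply swaps the fitness function for something meaningful – structural weight, cost, embodied carbon – and the bit-string encoding for one describing the actual design space.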

However, these days when people talk about ‘A.I.’ they are most often referring to machine learning, and in particular to Artificial Neural Networks.  This class of A.I. algorithms takes (very loose) inspiration from the way that organic brains operate by simulating a network of ‘neurons’ which pass signals between one another and which ‘learn’ by adjusting the weightings of the connections between each pairing to tune the response given to a particular stimulus.

A simple 3-layer neural net
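The ‘learning by adjusting weightings’ mechanism can be boiled down to a few lines of code.  Below is a single artificial neuron – a drastic simplification of even the small network pictured above – trained in Python on the logical AND function; the learning rate, epoch count and task are arbitrary choices for illustration only:

```python
import random

def train_neuron(samples, epochs=50, lr=0.1, seed=0):
    """A single 'neuron': a weighted sum of inputs passed through a step
    activation.  Learning = nudging each connection weight whenever the
    output disagrees with the target (the classic perceptron rule)."""
    rng = random.Random(seed)
    weights = [rng.uniform(-0.5, 0.5) for _ in range(len(samples[0][0]))]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = target - (1 if activation > 0 else 0)
            # Adjust each connection in proportion to its input signal
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron logical AND: fire only when both inputs fire
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_neuron(data)
outputs = [1 if sum(w * x for w, x in zip(weights, xs)) + bias > 0 else 0
           for xs, _ in data]
```

A full ANN chains many such neurons into layers and replaces the step function and simple update rule with differentiable activations and backpropagation, but the principle – tuning connection weights to fit example data – is the same.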

These networks are great at engaging our imaginations (“There’s a brain in my computer!”) and are capable of some truly impressive feats which go well beyond what we expect from a computer.  However, it should be pointed out that there are still some pretty substantial differences between the way these ANNs work and the way the human brain works, and the degree to which they exhibit genuinely intelligent behaviour is often overstated.  These are ultimately statistical methods, and as with any other algorithm they have strengths and weaknesses.

Broadly speaking, they are quite general and applicable to a wide range of problem domains, but they tend to be much less computationally efficient than an algorithm designed specifically for a given task (not least because they must undergo a lengthy process of training before they are of any use at all).  Their true value therefore lies in applications for which no more direct method of solution exists – computer vision and object recognition being the prime examples.

Engineering is fairly well codified and rules-based, and therefore there are more direct methods available in a lot of cases.  There are also a couple of drawbacks to Neural Net methods which limit their application to engineering problems.

Firstly, their quality relies heavily on the dataset on which they are trained.  This can be a problem even in image manipulation applications, when there are millions of photographs easily available.  Engineering datasets are much harder to come by and tend to be much less complete.  A BIM model might record that a project used a 6m x 9m column grid, but it won’t record the discussion with the architect which drove that decision.  Without a clearly-defined record of the inputs and outputs of a process, it is difficult for Neural Nets to discern the relationships between them.

The second, more significant issue is that Neural Nets do not show their working.  As engineers, we need not only to produce designs but also to be able to justify them.  The output of a neural net is the result of thousands of different variables spread throughout the network; it is very difficult to trace back through that mess and understand *why* it has done any particular thing, beyond the broad answer that it has been shown examples which looked something like that in the past.  At present, these systems are capable only of blind imitation, not of reasoning through or rationalising their choices.

This is not to say that these techniques have no uses in the structural design process.  Far from it; there are dozens of potential applications.  We ourselves have utilised them in the past to help to categorise different design configurations, and are investigating their use to help ‘short-cut’ generative optimisation processes and understand client preferences.  There are doubtless plenty of other opportunities for these kinds of techniques to fill in the gaps where more rigidly-defined algorithms struggle.  Outside of structures, Ramboll’s SiteSee initiative is applying machine learning techniques to data collected via drones to help with mining site restoration.  On a larger scale, I suspect their most wide-scale utilisation in AEC will come in maintenance; performing continuous inspections of built assets and identifying when human intervention is likely to be required.  Some steps have already been taken in that direction.

But in the midst of all this potential it is important not to be carried away by the hype.  If I seem to be focusing overly on the negative aspects here, it is only to counterbalance the breathless, uncritical excitement with which this technology is often promoted.  It is important to remember that neural nets are only one cog in the machine, rather than the all-in-one panacea they are sometimes presented as.  They have a role to play in digitalising the industry, but that role is part of a broader tapestry consisting of a range of different algorithmic approaches appropriate to different tasks.

So, I wouldn’t recommend focusing exclusively on A.I. as the means to revolutionise the industry (and I would suggest retaining a healthy scepticism of anybody trying to use the term as a selling point).  There are plenty of other computational techniques which are both more accessible and more immediately applicable, and we as an industry are still a long way off realising the full potential of even the most basic of these.  The revolution which is already underway is the use of digital and algorithmic design techniques to augment and enhance human intelligence.  That human intelligence is still key, and where we choose to supplement it with the artificial kind it needs to be done with full consideration of the applicability and limitations of the technology.

We are recruiting!

To help us continue to scale up our groundbreaking and award-winning design technology to reach a larger audience, we are also scaling up our team.  We are looking for:

  • A .NET/Azure Developer to help us develop a robust and scalable design tech infrastructure.
  • A Unity Developer to help with creating slick, user-friendly front-end tools and visualisations.
  • A Design Technologist to help with building smart algorithms to embody design intelligence and using and customising the software to meet the needs of particular projects.  (Ideally you would already be comfortable with Unity as well, but if you’re good enough at the other stuff we can train you up in that!)

If any (or all) of those sound like something you could do then click the links above to find out more and apply!

Video: Computational Design at Scale

The recording of the lecture ‘Digital Transformation: Computational Design at Scale’ which I gave recently at the IStructE in London has now been posted to the institution’s YouTube channel:

The lecture starts with a basic summary of the core principles and philosophy of Computational Design and builds up through project examples to show how these techniques can be scaled to different types and sizes of projects (including a sneak peek of our SiteSolve design platform).  It ends with a set of practical tips and ‘first steps’ to help you to upskill and integrate these technologies into your design practice.

Unfortunately (though understandably) this recording does not include the Q&A session after the lecture, which is a shame as there were many interesting questions (and a few challenges) and the discussion touched on a variety of areas including the computational skills ‘generation gap’, the role of institutions, the application of artificial intelligence and the commercialisation of software.

A lot of these are things that I frequently get asked about but which are not discussed much in the literature, so I’m going to take this as an excuse to, over the next few posts on this blog, pick out some of these questions and write up my thoughts on them.  Check back over the next couple of weeks as these go live.

High Rise Explorer

RCD recently teamed up with some of our tall building specialists for a two-day hack on high rise digitalisation.  The result was a new parametric tool for the exploration of tall buildings.  The tool draws on the framework of our Dynamic Masterplanning toolkit to enable the rapid generation and evaluation of tower design options to a number of engineering criteria.  We can adjust a variety of different control parameters and see in real-time the impacts of those changes on the key performance indicators of the building.  This gives us the power to rapidly explore design options live with the client and other members of the design team.

What makes this tool unique is not the geometry generation (which is relatively straightforward) but the amount of embedded engineering expertise, which allows the tool to produce results with the benefit of expert judgement.  What makes it useful is that while the relationships between some inputs and outputs can be intuited, others are difficult to predict without calculation.  For example, different combinations of parameters will require different numbers and sizes of lifts, which has major knock-on effects on the size and shape of the core, which in turn affects available floor area and structural stiffness (which may then necessitate further changes).  Calculating all of this by hand could take a long time and would typically involve several different specialists.  By automating the process, adjustments and iterations can be performed near-instantly, with data on the potential impacts of design decisions available immediately.  This allows the various considerations of tall building design to be easily understood and balanced, enabling a holistic approach to finding the optimal design solution.
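To illustrate the kind of dependency chain involved, here is a deliberately simplified sketch in Python.  Every coefficient in it (occupant density, persons per lift, shaft areas and so on) is an invented placeholder for illustration, not a real design rule from our toolkit:

```python
import math

def tower_kpis(floors, gross_floor_area=900.0, occupant_density=0.1,
               persons_per_lift=350, area_per_lift_shaft=6.0, core_factor=1.6):
    """Chain a few tall-building relationships together:
    floors -> population -> lift count -> core size -> net floor area."""
    population = floors * gross_floor_area * occupant_density
    lifts = max(2, math.ceil(population / persons_per_lift))
    core_area = lifts * area_per_lift_shaft * core_factor   # shafts + lobbies
    net_area_per_floor = gross_floor_area - core_area
    return {"lifts": lifts,
            "core_area": core_area,
            "net_area_per_floor": net_area_per_floor,
            "total_net_area": floors * net_area_per_floor}

# Doubling the floor count increases the core penalty on every floor
low = tower_kpis(floors=20)
high = tower_kpis(floors=40)
```

With these placeholder numbers, the 40-storey option needs more lifts per head of population served and so loses more net area on every floor than the 20-storey option – exactly the sort of knock-on effect that is tedious to trace by hand but trivial to recompute live.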

Christmas Tensegritree

For our office Christmas tree this year we decided to do something a bit different and build our own.  We also needed a new centrepiece for our London reception area after the Leadership Bridge moved to our new Birmingham offices.  The design team behind that earlier project was reconvened to tackle this new challenge and once again RCD took responsibility for the geometric design.

We decided to take the opportunity to combine two of our favourite structural forms; tensegrity and hyperboloids. Continue reading “Christmas Tensegritree”

A beginner’s guide to visual scripting with Grasshopper

In this tutorial, I will provide a very simple demonstration of the use of Grasshopper, a visual scripting environment embedded into the 3D modelling package Rhinoceros and a very useful computational design tool.  This example is intended to give a brief overview of how the software works to people with no prior exposure to it and explain the core theoretical principles.  Some basic prior knowledge of Rhino itself is assumed, however (i.e. you need to at least be familiar with the general interface – this video will cover most of what you need).

The example should take under 30 minutes to run through but will teach you everything you need to know in order to start using the software by yourself.  Each step is accompanied by an animation showing exactly what you need to do. Continue reading “A beginner’s guide to visual scripting with Grasshopper”

RCD Tadpole (Ramboll Load Take-Down Tool)

A Load Take-Down is a procedure frequently performed by structural engineers to assess the amount of loading carried by the columns of a building into its foundations.  It is an important early-stage analysis necessary to inform the choice of column layout and foundation system, but it is also a notoriously tedious and time-consuming process that is regarded as something of a ‘rite of passage’ for young engineers to endure. Continue reading “RCD Tadpole (Ramboll Load Take-Down Tool)”
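For the uninitiated, the arithmetic at the core of the procedure can be sketched as a running total down the building.  The version below is a bare-bones, single-column, tributary-area sketch in Python – it ignores the load reduction factors, transfer structures and storey-by-storey load variations that make real take-downs so laborious, and the grid and loading figures are purely illustrative:

```python
def load_takedown(floors, tributary_area_m2, floor_load_kpa,
                  column_self_weight_kn=0.0):
    """Accumulate the axial load in one column, storey by storey, from the
    roof down to the foundation.  Returns (storey, cumulative load in kN)
    pairs, topmost storey first."""
    loads = []
    running_total = 0.0
    for storey in range(floors, 0, -1):
        running_total += tributary_area_m2 * floor_load_kpa + column_self_weight_kn
        loads.append((storey, running_total))
    return loads

# Internal column on a 6m x 9m grid (54 m2 tributary area), 5 storeys,
# 7.5 kPa combined floor load per storey
takedown = load_takedown(floors=5, tributary_area_m2=54.0, floor_load_kpa=7.5)
foundation_load = takedown[-1][1]   # cumulative load arriving at the foundation
```

Tools like Tadpole automate this accumulation across every column in the model at once, which is where the real time saving lies.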

Ramboll Leadership Conference 2017 Bridge

For the 2017 Ramboll Leadership Conference in Copenhagen, which took place on the 22nd and 23rd of January, RCD was involved in a collaboration between the Transport and Buildings departments to design and construct a ‘bridge’ installation between their respective stands.  We had a little over a month to develop and manufacture the design, so timescales were tight, and we had several key criteria to consider: the bridge was to support a model train running between the two stands (in reference to the Holmestrand Mountain Station project), and it needed to be light enough and easily demountable so that we could carry it from London to Copenhagen, build it in an afternoon, break it down in an hour and then return it to London (for later re-assembly in our home office).  We also wanted it to form an interactive part of the conference rather than merely being a static display piece.

We approached the project the same way we would any other – pulling together a team with relevant expertise, brainstorming ideas, analysing and developing them.  For the interactive element, we realised that business cards made an ideal impromptu craft material and were one of the few things we could rely on most of the attendees to be bringing with them.  The decision was thus made to allow people at the conference to leave their business card, folded into a specific 3D form, as part of the bridge’s cladding. Continue reading “Ramboll Leadership Conference 2017 Bridge”

Parametric Engineering Course at Imperial College London

From January 2017, Imperial College London will be running an evening course on Parametric Engineering, co-taught by RCD lead Paul Jeffries.  The course will cover the application of Rhino and Grasshopper for computational design within an engineering context and is open to anybody in full-time education or academic employment.  To apply contact Simply Rhino.

What is Computational Design?

If you’ve arrived at this blog, you will probably have had some exposure to the concept of ‘computational design’.  You may also have heard some of the related terms that fall under this heading – ‘parametric design’, ‘algorithmic design’, ‘generative design’ and so on.  As computational design is still a relatively young and evolving field, the meanings of these terms can be a little vague, and they are used by different practitioners in different ways.  This article presents the vision of computational design that we have in Ramboll and the role that we see it having in the future of the industry.  This is what *we* mean by computational design. Continue reading “What is Computational Design?”