For our office Christmas tree this year we decided to do something a bit different and build our own. We also needed a new centrepiece for our London reception area after the Leadership Bridge moved to our new Birmingham offices. The design team behind that earlier project was reconvened to tackle this new challenge and once again RCD took responsibility for the geometric design.
We decided to take the opportunity to combine two of our favourite structural forms: tensegrity and hyperboloids.
Tensegrity structures were so-named by Buckminster Fuller as a portmanteau of ‘tension’ and ‘integrity’. While most structures support themselves through continuous solid elements (such as walls or columns) that carry compressive loads directly into the ground, in a tensegrity structure the compression elements are instead separated from one another and held in place by tension members (such as ropes, cables or chains). They are fascinating structures because their behaviour is so counter-intuitive – the solid parts seem to float in mid-air and look as though they should simply drop to the ground. Individually, they would – it is the overall arrangement and precise balancing of tension and compression that provides stability. They are therefore notoriously difficult to design and construct, so it is fortunate that we enjoy a challenge.
Hyperboloid geometries (not to be confused with hyperbolic paraboloids) are a special kind of doubly ruled surface formed between two circles. They possess interesting structural properties and have notably been used in architecture by Vladimir Shukhov and Antoni Gaudí, among others. It is also possible to use them as the basis for a stable tensegrity module, which is what we did here. (For more information about them, including how to generate them for yourself in Grasshopper, see this recording of my lecture on the topic at Imperial.)
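Because a hyperboloid of one sheet is ruled, it can be built entirely from straight lines: take matching points on two circles and connect each point to a point on the other circle rotated by a fixed twist angle. The sketch below (a minimal illustration, not the actual Grasshopper definition used for the tree; function name and parameters are my own) shows this construction:

```python
import math

def hyperboloid_rulings(r_bottom, r_top, height, twist, count):
    """Generate the straight ruling lines of a hyperboloid of one sheet.

    Each line connects a point on the bottom circle to the point on the
    top circle rotated by 'twist' radians. Returns a list of
    ((x0, y0, z0), (x1, y1, z1)) endpoint pairs.
    """
    lines = []
    for i in range(count):
        a = 2 * math.pi * i / count  # angle on the bottom circle
        b = a + twist                # rotated angle on the top circle
        start = (r_bottom * math.cos(a), r_bottom * math.sin(a), 0.0)
        end = (r_top * math.cos(b), r_top * math.sin(b), height)
        lines.append((start, end))
    return lines

# One family of rulings; negating 'twist' gives the second family.
rulings = hyperboloid_rulings(r_bottom=2.0, r_top=1.0, height=3.0,
                              twist=math.radians(120), count=12)
print(len(rulings))  # 12 ruling lines
```

Generating the two families with opposite twists gives the criss-crossing pattern of elements described below, with one surface available for the compression members and another for the tension members.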
Each module is actually formed of two hyperboloids – the compression elements lie on one surface and the tension elements on the other. It is the difference between these two surfaces that gives the structure its stiffness and prevents it from collapsing.
The overall tree consists of a stack of five of these modules with differing dimensions. The bottom of each module hangs from the top of the one below with an overlap to give the classical Christmas tree ‘zig-zag’ profile. A third hyperboloid surface formed of tension elements connects the top of each module to the top of the one above – this provides lateral stiffness to the structure and prevents the modules from being displaced.
The individual modules
The geometry was parametrically defined in Grasshopper. We used Kangaroo 2 for some early-stage stability checks and to test out different ideas, but then moved on to using a custom-written dynamic relaxation module built on our own Salamander 3 tool, as Kangaroo does not use physically accurate properties in its simulation. For final checks, the model was exported to Oasys GSA (again via Salamander). This was all complemented by some modelling of the old-fashioned kind in order to demonstrate the concept.
No Christmas tree would be complete without lights; ours came in the form of an illuminated base kindly custom-designed for us and provided by Clearvision Lighting.
As a final touch, every tree also needs a star on top. Our star, however, isn’t technically on top of our tree – instead the criss-crossing pattern of elements itself forms a star-like pattern in plan which is then projected onto the ceiling above the tree. The tree is its own star.
In this tutorial, I will provide a very simple demonstration of the use of Grasshopper, a visual scripting environment embedded in the 3D modelling package Rhinoceros and a very useful computational design tool. This example is intended to give people with no prior exposure to the software a brief overview of how it works and to explain the core principles behind it. Some basic prior knowledge of Rhino itself is assumed, however (i.e. you need to at least be familiar with the general interface – this video will cover most of what you need).
The example should take under 30 minutes to run through but will teach you everything you need to know in order to start using the software by yourself. Each step is accompanied by an animation showing exactly what you need to do.
We’re focusing on Grasshopper in this case, but most of the concepts shown here are also transferable to other similar node-based visual programming environments (for example, Dynamo).
Grasshopper is a free plugin for Rhino and can be obtained from its official website: www.grasshopper3d.com. In Rhino 6+, Grasshopper will be incorporated into the main Rhino install and will no longer need to be downloaded separately. Rhino itself can be downloaded here and will run as a free evaluation version for a full 90 days.
In this example, we will create a very straightforward parametric definition which will draw a line between two points. These two points will be our inputs; create these in Rhino by using the ‘Point’ command twice.
Once Grasshopper is installed, you can run it from inside Rhino by typing the command ‘Grasshopper’ at the command prompt.
Grasshopper’s subwindow will appear, and should look something like this:
The title bar. This shows the name of the currently opened file (if any). It can also be double-clicked to collapse Grasshopper to just this title bar – useful if you are working on a single screen and want to get Grasshopper out of the way quickly.
The menu bar. We’ll talk more about this in a minute…
The component library. This is categorised into several different tabs for different kinds of functionality. (You probably won’t have as many as are shown in the images – many of these are optional plugins.)
The component library is further sub-categorised into different groups. You can click on the title bar at the bottom of each group to expand it and see all of the components in that group with their names.
The canvas toolbar. Contains several quick-tools to save the file, scribble on the canvas and change the way that things are being displayed.
The main canvas. This is where the magic happens.
The recent files grid. You probably won’t see this if it is your first time opening Grasshopper as you won’t have any recent files! This will disappear as soon as you start adding things to the canvas.
The status bar – occasionally displays useful information.
Let’s go back to the menu bar to make a couple of important points:
Grasshopper files can be opened, saved, etc. via the File menu. Grasshopper definitions are saved separately from Rhino files, so make sure you save both if you don’t want to lose any data!
In the ‘View’ menu you can turn on an ‘Obscure Components’ option, which will show more components in the library ribbon than you get by default. This option is presumably off by default to stop newcomers being scared by lots of component icons, but it also makes things harder to find.
In the display menu, turn on ‘Draw Icons’. This changes the way components are displayed on the canvas. This is a matter of personal taste and some people prefer the default (which just shows text), but these people are wrong.
You should also make sure the ‘Draw Fancy Wires’ option is on (which it should be by default). This is not even a matter of personal taste; having it turned on will make certain definitions much, much easier to understand.
3. Grasshopper Basics
3.1 Adding Components
Now onto the creation of our actual definition.
The first step is to get the points we created in Rhino and put them into Grasshopper in a form that we can use. To do this we’ll need a couple of Point Parameter components, which you can find on the ‘Params’ tab in the ‘Geometry’ group on the component library ribbon. Left click on the icon and then left click again somewhere on the canvas to create one of these components.
This is one way of adding components to the definition. The other way is to search for them by name. To do this, double-click somewhere on the canvas (not on a component). In the text box that appears, type in ‘Point’. It will then show a range of suggestions – click on the component just called ‘Point’.
3.2 Referencing Rhino Geometry
These components are intended to store point data, but at the moment no data has been assigned to them. This is why they are showing up in orange – this indicates a warning, typically that the component does not have all of the inputs it needs to do whatever it is meant to be doing.
We’ll need to set up these components to refer to the two points we created earlier in Rhino. Right-click on the first component to bring up its context menu. Select the option ‘Set one Point’ and then in Rhino pick the first point.
This will assign the point to the parameter and the component should turn grey to indicate that everything is working as planned.
Repeat this for the second component and the second point.
You may notice that little red ‘x’s have appeared over the points in Rhino – these indicate that the geometry is also present in the Grasshopper model. If you left-click on one of the Grasshopper components it will become selected and will turn green. The ‘x’ in Rhino related to that component should also turn green. You can use this to remind yourself which Grasshopper component refers to which bit of Rhino geometry.
3.3 Creating Data Flows
Now that we have our input points in Grasshopper we can generate the line between them. Navigate to the ‘Primitive’ group on the ‘Curve’ tab and find the component called ‘Line’. Left click on the icon and left click again on the canvas to create a Line component.
This component has a few more features than the Point parameter components. Whereas the Point components simply store a bit of data, this Line component represents a process which will consume input data and generate an output from it. The process inputs are shown on the left hand side of the component (called ‘A’ and ‘B’) and the outputs are shown on the right hand side (called ‘L’). Hover your mouse over these letters and you should see tooltips that provide more information about them.
A and B are the start and end points of the line, respectively. We can populate these inputs using the data stored in our Point parameter components. To do this, hover your mouse over the small nodule on the right hand side of one of the point components. You should see a small arrow icon below your cursor. Click and hold the left mouse button and drag the mouse away to see a snaking arrow follow your mouse. Move your mouse over the first input of the line component and release.
This will create a connection between the output of the Point parameter component and the input of the Line component.
Repeat this for the second point and you should see the Line component turn grey and a red line appear between the two points in Rhino.
Congratulations! You now know how to use Grasshopper!
Grasshopper allows you to describe parametric models by essentially drawing a flow diagram of the process you want to follow to create that model. If you can diagram a process, you can use Grasshopper.
All components essentially work the same way; inputs on the left and outputs on the right. Click and drag to create connections between inputs and outputs and choose the way that data will flow between different operations. Simple!
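If it helps to think about it in programming terms, a component behaves much like a function: inputs on the left are arguments, outputs on the right are return values, and wires decide which return value feeds which argument. The following is a hypothetical sketch of that mental model, not Grasshopper’s actual API:

```python
# A hypothetical sketch of the component model: each 'component' is a
# function with named inputs (left side) and an output (right side).
def line(a, b):
    """Like the Line component: inputs A and B, output L."""
    return (a, b)  # represent a line simply as its endpoint pair

def pipe(c, r):
    """Like the Pipe component: centreline C and radius R in, surface out."""
    return {"centreline": c, "radius": r}

# Wiring an output to an input is just function composition:
p1, p2 = (0, 0, 0), (5, 0, 0)
surface = pipe(line(p1, p2), r=0.5)
print(surface["radius"])  # 0.5
```

The canvas is, in effect, a diagram of exactly this kind of composition, with the wires drawn explicitly.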
So far we’ve just used this to draw a line, which isn’t enormously useful – we could have done the same thing in Rhino just by using the ‘Line’ command. But the power of Grasshopper comes from the fact that several different processes can be daisy-chained together, with the output of one operation feeding the input of another.
To demonstrate this, we’ll take the curve output from the Line component and create a tubular surface around it using the ‘Pipe’ component from under ‘Surface’/’Freeform’. Drop one of these onto the canvas and connect the ‘L’ output from the Line component to its ‘C’ input.
The ‘C’ input is the centreline curve around which the pipe surface will be created. You should be able to see this surface in the Rhino view. Click and drag one of the two initial points to move it and you should see the pipe geometry automatically update.
This is the power of Grasshopper. Changing an input (in this case a point) will prompt an update of any geometry which is linked to it. The process you have set out will be run again automatically and the model regenerated, without you having to go through all the pain of manually remodelling everything.
Complex chains of hundreds of different operations can be built up and whole buildings can be defined and controlled by just a few simple inputs, with changes automatically propagating throughout the model.
3.4 Number Sliders
The Pipe component is already working even though we haven’t put anything into the ‘R’ and ‘E’ inputs. This is because these inputs have default values – hover your mouse over these letters to see what those default values are. ‘R’ is the radius of the pipe, which we might want to be able to adjust. (We won’t bother looking at ‘E’ in any detail, but this can be used to control what the ends of the pipe look like).
We’ll control the radius with a Number Slider component from ‘Params’/’Input’. Drop one onto the canvas and connect the output to ‘R’ to override the default value.
This is a little input widget that we can use to control a numeric input just by dragging the slider left and right. If you want to change the maximum and minimum values, right-click on the slider and click on ‘Edit’ to access a form which will let you set up the properties of the slider, including the numeric domain it covers.
Grasshopper features many different input widgets that allow you to enter and modify different types of data easily.
We’ve now finished creating our definition for this example, but we will use the model that we’ve made to explore a few other aspects of the program.
4. Geometry Preview
You may already have noticed that the red transparent geometry that you can see in Rhino has some peculiar properties – you can’t select it, it won’t be saved in the Rhino file if you try to save it, if you hit the render button then it won’t show up, etc. This is because none of this geometry actually exists in Rhino yet – it is merely a ‘preview’ that Grasshopper is drawing in the Rhino viewport to show you what is going on.
If you want to turn this preview off – now that we have our pipe you might no longer care about seeing the centreline geometry, for example – right-click on the middle part of the relevant component (not over one of the inputs or outputs) and toggle the ‘Preview’ option.
The preview geometry associated with that component should disappear and the component should turn a darker shade of grey to indicate that its preview is turned off.
You can also change how everything is displayed using the first three buttons on the right-hand side of the toolbar just above the canvas, which switch between ‘off’, ‘wireframe’ and ‘shaded’ preview modes respectively.
To add this previewed geometry to Rhino, so that we can manually modify it, export it, render it, etc. we need to ‘bake’ it. This will add a copy of that geometry into the current Rhino document.
To bake some Grasshopper geometry, right-click on the component whose geometry you wish to add to Rhino (again, this needs to be on the centre part of the component, not over any of its inputs or outputs) and click on the ‘Bake’ option. This will throw up a small form which allows you to select certain properties of the new object in Rhino (for example, the layer it will be placed on). Click ‘OK’ to bake the geometry.
You can now modify, delete, move, export etc. this geometry the same way you would any other Rhino object. Note that there is no link between this baked object and the Grasshopper definition that created it – if you change the model in Grasshopper these changes will not be reflected in the Rhino model and likewise changes made to the geometry in Rhino will not matter a jot to Grasshopper. If you wish to later update the Rhino geometry from the Grasshopper model you will need to delete it and re-bake; for this reason it is a very good idea to keep baked geometry on its own set of layers in Rhino so that it can be easily selected and deleted in one go.
5. Data Matching
One advantage of Grasshopper is that, as we have already seen, (non-baked) geometry can be parametrically linked and automatically updated. Another advantage is that once we have a process defined, we can apply that process over and over and over again on multiple inputs, which is what we will do now. We do not have to modify our actual model definition at all for this; we simply need to change the inputs.
Rather than creating a single pipe between two points, we will now use our definition to create multiple pipes between multiple pairs of points. Add four more point objects to your Rhino model (using the ‘Point’ or ‘Points’ command) for a total of six.
Right-click on the first Point parameter component. Just below the option to ‘Set one Point’ is another which allows you to ‘Set multiple Points’. Click on this option and select in Rhino the three points you want to use as pipe start points. Press return or right-click once you have finished.
Note that this will override the data previously stored in this component, so you’ll need to include the original start point in this selection if you want to include it.
This component now contains multiple bits of data. Those multiple points are being passed along to the Line component, which is now generating three different lines, one from each of those three points to the single end point we currently have selected. Those three lines are being passed in turn to the Pipe component to create three different pipes. You can tell at a glance that multiple pieces of data are being passed between components by looking at the wires between them – provided you have the ‘Draw Fancy Wires’ option turned on, these should now appear as double lines rather than single ones. A double line indicates that a list of data is being passed along that connection, while a single line carries just one individual piece of data.
Repeat this operation to set the three end points.
We now have three start points and three end points going into our Line component, and as an output we are getting three lines (and consequently, pipes). You might have expected that we would get nine lines connecting every start point to every end point, but instead Grasshopper is ‘pairing up’ start and end points and just creating one line for each pair – this behaviour is known as ‘Data Matching’ and it is a very important concept to understand when using Grasshopper.
Whenever a component has multiple pieces of input data plugged in, Grasshopper will first determine which sets of inputs to use together and then run the process once for each set. To figure out which inputs belong together, Grasshopper follows two simple rules:
RULE 1: When matching two or more lists of objects, items at equivalent positions in those lists will be matched together.
So, the first item in the first list will be matched with the first item in the second list, the second item in the first list will be matched to the second item in the second list, third with third, fourth with fourth and so on.
Imagine we have two lists of letters – in the first list we have A, B, C, D and E, while in the second we have F, G, H, I and J. Data matching these two lists together would give us the pairings A-F, B-G, C-H, D-I and E-J.
So, the order that things are stored in is important – in this case the order that the points were selected will be the order that they are matched up in.
This all works great when we have lists which are all the same length, but what if one is shorter than the others? This is where the second rule comes in.
RULE 2: When one list is shorter than the others, its last item will be reused and matched against the remaining items in the longer lists.
Grasshopper will ‘re-use’ the last item in a list when there aren’t any further pieces of data to match up. If we dispose of the last two letters in our second list (so list 2 is now just F, G and H) the resultant pairings will be A-F, B-G, C-H, D-H, E-H. H will be used in three different pairings!
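Put together, the two rules amount to “match by index, and clamp short lists to their last item”. This little sketch models that behaviour (it is an illustration of the matching logic only, not how Grasshopper is actually implemented):

```python
def data_match(*lists):
    """Pair up items following Grasshopper's data-matching rules:
    items at equal indices are matched (Rule 1), and when a list runs
    out, its last item is reused (Rule 2)."""
    longest = max(len(lst) for lst in lists)
    matched = []
    for i in range(longest):
        # clamp the index so shorter lists repeat their final item
        matched.append(tuple(lst[min(i, len(lst) - 1)] for lst in lists))
    return matched

print(data_match(["A", "B", "C", "D", "E"], ["F", "G", "H"]))
# [('A', 'F'), ('B', 'G'), ('C', 'H'), ('D', 'H'), ('E', 'H')]
```

Running it on the letter lists from the example above reproduces exactly the pairings described: A-F, B-G, C-H, D-H, E-H.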
We can see this effect in action by setting our ‘end points’ input to only contain two points (or just one, as we originally had it) while the start points have three:
Now, the last end point will be connected to the last two start points.
This behaviour applies to any component and any type of data, not just lines and points. This means that we can take advantage of this to give us individual control over the diameter of each of the pipes we are creating.
Create a second Number Slider (you can press Ctrl-C, Ctrl-V to copy and paste the one we already made) and connect it to the Pipe component’s ‘R’ input. If you try to do this normally it will automatically replace the connection to our original slider, but if we hold down the shift key as we’re making the connection we can connect multiple outputs to one input.
Our ‘R’ input will now be a list of numbers comprising the values of the sliders that we’ve plugged in (in the order that you plugged them in) and these will be data-matched with the list of curves going into ‘C’. As a consequence, the first slider will control the radius of the first pipe and the second will control the radius of the other two. If we wanted, we could add more sliders to give us total control over each pipe.
You now know everything you need to get started using Grasshopper. There is certainly a lot more to learn – there are thousands of different components available and, as well as flat lists, data can also be passed around in the form of multidimensional ‘Data Trees’ (essentially, lists of lists), which can make data matching a lot more confusing – but these all follow the basic principles we have covered here.
Becoming more proficient is largely just a matter of learning what tools are available to you and of getting used to manipulating the data flows between components to achieve the effect that you want. The best way to start is simply to choose something that you want to model, think about the basic geometric steps you would take to create it manually and then try to express that process in Grasshopper.
The official Grasshopper forums feature a very active and helpful community and are a useful resource to get help. For a little more structured learning, the ‘Parametric Engineering’ course that I co-teach at Imperial College London is available to view on YouTube. You can also discover how to use Grasshopper to create parametric structural analysis models via RCD’s Salamander plugin in the video below:
RCD’s Footbridge Layout Early Assessment (FLEA) tool is an interactive client-focussed App which was developed rapidly in the space of just two weeks in order to address a specific project’s needs.
The context of the project was a busy public road and complex junction separating one of our client’s buildings from the rest of their campus. The need had been identified for a footbridge to provide a safe and secure route for their staff to move between the two sides, but the precise location and alignment of this new bridge was not yet fixed.
To aid with the decision-making process RCD, in close collaboration with bridge engineers in our London and Southampton offices, developed a small generative App that would allow exploration of the various options. The tool allows the client to simply click and drag to move the bridge ends. The structure between these two points is generated, following various set-out rules coded into the software.
A simple static analysis is performed by the tool itself, which allows key members to be automatically sized. Complementing this was a series of far more detailed studies done by our bridge engineers on a range of geometries within the continuum of possible options. Using these data points as a guide, the tool could calculate and display in real time a reliable estimate of the overall structural tonnage for any arrangement the client cared to investigate.
Typically, a structural engineer might investigate only two or three different options in such a study. By instead developing a bespoke tool that could interactively analyse any potential arrangement we were able to be far more analogue and put the client firmly back in the driving seat.
A Load Take-Down is a procedure frequently performed by structural engineers to assess the amount of loading carried by the columns of a building into its foundations. It is an important early-stage analysis necessary to inform the choice of column layout and foundation system, but it is also a notoriously tedious and time-consuming process that is regarded as something of a ‘rite of passage’ for young engineers to endure.
Typically, the take-down is performed in one of two ways. Either the tributary areas (the region of loading that each column nominally supports) must be calculated manually for each column on each floor and then tallied up (commonly via a spreadsheet), or a full 3D finite element model of the entire building must be constructed and the forces extracted. The latter requires resolution of a level of detail which is often inappropriate during the early phases of a project and the former is both slow and prone to errors. Most importantly, both methods can require significant re-work in order to adapt the analysis to modifications of the geometry and this limits our ability to experiment and respond to design changes.
RCD’s TADPOLE (TAke-Down Process On Loaded Elements) is an in-house software project that provides a new alternative method that automates and greatly speeds up the analysis. The standalone tool can read in 2D floor plan drawings and assemble them, level by level, into a complete representation of the building. Loading areas and column positions can be automatically interpreted by the tool if present, otherwise the software contains a full suite of drawing tools to allow the engineer to sketch out loads, columns, walls etc. Once this data has been input the software automatically determines tributary areas and performs the take-down. Changes to the input data can be made easily and the impacts assessed instantly.
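The heart of any take-down is a simple accumulation: each column picks up its tributary load at every floor, and these forces are summed downwards through the building. The following toy sketch (hypothetical names and numbers; not TADPOLE’s actual code) illustrates that step:

```python
def load_take_down(tributary_areas, floor_loads):
    """Accumulate column loads floor by floor, working top-down.

    tributary_areas: {column: [tributary area per floor, top first]} (m^2)
    floor_loads: [uniform load per floor, top first] (kN/m^2)
    Returns {column: [cumulative load carried below each level]} (kN).
    """
    results = {}
    for column, areas in tributary_areas.items():
        total = 0.0
        levels = []
        for area, load in zip(areas, floor_loads):
            total += area * load   # load picked up at this floor
            levels.append(total)   # cumulative force below this level
        results[column] = levels
    return results

# Two columns over two floors with 10 kN/m^2 throughout (invented values):
forces = load_take_down({"C1": [12.0, 12.0], "C2": [8.0, 8.0]}, [10.0, 10.0])
print(forces["C1"])  # [120.0, 240.0]
```

The hard part that TADPOLE automates is not this arithmetic but determining the tributary areas themselves from the floor plans, and keeping the whole chain up to date as the geometry changes.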
This eliminates the need for tedious manual calculation and, because the application is designed and streamlined for this specific purpose, there is no need for any extraneous data to be input. Because the tool is graphical, odd results and input errors can be spotted and traced far more easily than in a spreadsheet.
To help further manage the data the results of the analysis can be output to an interactive online dashboard via Power BI, making it easy for the lead engineer and client to interrogate. A full report can also be generated to document the process, results and assumptions. To eliminate re-work, the tool can also assemble the input plans into a full 3D building model that can be exported to Autodesk Robot to form the basis of a more detailed analysis.
This has allowed us to do in hours what would previously have taken days, and in a way that would not have been possible without building the tool ourselves. Commercial software is typically made to be as broad as possible in order to capture a wide user base. This means that it is often poorly optimised for certain tasks. By developing our own tools designed to meet our exact requirements and workflow we can plug these gaps and work more efficiently, enabling us to beat time pressures by responding faster, iterating more often and, ultimately, to produce better, more rigorously-checked designs.
Salamander 3, a new structural modelling and interoperability tool developed by RCD lead Paul Jeffries, is now in open beta and available to download from Food4Rhino. The tool adds the ability to model structural elements such as beams, slabs, nodes etc. inside Rhino and for this data to be exchanged with analysis packages (at present, Autodesk Robot and Oasys GSA).
The tutorial videos below demonstrate how to install the Rhino plugin and utilise some of the basic modelling commands in the tool to develop a simple structure.
A recording of the talk I recently gave as part of the ‘Design Discourse’ series at Imperial is now available on YouTube, here:
Unfortunately many of the animated embedded .gifs in the presentation did not display properly on Imperial’s hardware (computers, eh?), so they have been included below instead – click on each one to view the animation:
Sketchpad – the first graphical CAD tool – in operation.
On Tuesday 16th May Paul Jeffries will be delivering a public lecture at Imperial College London entitled ‘Emergence: The development and future of computational design’. The talk will be held in Room 201 of the Skempton Building and begins at 18:30. All are welcome to attend.
For the 2017 Ramboll Leadership Conference in Copenhagen, which took place on the 22nd and 23rd of January, RCD was involved in a collaboration between the Transport and Buildings departments to design and construct a ‘bridge’ installation between their respective stands. We had a little over a month to develop and manufacture the design, so timescales were tight, and we had several key criteria to consider: the bridge was to support a model train running between the two stands (in reference to the Holmestrand Mountain Station project), and it needed to be light and demountable enough for us to carry from London to Copenhagen, build in an afternoon, break down in an hour and then return to London (for later re-assembly in our home office). We also wanted it to form an interactive part of the conference rather than merely being a static display piece.
We approached the project the same way we would any other – pulling together a team with relevant expertise, brainstorming ideas, analysing and developing them. For the interactive element, we realised that business cards made an ideal impromptu craft material and were one of the few things we could rely on most of the attendees to be bringing with them. The decision was thus made to allow people at the conference to leave their business card, folded into a specific 3D form, as part of the bridge’s cladding.
Design of the overall structure progressed rapidly through several meetings, based around a flexible parametric Grasshopper model developed by RCD that allowed for collaboration around real-time adjustments to the geometry. After examining several options we settled on a timber shell/arch structure as an aesthetically pleasing, lightweight and robust solution – one that would reference both Ramboll UK’s expertise in timber structures and the previous RCD project, the TRADA pavilion, and which could be rapidly manufactured and assembled.
Throughout the development of the bridge, the geometry was exported to and analysed in MIDAS by the London Bridges team in order to make sure the design was structurally feasible and to guide further refinement of the form and material thicknesses. Additionally, preliminary samples of sections of the bridge were laser cut to allow us to physically examine and test the manufacturing process and connection detail design.
In order to enable the bridge to be rapidly assembled and disassembled we wanted to avoid the use of adhesives or mechanical fixings. The connections were therefore designed as simple slotted plates, each held in place by a matching slot in one of the plates it joined and restrained laterally by small standard ‘U’-shaped clips, all cut from the same 6mm plywood as the rest of the structure. The nature of the shell form meant that the angle between each panel (per quarter of the structure) was different. Generation of these connector pieces was thus integrated into the Grasshopper model, which determined the cutting pattern for each connector and panel. Each piece was also automatically labelled with a number, engraved onto its inner side, to allow easy identification of which pieces connected together during construction. Each connector also incorporated a small hole through which the line supporting the bridge deck could be passed.
The slots into which business cards could be placed were likewise incorporated into the Grasshopper model, arranged so as to fit in the maximum number of business cards without compromising the structural integrity of the panels. Because of the variety of panel shapes and sizes, no single placement algorithm gave consistently good results; two separate arrangement algorithms were therefore used to determine slot placement, with the better of the two automatically selected for each panel to give the final arrangement.
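The "run both, keep the better" selection step can be sketched as follows. The two strategies shown here (a regular grid and a rotated grid) and the card dimensions are invented for illustration – the project's actual arrangement algorithms worked on arbitrary panel outlines in Grasshopper:

```python
def grid_slots(w, h, cw=90, ch=55, gap=10):
    """Regular rows and columns of card slots (dimensions in mm)."""
    cols = int(w // (cw + gap))
    rows = int(h // (ch + gap))
    return cols * rows

def rotated_slots(w, h, cw=90, ch=55, gap=10):
    """Same grid with the cards turned 90 degrees - sometimes
    packs tall, narrow panels better."""
    return grid_slots(w, h, cw=ch, ch=cw, gap=gap)

def best_layout(w, h):
    """Run both arrangement strategies and keep whichever fits more cards."""
    candidates = {"grid": grid_slots(w, h), "rotated": rotated_slots(w, h)}
    return max(candidates.items(), key=lambda kv: kv[1])

wide_panel = best_layout(500, 70)    # a short, wide panel
tall_panel = best_layout(150, 600)   # a tall, narrow panel
```

Because the winner is chosen per panel, each algorithm only needs to be good for some shapes, not all of them.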
Foundation design is a key component of any project and this one was no different. Two pedestals were designed to support the feet of the bridge. As an arch, the structure’s natural response under load is to push outwards at its supports. To resist these thrusts without tying the base of the arch together or carrying heavy weights in our luggage, the pedestals contained hidden compartments concealing bottles of water, procured on-site, which provided the necessary ballast.
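For a rough sense of how such ballast is sized: the horizontal thrust of a parabolic arch under uniform load is H = wL²/(8f), and the friction available under the pedestals must exceed that thrust. All numbers below are illustrative assumptions, not the real bridge loads:

```python
# Illustrative numbers only - not the actual project loads.
span = 2.0   # m, arch span
rise = 0.5   # m, arch rise
w = 50.0     # N/m, uniform load (self-weight plus cards and train)
mu = 0.4     # assumed friction coefficient, pedestal on floor

# Horizontal thrust at each springing of a parabolic arch
thrust = w * span**2 / (8 * rise)

# Vertical (ballast) force needed so friction can resist the thrust
ballast = thrust / mu
```

With these assumed numbers the thrust is 50 N and the required hold-down is 125 N – about 13 kg, i.e. a stack of water bottles rather than anything that needs to travel in hand luggage.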
This being a conference for engineers in Denmark, it was a foregone conclusion that the train the bridge would carry should be made out of LEGO. The train in question came with a seven-speed remote control; however, to avoid having to drive the train manually for two days straight, it also fell to RCD to automate it by hacking the controller. The rotary dial that controlled the train’s speed produced different signals when turned clockwise or anticlockwise, instructing the train to accelerate or decelerate. By hooking these contacts up to an Arduino Uno board programmed to mimic the impulse patterns, it was possible to control the train’s movements programmatically and have it move backwards and forwards across the bridge without human intervention. Unfortunately, several key wires were damaged in transit, requiring some frantic (but ultimately successful) repair work with a borrowed soldering iron the day before the conference.
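The exact impulse patterns were reverse-engineered on the bench and are not documented here, but rotary dials commonly use quadrature-style encoding, where the order in which two contacts change state reveals the direction of rotation. A hypothetical sketch of that idea (the state sequence is an assumption, not the measured LEGO protocol, and on the Arduino the equivalent logic would toggle output pins with short delays):

```python
# One full quadrature cycle of the two contact states (A, B).
# Played forwards it reads as clockwise; backwards, anticlockwise.
CYCLE = [(0, 0), (0, 1), (1, 1), (1, 0)]

def dial_pulses(clicks):
    """Contact states to replay for `clicks` dial clicks.
    Negative values mean anticlockwise (decelerate)."""
    seq = CYCLE if clicks >= 0 else CYCLE[::-1]
    return [seq[i % 4] for i in range(4 * abs(clicks))]

accelerate = dial_pulses(1)    # one click clockwise
decelerate = dial_pulses(-1)   # one click anticlockwise
```

Replaying these sequences at the dial's contacts makes the controller believe a human is turning the knob, which is all the automation needs.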
Apart from that mishap, the bridge made it to Copenhagen undamaged and was erected successfully at the conference. It proved very popular with the attendees, becoming packed with business cards by the end of the second day and successfully demonstrating the capabilities of computational design and collaboration to the wider business.
“Our team was made up of people with different skills sets and backgrounds, who were unified by a desire to create something unique. The bridge was a success because all team members contributed their technical expertise, yet listened to and challenged each other to continually improve and refine the design.
This project shows that having the right mix of people with a passion for a common goal can generate great design in a short period of time.” – Sarah Ord, Project Manager
“The Transport and Buildings teams collaborated seamlessly; bringing our respective strengths together created a more complete and superior design.
“The use of parametric modelling and rapid prototyping and manufacture released the team’s time to concentrate on the creative design of the bridge through swift iterations. Designing and building the bridge in one month would not have been possible without this approach.” – Ollie Wildman, Director
“I worked on the structural analysis of the bridge, ensuring that the design was robust enough to stand and carry the applied loads. It was great to have worked on such an innovative project and of course it could not have been done without this amazing and passionate team. Overall it was a brilliant experience and I am looking forward to working on similar stuff in the future!” – Neophytos Yiannakou, Bridge Engineer
“Parametric modelling has enabled quick optimisation and adjustment of the bridge geometry, making it easier to model and analyse. In a short period of time we were ready to print and test a first prototype, which has been key to meeting the project deadline.
“It has been a wonderful experience to design and actually build the bridge with such a diverse and motivated team. It is in projects like this where you realise the potential of combining different disciplines.” – Xavier Echegaray Jaile, Bridge Engineer
The complete bridge is now on display in the reception area of Ramboll’s London offices at 240 Blackfriars Road.
From January 2017, Imperial College London will be running an evening course on Parametric Engineering, co-taught by RCD lead Paul Jeffries. The course will cover the application of Rhino and Grasshopper for computational design within an engineering context and is open to anybody in full-time education or academic employment. To apply, contact Simply Rhino.
If you’ve arrived at this blog, you will probably have had some exposure to the concept of ‘computational design’. You may also have heard some of the related terms that fall under this heading – ‘parametric design’, ‘algorithmic design’, ‘generative design’ and so on. As computational design is still a relatively young and evolving field the meanings of these terms can be a little vague and are used by different practitioners in different ways. This article presents the vision of computational design that we have in Ramboll and the role that we see it having in the future of the industry. This is what *we* mean by computational design.
But, before we can answer the title question we need to first answer another – what is design?
Even within a single discipline, we might divide the process of delivering a project into two – the mental and the physical. In the former category we have the cerebral work that goes into a design – generating ideas, understanding requirements, thinking (and talking) through problems and deciding on the fundamental principles that go into forming ‘the design’. But this cannot stay a purely ephemeral undertaking – we as designers also need to test our ideas and communicate them to our clients and colleagues and for this we must engage in a range of more tangible activities – performing calculations, writing documents, producing drawings and models and so on. These are not merely end-products, however – they are integral to producing a better understanding of the problem we are trying to solve and the implications of our assumptions in solving it. There is thus an interplay between the mental and physical sides of design. The process as a whole is highly iterative, with many embryonic design options dreamt up, examined and refined or discarded on the way to the ultimate solution.
Recently, computers have been increasingly used as a method of production, to the extent that the second half of the above equation might often be termed ‘virtual’ rather than ‘physical’. Whereas previously we would have produced drawings by hand, we now more commonly draw on the computer using CAD (Computer-Aided-Design) packages such as AutoCAD and Rhino. Whereas in the past we would have had to physically construct an architectural model to see what a project looked like in 3D, we can now build and view a virtual 3D model, perhaps with additional detailed information embedded into it. Whereas we would have had to perform engineering calculations by hand we now have a plethora of software packages available to perform analysis and run through standard calculations on our behalf.
These are some of the ways in which computers are now used in design, but is this what we mean by computational design?
These technologies augment the process of design to make it more efficient, but they do not represent any fundamental change to the process itself. The first generation of CAD software set out to replicate as closely as possible the previously existing paradigms – they swapped out the mechanical pencil for the mouse and the eraser for the delete key but otherwise the experience was maintained. To draw a line, you press down and move your hand from start to end. This was deliberate and, to an extent, necessary during the first transition into the virtual world, but in treating a computer as merely a replacement for a sheet of paper the true power of computation was overlooked.
Computers are not inanimate objects. They are machines of logic and process. They can think; not quite in the same way we do but in a way which is certainly compatible. That means that they can be integrated not only with the physical aspects of the design process but with the mental ones as well.
A (good) design is a fundamentally logical construct. Every aspect will have some reason to be the way it is, whether that is structural, functional, aesthetic or some combination of the above. Walk into the office tower of your choice, for example, and you are likely to find that the columns which support the building are not arranged randomly – they will be evenly-spaced and follow a regular grid. This is done to make the structure more efficient, easier to build and to allow for standardisation of components. Where columns deviate from this grid there will likewise be good reasons for that to be the case – perhaps to keep an auditorium space column-free, perhaps to allow enough clear space for access to be provided for large vehicles, perhaps to better support large loads from above. Each column will have an underlying logical process determining its placement.
Traditionally, it would be for humans to both decide upon this logic and then work through it to determine the arrangement it suggested, drawing or modelling the result. But this second stage is well within the capabilities of the computer, which is after all nothing more or less than a machine for the evaluation of logical processes. If the human can describe the principles driving the design in a form that the computer can understand – i.e. as an algorithm – then the computer can begin to take on a much larger role in the design process, becoming not just a recipient of data but also a generator of it, creating the design representation from the rules the designer has set. This shift is what distinguishes Computational Design from simply using computers in a more traditional design exercise.
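The column-grid example makes this concrete. A minimal sketch, assuming a simple rectangular exclusion zone for the column-free space (the spacing and dimensions are invented for illustration):

```python
def column_grid(nx, ny, spacing, exclusions=()):
    """Regular column grid, dropping any column that falls inside an
    exclusion rectangle (e.g. a column-free auditorium).
    Rectangles are given as (xmin, ymin, xmax, ymax)."""
    columns = []
    for i in range(nx):
        for j in range(ny):
            x, y = i * spacing, j * spacing
            if any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in exclusions):
                continue  # the design logic says: no column here
            columns.append((x, y))
    return columns

# A 5 x 4 grid at 7.5 m centres, with an auditorium carved out of one corner
cols = column_grid(5, 4, 7.5, exclusions=[(0, 0, 16, 9)])
```

The designer's contribution is the rule set – the grid spacing, the exceptions and the reasons behind them – while the computer works through the rules to place every column.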
In brief; Computational Design is a change in the medium of design expression from geometry to logic.
There are a number of advantages to this approach, the first being that the geometry of a design tends to be changed far more often than its logic. As a structural engineer, I may want to try out several different arrangements of the column grid in order to find the frame that best fits the geometry and construction type of the project. I am unlikely, however, to discard the principle of using a regular grid altogether. If changing the grid means having to redraw every single column position, or perhaps even having to fully recreate from scratch whatever analysis model I am using to make my assessment, that is going to limit the number of options I can feasibly examine (and make me far more likely to stick with whatever I first came up with). If changing that grid merely means adjusting a few input parameters of my generative model and having everything else done for me by the computer, then I have far more freedom to explore the design space, find a more optimal arrangement and adapt to external changes and new information introduced later in the design process. I can, in short, come up with a better design.
Leaving the resolution of the design logic to the computer also removes the restriction that said logic must be resolvable by humans. When rules begin to combine with one another their effects can sometimes be hard for the human brain to visualise. A fractal image, for example, is typically generated by very simple operations repeated over and over and over again, but while the rules may be easy to understand it can be very difficult to anticipate the geometric result without prior experience. So too with buildings, the many competing design drivers of which are often dealt with through simplification and convention far more than they are by optimisation. Computational design allows us to break through these barriers and produce responsive virtual models to do what brainpower alone cannot.
Computational design is an excellent means of dealing with complexity, whether that complexity is caused by the interaction of the factors we have control over or the uncertainty surrounding the factors we don’t. Traditionally this approach has been applied mainly to niche projects whose obvious visual complexity demanded it – buildings with highly sculptural forms, intricate facades and so on that would be next to impossible to design through any other means. However, all projects are complex in their own way, and can benefit from automation to handle that complexity. At Ramboll we recognise this, and so are working to make computational design technology and expertise a more deeply embedded and mainstream part of our design process across all types of project.