BIM and Digital Construction

Posted by Lola Dabota Omo-Ikerodah on 03-06-2020

Key Takeaways from Asite’s Panel Discussion at the Festival of BIM and Digital Construction

At the core of the construction industry's digital revolution is data. How can we better capture, manage, and integrate data to facilitate modeling technologies, like BIM and digital twins, to not only build better but to ensure resilience?

Data is the foundation of decision-making and, within the context of infrastructure and the built environment, it enables the future-proofing of these assets – if used correctly. On May 18, Asite hosted a panel at the Festival of BIM and Digital Construction, which brought together preeminent industry experts to discuss and explore this question.

BIM and Digital Twins

In the knowledge-sharing session, led by our CEO, Nathan Doughty, we heard from Jennifer Whyte, Director of the Centre for Systems Engineering and Innovation at Imperial College London; Jennifer Schooling, Director of the Centre for Smart Infrastructure and Construction at the Centre for Digital Built Britain; and Amy Lindsay, Global Data Architect at Laing O'Rourke.

Here are four big takeaways from the discussion.

Lifecycle Approach to Data Creation and Management

As stated by Jennifer Schooling, “as an industry, we’re less mature than other engineering sectors in managing data and understanding the value of data.” She went on to explain that, “due to the nature of our segmented supply chains, we only consider data within our own organizational structure, rather than data of the lifecycle of an asset.”

This siloed engagement with asset data makes it impossible for us to reap the full benefits of all the data we are collecting. What we end up with is shallow pools of distributed data, based on relatively short engagements with an asset whose lifetime will significantly exceed initial assumptions.

While contractor organizations are not typically in a position to take a long view of an asset, the only way to understand the value of data is to look beyond our individual part of the lifecycle. On this, Amy Lindsay proffered that, “there is progress being made with the existence of ISO 19650.”

She suggested that the focus now being given to process integration across the value chain will open up more opportunities for portfolio management and for focusing on client requirements.

“We need to think about how we can get 360 views of our customers, supply chains, design partners and contractors, and how we can pool information to have portfolio management of our resources, our risk, and productivity. That’s what will really allow us to transform data management and analysis in the industry.”

The question now stands – how do we incentivize the whole supply chain, not just government and infrastructure owners, to participate in taking this full lifecycle view of assets?

 


 

People and Processes Before Technology

Jennifer Whyte made the pertinent point that “we need to look at the processes that the technology both supports and helps us rethink.” In an industry where conversations surrounding the potential of innovative technologies and the unfortunate lack of uptake dominate, it is clear that evaluating processes has taken a back seat to technology.

However, Whyte asserted that “as we start to digitize information, we start to look at different ways of delivering.” As technology begins to elucidate our existing processes, we have a great opportunity to rethink the way we deliver and manage projects.

Process control gives us oversight not just of outcomes and data deliverables but also of the process by which data is created. Nathan Doughty expounded upon this, stating that “the key thing is not what the technology does but what do we need to achieve, what are the outcomes, and how are we as people going to most effectively communicate together to achieve that and then mapping the technology.”

In terms of data, technology, both software and hardware, is continuously evolving, so we need to design processes to future-proof information and carry it along the lifecycle of an asset.

As Jennifer Schooling said, “it is more about the humans and organizations than it is the technology.” Any asset will pass through several owners and caretakers over its lifecycle, so we have to think carefully about what data needs to be handed over, in what condition, and with what kind of architecture. To ensure that data is interoperable, discoverable, and useful, we need processes.

 


 

Working Backward to Overcome Technological Obsolescence

Building upon the discussion surrounding the user journey, the lifespan of assets and the movement of data, Amy Lindsay further emphasized the temporal nature of technology. She stated that “given the life span of built assets, obsolescence of digital technologies is a real problem that we need to take into account from the beginning.”

Technologies will, of course, change, but data ought to be fundamentally transferable. When we consider that an asset may have a centuries-long lifespan, we must think about what data we need and for what purposes. Moreover, as data will accumulate over an asset's lifetime, we need a good understanding of who is responsible for it.

On preparing data, Jennifer Schooling suggested that we relate the data gathered about our assets to the organizational objectives of the asset user (end-user). If we start to think about the organizational objective of a built asset and the processes involved, we can begin to associate those processes with functions and those functions with the built asset they take place in. From there, Schooling asserted, “you can understand more clearly what the data is that you [are] going to need for that built asset.”

Starting with the end in mind and working backward essentially allows us to flow back from the data we need to operate a built asset, to the data we need to maintain it, through to the data we need to design and construct it. This awareness of the user journey will enable stakeholders across the value chain to collect data that will be useful not only at their own stage of the process but also in the longer life of the asset.
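Schooling's backward flow, from organizational objective to functions to data requirements per lifecycle stage, can be sketched as a simple lookup. The objectives, functions, and data fields below are hypothetical examples for illustration, not taken from the panel:

```python
# A minimal sketch of "start with the end in mind" for a built asset.
# All objective/function/field names here are hypothetical examples.

# Organizational objective -> functions the built asset must support
objectives = {
    "comfortable workplace": ["ventilation", "heating"],
}

# Function -> data needed at each lifecycle stage
data_requirements = {
    "ventilation": {
        "operate": ["air-flow readings", "CO2 levels"],
        "maintain": ["filter service history"],
        "design": ["duct layout", "design air-change rate"],
    },
    "heating": {
        "operate": ["zone temperatures"],
        "maintain": ["boiler maintenance records"],
        "design": ["thermal model"],
    },
}

def data_for_objective(objective):
    """Flow backward: objective -> functions -> data, per lifecycle stage."""
    needs = {"operate": [], "maintain": [], "design": []}
    for function in objectives.get(objective, []):
        for stage, fields in data_requirements[function].items():
            needs[stage].extend(fields)
    return needs

print(data_for_objective("comfortable workplace"))
```

The point of the sketch is the direction of travel: operational and maintenance needs are derived first, and design and construction data requirements fall out of them, rather than the other way around.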

 


 

Moving Away from Enforced Data Structures

As, to borrow from Jennifer Whyte, “we move away from looking at documents to looking at data and ways of reorganizing data,” the question of data structures in the automation of data analysis arises.

Amy Lindsay pointed out that “the modern data environment and modern data links can definitely help facilitate [the incorporation of unstructured datasets], so we don’t have to enforce a data structure at the time of collection.”
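Lindsay's point about not enforcing a data structure at collection time is essentially the schema-on-read pattern: heterogeneous records are stored as they arrive, and structure is applied only when an analysis needs it. A minimal sketch in Python, with hypothetical record fields:

```python
import json

# Hypothetical site records captured without an enforced schema:
# each team logs whatever fields it has at the time of collection.
raw_records = [
    '{"asset": "AHU-01", "inspection": "2020-05-18", "condition": "good"}',
    '{"asset": "AHU-01", "sensor": "temp", "reading": 21.4}',
    '{"asset": "PUMP-07", "note": "bearing noise reported by FM team"}',
]

def readings_for(asset_id, records):
    """Schema-on-read: pull only the fields this analysis needs,
    tolerating records that do not carry them."""
    results = []
    for line in records:
        rec = json.loads(line)
        if rec.get("asset") == asset_id and "reading" in rec:
            results.append((rec.get("sensor"), rec["reading"]))
    return results

print(readings_for("AHU-01", raw_records))  # [('temp', 21.4)]
```

Nothing is rejected at collection time; the free-text maintenance note sits alongside the sensor reading, and each downstream analysis imposes only the structure it needs.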

There is a significant amount of work being undertaken around text recognition, audiovisual understanding, and natural language processing, and Lindsay suggested that we will likely see advances similar to what is happening in the oil and gas industry around developing ontologies from maintenance records.

Within the construction industry, these advancements would greatly increase the flexibility with which we can deal with data. As the industry matures on the software side, we will be able to introduce innovation into the ecosystem through semi-structured data and find support from what other sectors have done, such as financial services.

There is also the opportunity here to nurture new talent and skills within the industry, specifically data scientists and practitioners, allowing us to get closer to the raw data without losing the context within which it was generated.

 


 

Final Thoughts

The consensus from the panel was that digital information offers an unprecedented opportunity to bring together stakeholders in a way that has not happened previously and allows us to focus on outcomes rather than outputs.

Jennifer Whyte takes the position that projects are interventions into wider infrastructure systems, and if we think about what we’re trying to achieve when we’re intervening in the built environment, then we’re trying to make the built environment and the infrastructure that supports it better for end-users.

Ultimately, adopting this macro perspective on data across the value chain, and moving to reorganize that data to incorporate heterogeneous datasets beyond geometric data, such as behavioral information, will facilitate the creation of truly dynamic and, more importantly, useful models.

Please click here to watch the full panel discussion.