Data Portal
A key contribution of IBDT is a data portal where UofT's building data will be made available, giving students and researchers easy access. The data within the portal will include:
- Structured data such as sensor data and utility data (e.g., electricity).
- Unstructured data such as building layouts, reports and other documents.
Digitization
While developing BIM (Building Information Modeling) models for UofT buildings is not the objective of IBDT, the Center will showcase best practices in digitizing existing building layout data and linking that to IoT (Internet of Things) data.
Semantic-savvy BIM
Building Information Modeling (BIM) has been central to structured data management in the domain, covering data sources from design details to IoT devices. Yet the vast majority of the data generated in the built environment industry is unstructured (inspection records, work orders, discussions, and communication logs). This unstructured data contains valuable tacit knowledge held by project stakeholders and is directly related to the specific needs of each project. Any meaningful digital twinning has to be based on triangulating these two datasets. Specifically, it is important to link BIM data models (IFC) to text mining of these unstructured sources.
In this initiative, we consider methods that exploit graph theory to achieve the linkage. We are examining the re-representation of the hierarchical IFC schema in the form of a network. At the same time, text from the buildingSMART Data Dictionary (bsDD) is represented in the form of semantic networks. In each network, clusters of connected nodes are detected, and links are created between corresponding clusters in the two networks. This way, IFC classes are annotated with related text from bsDD, whose concepts provide semantic depth to IFC. On top of these two basic networks, project documents and reports will also be transformed into concept networks. By matching clusters from document networks and bsDD networks, we can create a link between project document concepts and IFC, with bsDD acting as a mediatory layer (network).
This approach is dynamic: as project documents evolve, their concept networks change and the links to IFC are updated. This avoids the rigidity of the typical linkage between IFC and unstructured data sources, which is usually done by mapping both IFC concepts and text keywords to a static classification system or ontology. This unsupervised approach learns from project stakeholders to build mappings that reflect their knowledge and project conditions.
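A minimal sketch of the cluster-linkage step, using toy graphs: the node names and the stem-overlap matching rule below are illustrative assumptions, and a real implementation would run community detection over the full IFC schema and bsDD corpus rather than connected components over toy data.

```python
# Toy sketch: link clusters in an IFC schema network to clusters in a
# bsDD-style semantic network. Node names and the stem-overlap matching
# rule are illustrative assumptions.
import networkx as nx

# IFC schema as a network (nodes = IFC classes, edges = schema relations).
ifc = nx.Graph([
    ("IfcWall", "IfcWallType"), ("IfcWall", "IfcOpeningElement"),
    ("IfcDoor", "IfcOpeningElement"),
    ("IfcPump", "IfcFlowMover"), ("IfcFan", "IfcFlowMover"),
])

# bsDD text as a semantic network (nodes = terms, edges = co-occurrence).
bsdd = nx.Graph([
    ("wall", "opening"), ("door", "opening"),
    ("pump", "flow"), ("fan", "flow"),
])

# Detect clusters of connected nodes in each network.
ifc_clusters = [set(c) for c in nx.connected_components(ifc)]
bsdd_clusters = [set(c) for c in nx.connected_components(bsdd)]

def stems(ifc_nodes):
    """Normalize IFC class names for matching, e.g. 'IfcWall' -> 'wall'."""
    return {n.lower().removeprefix("ifc") for n in ifc_nodes}

# Link corresponding clusters: here, clusters that share a normalized term.
links = []
for ic in ifc_clusters:
    for bc in bsdd_clusters:
        shared = stems(ic) & bc
        if shared:
            links.append((sorted(ic), sorted(bc), sorted(shared)))
```

Each link annotates an IFC cluster with the terms of its matched bsDD cluster; as document networks are added, the same matching step would link document concepts to IFC with bsDD as the mediating layer.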
Data Analytics (for work orders): The what and the why
Institutional buildings generate a significant number of work orders; the UofT Facilities and Services Dept., for example, processes tens of thousands of work orders (W/O) a year. Networks of connected W/O (and their related BIM objects) will be analyzed for patterns using ML to help predict the cost and duration of future W/O. Equally important, the analysis will investigate the prediction of time-to-repair and time-between-repairs for key building components.
More important than developing predictions is to develop means to understand how to enhance the efficiency of the asset management program. The decision to fund/not fund a specific action for a building (or a set of BIM objects within a building) is reflected in the frequency and intensity of work orders (W/O). Patterns in W/O can then be a means to study ‘decision signatures’: what happens when we invest in a specific project? What is the ROI (performance gains) associated with a project?
To better understand the link between investments and gains in ROI, we will use ML to analyze structured data (such as cost and duration data), Transformers to analyze the text of W/O and other reports, and social network analysis to profile stakeholders (who is engaged in what task). Graph-based neural networks will then be used to profile the relationship between project decisions and W/O on three levels: node level (e.g., predicting the post-decision performance of an asset), edge level (e.g., inter-dependency between W/O), and graph level (e.g., holistic contrasting of projects). We can learn what W/O issues are typically associated with specific BIM objects or decision-making tasks. What (localized) operational features relate to increased frequency or intensity of W/O? What patterns in stakeholder relationships (teaming) are associated with conflicts or their resolution?
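As a minimal illustration of the structured-data side, a linear model can be fit to predict W/O duration from ticket features. The features, coefficients, and data below are synthetic assumptions, not UofT records:

```python
# Sketch: fit a linear model predicting W/O duration (days) from ticket
# features. Features, coefficients, and data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
asset_age = rng.uniform(1, 40, n)     # years in service
priority = rng.integers(1, 4, n)      # 1 = urgent ... 3 = routine
# Hypothetical ground truth: older assets and lower-priority tickets take longer.
duration = 2.0 + 0.15 * asset_age + 1.5 * priority + rng.normal(0, 0.5, n)

# Ordinary least squares on [intercept, age, priority].
X = np.column_stack([np.ones(n), asset_age, priority])
coef, *_ = np.linalg.lstsq(X, duration, rcond=None)

def predict_duration(age, pri):
    """Predicted days to close a work order with the given features."""
    return float(coef @ np.array([1.0, age, pri]))
```

In practice the feature set would come from the W/O network (connected tickets, related BIM objects) rather than two scalar columns, and richer models would replace the linear fit.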
Business process analysis
The process of making operational decisions is overwhelmingly complex due to the dynamic relationships between parameters, diverse perspectives about priorities, and conflicting objectives. Inefficiencies in the process lead to suboptimal decisions: missing options that could have the greatest ROI. Business process models (BPM) will capture the business logic in the practice of the Facilities & Services Dept. at UofT. The main objective is to map operational data to different business processes, primarily by establishing a link between data models and business models. This is an essential step toward reengineering business processes to be data-driven rather than organized around functional silos.
Modeling tasks and profiling their data signatures allows us to understand which predictive models (or outcomes of ML and data analytics) are relevant to which task. This way, as decision makers progress in authoring a project or making changes to operational schemes, they will be served meaningful and timely business intelligence insights.
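The task-to-model matching can be sketched as an intersection between each task's data signature and each model's outputs; the task names, data fields, and model registry below are hypothetical:

```python
# Sketch: route predictive models to business-process tasks by matching
# data signatures. Task names, fields, and models are hypothetical.
TASK_DATA = {
    "prioritize_work_orders": {"wo_cost", "wo_duration", "asset_condition"},
    "plan_capital_project":   {"asset_condition", "energy_use", "wo_frequency"},
}
MODEL_OUTPUTS = {
    "duration_predictor":   {"wo_duration"},
    "condition_forecaster": {"asset_condition"},
    "energy_baseline":      {"energy_use"},
}

def relevant_models(task):
    """Models whose outputs intersect the task's data signature."""
    need = TASK_DATA[task]
    return sorted(m for m, out in MODEL_OUTPUTS.items() if out & need)

print(relevant_models("plan_capital_project"))
# -> ['condition_forecaster', 'energy_baseline']
```

The same lookup, driven by the BPM rather than a hand-written dictionary, is what lets insights surface at the step where a decision maker can act on them.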
Interactive authoring environment
We will create an occupant-led option-authoring environment in the form of integrated co-editable documents and BIM models. Occupants will be able to interactively develop and discuss goals and design ideas. As they debate, semantic network analysis will be used to capture key issues, especially ones they struggle with. Advising them on these issues requires harnessing complex, evolving (even contested) knowledge. Can we help them use web material in a streamlined process? Can we synthesize this milieu of knowledge for them and use it to annotate BIM objects with 'what we know'?
Knowledge graphs (KG) will be used to distill and serve knowledge to users. In contrast to ontologies (top-down models of a domain) and semantic networks (key terms extracted from user discourse), KG 1) harmonize knowledge from diverse (web) sources into a network of concepts and 2) support reasoning and intelligent queries. KG will be used to serve climate knowledge (facts, estimates, best practices) to operators and occupants as they author options. Users can find answers to the following:
- What climate: expected changes in climate elements such as temperature, precipitation, and storms.
- What impacts: e.g., should migration associated with climate change be considered?
- In what way: the link between hazards and performance, which has three challenges:
- Existence of relation: not every asset is sensitive to each climate hazard.
- Inter-hazard interaction: how to consider the interplay between different climate hazards.
- Relationship formula: how to quantify hazard impacts on asset performance.
- How much: delta costs associated with the positive/negative change in hazards.
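The "in what way" questions above can be sketched as queries over a tiny hazard-asset knowledge graph, where existence of a relation is an edge, inter-hazard interaction is an `interacts_with` edge, and the relationship formula is an edge attribute. The triples, relation names, and formulas are illustrative assumptions:

```python
# Sketch: a tiny climate knowledge graph answering "in what way" queries.
# Triples, relation names, and formulas are illustrative assumptions.
import networkx as nx

kg = nx.DiGraph()
# Existence of relation + relationship formula (as an edge attribute).
kg.add_edge("extreme_heat", "chiller",
            relation="degrades", formula="cap_loss = 0.01*(T-35)")
kg.add_edge("freeze_thaw", "facade",
            relation="degrades", formula="crack_rate ~ cycles/yr")
# Inter-hazard interaction.
kg.add_edge("extreme_heat", "precipitation", relation="interacts_with")

def impacts_on(asset):
    """Hazards linked to an asset, with the quantifying formula if known."""
    return [
        (hazard, data.get("formula"))
        for hazard, tgt, data in kg.edges(data=True)
        if tgt == asset and data.get("relation") == "degrades"
    ]

print(impacts_on("chiller"))
# -> [('extreme_heat', 'cap_loss = 0.01*(T-35)')]
```

A production KG would harmonize such triples from many web sources and support richer reasoning, but the query pattern (find the relation, check interactions, read off the formula) is the same.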
Generative DT
To break the lock of normative thinking and the limitations of current practices, we need disruptive but rational new approaches to imagine new futures for buildings. A computer-assisted platform for option generation will be built using generative adversarial networks (GAN) to build generative DT: ones that generate and explore possible permutations of potential solutions. Students will study four dimensions of generative DT: the authoring schema, the rules, the method of creating alternatives, and the means for selecting the desired outcome. Five key contributions are targeted.
- Generate new BPM: who can do what, when, and how (based on what guidelines)? The aim is to put occupants at the center of the process and sustainability as a core criterion.
- Utilize patterns from user-generated solutions to guide the GAN. Knowledge is situated: the GAN should be built to recognize the profile of decision makers (their values, needs, and problems).
- Feed in project-W/O relationship patterns. Generative DT should be built to learn from reality: what worked in previous projects and what caused conflict.
- Instead of using generic rules, examine the extraction of rules from analyzing the semantic-social networks of the users (concept-to-concept-to-actor clusters).
- Support automated decision-making: use decisioning KG to build option recommender systems. Unlike actioning KG, which aim to synthesize knowledge, decisioning KG use ML to capture decision patterns to help evaluate/maximize ROI.
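As a toy illustration of the GAN mechanics behind generative DT (not the project's actual models), a one-dimensional GAN in NumPy can learn to generate values matching a "real" distribution of option parameters. The data, architecture, and hyperparameters are all assumptions:

```python
# Toy 1-D GAN in NumPy: generator x = a*z + b, discriminator
# D(x) = sigmoid(w*x + c), trained with alternating manual gradient steps.
# Data, architecture, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_real(n):
    # "Real" options: a retrofit parameter clustered around 3.0.
    return rng.normal(3.0, 0.5, n)

a, b = 1.0, 0.0    # generator parameters
w, c = 0.1, 0.0    # discriminator parameters
lr, batch = 0.02, 64

for _ in range(1000):
    z = rng.normal(0, 1, batch)
    xr, xf = sample_real(batch), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    gw = np.mean((1 - dr) * xr) - np.mean(df * xf)
    gc = np.mean(1 - dr) - np.mean(df)
    w, c = w + lr * gw, c + lr * gc

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    df = sigmoid(w * (a * z + b) + c)
    ga = np.mean((1 - df) * w * z)
    gb = np.mean((1 - df) * w)
    a, b = a + lr * ga, b + lr * gb

fake = a * rng.normal(0, 1, 1000) + b
# The generated mean should drift from 0 toward the real mean of 3.0.
```

A generative DT would replace this scalar generator with models over BPM structures, BIM objects, and W/O patterns, with the discriminator role played by the selection criteria (rules, user profiles, and learned decision patterns) described above.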