Modelling and Evaluating Human-Machine Network Designs

If you’re working on the design of a Human-Machine Network, you may wonder what would happen if you implemented a particular network design. It may look promising, but what would happen in practice? How would it affect people’s behaviour? Would it provide the benefits you hope for? We can help answer such questions via simulation modelling.

The HUMANE typology (Følstad et al., 2015, 2016, 2017) allows us to characterise Human-Machine Networks (HMNs) at a high level, so that we can understand, analyse and communicate key aspects of such networks, pertaining both to their design and to how they may be used by the humans and machines that make them up.

We have created a method for applying the HUMANE typology to the design process, which is intended as a supplement to the human-centred design process (ISO, 2010). To further help understand the potential impact of the design options that may emerge from following the HUMANE method (Følstad et al., 2017), we have explored an approach to modelling and simulating HMNs.

Modelling and simulating HMNs is non-trivial: they are complex networks prone to unpredictable and emergent behaviour stemming from the interactions between humans and machines. We have therefore developed a Core HMN Model to aid the modelling task. This model reflects key aspects of HMNs captured in the HUMANE typology, describing the actors, their interactions and the structure of the network. A conceptual view of this model is shown below in Figure 1.


Figure 1 – Class hierarchy of entities of the Core HMN Model

At the most basic level, an HMN can be considered a collection of Nodes and Edges connected in a network. A Node, also known as a vertex, can be one of two types: an Artefact or an Agent. The key difference between the two is that an Agent has agency, whereas an Artefact does not; an Artefact represents, e.g., a file, a forum post or a Wikipedia article. Depending on whether we talk about “conscious intentionality” or “programmed intentionality”, we can distinguish between Human and Machine agents, attributing agency to both as active participants in HMNs as per the HUMANE typology (Følstad et al., 2015, 2016, 2017).
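To make the entity hierarchy concrete, the classes in Figure 1 could be sketched as follows. This is a minimal, hypothetical rendering based purely on the description above; the class names and fields in the actual hmn-core library may differ.

```java
// Hypothetical sketch of the Core HMN Model entity hierarchy (Figure 1).
// Not the actual hmn-core API; names and fields are illustrative only.
abstract class Node {
    final String id;
    Node(String id) { this.id = id; }
}

// An Artefact has no agency, e.g. a file, a forum post or a Wikipedia article.
class Artefact extends Node {
    Artefact(String id) { super(id); }
}

// An Agent has agency, whether "conscious" (human) or "programmed" (machine).
abstract class Agent extends Node {
    Agent(String id) { super(id); }
}

class HumanAgent extends Agent {
    HumanAgent(String id) { super(id); }
}

class MachineAgent extends Agent {
    MachineAgent(String id) { super(id); }
}
```

The point of the hierarchy is that humans and machines are treated uniformly as agents, while artefacts remain passive nodes in the same network.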

An Edge is a link between two Nodes, signifying that one or more types of relationship exist between them. The nature and properties of these relationships, such as trust and trustworthiness, influence the interactions between the two agents; these properties are encapsulated within a Connection (from the Agent to the Edge, see Figure 2). We have included a Connection entity to reflect i) the possibility that there may be multiple relationships between two nodes, and ii) the possibility that the relationship properties from Node A to B may differ from those from Node B to A. For example, if nodes A and B are both human agents, one person may trust the other more than they are trusted in return. Consequently, their actions may differ when they interact with one another.
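The directional nature of Connections could be sketched as below. Again, this is an illustrative, hypothetical shape rather than the hmn-core API: each agent holds its own Connection to a shared Edge, so a property such as trust from A to B need not equal trust from B to A.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the Node-Connection-Edge structure (Figure 2).
// An Edge links two nodes; each side holds its own Connection with its own
// relationship properties, allowing asymmetric relationships.
class Edge {
    final String nodeA, nodeB;
    Edge(String a, String b) { this.nodeA = a; this.nodeB = b; }
}

class Connection {
    final String fromNode;  // the node holding this side of the relationship
    final Edge edge;        // the shared link between the two nodes
    final Map<String, Double> properties = new HashMap<>(); // e.g. "trust"
    Connection(String fromNode, Edge edge) {
        this.fromNode = fromNode;
        this.edge = edge;
    }
}
```

For instance, a Connection from "alice" might carry `trust = 0.9` towards the shared edge while the Connection from "bob" carries `trust = 0.4`, capturing the asymmetry described above.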



Figure 2 – Illustration of two nodes with their connections to two directional edges.

The Core HMN Model has been applied to two HMNs as a proof of concept to demonstrate the approach. First, we successfully modelled edit wars in Wikipedia, an emergent behaviour wherein two agents repeatedly revert each other's contributions, and how increasing the agency of bots may address it. The core model was extended to include human contributors and bots, both of which can create new articles, edit existing articles and revert contributions. The simulation model predicted the emergence of edit wars with 91.5% accuracy on average (as high as 100% for some time periods). With the aim of increasing the reliability and quality of information in Wikipedia, we then simulated increased machine agency by introducing a bot capable of detecting edit wars and notifying the agents involved to end them. Doing so produced a significant reduction in the duration of edit wars.
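The mutual-revert pattern that defines an edit war can be illustrated with a small sketch. This is not the project's actual detector; it simply shows one plausible way (under assumed names and a simple counting rule) a bot could flag when two editors have reverted each other at least a threshold number of times.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not the simulation's actual algorithm: flag an
// "edit war" when two editors mutually revert each other's contributions
// at least 'threshold' times each.
class EditWarDetector {
    // Revert counts keyed by ordered editor pair, e.g. "alice->bob".
    private final Map<String, Integer> revertCounts = new HashMap<>();
    private final int threshold;

    EditWarDetector(int threshold) { this.threshold = threshold; }

    // Record that 'reverter' reverted a contribution by 'reverted'.
    // Returns true once the reverts in BOTH directions reach the threshold,
    // i.e. the pattern has become mutual rather than one-sided.
    boolean recordRevert(String reverter, String reverted) {
        String forwardKey = reverter + "->" + reverted;
        String backwardKey = reverted + "->" + reverter;
        revertCounts.merge(forwardKey, 1, Integer::sum);
        int forward = revertCounts.getOrDefault(forwardKey, 0);
        int backward = revertCounts.getOrDefault(backwardKey, 0);
        return Math.min(forward, backward) >= threshold;
    }
}
```

In the simulated scenario, a detection like this would trigger the bot to notify both agents, which is the increase in machine agency explored above.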

We have also modelled design options for an HMN that is under development, Truly Media, to determine how best to help journalists collaboratively verify user-generated content and avoid running stories based on hoaxes, rumours or deliberately misleading information (e.g. propaganda, fake news, and other untrue statements). The core model was extended to include journalists (humans) interacting with a conflict resolution tool (machine), part of the Truly Media platform, to verify Twitter content (artefacts). While the simulation results showed a positive impact of the conflict resolution tool, other features, such as sophisticated approaches to evaluating users' credibility, had a negligible impact on the verification process. As such, the simulation model helped inform the prioritisation and implementation of features in the platform.

Interested readers will be able to read the full report on this work on the HUMANE website later this year, once it has been formally reviewed by the EC. Until then, you may email us for a copy.

The Core HMN Model is freely available as an open source Java library on GitHub.

https://github.com/it-innovation/hmn-core

Featured image credit: http://www.presentermedia.com/

References

Følstad, A., Eide, A. W., Pickering, J. B., Tsvetkova, M., Gavilanes, R. G., Yasseri, T., & Engen, V. (2015). D2.1 Typology and Method v1.

Følstad, A., Engen, V., Yasseri, T., Gavilanes, R. G., Tsvetkova, M., Jaho, E., … Pultier, A. (2016). D2.2 Typology and Method v2.

Følstad, A., Engen, V., Mulligan, W., Pickering, B., Pultier, A., Yasseri, T., & Walland, P. (2017). D2.3 The HUMANE typology and method.

ISO. (2010). Ergonomics of human–system interaction — Part 210: Human-centred design for interactive systems. Geneva, Switzerland: International Organization for Standardization.