From Actors to Interactions


Over the course of the HUMANE project, we have developed a typology based on individual dimensions across four analytical layers: networks, behaviours, actors and interactions. Further, we have identified a number of key challenges for human-machine networks (HMNs) – including motivation, collaboration, innovation and issues relating to trust – and explored design options to support them [1], as well as developing roadmaps across different domains [2,3]. But as we move towards the conclusion of the project, we need to take stock of what the dimensions of the typology, or some of them at least, really tell us about HMNs and what they might mean for the future of such networks.

At the recent workshop, some of our research outcomes were helpfully bracketed by our keynote speakers, who highlighted two major themes for HMNs. In the morning, David De Roure reminded us that collaboration between humans and technology has a long and distinguished history: well before the launch of Wikipedia at the beginning of the noughties, knowledge and content were already being co-created, as revealed in part by prosopographic investigation of personal narratives. Developing a contemporary metaphor, his concept of SOCIAM GO! underlines the fact that human actors in the network will adapt as they move offline activities online to exploit the greater reach and efficiencies enabled by increasing machine capabilities in the virtual world. Those engaging with technologies over the ages have therefore developed strategies, together or independently, to achieve their own goals. More recently, emergent behaviours have begun to signal that there is more to come.

Exploring the logical possibilities of Moore’s law as well as increasing machine agency [4] and the power of automation [5], Gina Neff set out a number of thought-provoking propositions. The interplay of human and machine agency [6] may be usefully summarised as symbiotic agency: setting aside what might go wrong, and the as-yet unresolved regulation of bots in political life [7], human-machine interaction is now about collaboratively exploring possibilities constrained only by our imagination. One consequence of this, though, is that instead of looking to the legal demands of privacy regulation, with its misdirected focus on data subject empowerment [8], we need to appreciate that it is not so much personal data that may need protection but rather the derived data, the notional offspring of a human-machine coupling (see also [9,10]). Agency is therefore coming of age and is no longer concerned solely with the fine-grained distinction of intentionality between human and machine actors.

Elsewhere, we have begun to explore the potential afforded by increasing machine agency [5], as well as the relationship between agency on the one hand and regulation and self-efficacy on the other [6]. But other dimensions of the HUMANE typology now deserve additional attention. Interactions between human actors (Social Tie Strength), as well as Human-to-Machine Interaction, may well provide the key to taking our understanding of the dynamics of HMNs to the next level. Social psychology has already provided some insight into the migration of human relationships to the virtual world [11,12], into the potential for exploiting robots in healthcare and in interventions for developmental disorders, and into trust as an organising principle [13], leading to the transfer of trust from human interactions to e-commerce [14]. However, if the co-creation of personal data is really the result of the intimate union of human and technology [15], then this will have both societal and economic implications. Value and rights management are not only about service providers controlling access to their services in return for the reuse of such data, perhaps for customised marketing purposes. Instead, with advanced machine learning techniques unleashing unexpected complexities via data analytics, the advent of blockchain [16] provides a basis for innovative economic models that allow human participants and technology providers to cooperate on an equal footing and, most importantly, to assume joint and equal responsibility for the accuracy and curation of those data.
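
As a purely illustrative sketch of the kind of shared record such an economic model might rely on, the snippet below keeps a minimal hash-chained ledger in which both the human participant and the technology provider are named as joint curators of a derived data item. The field names, party identifiers and chaining scheme are assumptions made for illustration only; they do not describe any specific blockchain platform or a HUMANE design.

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    # Deterministic hash of a ledger entry (sorted keys for stable output).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class CurationLedger:
    """Toy append-only ledger for jointly curated, derived data records."""

    def __init__(self):
        self.chain = []  # each entry links to the hash of the previous one

    def append(self, data_ref: str, human_party: str, provider_party: str, note: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "data_ref": data_ref,                        # pointer to the derived data item
            "curators": [human_party, provider_party],   # joint and equal responsibility
            "note": note,                                # e.g. a correction or consent change
            "prev_hash": self.chain[-1]["hash"] if self.chain else None,
        }
        entry["hash"] = _hash(entry)
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        """Check that no entry has been altered since it was appended."""
        for i, entry in enumerate(self.chain):
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["hash"] != _hash(body):
                return False
            if i and entry["prev_hash"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = CurationLedger()
ledger.append("derived/interest-profile", "participant:alice", "provider:acme",
              "derived interest profile created from interaction history")
ledger.append("derived/interest-profile", "participant:alice", "provider:acme",
              "participant corrected an inaccurate inference")
print(ledger.verify())  # True while the shared record is intact
```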

Today’s HMNs already exploit workflow interdependence and network organisation to extend geographic reach and support ever greater network size. The HUMANE profile identifies such networks, while related work shows both the cultural diversity of common network interaction [17] and the dissolution of previous spatio-temporal barriers to network efficiency [18]. Tomorrow’s HMNs will need to understand the agency dimensions and how they affect each other in order to facilitate network complexity and sophistication [4,5]. Agency opens up the possibility of emergent network-level behaviours. Future HMNs, though, will also need to explore and respond to the interaction dimensions of the network to ensure the selection of appropriate economic models and the fair use of network outcomes.
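
To make the profiling idea a little more concrete, here is a minimal sketch of how a network’s position along the typology dimensions named in this post might be captured and compared in code. The three-point scale, the example networks and their scores are illustrative assumptions, not HUMANE project outputs.

```python
from dataclasses import dataclass

LEVELS = ("low", "medium", "high")  # assumed three-point scale for illustration

@dataclass
class HMNProfile:
    """A toy representation of an HMN profile along the typology dimensions."""
    name: str
    network_size: str
    geographical_reach: str
    machine_agency: str
    human_agency: str
    social_tie_strength: str
    human_to_machine_interaction: str
    workflow_interdependence: str
    network_organisation: str

    def __post_init__(self):
        # Ensure every dimension uses the assumed scale.
        for field_name, value in vars(self).items():
            if field_name != "name" and value not in LEVELS:
                raise ValueError(f"{field_name} must be one of {LEVELS}")

def differing_dimensions(a: HMNProfile, b: HMNProfile) -> list:
    """Return the typology dimensions on which two profiles differ."""
    return [k for k in vars(a) if k != "name" and vars(a)[k] != vars(b)[k]]

# Hypothetical profiles for two familiar kinds of network, for comparison only.
open_collaboration = HMNProfile(
    name="open knowledge co-creation",
    network_size="high", geographical_reach="high",
    machine_agency="medium", human_agency="high",
    social_tie_strength="low", human_to_machine_interaction="medium",
    workflow_interdependence="medium", network_organisation="low",
)
care_robotics = HMNProfile(
    name="assistive robotics in care",
    network_size="low", geographical_reach="low",
    machine_agency="high", human_agency="medium",
    social_tie_strength="high", human_to_machine_interaction="high",
    workflow_interdependence="high", network_organisation="medium",
)

print(differing_dimensions(open_collaboration, care_robotics))
```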

[1] A. Følstad, V. Engen, T. Yasseri, R. G. Gavilanes, M. Tsvetkova, E. Jaho, J. B. Pickering, and A. Pultier, “D2.2 Typology and Method v2,” 2016.

[2] E. Jaho, E. T. Meyer, B. Pickering, P. Walland, T. C. Lech, A. Følstad, and N. Sarris, “D4.1: Report on implications of future thinking,” 2016.

[3] E. Jaho, M. Klitsi, A. Følstad, T. C. Lech, P. Walland, J. B. Pickering, and E. T. Meyer, “D4.2 Roadmap of future human-machine networks,” 2017.

[4] V. Engen, J. B. Pickering, and P. Walland, “Machine Agency in Human-Machine Networks; Impacts and Trust Implications,” in HCI International, 2016.

[5] A. Følstad, V. Engen, I. M. Haugstveit, and J. B. Pickering, “Automation in Human-Machine Networks: How Increasing Machine Agency Affects Human Agency,” in International Conference on Man-Machine Interactions [submitted], 2017.

[6] J. B. Pickering, V. Engen, and P. Walland, “The Interplay Between Human and Machine Agency,” in HCI International, 2017.

[7] H. Ford, E. Dubois, and C. Puschmann, “Automation, Algorithms, and Politics | Keeping Ottawa Honest—One Tweet at a Time? Politicians, Journalists, Wikipedians and Their Twitter Bots,” Int. J. Commun., vol. 10, 2016.

[8] C. L. Miltgen and H. J. Smith, “Exploring information privacy regulation, risks, trust, and behavior,” Inf. Manag., vol. 52, no. 6, pp. 741–759, 2015.

[9] M. Hildebrandt, Smart Technologies and the End of Law: Novel Entanglements of Law and Technology. Cheltenham, UK: Edward Elgar Publishing Ltd, 2015.

[10] M. Hildebrandt, “Promiscuous Data-Sharing in times of Data-driven Animism,” Ethics Symposium. Taylor Wessing, London, 2016.

[11] S. Henderson and M. Gilding, “‘I’ve Never Clicked this Much with Anyone in My Life’: Trust and Hyperpersonal Communication in Online Friendships,” New Media Soc., vol. 6, no. 4, pp. 487–506, Aug. 2004.

[12] N. Ellison, R. Heino, and J. Gibbs, “Managing Impressions Online: Self-Presentation Processes in the Online Dating Environment,” J. Comput. Commun., vol. 11, no. 2, pp. 415–441, Jan. 2006.

[13] B. McEvily, V. Perrone, and A. Zaheer, “Trust as an organizing principle,” Organ. Sci., vol. 14, no. 1, pp. 91–103, 2003.

[14] K. J. Stewart, “Trust transfer on the world wide web,” Organ. Sci., vol. 14, no. 1, pp. 5–17, 2003.

[15] B. Maurer, “Principles of descent and alliance for big data,” in Data, Now Bigger and Better!, G. Bell, T. Boellstorff, M. Gregg, B. Maurer, and N. Seaver, Eds. Prickly Paradigm Press, 2015, pp. 67–86.

[16] M. Pilkington, “Blockchain Technology: Principles and Applications,” in Research Handbook on Digital Transformations, F. X. Olleros and M. Zhegu, Eds. 2015.

[17] M. Tsvetkova, R. García-Gavilanes, and T. Yasseri, “Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration,” Sci. Rep., vol. 6, 2016.

[18] T. Yasseri, R. Sumi, and J. Kertész, “Circadian Patterns of Wikipedia Editorial Activity: A Demographic Analysis,” PLoS One, vol. 7, no. 1, p. e30091, Jan. 2012.