From Actors to Interactions


Over the course of the HUMANE project, we have developed a typology based on individual dimensions across four analytical layers: networks, behaviours, actors and interactions. Further, we have identified a number of key challenges for human-machine networks (HMNs) – including motivation, collaboration, innovation and issues relating to trust – and explored design options to support them [1], as well as developing roadmaps across different domains [2,3]. But as we move towards the conclusion of the project, we need to take stock of what the dimensions of the typology, or some of them at least, really tell us about HMNs and what they might mean for the future of such networks.

At the recent workshop, some of our research outcomes were helpfully bracketed by our keynote speakers, who highlighted two major themes for HMNs. In the morning, David De Roure reminded us that collaboration between humans and technology has a long and distinguished history: well before the launch of Wikipedia at the beginning of the noughties, the co-creation of knowledge and content goes back a very long way, revealed in part by prosopographic investigation of personal narratives. Developing a contemporary metaphor, the concept of SOCIAM GO! underlines the fact that human actors in the network will adapt as they move offline activities online to exploit the greater reach and efficiencies enabled by increasing machine capabilities in the virtual world. Those engaging with technologies over the ages have therefore developed strategies, together or independently, to achieve their own goals. More recently, emergent behaviours have begun to signal that there is more to come.

Exploring the logical possibilities of Moore’s law as well as increasing machine agency [4] and the power of automation [5], Gina Neff set out a number of thought-provoking propositions. The interplay of human and machine agency [6] may be usefully summarised as symbiotic agency: setting aside what might go wrong and the as-yet unresolved regulation of bots in political life [7], human-machine interaction is now about collaboratively exploring possibilities constrained only by our imagination. One consequence, though, is that instead of looking at the legal demands of privacy regulation, with its misdirected focus on data subject empowerment [8], we need to appreciate that it is not so much personal data which may need protection but rather the derived data: the notional offspring of a human-machine coupling (see also [9,10]). Agency is therefore coming of age and is no longer concerned solely with the fine-grained distinction of intentionality between human and machine actors.

Elsewhere, we have begun to explore the potential afforded by increasing machine agency [5] as well as the relationship between agency on the one hand and regulation and self-efficacy on the other [6]. But other dimensions of the HUMANE typology now deserve additional attention. Interactions between human actors (Social Tie Strength), as well as Human-to-Machine Interaction, may well provide the key to taking our understanding of the dynamics of HMNs to the next level. Social psychology has already provided some insight into the migration of human relationships to the virtual world [11,12], the potential for robot exploitation in healthcare, interventions for developmental disorders, and trust as an organising principle [13], leading to trust transfer from human interactions to ecommerce [14]. However, if the co-creation of personal data really is the result of the intimate union of human and technology [15], then this will have societal as well as economic implications. Value and rights management are not only about the service provider controlling access to their services and the reuse of such data, perhaps for customised marketing purposes. Instead, with advanced machine learning techniques unleashing unexpected complexities via data analytics, the advent of blockchain [16] provides a basis for innovative economic models to ensure that both human participants and technology providers can cooperate on an equal footing and, most importantly, assume joint and equal responsibility for the accuracy and curation of those data.

Today’s HMNs already exploit workflow interdependence and network organisation to ensure increasing geographic reach and support ever greater network size. The HUMANE profile identifies such networks, while related work shows both the cultural diversity of common network interaction [17] and the dissolution of previous spatio-temporal barriers to network efficiency [18]. Tomorrow’s HMNs will need to understand the agency dimensions and how they affect each other to facilitate network complexity and sophistication [4,5]. Agency opens up the possibility of emergent network-level behaviours. Future HMNs, though, will also need to explore and respond to the interaction dimensions of the network to ensure the selection of appropriate economic models and the fair use of network outcomes.
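To make these dimensions concrete, a HUMANE-style profile can be sketched as a simple data structure. The dimension names below follow the typology, but the 1–5 ordinal scale, the example values and the comparison rule are illustrative assumptions rather than published profiles.

```python
# Illustrative sketch of a HUMANE-style network profile.
# Dimension names follow the typology; the 1-5 ordinal scale and the
# example values are assumptions for illustration only.

from dataclasses import dataclass, asdict

@dataclass
class HMNProfile:
    human_agency: int
    machine_agency: int
    social_tie_strength: int
    workflow_interdependence: int
    network_organisation: int
    network_size: int
    geographic_reach: int
    h2m_interaction: int

def differing_dimensions(a: HMNProfile, b: HMNProfile, threshold: int = 2):
    """Return the dimensions on which two networks differ markedly."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if abs(da[k] - db[k]) >= threshold]

# Hypothetical profiles: a large crowdsourcing network versus a small,
# tightly organised crisis-management network.
crowd = HMNProfile(5, 2, 2, 2, 2, 5, 5, 3)
crisis = HMNProfile(4, 3, 4, 5, 5, 2, 2, 4)

print(differing_dimensions(crowd, crisis))
```

Comparing profiles dimension by dimension in this way is one simple means of surfacing where two networks call for different design choices.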

[1] A. Følstad, V. Engen, T. Yasseri, R. G. Gavilanes, M. Tsvetkova, E. Jaho, J. B. Pickering, and A. Pultier, “D2.2 Typology and Method v2,” 2016.

[2] E. Jaho, E. T. Meyer, B. Pickering, P. Walland, T. C. Lech, A. Følstad, and N. Sarris, “D4.1: Report on implications of future thinking,” 2016.

[3] E. Jaho, M. Klitsi, A. Følstad, T. C. Lech, P. Walland, J. B. Pickering, and E. T. Meyer, “D4.2 Roadmap of future human-machine networks,” 2017.

[4] V. Engen, J. B. Pickering, and P. Walland, “Machine Agency in Human-Machine Networks; Impacts and Trust Implications,” in HCI International, 2016.

[5] A. Følstad, V. Engen, I. M. Haugstveit, and J. B. Pickering, “Automation in Human-Machine Networks: How Increasing Machine Agency Affects Human Agency,” in International Conference on Man-Machine Interactions [submitted], 2017.

[6] J. B. Pickering, V. Engen, and P. Walland, “The Interplay Between Human and Machine Agency,” in HCI International, 2017.

[7] H. Ford, E. Dubois, and C. Puschmann, “Automation, Algorithms, and Politics | Keeping Ottawa Honest—One Tweet at a Time? Politicians, Journalists, Wikipedians and Their Twitter Bots,” Int. J. Commun., vol. 10, 2016.

[8] C. L. Miltgen and H. J. Smith, “Exploring information privacy regulation, risks, trust, and behavior,” Inf. Manag., vol. 52, no. 6, pp. 741–759, 2015.

[9] M. Hildebrandt, Smart Technologies and the End of Law: Novel Entanglements of Law and Technology. Cheltenham, UK: Edward Elgar Publishing Ltd, 2015.

[10] M. Hildebrandt, “Promiscuous Data-Sharing in times of Data-driven Animism,” Ethics Symposium. Taylor Wessing, London, 2016.

[11] S. Henderson and M. Gilding, “‘I’ve Never Clicked this Much with Anyone in My Life’: Trust and Hyperpersonal Communication in Online Friendships,” New Media Soc., vol. 6, no. 4, pp. 487–506, Aug. 2004.

[12] N. Ellison, R. Heino, and J. Gibbs, “Managing Impressions Online: Self-Presentation Processes in the Online Dating Environment,” J. Comput. Commun., vol. 11, no. 2, pp. 415–441, Jan. 2006.

[13] B. McEvily, V. Perrone, and A. Zaheer, “Trust as an organizing principle,” Organ. Sci., vol. 14, no. 1, pp. 91–103, 2003.

[14] K. J. Stewart, “Trust transfer on the world wide web,” Organ. Sci., vol. 14, no. 1, pp. 5–17, 2003.

[15] B. Maurer, “Principles of descent and alliance for big data,” in Data, Now Bigger and Better!, G. Bell, T. Boellstorff, M. Gregg, B. Maurer, and N. Seaver, Eds. Prickly Paradigm Press, 2015, pp. 67–86.

[16] M. Pilkington, “Blockchain Technology: Principles and Applications,” in Research Handbook on Digital Transformations, F. X. Olleros and M. Zhegu, Eds. 2015.

[17] M. Tsvetkova, R. García-Gavilanes, and T. Yasseri, “Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration,” Sci. Rep., vol. 6, 2016.

[18] T. Yasseri, R. Sumi, and J. Kertész, “Circadian Patterns of Wikipedia Editorial Activity: A Demographic Analysis,” PLoS One, vol. 7, no. 1, p. e30091, Jan. 2012.

Putting People Centre Stage

What do networks of humans and machines actually do?

We expend a lot of time and energy, especially in a project like HUMANE, trying to understand the ‘what’ and the ‘how’ of human-machine networks, but it is a workshop such as the recent, excellent discussions in Oxford that brings to the fore the question of ‘why’.

We are apt to think of the machines in the network as the important feature – after all, the humans have been there all the time, it is the machines that are the innovation. Aren’t they? Maybe not. As Eric Meyer reminded us at the start of his talk, people have been building machines ever since they climbed out of the trees and started banging rocks together. We may not think of a piece of bent stick as a machine, but the use of a tool to dig furrows and plant seeds heralded a major social shift from nomadic to agricultural lifestyles.

Dave De Roure furnished us with more examples, citing the printing press and its social impact in 15th-century Europe, leading to the libraries and social records we have today. So does this make the printing press a social machine, in line with the definition from Tim Berners-Lee et al., which casts social machines as abstract entities living on the web that do the ‘heavy lifting’ of administration, leaving the people to be creative? Or is the concept more abstract still? Whereas the ploughshare enabled the people using it to be more productive by making a task more manageable, it also permitted a social change by introducing a way of life that was not possible before the machine arrived.

Similarly, but perhaps not so obviously, the printing press caused social change. People could write and distribute their ideas before the printing press arrived, but if they wanted to distribute those ideas widely they were reliant on monastic scribes to create copies. With the arrival of the printing press it became very easy to replicate and distribute ideas in print without involving the monks. This is very much in line with the new forms of social process that Berners-Lee also associates with social machines, and has an obvious parallel with the social changes brought about by the rapid expansion of social media at the start of this century.

Of course, we have to ask whether all such changes are beneficial, and who defines what ‘beneficial’ is. Each new technology-led innovation ushers in a Utopian ideal in which the human beneficiaries are enabled to achieve idealised goals – or at least that is what the technologists behind the innovation would have them believe. What we see in reality, whilst not necessarily dystopian, is nonetheless very far from this idealised world. There is, and always will be, a huge difference between the way humans behave and the way machines behave. No matter how complex the machine, and however closely it appears to mimic human thought, a machine will never be human; it will always be a machine.

The dystopian view of our developing relationship with machines comes not from machines developing some kind of emergent consciousness and taking over the world, but from the behaviour of the people who exploit them or rely on them. Machines are a product of their design and programming – they have limitations. People, on the other hand, are driven by their very nature to explore outside the boundaries of experience. They don’t ask ‘what does this machine do?’; they ask ‘what can I do with this machine?’

Vegard Engen introduced the concept of ‘intentionality’ as a distinction between the ‘agency’ exhibited by machines and the ‘agency’ exhibited by the humans in a network. Humans will intentionally set out to get the machine to do what they want it to do, whereas the machine will only do those things that are within its design parameters.

In the descriptive model presented by Brian Pickering, ‘Human Behaviour’ takes centre stage, usurping the earlier focus of such models on the technical capability within networks. This is an important shift of emphasis in the study and understanding of human-machine networks, including as it does the social science and humanities component as an intrinsic part of network functionality.

In her review of the roadmaps being developed by the HUMANE project, Eva Jaho talked about policy and regulation as well as technological development – reflecting the need to manage the behaviour and activity of the people in a network whilst recognising that evolving technology allows for emergent beneficial behaviour that could be suppressed by over-enthusiastic regulators. We should remember that machines operate on the principle of prescription – they do what they are designed to do – whilst people operate on the principle of proscription – they will do anything they can get away with unless they are prevented from doing it.

Dave De Roure reminded us that people are subversive – they will be inventive to get the machines to do what they want to do, not what the designers expected the machines to do. The best networks are the ones that celebrate and encourage the inventive ability of humans – Grant Miller provided the example of Zooniverse and its ability to satisfy the higher human ideals of curiosity, satisfaction and achievement whilst eschewing any financial reward.

So, I will return to my original question of what human machine networks, or social machines, actually do. Gina Neff talked about symbiotic agency, reflecting the developing understanding of networks coming out of HUMANE.

Humans and machines work together to achieve a human-defined goal. Different humans within the network may have different goals, leading to conflicts and battles such as those described by Taha Yasseri in his studies of Wikipedia, but this is a result of human nature, not machine intervention. Human machine networks and social machines allow people to do what people do best – communicate, explore, discover, invent, manipulate, subvert and revolutionise.

People have a symbiotic relationship with the machines they invent – but they always have done. Where machines come to dominate or control lives it is only because we have allowed them to do so. We lay ourselves open to Perrow’s ‘Normal Accidents’ but, as Perrow describes, they do not arise because of the technology but because of human reliance and organisational failure. Our understanding and appreciation of the value and benefits of human machine networks must be based on their social context and on the resultant behaviour of the people forming part of the network; we can no longer study networks as purely technological artefacts.

There is no other ghost in the machine than the people who live within it, who seek to achieve their goals and ambitions, their wants and needs in symbiosis with machine capabilities. And this is what human machine networks do – they give us the power to be more human and to do better what we, as humans, have always strived to achieve.

HUMANE Workshop, Wrap-up

On 21 March, we held the HUMANE Intentional Workshop in Oxford. We had some 50+ participants from across academia, industry and the public sector, as well as technicians and freelancers. Among the different events and workshops I have attended in recent years, I can easily say that ours was unique in terms of the wide range of topics, speakers and attendees.

We started the day with a great keynote by David De Roure from the Oxford e-Research Centre. David spoke about Social Machines and How to Study Them. For me, the most thought-provoking part of David’s talk was his call to pay extra attention to the unanticipated and unpredictable outcomes of large assemblies of humans and machines.

We continued with a HUMANE presentation by Asbjørn Følstad, the project co-ordinator from SINTEF. Asbjørn explained how we built the HUMANE Typology and walked us through the HUMANE method. After that, Eric Meyer from the Oxford Internet Institute reviewed the existing literature on Human-Machine networks in a talk titled What’s Humane about Machines?. We closed the morning session with a talk by Vegard Engen of the IT Innovation Centre on Agency in Human-Machine Networks. Vegard’s focus was on the impact of both humans and machines’ agency on trust and user behaviour.

Then we had the lunch break!

We kicked off the afternoon session with a talk by Eva Jaho from ATC Innovation Lab. Eva presented the HUMANE Roadmaps and how they help us to think about the future of human-machine networks. Then we had Grant Miller from Zooniverse talk about Zooniverse: Humans, Machines, and Penguins; the title says it all! And finally, just before the coffee break, I presented our work on the edit wars between humans and between bots on Wikipedia.

The last part of the day started with a talk by Brian Pickering, also from the IT Innovation Centre. Brian’s talk, titled Decision Support for Crowd Management, was about the eVACUATE project and how the HUMANE typology and method help us to understand and design better crowd management systems.

Last but not least, Gina Neff, also from the Oxford Internet Institute, presented the other keynote talk of the day, titled Making Sense of Self-Tracking Data: Possible Futures of the Human-Machine Relationship. Gina introduced self-tracking data as the result of the human-machine relationship and then discussed some important aspects of this co-produced outcome: Affordances, Valences, and Symbiotic Agency in relation to the self-tracking data.

Paul Walland (IT Innovation) had the job to summarize the day at the end of the workshop. He not only did this very well, but also sent me a note as a contribution to this wrap-up post. See Paul’s note below, but before that, let me thank all the presenters and the participants again for their contribution to the success of our workshop.


Paul Walland:

There was a huge range of very interesting stuff presented at the workshop, and it would be both impossible and unfair to try to summarise it all, so I’m not going to do that. What I will do is bring the discussion back to the core objective of the HUMANE project, and think about roadmaps – there are two that occur to me.

One is the technology roadmap – we must not forget that technology is continually advancing, and therefore what people can do with the technology is developing as well. In parallel with this, we have an evolution in what people are trying to do and achieve within networks, which can lead to human capacity increasing as machines take on the roles that machines do better than people. This is the symbiotic increase in agency that Gina described, reflecting the aspects of agency that were introduced in the HUMANE presentations from Vegard and Brian.

The increase in agency – that is, the capacity of the machines in the network to take on new roles that might have been the role of a human in the past – liberates the human in the network to focus on achieving their goals without being concerned about the actions the technology is taking to support them. I used to drive rather old MG motor cars. That is to say, I used to spend 90 percent of my time trying to get the motor car to work, and 10 percent of my time coaxing it to get me to where I wanted to go. Now I have a motor car that just works – I don’t need to think about what is going on under the bonnet; I just get in and drive myself to my destination. But technology continues to advance, and in a few years’ time I won’t need to sit behind a steering wheel and make sure that the motor car follows the road and arrives at a destination. I will simply tell the car where I want to get to and allow it – trust it – to get me where I want to go. My goal hasn’t changed, but the agency, the role taken by the machine, has changed, and in consequence my behaviour has changed to take advantage of the ability to do what I want to do without having to manage or direct the technology that helps me get there.

I am a physicist by background, and my first experience of networks was in the mechanics of how networks are physically assembled and how they transported data. Over the years I have become increasingly interested in the human aspect of networks, since the ultimate goal of technology is to help people achieve their objectives by giving them access to capabilities they do not have without the support of the machines. The two things go hand in hand.

As we have heard from the speakers, serendipitous actions and events can lead to new insights or the emergence of new behaviours. This does not come from machines replacing humans, it comes from humans doing what they do well, and machines doing the things, whether that is fast reaction or massive data crunching, that they can do and humans cannot. It is this cross-cutting of technology and social behaviour/human behaviour that is so interesting and so fruitful. The HUMANE roadmap embraces this interconnectivity of human ambition and machine capability, and I for one am very much looking forward to seeing where it leads.

So I would like to express my thanks to OII and the HUMANE consortium for putting together this fascinating day, and thank all the presenters who have done such a great job of keeping us both entertained and informed.


Three solutions: increased automation, social ties, and extended use of common systems

Crisis management systems (CMS) are human-machine networks consisting of a diversity of actors working together towards the common goal of saving human lives and values. Bringing together organizations and people with different capabilities, CMS are important in coping with disastrous events. These systems are meant to support humans in coordinating the handling of an event, and in providing information and decision support.

Collaboration is a core requirement for efficient crisis management. The HUMANE typology and framework are helpful in understanding the implications a network’s characteristics have for collaboration. They can provide valuable insight into how to strengthen the design of CMS to better support collaboration and efficient crisis management. The following are three examples.

Increasing machine agency through higher degree of automation

CMS are often intended for use by several crisis response organizations, and they are often designed with a high level of human agency and a low level of machine agency. The human actors of the network are given great freedom to configure the system to fit their organization’s needs. The background for this design rationale is that crisis management organizations often have different requirements, so the system needs to be flexible enough to fit the needs of all its user organizations.

It can, however, be argued that applying a higher degree of automation to certain parts of CMS could streamline human-machine networks for crisis management and make them more efficient. By assigning appropriate tasks to the system, crisis responders can be given greater leeway to perform tactical or strategic activities, such as planning the handling of an event, making decisions, or other activities that are based on human experience and knowledge and require handling by human actors.
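The reallocation argument can be sketched as a simple routing rule: fully specified, repeatable tasks go to the system, while tasks that depend on experience and judgement stay with human responders. The task names and categories below are hypothetical, invented for illustration; they are not drawn from any actual CMS.

```python
# Illustrative sketch: routing crisis-management tasks between the
# system and human responders. Task names, categories and the rule
# are assumptions for illustration, not features of a real CMS.

PROCEDURAL = {"log_event", "notify_on_call", "collect_sensor_data", "format_report"}
JUDGEMENT = {"plan_response", "prioritise_casualties", "allocate_resources"}

def route_task(task: str) -> str:
    if task in PROCEDURAL:
        return "machine"   # automate: fully specified and repeatable
    if task in JUDGEMENT:
        return "human"     # tactical/strategic: needs experience
    return "human"         # default to the human when in doubt

incoming = ["log_event", "plan_response", "notify_on_call", "unknown_task"]
assignments = {t: route_task(t) for t in incoming}
print(assignments)
```

Note the conservative default: anything the system cannot classify falls back to a human, which reflects the high-human-agency design described above.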

Strengthening the social ties of dispersed human resources

The strength of social ties in crisis management networks varies. The challenge is especially apparent during the handling of a crisis that requires collaboration between several actors and organizations, where social ties are often weak. Within crisis management, knowing one another’s roles and authority is important. It is often assumed that a person in a certain work position will handle his or her responsibilities sufficiently. However, weak social ties can sometimes hinder efficient collaboration between people or organizations, as the essential knowledge of, and trust in, each other is missing.

A well-designed crisis management system has the potential to strengthen social ties. By providing a common platform for collaboration, providing information about participating actors and organizations, and serving as a means for information sharing, CMS can strengthen collaboration between crisis responders. In addition, common meeting arenas and training sessions in which people across crisis management organizations are trained together, preferably with a common crisis management system, are of high importance for strengthening social ties.

Extending the use of a common crisis management system

A variety of CMS exist. An issue in today’s crisis management networks, however, is that different crisis management organizations often use different systems that do not support sharing of information, communication, and coordination across systems. This clearly limits the efficiency of collaboration during the management of crisis events. Furthermore, the lack of use among some organizations affects the network’s motivation for using the system: users might not see the value of the system when important collaboration partners are absent.

To function as a common platform, all relevant crisis management actors should ideally use a CMS that supports collaboration through joint coordination, communication, and sharing of information. Such a system should also allow integration with other systems.

The introduction of new technology causes concern for the future of work. What is the role of humans in a work life in which an increasing number of tasks are conducted better and more efficiently by machines than by humans?

In a much-cited paper on the automation of work through computerization, Frey and Osborne take as their starting point the premise that new technology makes old jobs redundant faster than new jobs are created. They then claim that advances in machine learning and mobile robotics in the 21st century may render not only manual routine work vulnerable to automation, but also work previously thought of as non-routine, such as car driving, medical diagnostics, financial trading, or educational tutoring. Think only of self-driving cars, entities able to perform tasks that only a few years back were considered beyond the computational capacities of machines. Tasks that represent engineering bottlenecks for computerization, such as those associated with perception and manipulation in highly diverse environments, creativity, or social intelligence, are considered at low risk of automation in the foreseeable future. Hence, workers in jobs at risk of automation may need to acquire skills that are not easily automated.
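The notion of engineering bottlenecks can be illustrated with a toy calculation: score each job on the bottleneck attributes (perception and manipulation, creativity, social intelligence) and treat the strongest bottleneck as protection against automation. The jobs, attribute values and scoring rule below are invented purely for illustration; Frey and Osborne’s actual study used a trained probabilistic classifier over occupational data.

```python
# Toy illustration of the 'engineering bottleneck' idea: jobs scoring
# high on any bottleneck attribute are treated as harder to automate.
# Jobs, attribute values and the rule are invented for illustration.

jobs = {
    # (perception/manipulation, creativity, social intelligence), each 0-1
    "telemarketer":  (0.1, 0.1, 0.3),
    "truck_driver":  (0.5, 0.1, 0.2),
    "nurse":         (0.7, 0.4, 0.9),
    "research_lead": (0.3, 0.9, 0.8),
}

def automation_risk(attrs) -> float:
    """Crude rule: risk falls as the strongest bottleneck rises."""
    return round(1.0 - max(attrs), 2)

ranked = sorted(jobs.items(), key=lambda kv: -automation_risk(kv[1]))
for job, attrs in ranked:
    print(f"{job}: risk={automation_risk(attrs)}")
```

Even this crude rule reproduces the qualitative claim in the text: routine work with no strong bottleneck ranks as most automatable, while work rich in creativity or social intelligence ranks as least.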

While there is no doubt that automation will replace human workers, the picture may not be as bleak as sometimes suggested in popular reports on the subject. Autor, in an essay on workplace automation, argues that “journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor”. One example is the technological improvements in the health sector, which have led to increasingly larger shares of income being spent on health. Another is the value creation in the computer industry itself, where automating machinery has spawned myriads of previously non-existing jobs.

In HUMANE, we have used the typology dimensions human agency and machine agency as a framework for discussing the role of automation in complex systems. While Frey and Osborne, as well as Autor, discuss the effect of automation on work at a societal level, we discuss how automation may affect the work of humans within specific human-machine networks. Through a series of case studies on systems for decision support, crisis management, and evacuation support, we investigate how increasing the range of tasks allocated to computerized machines in such settings may actually broaden the range of tasks, opportunities for influence, and opportunities for creativity available to human operators. In these domains, all characterized by highly procedural work tasks and the need to adhere to regulation and policy, allowing machines to take over procedural decision making means that human operators may instead spend their time and resources at the tactical and strategic levels of decision making. Here, automation does not remove the need for human operators, but redefines their role, allowing for novel ways of value creation.

We often seem to think of automation in terms similar to those of the self-driving car, where the role of the human driver simply evaporates. The reality, however, may often be that automation enables new forms of value creation in which the combined capabilities of humans and machines provide better outcomes in a more efficient manner than was previously possible. By understanding how to design the networked interaction between humans and machines, as we aim to in the HUMANE project, such an optimistic take on the social challenge of automation may become even more feasible.

The HUMANE project is building roadmaps that can help guide future policies in specific social domains such as the Sharing Economy, eHealth, and Citizens’ Participation. The HUMANE roadmaps act as a reference on which a collaborative effort for a complex task, such as the one needed for finding and implementing efficient policies for Human-Machine Networks (HMNs), can be based. They help all the involved parties recognize the goals and the steps needed to achieve them, and to better understand their roles and interrelations.

Through this Survey we aim to collect information, which we will process and use to develop the HUMANE roadmaps.

The survey shouldn’t take you more than 10-15 minutes to complete. We won’t collect any personal information about you: it is entirely anonymous. Your responses will be used for scientific research purposes only as part of the HUMANE project.

Thank you on behalf of the HUMANE project and we are looking forward to receiving your valuable feedback!

Cyberbullying: no place to hide

In an excellent cross-cultural study of Wikipedia edit/revert behaviours [1], Tsvetkova and her colleagues argue, among other things, for a mediating effect of culture in accounting for the different dominance patterns in the editing of different language editions of the online encyclopaedia. The Wikipedia human-machine network is biased in some sense towards large geographical reach and network size, along with high human agency, low workflow interdependence, and low network organisation. And Facebook, as highlighted in a previous post, also displays high human agency, again geographically disparate across a very extensive network. What might these vast networks, with a great deal of human agency but only moderate social tie strength, do?

One area that is increasingly brought into focus, however, is cyberbullying [2]. Individuals, especially those in the public eye (Jonah Lehrer) or who might be expected to know better (Justine Sacco), may be subjected to the cascading effects of viral relational or indirect aggression in full view of the virtual world [3]. The vulnerable and impressionable, such as children, may be subject to grooming as well as aggression, with little chance of refuge [4], leading to potentially greater affective trauma, especially in connection with real-world bullying [5]. Situated within a generalised model of aggression [6], cyberbullying may be subject to similar social factors [7] as offline behaviours, such as an assumed reluctance to intervene [8] and a diffusion of responsibility [9].

Perhaps the reality, though, as underlined by the HUMANE profile for these networks, is that network size and geographical dispersion, along with high levels of human agency and few controls (low network organisation), lead to what Suler put down in part to the combination of dissociative anonymity, invisibility and the asynchronous nature of communication and interactions [10]. The perpetrators of online aggression are not easily identifiable when they hide behind pseudonyms and different online personae, whilst social contagion [11] creates the domino effect.

We might ask whether increasing tie strength might mitigate cyberbullying, by encouraging a shared understanding of its detrimental effects [12], facilitating participative discussion and understanding [13], and developing a social identity which might encourage protective intervention [14]. Networks with high levels of human agency, as well as large membership and geographic distribution, will therefore need to consider carefully how to handle the potential problems of latent or weak tie strength. A number of strategies are possible (see the forthcoming D2.2). But for the unauthorised distribution of personal data and the unwanted behaviours outlined here, failing to adopt those or similar strategies may be detrimental to the interests or well-being of human participants in the HMN.

Picture credit: By User:Sonia Sevilla – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=23789972

[1] https://arxiv.org/abs/1602.01652

[2] http://www.bullying.co.uk/cyberbullying/; see also https://www.internetmatters.org/issues/cyberbullying/

[3] Ronson, J. (2015). So You’ve Been Publicly Shamed. London, England: Picador

[4] Tokunaga, R. S. (2010). Following you home from school: A critical review and synthesis of research on cyberbullying victimization. Computers in Human Behavior, 26(3), 277-287. doi:10.1016/j.chb.2009.11.014

[5] Schneider, S. K., O’Donnell, L., Stueve, A., & Coulter, R. W. (2012). Cyberbullying, school bullying, and psychological distress: A regional census of high school students. American Journal of Public Health, 102(1), 171-177. doi:10.2105/AJPH.2011.300308

[6] Anderson, C. A., & Bushman, B. J. (2002). Human aggression. Annual Review of Psychology, 53, 27-51. doi:10.1146/annurev.psych.53.100901.135231

[7] Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., & Lattanner, M. R. (2014). Bullying in the digital age: A critical review and meta-analysis of cyberbullying research among youth. Psychological Bulletin, 140(4), 1073. doi:10.1037/a0035618

[8] Latané, B., & Darley, J. M. (1969). Bystander “apathy”. American Scientist, 57(2), 244-268; though see also Levine, M. (2012). Helping in Emergencies: Revisiting Latané and Darley’s bystander studies. In J. R. Smith & S. A. Haslam (Eds.), Social Psychology: Revisiting the Classic Studies (pp. 192-208). London, UK: SAGE Publications Ltd

[9] See the early Bandura study: Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71(2), 364. doi:10.1037/0022-3514.71.2.364

[10] Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321-326. doi:10.1089/1094931041291295

[11] Langley, D. J., Hoeve, M. C., Ortt, J. R., Pals, N., & van der Vecht, B. (2014). Patterns of Herding and their Occurrence in an Online Setting. Journal of Interactive Marketing, 28(1), 16-25. doi:10.1016/j.intmar.2013.06.005; Pentland, A. (2014). Social physics: How good ideas spread-the lessons from a new science: Penguin.

[12] Slonje, R., Smith, P. K., & Frisén, A. (2013). The nature of cyberbullying, and strategies for prevention. Computers in Human Behavior, 29(1), 26-32. doi:10.1016/j.chb.2012.05.024

[13] Although not about online activity, see, for example, Veale, A., McKay, S., Worthen, M., & Wessells, M. G. (2013). Participation as Principle and Tool in Social Reintegration: Young Mothers Formerly Associated with Armed Groups in Sierra Leone, Liberia, and Northern Uganda. Journal of Aggression, Maltreatment & Trauma, 22(8), 829-848. doi:10.1080/10926771.2013.82363

[14] Levine, M. (2012). Helping in Emergencies: Revisiting Latané and Darley’s bystander studies. In J. R. Smith & S. A. Haslam (Eds.), Social Psychology: Revisiting the Classic Studies (pp. 192-208). London, UK: SAGE Publications Ltd.

HUMANE Roadmapping Process

In the course of the HUMANE project, we examine a sample of social domains where human-machine interaction is expected to be significant in the future. We study the types of interaction, the roles of humans and machines, and the challenges that must be addressed to ensure that machines are integrated successfully, in a way that is beneficial for society. We then construct a roadmap for each domain that can guide future policies.

We have formalised the HUMANE roadmapping process, which is used to construct a roadmap for each social domain where we want to improve Human-Machine Network (HMN) design.

The HUMANE roadmapping process consists of the following steps:

Figure: Illustration of the HUMANE roadmap process

  1. Initiation: In this first step, we describe what we want to achieve and the need to design or improve the HMN in the social domain of interest. The needs will be further explained by listing the shortcomings of current HMN designs and by discussing the emerging and future trends in the domain of interest.
  2. Background knowledge: Here we describe the current technological situation, policy background and regulatory context. This background knowledge will help to identify the gaps between the current state of affairs and where we want to arrive, and will provide a reference framework for future work and the proposed policy actions.
  3. Goals and expected outcomes: This step is carried out in cooperation with stakeholders, with a view to describing the goals that are feasible to implement in a relatively short timeframe, and the actual outputs of the roadmap. An output could be a new regulation or code of practice, a novel service, a report on case studies, etc. Together with the current situation described in step 2, this is used to perform a gap analysis between the current HMNs and the HMNs we want to have in the future.
  4. Required actions to achieve the goals: This is also a collaborative step with stakeholders. The objective is to describe the stakeholders’ roles, including the actions that are necessary to achieve the goals set in the previous step. Emphasis is on highlighting the complementary roles of different stakeholders in achieving the goals, and the synergistic effects of their actions.
  5. Design strategies: This is a crucial step in the HUMANE roadmap process, as it helps to identify the necessary design strategies based on the characteristics of humans and machines in the social domain of interest, applying the HUMANE typology, method and tools to find appropriate design patterns.
  6. Implementation priorities and timeline: In the last step of the roadmap construction, implementation priorities for the different tasks will be set, based on the logical sequence of actions, but also on the importance of each action. The current level of implementation, as well as the complexity of the tasks, will be taken into account to set a timeline for implementation. In addition, the output from the gap analysis will help to estimate the investment of time, money and human resources required to achieve the desired outcomes.
  7. Roadmap dissemination: The HUMANE roadmaps can be disseminated to policy makers, ICT designers and other stakeholders, to serve as a guide for future policies and for possible implementation. This process will be used by the HUMANE project partners to construct roadmaps in the domains of the Sharing Economy, eHealth, and Citizen Participation. It can also be taken as a guideline by policy designers developing HMN roadmaps for other domains.
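The seven steps above run as an ordered pipeline: each step produces outputs that feed the next, and work proceeds sequentially. A minimal sketch of that structure follows; the `Step` class, output labels and `next_step` helper are hypothetical illustrations, not HUMANE project tooling.

```python
# Illustrative sketch of the seven HUMANE roadmap steps as an ordered
# pipeline. Step names follow the post; the class, output labels and
# helper are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    outputs: list        # what the step is expected to deliver
    done: bool = False   # whether the step has been completed

ROADMAP = [
    Step("Initiation", ["needs statement", "emerging and future trends"]),
    Step("Background knowledge", ["technology, policy and regulatory context"]),
    Step("Goals and expected outcomes", ["feasible goals", "gap analysis"]),
    Step("Required actions", ["stakeholder roles and actions"]),
    Step("Design strategies", ["design patterns via the HUMANE typology"]),
    Step("Implementation priorities and timeline", ["prioritised timeline"]),
    Step("Roadmap dissemination", ["roadmap for policy makers and designers"]),
]

def next_step(roadmap):
    """Steps run in sequence: return the first step not yet completed."""
    return next((s for s in roadmap if not s.done), None)

ROADMAP[0].done = True
print(next_step(ROADMAP).name)  # the pipeline advances to "Background knowledge"
```

The sequential structure matters because, as step 3 notes, the gap analysis depends on the background knowledge gathered in step 2, and the timeline in step 6 depends in turn on that gap analysis.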

Are users always worried about their data?

One consequence of the Wanless report is a need for more distributed healthcare, so that an ageing and expanding patient population can be supported at home and in the community. Further, not everyone in rural communities is able to travel any distance for specialist care. This really is the essence of telemedicine or eHealth: the idea is that ICT can mediate human-to-human (patient-clinician) interaction such that patients can be supported remotely, not least to supplement face-to-face consultation.


A typical eHealth Human-Machine Network

A recent pilot study in this area, TRIFoRM, engaged with a small opportunity sample of self-selected patients suffering from a chronic painful condition. Patient-clinician interactions, it was envisaged, would be supplemented by an app that the patients would use to gather daily monitoring data as well as regular self-reports. These would be collated at a central server for clinical staff (consultant, specialist nurse, etc.) to query and review. The idea is that the app, running on the patient’s own personal device, would supplement their care regime: clinical staff would be able to dispense with asking routine questions during precious consultation time about how any exercise regime was going, since this information would be available in advance; in so doing, clinicians could devote more time to the patient’s affective state and the holistic effects of the care regime.

Step back for a moment, and consider the data in such a network. It is not just personal data (contact details, for instance) but sensitive personal data (see also GDPR, Article 9): especially for a chronic condition, the worst case is that the data could be used prejudicially to increase insurance premiums or prevent access to certain benefits. Would this affect user trust in the network? As far as the legislative context is concerned, would users be more concerned about their personal data given its defined sensitivity? On one level, perhaps a reflection of the nature of the patients’ condition, technology is a great benefit and takes some of the strain from users. As one participant in a semi-structured interview remarked:

“…if you’re feeling really tired it’s really easy to get brain fog and do something really stupid” [1],

which, of course, is a practical illustration of what Norman sees as the main cognitive-support role for machines [2]. In technology acceptance terms, technology is “useful” and so more likely to be adopted, which for Thatcher and his colleagues translates to “helpfulness” and “functionality” in their post-adoption trust modelling [3].

Within this context, the HUMANE profile indicates low human and machine agency: both actor types are restricted in what they can do. The profile is high, though, in terms of tie strength and human-to-machine interaction: human actors rely on the machines to achieve their goals, and rely on each other for the overall efficacy of the care regime. Perhaps not surprisingly, given the limited scope for creativity and emergent behaviours, network organisation tends to be high too: there is a top-down structure which limits what can be done. Are these the factors which contribute to a more trusting attitude to engaging with the network?

Consider the high tie strength in particular. There are at least two main features: support for the community of sufferers, as well as for the individual’s specific care regime.

“I’m happy to help. It might help me as well but just being part of this community, it’s like let’s all help each other is what I say.”[1]

is one strand, which refers to an emergent community of fellow sufferers, both now and in the future, who might benefit from the collection and aggregation of such data. The social tie strength in the network, then, is not simply between patient and clinician, but extends to other patients who may not be ‘known’ personally yet share a common bond with the data subject. If both may benefit, then sensitive personal data can be released. That’s not all, though:

“So at those [consultations], it’s not a case of me just reporting and [them] listening to my report let alone what electronic reports might be coming, but it’s the communication. It’s the two-way communication. It’s not just [them] being fed stuff and … going: ‘I don’t need to see you because I’ve got everything here. You can sit there being quiet’ or something” [1]

Allowing sensitive personal data to be shared in the HMN is about enhancing the tie strength existing between clinician and patient; it’s about enriching the communicative context within a specific dyadic interconnection. In association with strong interactions of this sort, then, data release and data sharing are viewed quite differently.

In a previous post, GDPR and the right to be forgotten, we saw that weak or latent tie strength may involve serendipitous data access, possibly enhanced by the necessity of physical replication at the carrier level, which seems to undermine data subjects’ control over their data. Here, by contrast, increasing tie strength associated with a specific and very personal goal (immediate care needs as well as long-term community benefit) seems to increase data subjects’ willingness to share even sensitive personal data. In future work, we should probably look further at aspects of trust and the valence of human-to-human interaction as they affect the management of privacy and trust.

[1] These quotations come directly from the transcripts of interviews carried out as part of TRIFoRM

[2] Norman, D. A. (2010). Living with Complexity. Cambridge, MA: MIT Press.

[3] Thatcher, J. B., McKnight, D., Baker, E. W., Arsal, R. E., & Roberts, N. H. (2011). The role of trust in postadoption IT exploration: An empirical examination of knowledge management systems. IEEE Transactions on Engineering Management, 58(1), 56-70. doi:10.1109/TEM.2009.2028320

GDPR: the right to be forgotten

What does HUMANE profiling tell us about data protection? 

Back in 1995, the European Parliament issued the Directive for data protection, which by 1998 had passed into national law in the UK. Now, some two decades on and after much consultation, including with the Article 29 Working Party, in April 2016 the Parliament issued a corresponding regulation – the General Data Protection Regulation (GDPR) – which will automatically pass into law across Member States by May 2018, and will colour the corresponding legislation in non-EU countries wishing to collaborate with the Union. The GDPR harmonises regional and national laws: under the GDPR, for example, registration will be required with a single Data Protection Authority (DPA) in any one Member State; a Data Processor now shares liability with the Controller, and may be prosecuted if demonstrably at fault; and, of course, data subjects now have the right to be forgotten (Article 17 [1]), to a certain extent at least. This is reassuring. And it means that we can all be confident that our personal data are safe. OK, but what happens when those data are released into a human-machine network (HMN)?

Let’s look at a social network, a typical example, of course, being Facebook (see the profile). The network is characterised by its size (“the network has a large or massive number of users…”) and geographical reach (“the network has wide geographical reach, spanning multiple jurisdictions and cultural groups”); human agency is high (“the users to a great degree define their own tasks towards goals they set themselves”); and machine agency (“the behaviour of the machine components in the network to some degree is autonomous and intelligent”) as well as social ties are intermediate (“the users of the network typically have only latent or weak ties, characterized by low levels of intimacy and short duration”). What does the combination of autonomous machine nodes and high human agency within a highly distributed HMN mean for the GDPR and the right to erasure?

Tie strength is weak or latent, and so there may be no notion of loyalty or mutual support amongst human actors in the network, although this is not always the case. In a recent focus group discussion with software engineers in training (to be reported in D3.3), one participant remarked about their use of social media:

“…there are a lot of people that if I was in the same room as them I’d talk to them but messaging them on Facebook would be weird because we’re not that close. That would be strange.”

The assumption here is that Facebook is somehow reserved for more private and intimate interactions, which of course the privacy settings might allow if users are prepared to spend time understanding and maintaining them. Alternatively, it may be that the profile dimension represents only an aggregate of all connections between different nodes, which may have different roles.

In the context of data privacy, though, this is important. Can users really assume privacy and, what is more, that they know where their data go and who sees them? Machine agency has been described as “autonomous and intelligent”. One practical outcome, not peculiar to social media per se, is the common last mile problem (see this for example) in communication networks: the final and often non-optimal link between the backbone network and a retail or private user. One component of a solution where speed is important may be, for instance, to replicate content to a local server. On top of that, though, for networks with “wide geographical reach, spanning multiple jurisdictions and cultural groups”, content may well be replicated across boundaries, even into different jurisdictions with different laws about personal data. In such an environment, then, demanding that my data be removed, as the GDPR seems to promise, is almost impossible beyond the safe haven of the EU and its immediate collaborators. Add to this the issue of multiple data sources on an individual being mined and cross-correlated, and you have a situation where even the modest requirement for pseudonymisation which the GDPR portrays cannot be guaranteed: with lots of data out there, jigsaw attacks become a real possibility.
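The mechanics of such a jigsaw (or linkage) attack are simple: two datasets that are each individually "safe" can be joined on shared quasi-identifiers. The toy sketch below illustrates the idea; all records, field names and the `jigsaw` helper are invented for illustration.

```python
# Illustrative "jigsaw" (linkage) attack: two independently pseudonymised
# datasets are joined on quasi-identifiers, re-linking records that
# neither dataset alone identifies. All data here is invented.

health_records = [  # pseudonymised: no name, but quasi-identifiers remain
    {"pseudonym": "p-17", "postcode": "OX1 3QD", "birth_year": 1975,
     "condition": "chronic pain"},
    {"pseudonym": "p-42", "postcode": "SO17 1BJ", "birth_year": 1982,
     "condition": "asthma"},
]

social_profiles = [  # public posts leaking the same quasi-identifiers
    {"name": "A. Example", "postcode": "OX1 3QD", "birth_year": 1975},
]

def jigsaw(pseudonymised, public):
    """Join the two datasets on shared quasi-identifiers."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in public}
    return [
        {**r, "name": index[(r["postcode"], r["birth_year"])]}
        for r in pseudonymised
        if (r["postcode"], r["birth_year"]) in index
    ]

# One pseudonymised health record is re-identified despite the
# absence of any name in the original dataset.
print(jigsaw(health_records, social_profiles))
```

The point is that pseudonymisation only removes direct identifiers; as long as quasi-identifiers survive and matching public data exists, cross-correlation of the kind described above remains feasible.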

The HUMANE profile at least makes it possible to begin to understand the practical implications of reliance on legislation as far as data protection, and specifically the right to be forgotten, are concerned. As one of our focus group participants pointed out when viewing the network diagram created:

“You rarely think about [where the data will go] when you’re like randomly scrolling through things and clicking stuff and things”

This is something that perhaps we as network users should take into account. And in future work, we need to consider how the profile dimensions might highlight implications of the HMN configuration.

[1] Article 17 Right to erasure (right to be forgotten) is not the blanket mandate which some may assume, but provides some promise that data can be withdrawn if the data subject so wishes.