Over the course of the HUMANE project, we have developed a typology based on individual dimensions across four analytical layers: networks, behaviours, actors and interactions. Further, we have identified a number of key challenges for human-machine networks (HMNs) – including motivation, collaboration, innovation and issues relating to trust – and explored design options to support them, as well as developing roadmaps across different domains [2,3]. But as we move towards the conclusion of the project, we need to take stock of what the dimensions of the typology, or some of them at least, really tell us about HMNs and what they might mean for the future of such networks.
At the recent workshop, some of our research outcomes were helpfully bracketed by our keynote speakers, who highlighted two major themes for HMNs. In the morning, David De Roure reminded us that collaboration between humans and technology has a long and distinguished history: well before the launch of Wikipedia at the beginning of the noughties, the co-creation of knowledge and content goes back a very long way, revealed in part by prosopographic investigation of personal narratives. Developing a contemporary metaphor, the concept of SOCIAM GO! underlines the fact that human actors in the network will adapt as they move offline activities online to exploit the greater reach and efficiencies enabled by increasing machine capabilities in the virtual world. Those engaging with technologies over the ages have therefore developed strategies, together or independently, to achieve their own goals. More recently, emergent behaviours have begun to signal that there is more to come.
Exploring the logical possibilities of Moore’s law as well as increasing machine agency and the power of automation, Gina Neff set out a number of thought-provoking propositions. The interplay of human and machine agency may be usefully summarised as symbiotic agency: setting aside what might go wrong and the as-yet unresolved regulation of bots in political life, human-machine interaction is now about collaboratively exploring possibilities constrained only by our imagination. One consequence of this, though, is that instead of looking at the legal demands of privacy regulation with its misdirected focus on data subject empowerment, we need to appreciate that it is not so much personal data which may need protection but rather the derived data, the notional offspring of a human-machine coupling (see also [9,10]). Agency is therefore coming of age, and is no longer concerned solely with the fine-grained distinction of intentionality between human and machine actors.
Elsewhere, we have begun to explore the potential afforded by increasing machine agency, as well as the relationship between agency on the one hand and regulation and self-efficacy on the other. But other dimensions of the HUMANE typology now deserve additional attention. Interactions between human actors (Social Tie Strength), as well as Human-to-Machine Interaction, may well provide the key to taking our understanding of the dynamics of HMNs to the next level. Social psychology has already provided some insight into the migration of human relationships to the virtual world [11,12], the potential for robot exploitation in healthcare, interventions for developmental disorders, and trust as an organising principle leading to trust transfer from human interactions to e-commerce. However, if the co-creation of personal data is really the result of the intimate union of human and technology, then this will have both societal and economic implications. Value and rights management are not only about the service provider controlling access to their services and guarding against the reuse of such data, perhaps for customised marketing purposes. Instead, with advanced machine learning techniques unleashing unexpected complexities via data analytics, the advent of blockchain provides a basis for innovative economic models to ensure that both human participants and technology providers can cooperate on an equal footing and, most importantly, assume joint and equal responsibility for the accuracy and curation of those data.
Today’s HMNs already exploit workflow interdependence and network organisation to ensure increasing geographic reach and support ever greater network size. The HUMANE profile identifies such networks, while related work shows both the cultural diversity of common network interaction and the dissolution of previous spatio-temporal barriers to network efficiency. Tomorrow’s HMNs will need to understand the agency dimensions, and how they affect each other, to facilitate network complexity and sophistication. Agency opens up the possibility of emergent network-level behaviours. Future HMNs, though, will also need to explore and respond to the interaction dimensions of the network to ensure the selection of appropriate economic models and the fair use of network outcomes.
[1] A. Følstad, V. Engen, T. Yasseri, R. G. Gavilanes, M. Tsvetkova, E. Jaho, J. B. Pickering, and A. Pultier, “D2.2 Typology and Method v2,” 2016.
[2] E. Jaho, E. T. Meyer, B. Pickering, P. Walland, T. C. Lech, A. Følstad, and N. Sarris, “D4.1: Report on implications of future thinking,” 2016.
[3] E. Jaho, M. Klitsi, A. Følstad, T. C. Lech, P. Walland, J. B. Pickering, and E. T. Meyer, “D4.2 Roadmap of future human-machine networks,” 2017.
[4] V. Engen, J. B. Pickering, and P. Walland, “Machine Agency in Human-Machine Networks; Impacts and Trust Implications,” in HCI International, 2016.
[5] A. Følstad, V. Engen, I. M. Haugstveit, and J. B. Pickering, “Automation in Human-Machine Networks: How Increasing Machine Agency Affects Human Agency,” in International Conference on Man-Machine Interactions [submitted], 2017.
[6] J. B. Pickering, V. Engen, and P. Walland, “The Interplay Between Human and Machine Agency,” in HCI International, 2017.
[7] H. Ford, E. Dubois, and C. Puschmann, “Automation, Algorithms, and Politics | Keeping Ottawa Honest—One Tweet at a Time? Politicians, Journalists, Wikipedians and Their Twitter Bots,” Int. J. Communication, vol. 10, 2016.
[8] C. L. Miltgen and H. J. Smith, “Exploring information privacy regulation, risks, trust, and behavior,” Inf. Manag., vol. 52, no. 6, pp. 741–759, 2015.
[9] M. Hildebrandt, Smart Technologies and the End of Law: Novel Entanglements of Law and Technology. Cheltenham, UK: Edward Elgar Publishing Ltd, 2015.
[10] M. Hildebrandt, “Promiscuous Data-Sharing in times of Data-driven Animism,” Ethics Symposium. Taylor Wessing, London, 2016.
[11] S. Henderson and M. Gilding, “‘I’ve Never Clicked this Much with Anyone in My Life’: Trust and Hyperpersonal Communication in Online Friendships,” New Media Soc., vol. 6, no. 4, pp. 487–506, Aug. 2004.
[12] N. Ellison, R. Heino, and J. Gibbs, “Managing Impressions Online: Self-Presentation Processes in the Online Dating Environment,” J. Comput. Commun., vol. 11, no. 2, pp. 415–441, Jan. 2006.
[13] B. McEvily, V. Perrone, and A. Zaheer, “Trust as an organizing principle,” Organ. Sci., vol. 14, no. 1, pp. 91–103, 2003.
[14] K. J. Stewart, “Trust transfer on the world wide web,” Organ. Sci., vol. 14, no. 1, pp. 5–17, 2003.
[15] B. Maurer, “Principles of descent and alliance for big data,” in Data, Now Bigger and Better!, G. Bell, T. Boellstorff, M. Gregg, B. Maurer, and N. Seaver, Eds. Prickly Paradigm Press, 2015, pp. 67–86.
[16] M. Pilkington, “Blockchain Technology: Principles and Applications,” in Research Handbook on Digital Transformations, F. X. Olleros and M. Zhegu, Eds. 2015.
[17] M. Tsvetkova, R. García-Gavilanes, and T. Yasseri, “Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration,” Sci. Rep., vol. 6, 2016.
[18] T. Yasseri, R. Sumi, and J. Kertész, “Circadian Patterns of Wikipedia Editorial Activity: A Demographic Analysis,” PLoS One, vol. 7, no. 1, p. e30091, Jan. 2012.
What do networks of humans and machines actually do?
We expend a lot of time and energy, especially in a project like HUMANE, trying to understand the ‘what’ and the ‘how’ of human-machine networks, but it is a workshop such as the recent, excellent discussions in Oxford that brings to the fore the question of ‘why’.
We are apt to think of the machines in the network as the important feature – after all, the humans have been there all the time, it is the machines that are the innovation. Aren’t they? Maybe not. As Eric Meyer reminded us at the start of his talk, people have been building machines ever since they climbed out of the trees and started banging rocks together. We may not think of a piece of bent stick as a machine, but the use of a tool to dig furrows and plant seeds heralded a major social shift from nomadic to agricultural lifestyles.
Dave De Roure furnished us with more examples, citing the printing press and its social impact in 15th-century Europe, leading to the libraries and social records we have today. So does this make the printing press a social machine, in line with the definition coming from Tim Berners-Lee et al., in which social machines are abstract entities living on the web that do the ‘heavy lifting’ of administration, leaving the people to be creative? Or is the concept more abstract still? Whereas the ploughshare enabled the people using it to be more productive by making a task more manageable, it also permitted a social change as a consequence of the introduction of a different way of life that was not possible before the machine arrived.
Similarly, but perhaps not so obviously, the printing press caused social change. People could write and distribute their ideas before the printing press arrived, but if they wanted to distribute their ideas widely they were reliant on monastic scribes to create copies. With the arrival of the printing press it became very easy to replicate and distribute ideas in print without involving the monks. This is very much in line with the new forms of social process that Berners-Lee also associates with social machines, and has an obvious corollary in the social changes brought about by the rapid expansion of social media at the start of this century.
Of course, we have to ask whether all such changes are beneficial, and who defines what ‘beneficial’ is. Each new technology-led innovation ushers in a Utopian ideal in which the human beneficiaries are enabled to achieve idealised goals – or at least that is what the technologists behind the innovation would have them believe. What we see in reality, whilst not necessarily dystopian, is nonetheless very far from this idealised world. There is, and always will be, a huge difference between the way humans behave and the way machines behave. No matter how complex the machine, and how closely it appears to mimic human thought, a machine will never be human; it will always be a machine.
The dystopic view of our developing relationship with machines comes not from machines developing some kind of emergent consciousness and taking over the world, but from the behaviour of the people who exploit them or rely on them. Machines are a product of their design and programming – they have limitations. People, on the other hand, are driven by their very nature to explore outside the boundaries of experience. They don’t ask ‘what does this machine do’, they ask ‘what can I do with this machine’.
Vegard Engen introduced the concept of ‘intentionality’ as a distinction between the ‘agency’ exhibited by machines and the ‘agency’ exhibited by the humans in a network. Humans will intentionally set out to get the machine to do what they want it to do, whereas the machine will only do those things that are within its design parameters.
In the descriptive model presented by Brian Pickering, ‘Human Behaviour’ takes centre stage, usurping the earlier focus of such models on the technical capability within networks. This is an important shift of emphasis in the study and understanding of human-machine networks, including as it does the social sciences and humanities as an intrinsic part of network functionality.
In her review of the roadmaps being developed by the HUMANE project, Eva Jaho talked about policy and regulation as well as technological development – reflecting the need to manage the behaviour and activity of the people in a network whilst recognising that evolving technology allows for emergent beneficial behaviour that could be suppressed by over-enthusiastic regulators. We should remember that machines operate on the principle of prescription – they do what they are designed to do – whilst people operate on the principle of proscription – they will do anything they can get away with unless they are prevented from doing it.
Dave De Roure reminded us that people are subversive – they will be inventive to get the machines to do what they want to do, not what the designers expected the machines to do. The best networks are the ones that celebrate and encourage the inventive ability of humans – Grant Miller provided the example of Zooniverse and its ability to satisfy the higher human ideals of curiosity, satisfaction and achievement whilst eschewing any financial reward.
So, I will return to my original question of what human machine networks, or social machines, actually do. Gina Neff talked about symbiotic agency, reflecting the developing understanding of networks coming out of HUMANE.
Humans and machines work together to achieve a human-defined goal. Different humans within the network may have different goals, leading to conflicts and battles such as those described by Taha Yasseri in his studies of Wikipedia, but this is a result of human nature, not machine intervention. Human machine networks and social machines allow people to do what people do best – communicate, explore, discover, invent, manipulate, subvert and revolutionise.
People have a symbiotic relationship with the machines they invent – but they always have done. Where machines come to dominate or control lives it is only because we have allowed them to do so. We lay ourselves open to Perrow’s ‘Normal Accidents’ but, as Perrow describes, they do not arise because of the technology but because of human reliance and organisational failure. Our understanding and appreciation of the value and benefits of human-machine networks must be based on their social context and on the resultant behaviour of the people forming part of the network; we can no longer study networks as purely technological artefacts.
There is no other ghost in the machine than the people who live within it, who seek to achieve their goals and ambitions, their wants and needs in symbiosis with machine capabilities. And this is what human machine networks do – they give us the power to be more human and to do better what we, as humans, have always strived to achieve.
Three solutions: increased automation, social ties, and extended use of common systems
Crisis management systems (CMS) are human-machine networks consisting of a diversity of actors working together towards the common goal of saving human lives and values. Comprising organizations and people with different capabilities, CMS are important in coping with disastrous events. These systems are meant to support humans in coordinating the handling of an event, and to provide information and decision support.
Collaboration is a core requirement for efficient crisis management. The HUMANE typology and framework are helpful in understanding the implications the network’s characteristics have for collaboration. They can provide valuable insight into how to strengthen the design of CMS to better support collaboration and efficient crisis management. The following are three examples.
Increasing machine agency through a higher degree of automation
CMS are often intended for use by several crisis response organizations, and they are typically designed with a high level of human agency and a low level of machine agency. The human actors of the network are given great freedom to configure the system to fit their organization’s needs. The background for this design rationale is that the various crisis management organizations often have different requirements, so the system needs to be flexible enough to fit the needs of all its user organizations.
It can, however, be argued that applying a higher degree of automation to certain parts of CMS could streamline human-machine networks for crisis management and make them more efficient. By assigning appropriate tasks to the system, crisis responders can be given greater leeway to perform tactical or strategic activities, such as planning the handling of an event, making decisions, or other activities that are based on human experience and knowledge and require handling by human actors.
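To make this division of labour concrete, it can be sketched as a simple task-routing rule. This is purely a hypothetical illustration: the task names and the routing logic below are our own assumptions, not drawn from any actual CMS.

```python
# Hypothetical sketch: routing crisis-management tasks between machine
# automation and human responders. Task names and the routing rule are
# illustrative assumptions, not part of a real system.

ROUTINE_TASKS = {"log_incident", "notify_units", "aggregate_sensor_data"}

def route_task(task):
    """Route routine, well-specified tasks to the machine; keep
    tactical and strategic work with human responders."""
    return "machine" if task in ROUTINE_TASKS else "human"

incoming = ["log_incident", "plan_response", "notify_units", "decide_evacuation"]
assignments = {task: route_task(task) for task in incoming}
```

In this sketch, automating the routine tasks leaves responders free to concentrate on planning and decision making, the activities that genuinely require human experience.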
Strengthening the social ties of dispersed human resources
The strength of social ties in crisis management networks varies. The challenge is especially apparent during the handling of a crisis that requires collaboration between several actors and organizations, where social ties are often weak. Knowing the role and authority of one another is an important part of crisis management: it is often assumed that a person in a certain work position will handle his or her responsibilities sufficiently well. However, weak social ties can sometimes hinder efficient collaboration between people or organizations, as the essential knowledge of, and trust in, each other is missing.
A well-designed crisis management system has the potential to strengthen social ties. By providing a common platform for collaboration, providing information about participating actors and organizations, and serving as a means for information sharing, CMS can strengthen collaboration between crisis responders. In addition, common meeting arenas and training sessions where people across crisis management organizations are trained together, preferably with a common crisis management system, are of high importance for strengthening social ties.
Extending the use of a common crisis management system
A variety of CMS exist. An issue in today’s crisis management networks, however, is that different crisis management organizations often use different systems that do not support sharing of information, communication, or coordination across systems. This clearly limits the efficiency of collaboration during the management of crisis events. Furthermore, the lack of use among some organizations affects the rest of the network’s motivation for using the system, which might in turn affect the use itself, as users might not see the value of the system when important collaboration partners are absent.
To function as a common platform, a CMS should ideally be used by all relevant crisis management actors and support collaboration through joint coordination, communication, and sharing of information. Such a system should also allow integration with other systems.
The introduction of new technology causes concern for the future of work. What is the role of humans in a working life in which an increasing number of tasks are conducted better and more efficiently by machines than by humans?
In a much-cited paper on the automation of work through computerization, Frey and Osborne take as their starting point the premise that new technology makes old jobs redundant faster than new jobs are created. They then claim that advances in machine learning and mobile robotics in the 21st century may render not only manual routine work a victim of automation, but also work previously thought of as non-routine, such as car driving, medical diagnostics, financial trading, or educational tutoring. Think only of self-driving cars, entities that are able to perform tasks that only a few years back were considered beyond the computational capacities of machines. Tasks that represent engineering bottlenecks for computerization, such as those associated with perception and manipulation in highly diverse environments, creativity, or social intelligence, are considered at low risk of automation in the foreseeable future. Hence, workers in jobs that are at risk of automation may need to acquire skills that are not easily automated.
While there is no doubt that automation will replace human workers, the picture may not be as bleak as sometimes suggested in popular reports on the subject. Autor, in an essay on workplace automation, argues that “journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor”. One example is the technological improvement in the health sector, which has led to increasingly large shares of income being spent on health. Another is the value creation in the computer industry itself, where automating machinery has spawned myriads of previously non-existing jobs.
In HUMANE, we have used the typology dimensions human agency and machine agency as a framework for discussing the role of automation in complex systems. While Frey and Osborne, as well as Autor, discuss the effect of automation on work at a societal level, we discuss how automation may affect the work of humans within specific human-machine networks. Through a series of case studies on systems for decision support, crisis management, and evacuation support, we investigate how increasing the range of tasks allocated to computerized machines in such settings may actually strengthen human operators’ range of tasks, opportunities for influence, and opportunities for creativity. In these domains, all characterized by highly procedural work tasks and the need to adhere to regulation and policy, allowing machines to take over procedural decision making means that human operators may instead spend their time and resources at the tactical and strategic levels of decision making. Here, automation does not remove the need for human operators, but redefines their purpose, allowing for novel ways of value creation.
We often seem to think of automation in terms similar to those of self-driving cars, where the role of the human driver simply evaporates. The reality, however, may often be that automation enables new forms of value creation in which the combined capabilities of humans and machines provide better outcomes, in a more efficient manner, than was previously possible. By understanding how to design the networked interaction between humans and machines, as we aim to do in the HUMANE project, such an optimistic take on the social challenge of automation becomes even more feasible.
The HUMANE project is building roadmaps that can help guide future policies in specific social domains such as the Sharing Economy, eHealth, and Citizens’ Participation. The HUMANE roadmaps act as a reference on which a collaborative effort for a complex task, such as that needed for finding and implementing efficient policies for Human-Machine Networks (HMNs), can be based. They help all the involved parties recognize the goals and the steps needed to achieve them, and better understand their roles and interrelations.
Through this survey we aim to collect information, which we will process and use to develop the HUMANE roadmaps.
The survey shouldn’t take you more than 10-15 minutes to complete. We won’t collect any personal information about you: it is entirely anonymous. Your responses will be used for scientific research purposes only as part of the HUMANE project.
Thank you on behalf of the HUMANE project and we are looking forward to receiving your valuable feedback!
In the course of the HUMANE project, we examine a sample of social domains in which human-machine interaction is expected to be significant in the future. We study the type of interactions, the roles of humans and machines, and the challenges that must be addressed to ensure the successful integration of machines in a way that is beneficial for society. We then create a roadmap for each domain that can guide future policies.
We have formalized the HUMANE roadmapping process to be used to construct the roadmap for each social domain where we want to improve Human-Machine Network (HMN) design.
The HUMANE roadmapping process consists of the following steps:
Figure: Illustration of the HUMANE roadmap process
Are users always worried about their data?
One consequence of the Wanless report is a need for more distributed healthcare. This means that an ageing and expanding patient population can be supported at home and in the community. Further, not everyone in rural communities is able to travel long distances for specialist care. This really is the essence of telemedicine, or eHealth: the idea is that ICT can mediate human-to-human (patient-clinician) interaction such that patients can be supported remotely, not least to supplement face-to-face consultation.
A recent pilot study in this area, TRIFoRM, engaged with a small opportunity sample of self-selected patients suffering from a chronic painful condition. Patient-clinician interactions, it was envisaged, would be supplemented by an app that the patients would use to gather daily monitoring data as well as regular self-reports. These would be collated at a central server for clinical staff (consultant, specialist nurse, etc.) to query and review. The idea is that the app, running on the patient’s own personal device, would supplement their care regime: clinical staff would be able to dispense with asking routine questions about how any exercise regime was going during precious consultation time, since this information would be available in advance; in so doing, clinicians could devote more time to the patient’s affective state and the holistic effects of the care regime.
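The kind of record flowing from app to central server might look like the following sketch. The field names and scales are illustrative assumptions on our part, not the actual TRIFoRM data model.

```python
# Illustrative sketch (not the TRIFoRM schema) of a daily self-report
# record a patient app might send to the central server.
from dataclasses import dataclass
from datetime import date

@dataclass
class DailyReport:
    patient_id: str        # pseudonymous identifier, not a name
    report_date: date
    pain_score: int        # hypothetical 0-10 self-rated pain
    exercise_minutes: int  # adherence to the prescribed exercise regime
    notes: str = ""        # free-text comments on how the day went

# Server-side: collate reports so clinicians can review them in advance
reports = [
    DailyReport("p-001", date(2017, 5, 1), 4, 20),
    DailyReport("p-001", date(2017, 5, 2), 6, 10, "very tired today"),
]
avg_pain = sum(r.pain_score for r in reports) / len(reports)
```

With records of this kind collated in advance, a clinician can see at a glance how the exercise regime has been going before the consultation even starts.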
Step back for a moment, and consider the data in such a network. It is not just personal data (contact details, for instance) but sensitive personal data (see GDPR, Article 9): especially for a chronic condition, the worst case is that the data could be used prejudicially to increase insurance premiums or prevent access to certain benefits. Would this affect user trust in the network? As far as the legislative context is concerned, would users be more concerned about their personal data given its defined sensitivity? On one level, perhaps a reflection of the nature of the patients’ condition, technology is a great benefit and takes some of the strain from users. As one participant in a semi-structured interview remarked:
“…if you’re feeling really tired it’s really easy to get brain fog and do something really stupid”,
which, of course, is a practical illustration of what Norman sees as the main cognitive-support role for machines. In technology acceptance terms, technology is “useful” and so more likely to be adopted, which for Thatcher and his colleagues translates to “helpfulness” and “functionality” in their post-adoption trust modelling.
Within this context, the HUMANE profile indicates low human and machine agency: both actor types are restricted in what they can do. The profile is high, however, in terms of tie strength and human-to-machine interaction: human actors rely on the machines to achieve their goals, and rely on each other for the overall efficacy of the care regime. Perhaps not surprisingly, given the limited scope for creativity and emergent behaviours, network organisation tends to be high too: there is a top-down structure which limits what can be done. Are these the factors which contribute to a more trusting attitude towards engaging with the network?
Consider the high tie strength in particular. It turns out there are at least two main features: support for the community of sufferers as well as for the individual’s specific care regime.
“I’m happy to help. It might help me as well but just being part of this community, it’s like let’s all help each other is what I say.”
is one strand, which refers to an emergent community of fellow-sufferers, both now and in the future, who might benefit from the collection and aggregation of such data. The social tie strength in the network, then, is not simply between patient and clinician, but extends to other patients who may not be ‘known’ personally, yet share a common bond with the data subject. If both may benefit, then sensitive personal data can be released. That’s not all, though:
“So at those [consultations], it’s not a case of me just reporting and [them] listening to my report let alone what electronic reports might be coming, but it’s the communication. It’s the two-way communication. It’s not just [them] being fed stuff and … going: ‘I don’t need to see you because I’ve got everything here. You can sit there being quiet’ or something” 
Allowing sensitive personal data to be shared in the HMN is about enhancing the tie strength existing between clinician and patient; it’s about enriching the communicative context within a specific dyadic interconnection. In association with strong interactions of this sort, then, data release and data sharing are viewed quite differently.
In a previous post, GDPR and right to be forgotten, we saw that weak or latent tie strength, which may involve serendipitous data access (possibly enhanced by the necessity of physical replication at the carrier level), seems to undermine data subjects’ control over their data. Here, by contrast, increasing tie strength associated with a specific and very personal goal (immediate care needs as well as long-term community benefit) seems to increase data subjects’ willingness to share even sensitive personal data. In future, we should probably look further at aspects of trust and the valence of human-to-human interaction as they affect the management of privacy and trust.
These quotations come directly from the transcripts of interviews carried out as part of TRIFoRM.
 Norman, D. A. (2010). Living with Complexity. Cambridge, MA: MIT Press.
 Thatcher, J. B., McKnight, D., Baker, E. W., Arsal, R. E., & Roberts, N. H. (2011). The role of trust in postadoption it exploration: An empirical examination of knowledge management systems. Engineering Management, IEEE Transactions on, 58(1), 56-70. doi: 10.1109/TEM.2009.2028320
What does HUMANE profiling tell us about data protection?
Back in 1995, the European Parliament issued the Directive for data protection, which by 1998 had passed into national law in the UK. Now, after much consultation, including by the Article 29 Working Party, in April 2016 the Parliament issued a corresponding regulation – the General Data Protection Regulation (GDPR) – which will automatically pass into law across Member States by May 2018, and will colour the corresponding legislation in non-EU countries wishing to collaborate with the Union. The GDPR harmonises regional and national laws: under the GDPR, for example, registration will be required with a single Data Protection Authority (DPA) in any Member State; a Data Processor now shares liability with the Controller, and may be prosecuted if demonstrably at fault; and, of course, data subjects now have the right to be forgotten (Article 17), to a certain extent at least. This is reassuring. And it means that we can all be confident that our personal data are safe. OK, but what happens when those data are released into a human-machine network (HMN)?
Let’s look at a social network, a typical example, of course, being Facebook (see the profile). The network is characterised by its size (“the network has a large or massive number of users…”) and geographical reach (“the network has wide geographical reach, spanning multiple jurisdictions and cultural groups”); human agency is high (“the users to a great degree define their own tasks towards goals they set themselves”); and machine agency (“the behaviour of the machine components in the network to some degree is autonomous and intelligent”) as well as social ties are intermediate (“the users of the network typically have only latent or weak ties, characterized by low levels of intimacy and short duration”). What does the combination of autonomous machine nodes and high human agency in a highly distributed HMN mean for the GDPR and the right to erasure?
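The dimension values quoted above can be gathered into a simple profile structure. The encoding below is our own sketch, for illustration only, not the format used by the HUMANE profiling tool itself.

```python
# A sketch of the quoted profile for a large social network; the
# dimension names and level labels are our own illustrative encoding.
facebook_profile = {
    "size": "high",
    "geographical_reach": "high",
    "human_agency": "high",
    "machine_agency": "intermediate",
    "social_tie_strength": "intermediate",
}

def dimensions_at(profile, level):
    """Return, sorted, the dimensions of a profile sitting at a given level."""
    return sorted(d for d, v in profile.items() if v == level)
```

Reading the profile this way makes the combination at issue easy to see: the “high” dimensions describe a large, widely distributed network of self-directed users, while agency of the machine components and tie strength sit in the middle of the scale.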
Tie strength is weak or latent, and so there may be no notion of loyalty or mutual support amongst human actors in the network – although this is not always the case. In a recent focus group discussion with software engineers in training (to be reported in D3.3), one participant remarked about their use of social media:
“…there are a lot of people that if I was in the same room as them I’d talk to them but messaging them on Facebook would be weird because we’re not that close. That would be strange.”
The assumption here is that Facebook is somehow reserved for more private and intimate interactions, which of course the privacy settings might allow, if users are prepared to spend time understanding and maintaining them. Alternatively, it may be that the profile dimension represents only an aggregate of all connections between different nodes, which may have different roles.
In the context of data privacy, though, this is important. Can users really assume privacy and, what is more, that they know where their data go and who sees them? Machine agency has been described as “autonomous and intelligent”. One practical outcome, not peculiar to social media per se, is the common last mile problem in communication networks: the final and often non-optimal link between the backbone network and a retail or private user. One component of a solution where speed is important may be, for instance, to replicate content to a local server. On top of that, though, for networks with “wide geographical reach, spanning multiple jurisdictions and cultural groups”, content would almost certainly be replicated across boundaries, even into different jurisdictions with different laws about personal data. In such an environment, then, demanding that my data be removed, as the GDPR seems to promise, is almost impossible beyond the safe haven of the EU and its immediate collaborators. Add to this the issue of multiple data sources on an individual being mined and cross-correlated, and you have a situation where even the modest requirement for pseudonymisation which the GDPR portrays cannot be guaranteed: with lots of data out there, jigsaw attacks become a real possibility.
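A toy example may make the jigsaw risk concrete. All of the records below are invented; the point is only that two independent releases sharing quasi-identifiers (here, postcode and birth year) can be joined to re-identify a pseudonymised record.

```python
# Toy jigsaw attack: linking a pseudonymised release to a named one
# via shared quasi-identifiers. All records are invented.

health_release = [  # pseudonymised sensitive data
    {"pseudonym": "u1", "postcode": "SO17", "birth_year": 1980,
     "condition": "chronic pain"},
]
public_release = [  # e.g. a public profile dump from another source
    {"name": "Alice Example", "postcode": "SO17", "birth_year": 1980},
]

def jigsaw_link(pseudo_rows, named_rows):
    """Join the two releases on the quasi-identifiers (postcode, birth_year)."""
    named = {(n["postcode"], n["birth_year"]): n["name"] for n in named_rows}
    return [(named[(p["postcode"], p["birth_year"])], p["condition"])
            for p in pseudo_rows
            if (p["postcode"], p["birth_year"]) in named]
```

No single release here violates pseudonymisation on its own; it is the cross-correlation that reassembles the jigsaw, which is exactly why the GDPR’s guarantee is hard to uphold once data crosses jurisdictional boundaries.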
The HUMANE profile at least makes it possible to begin to understand the practical implications of reliance on legislation as far as data protection, and specifically the right to be forgotten, are concerned. As one of our focus group participants pointed out when viewing the network diagram created:
“You rarely think about [where the data will go] when you’re like randomly scrolling through things and clicking stuff and things”
This is something that perhaps we as network users should take into account. And in future work, we need to consider how the profile dimensions might highlight implications of the HMN configuration.
Article 17, the right to erasure (right to be forgotten), is not the blanket mandate which some may assume, but provides some promise that data can be withdrawn if the data subject so wishes.