In the HUMANE project, we are developing a typology of human-machine networks and a method for its use. The aim of the typology and method is to help system designers develop new and successful systems that improve both public and private services. To achieve this, HUMANE collaborates with related Research and Innovation (R&I) projects, which provide case studies for developing and validating the HUMANE typology and method. The case studies also feed into the development of a roadmap of future human-machine networks.
The Center for Service Innovation (CSI) is organised as a virtual centre encompassing four Norwegian research partners, two international research partners and 11 business partners. The main research and innovation themes of the centre are (1) Business model innovation; (2) Managing and organizing for service innovation and transformation; (3) Design for service; and (4) Service innovation economics. CSI aims to increase the quality, efficiency, and commercial success of innovation activities at leading Norwegian service providers and to enhance the innovation capabilities of its business and academic partners.
The HUMANE CSI case study addresses open innovation as an approach employed by several of the CSI company partners. Within the CSI context, such efforts are part of theme (3) Design for service; more specifically, open innovation serves as a method to enhance and harness opportunities for online co-creation of services between customers and service providers.
Online open innovation holds great potential. Yet to unleash that potential, the next generation of open innovation platforms will benefit from being designed in a way that takes into consideration how different types of networks are involved. In HUMANE we will study how platforms for open innovation are used by banking and postal services, to identify and model the particular human-machine networks involved and their consequences for co-creation, collaboration and innovation.
Conserve &amp; Consume consists of a consortium of three research partners and three business partners. The business partners develop services within the domain commonly termed the sharing economy or collaborative consumption; more precisely, within customer-to-customer (C2C) redistribution markets and within after-sale services prolonging the lifecycle of smartphones and tablets. Their aim is to transform sustainable and collaborative consumption services from niche phenomena into mass market services.
Redistribution markets will be addressed and studied within the HUMANE project. Such markets may seem simple at first glance, with benefits and use-value seemingly apparent for all actors: sellers get to de-clutter their belongings and even earn a bit of money in the process, while buyers get access to a less expensive and greener pool of products. Additionally, the combination of the Internet and the widespread adoption of smartphones makes these redistribution markets readily available and easily accessible for users as both potential sellers and buyers. Yet the process of matching sellers and buyers in these types of human-machine networks is complicated, with several potential show-stoppers (as seen in the figure below).
In HUMANE we are interested in examining C2C redistribution markets as human-machine networks. Initially, we expect associations between human actors in the network to be ephemeral and linked with transactions between sellers and buyers. Once a transaction is complete, the association ends.
We aim to understand how sellers and buyers experience the various tasks involved (e.g. listing items for sale, searching, filtering, contacting peers, agreeing on a transaction) in the context of these ephemeral social associations. The tasks involved, and the absent and latent ties between peers in the network, are expected to have consequences for aspects such as motivation, trust, experienced responsibility, and reputation (as sellers and buyers).
A holistic approach to C2C redistribution markets requires a parallel examination of all of these characteristics. This includes studying experiences of existing and potential machine elements that ease and facilitate the matching of sellers and buyers.
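The listing, searching and filtering tasks mentioned above can be sketched as a simple filter over listings. The data model and function names below are hypothetical illustrations, not part of any actual CSI or Conserve &amp; Consume platform:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    """A hypothetical second-hand listing in a C2C redistribution market."""
    item_id: int
    category: str
    price: float
    location: str

def match_listings(listings, query_category, max_price, location=None):
    """Return listings matching a buyer's category, budget, and optional location.

    This is the simplest possible matching step; real platforms add ranking,
    recommendation, and trust signals on top of such filters.
    """
    return [
        listing for listing in listings
        if listing.category == query_category
        and listing.price <= max_price
        and (location is None or listing.location == location)
    ]

listings = [
    Listing(1, "phone", 120.0, "Oslo"),
    Listing(2, "phone", 300.0, "Bergen"),
    Listing(3, "tablet", 90.0, "Oslo"),
]
hits = match_listings(listings, "phone", 200.0)
# hits contains only listing 1 (the Bergen phone exceeds the budget)
```

Even this toy sketch makes the show-stoppers visible: a mismatch in category labels, price expectations or location is enough to prevent a transaction, regardless of the goodwill of sellers and buyers.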
eVACUATE is about getting people out of dangerous situations. It is an EC-funded project with 19 partners, including 4 use case partners. Each use case partner represents a different location and situation for evacuation, all of which are potential case studies in HUMANE. These are: a football stadium, an airport, a cruise ship and a metro station.
Figure 3 illustrates a network diagram for eVACUATE, which is generic for all four eVACUATE use cases. The main participants of evacuation scenarios are: the operational staff, who are responsible for making sure people get out safely; the people to be evacuated; and the emergency services, who are quasi-autonomous but responsible for the safe evacuation of the site(s). There are two different types of machine "actors": (1) the site itself, often equipped with various sensors (especially in the cruise ship and airport cases) but characterised by equipment with no particular autonomy of execution; and (2) a Decision Support System (DSS), developed in the eVACUATE project to assist the operational staff, which, in contrast to (1), runs software components that act on and interpret the information coming from the non-autonomous equipment. We thus see a human-machine network in which the 'machine' actors comprise both active and passive elements.
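The division of labour between passive sensors (1) and the interpreting DSS (2) can be illustrated with a minimal sketch. The fusion rule, zone names and threshold below are hypothetical simplifications, not the actual eVACUATE design:

```python
def congested_zones(readings, capacity, threshold=0.8):
    """Flag zones whose estimated occupancy exceeds a fraction of capacity.

    readings: {zone: [head counts from individual sensors]}. The sensors
    themselves are passive; all interpretation happens here, in the DSS layer.
    Averaging the counts is a deliberately naive stand-in for sensor fusion.
    """
    flagged = []
    for zone, counts in readings.items():
        estimate = sum(counts) / len(counts)
        if estimate > threshold * capacity[zone]:
            flagged.append(zone)
    return flagged

readings = {"concourse": [450, 470], "platform": [80, 90]}
capacity = {"concourse": 500, "platform": 200}
# congested_zones(readings, capacity) -> ["concourse"]
```

The point of the sketch is the architecture, not the arithmetic: the equipment reports raw numbers, and only the DSS turns them into actionable advice for operational staff.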
An interesting opportunity in HUMANE is to look at solutions where people can be active participants. Evacuees are on occasion in a better position to see what is going on, and could provide valuable information to the authorities, to operational staff and to other evacuees. There are risks, however, and their input would need validation; this relates to trust and reputation, one of the foci of HUMANE.
Human-to-human collaboration, in terms of self-categorisation theory and social identity theory, for example, is well studied and relevant to an understanding of the mutually supportive behaviours seen in eye-witness reports from many disaster situations. However, the inclusion of machines, in terms of ICT platforms, sensors and potentially the dynamic recruitment of user devices, introduces a new dimension which has not previously been addressed.
Journalists increasingly turn to social media to find both news and background information. Although there is often a plethora of real-time on-the-scene reports, it is imperative for journalists to be able to quickly identify whether the information is trustworthy. In some cases it is not; information may even be deliberately published to mislead. REVEAL is an EC-funded project which aims to develop tools, components and strategies that aid journalists in identifying, assessing and verifying user-generated content (UGC) on social networks.
We depict a human-machine network view of the REVEAL system and stakeholders below in Figure 4. It comprises several machine "actors", which are interconnected with the human actors of the overall REVEAL network, namely: (i) the contributors, (ii) the REVEAL users/journalists and (iii) people/the community in general. Data acquisition services extract information from social networks, i.e., content from potential eyewitnesses and people on the scene. This data feeds into a Decision Support System (DSS) developed in REVEAL, which aggregates and visualises the social media content in a manner that should allow journalists to quickly get evidence relevant to the verification of breaking news.
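The aggregation step in this pipeline can be sketched as grouping acquired posts by story and ordering them by a credibility signal. The corroboration count used below is a deliberately naive placeholder; REVEAL's actual contributor, content and context analysis is far richer:

```python
from collections import defaultdict

def aggregate_by_story(posts):
    """Group social-media posts by story, ordering each group by a naive
    credibility proxy (number of independent corroborations).

    posts: list of dicts with hypothetical keys "story", "author",
    "corroborations"; these field names are illustrative only.
    """
    stories = defaultdict(list)
    for post in posts:
        stories[post["story"]].append(post)
    for items in stories.values():
        items.sort(key=lambda p: p["corroborations"], reverse=True)
    return dict(stories)

posts = [
    {"story": "flood", "author": "a", "corroborations": 5},
    {"story": "flood", "author": "b", "corroborations": 1},
    {"story": "fire",  "author": "c", "corroborations": 3},
]
grouped = aggregate_by_story(posts)
# grouped["flood"][0]["author"] == "a": the best-corroborated post surfaces first
```

The design point the sketch illustrates is that the DSS does not decide what is true; it orders and presents evidence so that the journalist, the human actor, can verify quickly.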
REVEAL is a case study in HUMANE for (a) identifying and modelling envisioned human-machine networks for the REVEAL service, and (b) evaluating how the REVEAL service and its proposed tools perform within these networks. We will explore and analyse the way in which journalists interact with sources, the public and other journalists, as well as how they use the tools provided by the project for revealing information with respect to the contributor (the source of the information), the content (the information itself) and the context.
Wikipedia is a free-access, free-content Internet encyclopaedia, supported and hosted by the non-profit Wikimedia Foundation. Everyone with access to the Internet can edit most of its articles. Editors are expected to follow the website's rules, which are made by the community of editors. Wikipedia is ranked among the ten most popular websites and constitutes the Internet's largest and most popular general reference work. Wikipedia has editions in more than 270 languages and a large community of millions of editors from all around the world.
Figure 5 illustrates a network diagram of Wikipedia, depicting only contributors and the possibility of bots editing articles (hence omitting the very large mass of users who merely use Wikipedia as an encyclopaedia without contributing).
Various aspects of Wikipedia are relevant to HUMANE. Wikipedia is the largest network of individuals who collaborate online to produce a common product. The networks of articles, languages and robots also make it a complex ecosystem of different types of socio-technical interactions. The main challenge in understanding Wikipedia lies in its users' motivations and the way they reach consensus. It is hard to explain how a large community of individuals from all around the world, with different backgrounds and ideologies, can collaborate in a meaningful way and produce the most widely used encyclopaedia in the history of knowledge.
However, sustainability and enhancing user engagement have also become pressing problems, especially after Wikipedia's first phase of rapid growth. The saturation in size and the decline in the number of active editors are not limited to the large language editions; even small language editions, which are clearly incomplete, suffer from a lack of fresh human resources.
Another interesting opportunity in HUMANE is to analyse the way in which the software and the "bots" interact with users and with each other. There is little research on how bot contributions interact with human users and how these interactions influence the overall efficiency of the network.
Zooniverse is the most successful and popular citizen science portal, hosting more than 20 different citizen science projects. In each project, users execute a large number of simple tasks that, despite their simplicity, lead to significant scientific advancements. The first of these projects was Galaxy Zoo, in which users classified galaxies based on images provided by the Sloan Digital Sky Survey at the Apache Point Observatory in New Mexico, USA. The huge success of Galaxy Zoo led to the creation of many more projects, ranging from astrophysics to the life sciences and humanities.
Zooniverse is a portal for citizen science projects. Each project consists of a series of tasks (typically classifications) which need to be executed by contributors. A task may be, for example, classifying a galaxy based on its shape, or counting or finding a certain type of animal in a picture taken automatically in a forest. Tasks and the original datasets are provided by researchers and fed to the "subject allocator". The subject allocator allocates the tasks to different users, and users interact with the tasks through the web interface. The results of the classifications are sent back to the "Classification API", where the outputs of different users are aggregated and cross-validated (see Figure 6).
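The cross-validation step can be illustrated with a minimal majority-vote sketch. This mirrors only the basic idea behind aggregating independent volunteer classifications; the real Zooniverse pipeline is considerably more sophisticated, and the threshold below is an arbitrary illustrative choice:

```python
from collections import Counter

def aggregate_classifications(votes, min_agreement=0.6):
    """Cross-validate independent classifications of one subject by majority vote.

    votes: list of labels submitted by different volunteers for the same subject.
    Returns (label, agreement) when the majority label reaches min_agreement,
    or (None, agreement) when consensus is too weak and the subject would
    need more classifications.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    agreement = n / len(votes)
    return (label, agreement) if agreement >= min_agreement else (None, agreement)

# aggregate_classifications(["spiral", "spiral", "elliptical"])
# -> label "spiral" with agreement 2/3
```

Even this toy version shows why group size matters: with too few votes per subject, agreement scores are noisy, while too many votes waste volunteer effort that could classify other subjects.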
The Zooniverse case provides a great opportunity to study issues related to crowdsourcing, trust, validation and task distribution in a variety of settings and within an empirical framework. Basic concepts in crowdsourcing, such as the optimum group size, are still only partially studied, and most of the research on crowdsourcing has been limited to artificial tasks executed in laboratory settings. Zooniverse, by contrast, deals with real-world, unsolved, large-scale scientific problems.