In our four-part workshop, we focus on data as a primary resource of our society and, using methods from soft computing, on making sense of raw data to augment the human condition, for example by expanding the concept of smart cities to that of cognitive cities, which, in a humanistic sense, are geared toward human-centered computing. With this in mind, the first part focuses on deep learning to generate natural language descriptions of data mining results. The second part concentrates on granular computing as a powerful toolbox for representing and reasoning with different types and levels of data abstraction. In part three, we demonstrate how the previous parts are critical elements in extending the concept of smart cities toward cognitive cities, in which, as opposed to many of today's smart city concepts, humans are at the center of development. In the fourth and last part of our workshop, we will collectively develop solutions to the challenges introduced.
Sensors are becoming prevalent in our society. For example, they monitor weather and environmental pollution, they help track goods along supply chains, and they enable more flexible and adaptable manufacturing processes. In the medical domain, wearable sensors and sensors built into smartphones enable the monitoring of activities and vital data of patients.
While it is easy to collect vast amounts of sensor data, it remains difficult to make sense of them. Data mining algorithms are employed to gain insights by determining correlations and hidden patterns in the data. In many situations, however, sensor data cannot be fed directly into data mining algorithms because it is too low-level and not significant enough to be used as features. Instead, appropriate features need to be derived first. In most cases, however, it is not clear beforehand which features would work best, so determining meaningful features requires a lot of experimentation. Deep learning is an unsupervised feature learning approach that can derive higher-level features from low-level sensor data and thus helps to avoid time-consuming experiments to identify relevant features.
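To make this idea tangible, the sketch below trains a single autoencoder layer, the basic building block of stacked (i.e., deep) autoencoders, on synthetic sensor windows; the bottleneck activations then serve as learned higher-level features. The data, network sizes and training schedule are illustrative assumptions, not taken from the workshop material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensor" data: 200 windows of 16 raw samples each (hypothetical;
# a real pipeline would segment an actual sensor stream into windows).
t = np.linspace(0, 2 * np.pi, 16)
X = np.array([np.sin(t * rng.integers(1, 4)) + 0.1 * rng.standard_normal(16)
              for _ in range(200)])

# One-hidden-layer autoencoder: 16 -> 4 -> 16.
# The 4-dimensional bottleneck activation is the learned feature vector.
n_in, n_hid = 16, 4
W1 = 0.1 * rng.standard_normal((n_in, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_in))

def forward(X):
    H = np.tanh(X @ W1)   # encoder: learned features
    Xr = H @ W2           # decoder: reconstruction of the raw window
    return H, Xr

lr = 0.01
errors = []
for epoch in range(500):
    H, Xr = forward(X)
    diff = Xr - X
    errors.append(float(np.mean(diff ** 2)))
    # Backpropagate the reconstruction error through both layers
    gW2 = H.T @ diff / len(X)
    gH = diff @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

features, _ = forward(X)  # 200 windows x 4 learned features
```

In a full deep learning setup, several such layers would be stacked, each trained on the features produced by the previous one, so that increasingly abstract features emerge without hand-crafted feature engineering.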
Another central building block for making sense of sensor data is communicating the insights gained to the user in a comprehensible way. Visualizations are a powerful form of communication but are in many cases not suited to represent patterns identified in a data stream. For example, correlations between a user's daytime behavior, environmental factors and his or her perceived quality of sleep can hardly be visualized properly, but they can be verbalized quite naturally. Natural language descriptions of data mining results need to make use of fuzzy concepts to appropriately reflect their level of accuracy, or vagueness.
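One established way to produce such fuzzy verbalizations is a linguistic summary in the style of Yager. The sketch below computes the degree of truth of the summary "most high-activity days have good sleep" over hypothetical daily records; the records, the membership functions and the quantifier "most" are all illustrative assumptions.

```python
import numpy as np

# Hypothetical daily records: (daytime step count, sleep quality score 0-10).
days = np.array([
    [12000, 8.5], [11000, 7.9], [4000, 5.0], [13000, 8.8],
    [3000, 4.2], [10500, 6.5], [9500, 8.1], [2500, 6.0],
])
steps, sleep = days[:, 0], days[:, 1]

def ramp(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

# Fuzzy predicates (the breakpoints are illustrative choices).
high_activity = ramp(steps, 5000, 10000)
good_sleep = ramp(sleep, 5.0, 8.0)

# Truth of "most days with high activity have good sleep":
# relative fuzzy cardinality passed through the quantifier 'most'.
r = np.sum(np.minimum(high_activity, good_sleep)) / np.sum(high_activity)
truth = float(ramp(np.array([r]), 0.3, 0.8)[0])

print(f"'Most high-activity days have good sleep' holds to degree {truth:.2f}")
```

The resulting truth degree can then drive template-based text generation, e.g. emitting the summary only when its degree of truth exceeds a threshold, so that the wording honestly reflects the vagueness of the underlying pattern.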
This part of the workshop will therefore focus on the principles underlying deep learning as an unsupervised feature learning approach for sensor data and on the generation of natural language descriptions of data mining results. These principles will be discussed in the context of application scenarios from mobile health and can be transferred to other application areas where sensors play an important role. Instead of presenting ready-to-go recipes, we will focus on sharing the experience we have collected in various projects and on fostering a lively discussion.
As introduced in the previous part, the proliferation of ubiquitous computing in recent years brings with it a dramatic increase in data collection, data generation and data processing. Raw and low-level data is manually and automatically processed to be used and reused by different parties in different semantic contexts and with different intentions. To this end, data is usually interpreted using aggregation, generalization, simplification or approximation. Within the resulting complex ecosystems of data processing flows it becomes increasingly important to automatically control and/or track the change in meaning of derived data. The ability to automatically differentiate necessary from unnecessary detail in a given context, as well as the ability to assess people's intentions, requires that computers are able to process information in a way that emulates human reasoning and sense-making. One pivotal element of human reasoning in this context is the ability to process information on different abstraction levels—in other words, in different granularities.
This second part of the workshop will thus provide an opportunity to discuss granular computing—and in particular fuzzy logic—as a framework, theoretical basis and methodological toolbox for explicitly representing and reasoning with different types and levels of abstraction, aggregation, generalization or approximation. Particularly, we will discuss its potential for mimicking contextual human reasoning and sense-making.
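To make the notion of granularity concrete, the sketch below maps one crisp sensor reading onto two different granulation levels of the same variable, a coarse one for everyday communication and a finer one for more detailed reasoning, using triangular fuzzy sets. All granule definitions are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with peak at b and support [a, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Two granulation levels for the same variable (illustrative choices):
# a coarse everyday vocabulary and a finer, more precise one.
coarse = {"cold": (-20, -10, 15), "warm": (5, 20, 40)}
fine = {"freezing": (-20, -10, 0), "chilly": (-5, 5, 15),
        "mild": (10, 18, 26), "hot": (22, 32, 45)}

def granulate(x, granules):
    """Map a crisp value to its best-matching linguistic granule."""
    degrees = {label: tri(x, *abc) for label, abc in granules.items()}
    best = max(degrees, key=degrees.get)
    return best, degrees[best]

temperature = 17.0  # a crisp sensor reading in degrees Celsius
print(granulate(temperature, coarse))  # coarse view of the same value
print(granulate(temperature, fine))    # fine-grained view
```

Switching between such granulation levels, and keeping track of what information each switch discards, is exactly the kind of abstraction-level reasoning that granular computing formalizes.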
Cognitive cities are a form of future city that applies elements of smart cities intertwined with elements of cognitive computing, such as an ongoing human-computer interaction. Cognitive computing, in turn, is an interdisciplinary field encompassing topics such as collective and computational intelligence, computational thinking, and theories of connected learning and cognition. These elements improve cities' abilities to face technological and civil challenges. Similar to the concept of responsive cities, cognitive cities focus on human-centered approaches supported by technology.
In this third part, we bring the previous parts together, focusing on soft computing methods such as granular computing, computational intelligence, fuzzy logic and computing with words, to develop livable future cities in a humanistic sense. With the stated goal of balancing smart city development with cognitive computing research by adhering to soft computing methods, we concentrate on user experience. User experience comprises the practical, experiential, affective, meaningful and valuable aspects of human-computer interaction. It primarily concerns citizens' perceptions of a city, such as utility, ease of use and efficiency. It is subjective in nature, to the degree that it is about individual perception and thought with respect to the system, and is thereby an ideal field of application for the introduced soft computing methods.
In this last part, based on use cases related to the previous parts, we develop together new concepts and solutions to challenges facing our society. To this end, we introduce a couple of real-world use cases, which we address collectively in groups to make sense of data.
Prof. Dr. Edy Portmann is a researcher, specialist and consultant for semantic search, social media, and soft computing. At present, he is working as an Assistant Professor of information science at the University of Bern, Switzerland. Edy Portmann holds a BSc in information systems, an MSc in business and economics, and a PhD in computer science. He was a Visiting Research Scholar at the National University of Singapore (NUS), as well as a Postdoctoral Researcher at the University of California at Berkeley, USA. During his studies, Edy Portmann worked for several years in a number of organizations in study-related disciplines. He is a repeat nominee for Marquis Who's Who, a selected member of the Heidelberg Laureate Forum, co-founder of the Mediamatics (i.e., Media and Informatics) think tank, and co-editor of the Springer series 'Fuzzy Management Methods,' as well as the author of two popular books in his field.
Prof. Dr. Ulrich Reimer studied computer science and received his doctorate in 1987 at the Information Science Dept. of the University of Konstanz with a thesis on formal ontologies for natural language understanding. In 1992 he received his habilitation at the University of Konstanz. For more than 10 years Ulrich Reimer was head of the IT R&D group of Swiss Life, the biggest life insurance company in Switzerland, and responsible for large-scale research projects, some of them funded by the EU. Since 2005 he has been with the Institute of Information and Process Management at the University of Applied Sciences St. Gallen where his main research activities are in the areas of Semantic Technologies, Behavioural Change Support Systems and Model-Driven Development, primarily in the application area of eHealth.
Gwendolin Wilke holds a Ph.D. degree in Geoinformatics (2012) from the Vienna University of Technology (Austria), with a thesis on uncertainty modelling for geodata. On the invitation of Prof. Lotfi Zadeh, she worked as a visiting researcher at UC Berkeley (USA) in 2009 in the field of fuzzy geometry. She extended this work as a visiting researcher with the Knowledge Representation and Reasoning Group at the University of Leeds (UK), where she worked on qualitative geometric reasoning in 2010. From 2012 to 2015, Gwen worked as a researcher in the information and knowledge management group at the University of Applied Sciences and Arts Northwestern Switzerland, where she was involved in numerous national and international research projects. She regularly publishes research papers on the theory and application of fuzzy logic, soft computing and granular computing. Since 2015, Gwen has been a lecturer for Business Intelligence at the School of Information Technology of the Lucerne University of Applied Sciences and Arts.
Gwen is a member of the North American Fuzzy Information Processing Society and a board member of the special interest group on Knowledge Structures and Metadata of the Swiss Informatics Society. In her current research, Gwen focuses on the theory and application of qualitative and granular computing in business intelligence and, more generally, in knowledge management, specifically addressing the challenges and opportunities of today's ever more complex data-driven society.