Where does knowledge come from? Information society deconstructed
Against a backdrop of rapid technological shift, social chatbots and human data collection, Jakub Komárek explores what it means to ‘have knowledge’ — and how that knowledge is put to use, for better or worse, in the 21st century.
Where does knowledge come from?
“From experience,” argues the empiricist. Even the Cambridge Dictionary states that knowledge is the “understanding of or information about a subject,” obtained through “experience or study”. However, the empiricist premise that our knowledge comes primarily from sensory experience is, perhaps, outdated. When I seek new information, the first place I turn is Google or Wikipedia — and yet, despite its convenience, there is something troubling about the fact that our process of acquiring knowledge has become, essentially, outsourced.
According to David Hume, knowledge is a set of ideas we modify through three principles of association: resemblance, contiguity, and cause-and-effect. On the one hand, we can knit ideas together to construct something new, a more complex idea; on the other, we can deconstruct existing ideas, uncovering 'hidden' links in the process. The omnipresent Internet and its search engines, however, have changed this thinking paradigm profoundly. We are no longer dependent on our limited and unreliable memories. Instead, we can easily copy and paste the ideas of others. We are constantly plugged into the fountain of (virtual) knowledge and have, in turn, become more productive and informed. Indeed, three billion of the world's seven billion people own some kind of connected device. That is more than the number of people with electricity at home. As such, humanity can now combine intellectual ability with technology's computing power.
The overall picture looks promising. Yet, if we zoom in, a different, more sinister image emerges. Whilst our societies have developed swiftly, our brains have stayed more or less the same. That is not, of course, to say that our brains do not change: they are extremely malleable and trainable, and we can shape them by regularly exposing them to a particular task. However, they have retained their primitive, genetic blueprints. For example, we are still programmed to fight or flee when danger occurs — an instinctive reaction operating below the level of consciousness. Daniel Goleman, a science journalist, theorises a bio-psychological 'low-road' and 'high-road'. We use the 'high-road' when we pay full attention to an activity, e.g. listening to a friend. The 'low-road' is responsible for, among other things, our emotions, empathy, and intuition.
The society we live in today, with its immediate inter-connectivity and round-the-clock access, requires our cognitive 'high-road' abilities more than ever: for motivation, for managing one's tasks efficiently, for working methodically. As a result, our brains adapt to fulfil these 21st-century demands, prioritising and strengthening 'high-road' connections, whilst the 'low-road' and its needs are sidelined in the learning process. Consequently, we no longer possess the same sort of knowledge related to our sensory and emotional experience. Our minds are instead preoccupied with duties, tasks, or entertainment, and we lose touch with how to cope with moments of impatience, stress, boredom, and loneliness. To tune out, we seek novelty and social connection on techno-social platforms.
Whilst we continue to lose the type of knowledge associated with our Palaeolithic emotions, tech giants are gathering data on our everyday online behaviour and, subsequently, obtaining their own knowledge, which can be utilised to modify our future decisions. They use priming techniques to access our unconscious 'low-road' and create demand in future markets. This unsettling practice is, to use a term coined by Shoshana Zuboff, the basis of ‘surveillance capitalism’. The focus on our emotional 'low-road' often manifests as marketing messages with no real informative value, and as political discourse whose sole purpose is polarisation.
Tristan Harris, a co-founder of the Center for Humane Technology, calls this process 'human downgrading'. The knowledge technology obtains and employs is uncannily similar to its human counterpart: it, too, accumulates data piece by piece and finds associations among the fragments. However, the mechanisms of the digital world are not burdened with our human 'low-road' and are often better at predicting (or modifying) our future than we are. Of course, tech giants are reticent to share this knowledge; their algorithms are a commercial secret. More worryingly still, as we feel more depressed, anxious, stressed, or lonely, the vulnerability of our 'low-road' increases and we crave connection even more. Cue the entrance of virtual intervention. Personal experience makes this all too credible: there was definitely something Orwellian about the time that, feeling lonely, I was targeted on Messenger with an advert for a social chatbot.
Microsoft's Xiaoice, a social chatbot, now has over 660 million users worldwide. Its AI recognises human feelings and responds to user needs throughout long conversations. Alarmingly, in a recent study on the social effects of the chatbot, subjects named Xiaoice as their preferred choice of interaction after just nine weeks of usage. Google — keen to avoid falling behind — is developing its own social chatbot, Meena. It has also created a metric, the Sensibleness and Specificity Average (SSA), which measures the ability of a programme to maintain responses that 1) make sense and 2) are specific to the conversation. To put this in context: the average human score according to this system is 86%. Xiaoice comes in at 31% and, frighteningly, Meena at 79%. The social chatbot has been trained on data drawn from social media communications. The computing power of modern machines, vast data covering our most intimate conversations, AI technology with a capacity to learn, and a knowledge of modern psychology combine. The result? A 'conversational agent' that users cite as their preferred source of social connection.
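The arithmetic behind such a score is simple to sketch. What follows is a minimal illustration, assuming (as Google's description suggests) that SSA averages per-response 'sensible' and 'specific' judgements made by human annotators; the function and label names are illustrative, not the published evaluation code.

```python
# Illustrative sketch of a Sensibleness and Specificity Average (SSA).
# Assumption: each chatbot response carries two binary human judgements,
# namely whether it is sensible in context, and whether it is specific
# (rather than a vague catch-all such as "I don't know").

def ssa(labels):
    """Return the SSA for a list of (sensible, specific) 0/1 labels.

    SSA is the mean of the sensibleness rate and the specificity rate,
    so a bot that is always sensible but never specific scores 50%.
    """
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(p for _, p in labels) / len(labels)
    return (sensibleness + specificity) / 2

# Three annotated responses: fully sensible and specific, sensible but
# generic, and neither sensible nor specific.
print(ssa([(1, 1), (1, 0), (0, 0)]))  # 0.5, i.e. an SSA of 50%
```

On such a scale, the gap between Xiaoice's 31% and Meena's 79% is the distance between a bot that rarely gives context-specific answers and one that almost matches the 86% human benchmark.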
The Utopian dream of ending our loneliness once and for all by means of technology turns into a nightmare when we realise that our interests and the interests of tech giants are not aligned. The more we covet the products they create, the more valuable data points — arguably, our innermost thoughts made digital — their algorithms gain access to. As such, companies can now pinpoint the moments when we are most vulnerable to subliminal cues and nudges. Behaviour becomes a commodity: our personal data points can be sold to the highest bidder for commercial ends. In Zuboff's words, the “surveillance capitalists’ ability to evade our awareness is an essential condition for knowledge production.”
So, where does knowledge come from? From experience, yes. Yet we are no longer the sole possessors of our experiences: we communicate via platforms that, under the smokescreen of “constantly improving services,” are making digital notes for their own commercial gain. Whilst the empirical study of our own thoughts, experiences, and states can increase our autonomy, our freedom of will, once we turn to technology companies to satisfy our needs, could well become reliant on (and regulated by) others.