My research currently, and by extension for the past ten years, has concerned uncertainties in large infrastructure systems and how they are managed as risks at various field sites. As my work continues on these themes for the next year or so, it is an apt time to give an overview of the coming work. The specific focus will be on concepts. Social scientists can, and at times do, use their research concepts loosely and heuristically, as I have often done, but even then a concept definition is a good place to start an analysis. The question here could be: what objects are we talking about, exactly, when exploring issues with systems and their management as risks in a social science context?
These posts will draw on some of what I have read to introduce the contours of what researchers might mean by these two concepts, system and risk. In doing so, I want to underscore the open and even subjective nature of both concepts, but not in order to critique them. Rather, I want to understand why certain scholars have decided to leave these concepts relatively loose when developing an analysis of system-level risks and risks to systems. To be relatively concise, I will start my overview with the concept of system in this post. In the coming days, another post will be devoted wholly to the concept of risk, but again in the context of systems.
A notable omission should be mentioned before I move on. I will only briefly touch on the work of prominent sociologists who have made the system concept popular in social science debates over the past decades. Some research concepts, like world systems theory, I simply do not yet know well enough to make claims about or discuss in depth. For others, like those immediately below, this is because their interest has been quite different from what I want to study, namely large technological networks or infrastructures and their risks. Classical ideas can and should be applied to new topics, as the credo of sociological classics goes. But some initial choices made by the classics may make technology seem a less varied theme than it appears from other perspectives. At any rate, only a short summary of these works can be made within the scope of this post.
To begin with, systems are not a novel concern in social science, and sociology in particular. American sociologist Talcott Parsons delineated an interest in the social system several decades ago. His idea seems to have been that when certain “social” phenomena – interactions, roles, collectives, norms, values – show specific kinds of patterns and have boundaries separating them from other things, they can be understood as system-like. Ultimately, even society as a whole falls under the same framing: as “a collectivity, i.e., a system of concrete interacting human individuals, which is the primary bearer of a distinctive institutionalized culture”.
That such sociological matters can be viewed as systemic is still, and once again, highly interesting. For example, one line of thought, sociological disaster research, took up ideas about social systems rather directly in the 1950s and 1960s. According to a characterization from just a few years ago, a disaster is an event that endangers the maintenance of essential societal functions and disrupts social order, structures, and normal routines. In other words, a disaster destabilizes social systems or, as it often does, brings new systems of co-operation and help to the fore.
As relevant and important as these considerations are – see, for example, how central disasters to social systems and societies have again become in the public imagination with global climate change, energy security, public health, economic crises, and much else – there is something revealing about the attention this tradition gave to systems that are not “social” to begin with. Certainly there were other systems under this taxonomy: not just social, but also “cultural and personality systems, the behavioral and other subsystems of the organism, and, through the organism, the physical environment”. These other systems “environ” the social system and are “engaged in complicated processes of interchange” with it. So other processes can influence social systems, which is an apt remark. But it was also noted that, as for the physical environment – which would include technologies – “the immediately environing systems of a social system are not those of the physical environment”. Rather than “physical” systems, the systems bordering the social system are personality, behavioral, and cultural systems.
This becomes an intriguing point about the scope of some sociological research. Perhaps technology is such an uncanny topic to some precisely because of these system interfaces. Many may draw more readily from the “environing” sciences of social psychology, anthropology, and cultural studies than from, say, the traditions of engineering or geography when doing sociology because, according to the classical view, the physical environment is not part of the social system and not even on its immediate border. But from another vantage point, the very distinction between “social” and “technical” or “physical” systems is difficult to follow all the way through. This happens when you draw on a classical topic in Science and Technology Studies and the history of technology, that of large technological systems. It asks: are massive systems of provision like electric power supply merely combinations of technical components and connections? Or should they rather be understood both through these components and through “the physical, intellectual, and symbolic resources of the society that constructs them”, by examining “the changing resources and aspirations of organizations, groups, and individuals”, and by attending to their “interacting components of various kinds” – including, but not limited to, differing technologies and institutions – that are all connected when determining what systems become like? My own attention has tended to be on this latter conception, which some also term socio-technical systems.
Classical sociology has not been alone in its concerns about systems, though my knowledge is not exhaustive. For example, German sociologist Jürgen Habermas seems to begin with the notion that people’s “life-world” – their communications, agreements, and consensus – is very different from systems, in his case the economic and the political system. The latter systems can indeed tend to “colonize” people’s everyday life-world with their push for stricter control and more instrumental rationality. Here is how crisis researchers used this critical idea when analyzing vulnerability to flooding in the Swedish countryside a few years ago:
In the concrete work in cleaning river, the group met obstacles from the bureaucratic procedures of the county administrative board. To draw on Jürgen Habermas’s theory of communicative action, the life-world of the local group crossed with the system world of the county administrators and both worlds had different language and procedures on how to manage the cleaning of the river.
So, systems of administration and bureaucracy are something very different from localities and their culture. This line of inquiry, too, delineates systems as a certain kind of entity from the outset.
Another German sociologist, Niklas Luhmann, who was doubtless widely read in systems theory like the others mentioned here, also framed his most famous interest around systems that are social: societal subsystems. These included the economy, politics, law, and so forth, and his concern was with their functions as systems of communication in modern society. Again, it is the societal system that is of interest rather than other environments or systems that are not seen as social to begin with. In his 1993 book Risk: A Sociological Theory, Luhmann also harbors a rather interesting concept of technology. But then he rather directly contradicts my interests by writing that he does not use the expression large technology “to distinguish our subject matter from analyses, which, taking for example telephone networks or transport networks, seek to emphasize the network structure: for this is little interest for the topic of risk.” (p. 83)
Hence, in sum, when I say system, I mean terminology stemming mainly from technology studies and from concerns made famous in organization studies and organizational sociology, rather than these sociological works directly. The first context has been mentioned in passing above: it comes from the work of historians of technology, especially Thomas Hughes’s 1983 book Networks of Power and its followers, whose impacts were summarized in a detailed review essay by Erik Van Der Vleuten some while ago. That large technological systems are both social and technical in their composition I stressed already above – what was not yet mentioned is that this argument, in the context of historical scholarship, gives rise to a rather loose definition of a system.
In 1983 Hughes explained this by noting that a “historian’s definition (of a system) cannot be as precise as the scientist’s” exactly because a historian sees how socio-technical systems vary over time and from one place to another. For instance, one can doubtless find very similar components and connections in high-speed broadband Internet networks in US cities and in Finland, or in Europe-wide rail networks in the 1980s and those in the 2010s, but there are also bound to be multiple variations among physical resources, licensing strategies, organizational structures, policy traditions, economic practices, and even the technologies themselves. A bit later Hughes consequently characterizes his notion of a system as an “inadequate approximation” and notes that only certain characteristics of systems “transcend time and space”: a system has “related parts and components” that are in turn connected by “a network, or structure”. This looser definition will suffice for many ends.
Van Der Vleuten notes closely related reasons as to why the system concept was initially framed so openly in this tradition. He begins by acknowledging what large technological systems research borrowed from general systems theory: its “synthetic and multidisciplinary features”, which include bringing under the same scope matters as different as technical artefacts, organizational systems, practices of marketing and advertising, and several other things. But, at the same time, the approach also preconfigured as few concepts as possible, thus “liberating historical imagination to formalization”.
The reason for this is partly that the systems studied change from one place and time to another. Another reason seems to lie in the double meaning of a large technological system. As Van Der Vleuten poignantly notes, a system can be both something durable and an analytical tool: “it refers to a category of phenomena as well as a research methodology”. In social science in general, when a method is tailored to do analysis, it pays not to preconfigure it too much, if only to keep the study open to variation.
The domain of organizations, accidents, and high-risk technologies was not inherent to the research on large technological systems, though more work has been carried out in this context over the past decades. Yet I believe an analogous line of reasoning bridges the above ideas to this latter use of the system concept. Sociologist and organization scholar Charles Perrow wrote his noted book on accidents in the 1980s, Normal Accidents: Living with High-Risk Technologies, which focused on risks in various human-built provisions including airplanes, ships, mining, dams, nuclear energy, space missions, and laboratories. He chose to designate all of these as systems, which is fruitful. The sites he compared and detailed are admittedly different from one another. However, a concept of a system is needed, for one thing, for the risks and accidents covered in the book: system accidents, to use the term the work coined.
The definition of a system given in the book actually seems somewhat more nuanced and rigid than what the above has provided. The work designates systems as entities having four levels: that of individual parts; that of units that collect functionally similar parts; that of subsystems that collect arrays of units; and finally the system that combines all the previous levels together. The key conclusion drawn from this outset is well known. When the parts in a system are tightly coupled – when disturbances in one system part quickly spread to other system parts – and when these parts exhibit complex interactions – when the interactions among the parts are unexpected and difficult to conceive, manifesting complexity – then accidents in the system become almost inevitable: “system accidents” or “normal accidents”.
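To make the coupling idea concrete, the propagation logic can be sketched in a few lines of code. This is my own toy illustration, not Perrow's formalism, and the part names (pump, valve, and so on) are invented for the example: a disturbance spreads along tight couplings and stops where a coupling is loose.

```python
# Toy sketch of tight vs. loose coupling (an illustration, not Perrow's
# own apparatus): a disturbance propagates along coupling links until it
# reaches a part that buffers it.

def cascade(start, couplings):
    """Return the set of parts reached by a failure starting at `start`,
    following the directed coupling links in `couplings`."""
    failed, frontier = {start}, [start]
    while frontier:
        part = frontier.pop()
        for neighbour in couplings.get(part, []):
            if neighbour not in failed:
                failed.add(neighbour)
                frontier.append(neighbour)
    return failed

# Tightly coupled chain: every part passes the disturbance onward.
tight = {"pump": ["valve"], "valve": ["reactor"], "reactor": ["turbine"]}
# Loosely coupled chain: the valve buffers the disturbance.
loose = {"pump": ["valve"], "valve": [], "reactor": ["turbine"]}

print(sorted(cascade("pump", tight)))  # the failure spans the whole chain
print(sorted(cascade("pump", loose)))  # the failure is contained early
```

Note that this captures only coupling; Perrow's second condition, complexity, would require the interactions themselves to be unexpected rather than a linear chain like this one.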
The idea is simple and effective, but further questions about the system concept can be highlighted here. One is the severity of an accident or a disturbance: to be systemic, it must concern more than an individual system part and its failure. Rather, system accidents span from parts to other parts and thence to units, subsystems, and entire systems. But what should be included in the definition of such a system? Surely that depends on what you want to understand as the system, tying in with the ideas about large socio-technical systems. Like a ship sailing from the sea into a channel, the parts enclosed in a system – from weather and other ships at sea to rocks and bridges in a channel – change according to circumstances, as Perrow’s work noted decades ago.
At the same time, there seem to be further reasons to keep the system concept open that stem rather directly from the traits of systemic risks. In many cases, by definition, system accidents have been unexpected and inconceivable prior to a disaster. Often, it is indeed the disaster or a crisis that reveals that some parts formed a system and interacted in an unforeseen manner. The current banking and financial crises offer timely and critical examples of this idea. There are older instances as well. The managers and designers of the failed Apollo 13 mission could not conceive that a high-tech spaceship would break until it did, through unforeseen interactions among the components of the whole system. In another case in Perrow’s book, a lake drained into the mine below due to an oil drilling error. Viewed individually, both lakes and mines are loosely coupled, relatively linear, and well understood entities, but together they formed a complexly interacting system. It may often be the case that such systems are only discovered ex post, and that is why the concept of a system should be a loose one.
It should be added, however, that this idea creates a possible unwanted side-effect when risk and the potential for catastrophe are considered. Namely, it may often be tempting to assume that those systems that have failed catastrophically must have been complex and tightly coupled in their components. I notice this quite often in my research into electricity and reliability. Observing that large systems of electricity supply fail over wide regions, some commentators have tended to conclude that electrical systems must be prone to rare but catastrophic cascading failures. Cascades are disturbances that quickly propagate from one part of the network to another and are in line with ideas about normal accidents. Nonetheless, there is a potential fallacy in this reasoning: the accident starts to explain the system, not the other way around, which resembles a circular argument. In such cases only a tentative analysis can be made of the complexity and the coupling of the system, as crisis researchers did with the Auckland electricity blackout in the 1990s, noting how several things coupled and caused a disaster:
The heat sensitivity of the cables in Auckland coupled with elevated temperatures and thermal conductivity as well as demand provides a further example of tight coupling. … There was a very sharp increase in electricity demand in January which is reflected on plots of all four of the cables. … In sum, several key aspects in the crisis development trajectory in this case coincide with elements of Perrow’s Normal Accident Theory. (pp. 109-110)
There is much insight in using the system concept creatively and openly as these scholars have done, as they include not only technical factors but also weather and demand patterns when explaining why the power system may have failed. Yet full use does not seem to be made of Perrow’s apparatus: according to his original notion, a system should not only be tightly coupled but also complex for there to be a normal accident. The complexity or linearity of weather interacting with demand interacting with electrical supply is not mentioned in the above argument. Indeed, in assumptions such as these – that failures cascade, inferred from observing one failure – there tends to be little consideration of how far the faults actually cascaded, whether they were stopped quickly, or how common such occurrences are empirically speaking. Notions about cascades and coupling can become more of a warning to others, less of an empirical concern.
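The empirical point, that one should ask how far faults actually cascade and how often, can itself be checked rather than assumed. Here is a hedged toy sketch (my own illustration, with an invented per-link propagation probability, not data from any real grid) that tallies cascade sizes over many simulated disturbances instead of inferring tight coupling from a single observed blackout.

```python
# Toy sketch: tally how far disturbances spread in a simulated chain of
# parts, rather than inferring tight coupling from one observed failure.
# The propagation probability is an invented illustration, not grid data.

import random
from collections import Counter

def cascade_size(n_parts, p_spread, rng):
    """Start one failure in a chain of `n_parts`; each link passes it on
    with probability `p_spread`. Return the number of failed parts."""
    size = 1
    for _ in range(n_parts - 1):
        if rng.random() < p_spread:
            size += 1
        else:
            break  # the disturbance is stopped at this link
    return size

rng = random.Random(0)  # fixed seed for reproducibility
sizes = Counter(cascade_size(10, 0.3, rng) for _ in range(10_000))
# Most disturbances stop at the first link; full-chain cascades are rare.
print({k: sizes[k] for k in sorted(sizes)})
```

Even in this crude model, the distribution of cascade sizes, not the worst single event, is what speaks to how tightly coupled the chain really is.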
Let me now summarize the points made in this post: why social scientists interested in large socio-technical systems, organizations, and systemic accidents may benefit from keeping their notion of a system relatively open-ended.
- In many cases, all in all, social science works are perhaps not seeking a comprehensive, scientific definition of a system: a matter that natural scientists have written whole books about over many decades and that probably cannot be covered within the scope of a single empirical study, a research article, or even several articles.
- Sociologists and historians in particular see large systems as changing over time and varying from one location to another. This variation concerns not only technical components and their connections, but also different policy traditions, economic practices, institutional contexts, laws, regulations, raw resources, and many other things that all shape socio-technical systems. One might add, borrowing from sociologist Anthony Giddens, that the systemness, internal unity, and coherence may vary significantly among socio-technical assemblages. To remain sensitive to these kinds of variations, the system concept should be kept relatively loose, spotlighting the most generalizable traits of systems, like their composition from related components and their connection by a network or a structure.
- In social science debates a system can be seen in two different ways: both as an entity in its own right and as an analytic tool to discern certain aspects of reality. Especially in the latter sense, it is fair to leave flexibility in the tool that is deployed.
- With accidents in complex systems, it may often be the case that the interaction of components in a system was not conceived of before that system failed. In cases like these, the disaster reveals the system ex post.
- However, one should be careful about concluding that systems that have failed catastrophically are complex and tightly coupled in their composition. Unless it is the traits of systems that explain failures and not the other way around, there is a potential for circular argumentation.