The purpose of this short piece
is to discuss the historian Melvin Kranzberg’s ‘first law of technology’. Kranzberg
laid out his six laws of technology at his inaugural address as President of the
Society for the History of Technology (SHOT) on October 19th 1985.[1]
The first of these laws he laid out in the following manner:
Kranzberg's First Law reads as follows: Technology
is neither good nor bad; nor is it neutral.
By that I mean that technology's interaction with
the social ecology is such that technical developments frequently have
environmental, social, and human consequences that go far beyond the immediate
purposes of the technical devices and practices themselves, and the same
technology can have quite different results when introduced into different
contexts or under different circumstances.
Many of our technology-related problems arise
because of the unforeseen consequences when apparently benign technologies are
employed on a massive scale. Hence many technical applications that seemed a
boon to mankind when first introduced became threats when their use became
widespread. For example, DDT was employed to raise agricultural productivity
and to eliminate disease-carrying pests. Then we discovered that DDT not only
did that but also threatened ecological systems, including the food chain of
birds, fishes, and eventually man. So the Western industrialized nations banned
DDT. They could afford to do so, because their high technological level enabled
them to use alternative means of pest control to achieve the same results at a
slightly higher cost.
But India continued to employ DDT, despite the
possibility of environmental damage, because it was not economically feasible
to change to less persistent insecticides and because, to India, the use of DDT
in agriculture was secondary to its role in disease prevention. According to
the World Health Organization, the use of DDT in the 1950s and 1960s in India
cut the incidence of malaria in that country from 100 million cases a year to
only 15,000, and the death toll from 750,000 to 1,500 a year. Is it surprising
that the Indians viewed DDT differently from us, welcoming it rather than
banning it? The point is that the same technology can answer questions
differently, depending on the context into which it is introduced and the
problem it is designed to solve.
Thus while some American scholars point to the
dehumanizing character of work in a modern factory,[2]
V. S. Naipaul, the great Indian author, assesses it differently from the
standpoint of his culture, saying, "Indian poverty is more dehumanizing
than any machine."[3]
Hence in judging the efficacy of technological development, we historians must
take cognizance of varying social contexts.
It is also imperative that we compare short-range
and long-range impacts. In the 19th century, Romantic writers and social
critics condemned industrial technology for the harsh conditions under which
the mill workers and coal miners laboured. Yet, according to Fernand Braudel,
conditions on the medieval manor were even worse.[4]
Certain economic historians have pointed out that, although the conditions of
the early factory workers left much to be desired, in the long run the worker's
living standards improved as industrialization brought forth a torrent of goods
that were made available to an ever-wider public. Of course, those long-run
benefits were small comfort to those who suffered in the short run; yet it is
the duty of the historian to show the differences between the immediate and
long-range implications of technological developments.
Although our technological advances have yielded
manifold benefits in increasing food supply, in providing a deluge of material
goods, and in prolonging human life, people do not always appreciate
technology’s contributions to their lives and comfort. Nicholas Rescher, citing
statistical data on the way people perceive their conditions, explains their
dissatisfaction on the paradoxical ground that technical progress inflates
their expectations faster than it can actually meet them.
Of course, the public's perception of technological
advantages can change over time. A century ago, smoke from industrial
smokestacks was regarded as a sign of a region's prosperity; only later was it
recognized that the smoke was despoiling the environment. There were
"technological fixes," of course. Thus, one of the aims of the Clean
Air Act of 1972 was to prevent the harmful particulates emitted by smokestacks
from falling on nearby communities. One way to do away with this problem was to
build the smokestacks hundreds of feet high; then a few years later we
discovered that the sulphur dioxide and other oxides, when sent high into the
air, combined with water vapor to shower the earth with acid rain that has
polluted lakes and caused forests to die hundreds of miles away.
Unforeseen "dis-benefits" can thus arise
from presumably beneficent technologies. For example, although advances in
medical technology and water and sewage treatment have freed millions of people
from disease and plague and have lowered infant mortality, these have also
brought the possibility of overcrowding the earth and producing, from other
causes, human suffering on a vast scale. Similarly, nuclear technology offers
the prospect of unlimited energy resources, but it has also brought the
possibility of worldwide destruction.
That is why I think that my first law ("Technology is
neither good nor bad; nor is it neutral") should constantly remind us that it is
the historian's duty to compare short-term versus long-term results, the
utopian hopes versus the spotted actuality, the what-might-have-been against
what actually happened, and the trade-offs among various "goods" and
possible "bads". All of this can be done only by seeing how
technology interacts in different ways with different values and institutions,
indeed, with the entire sociocultural milieu.
What we see in Kranzberg’s ‘first
law of technology’ is a sense that technology is open-ended in relation to its
moral and ethical potentiality. Technology is neither inherently good, nor
inherently bad, but rather contains the capability to adapt and change in
relation to both (a) the user or employer of technology, and (b) the locality or
social and normative context in which that piece of technology is employed or engaged
with. Nonetheless, the key takeaway from Kranzberg’s first law should be the
overarching ‘unforeseen-ness’ that sits within the potentiality of all
technology. A single piece of technology can shift between the conditions of ‘good’,
‘bad’ and ‘neutral’ as a result of unforeseen circumstances, its past
trajectory redirected accordingly. The example that Kranzberg gives is the pesticide
dichlorodiphenyltrichloroethane (DDT), for which the unforeseen circumstances
of its ecological effects led to its ultimate regulation and its shift from so-called
‘good technology’ (one that dramatically aids agricultural yields by reducing
the crops spoiled by insects and pests) to ‘bad technology’,
one causing harm.
Hence, it is important that we
understand technology in the political domain with Kranzberg’s first law in
mind. For instance, amidst her exceptionally accessible theorising and
recollection of experiences with networked protest movements in the early 2010s,
Zeynep Tufekci recalls how “technology alters the landscape in which human
social interaction takes place”, citing Kranzberg’s first law with regard to the
effect of social media on the political sphere.[5]
Since social media began as a means for old friends, past schoolmates and family
members to reunite and communicate, it would not necessarily have been within
the foresight of their Silicon Valley founders, coders and investors that
Facebook, Twitter, BlackBerry Messenger and Reddit would become the political
salons for democratic activists across the globe; much in the way that the philosopher
and critical theorist Jürgen Habermas argues that coffee houses, as “seedbeds of
political unrest”, became the discursive site for political dissent and a cornerstone
of the public sphere during the Enlightenment.[6]
Similarly, the same could be said
for the use of precisely these same platforms for illiberal activity.
The fact that election rigging, the January 6th 2021 ‘Storming of
the Capitol’ and the 2017 ‘Unite the Right’ rally in Charlottesville, VA were all
coordinated on these same platforms demonstrates that social media, as a mode
of technology, is not simply ‘good’ in itself. Rather, it can lead just as much
to mass democratization as to democratic backsliding and mass surveillance,
given the context and the manner of its use outside its intended
purpose as technology, i.e. in a context that was not foreseen by the
neoliberal Prometheans of Silicon Valley.
Broadening our use of the term ‘technology’
to include a wider array of increasingly abstract entities, the creation and
use of certain algorithms illustrates precisely this
same necessity to keep Kranzberg’s first law of technology in mind. In 2018, whilst
researching face perception, Stanford University’s Yilun Wang and Michal
Kosinski developed an artificially intelligent computer algorithm that could
identify self-identified gay men and lesbian women from facial images with 91%
and 83% accuracy, respectively.[7]
On the one hand, such a technology (the algorithm) allows us to better
understand and research the relationship between sexuality and physical composition,
a link whose very existence is still subject to wide discussion. This
is the beneficial capacity of such a technology, but only within a liberal
humanist context. If such a technology were to be acquired by, say, the
governments of Afghanistan, Brunei, Iran, Mauritania, Nigeria, Qatar, Saudi
Arabia, Somalia, Sudan, the United Arab Emirates or Yemen, where
homosexuality is statutorily punishable by death, its
effect would be the thoroughly dystopian abetting of persecution and
state-sanctioned mass murder.
Although it would be the most
indulgent of speculation to claim any trace of influence in his work, when
reading Kranzberg’s thoughts one cannot help but be drawn back to the ‘Tool
analysis’ of Martin Heidegger in both ‘Being and Time’ and his ‘The
Question Concerning Technology’.[8]
It is well beyond the scope and intention of this short piece to comment on the
relationship between Heidegger’s grasp of technology and how this compares and
contrasts with that of Kranzberg. However, for the sake of drawing out a
surface-level connection, such a potential for moral and ethical shift with
regard to the unforeseen-ness of technological potentiality and usage can be
thought to stem, in a Heideggerian and phenomenological mode of analysis, from the
manner in which the physical usage, and purpose of usage, of a particular object
is permanently in flux. With potential usages and purposes of usage withdrawn,
ready to be unveiled, for Heidegger, technical objects as ‘equipment’ [Zeug]
slide between readiness-at-hand [Zuhandenheit] – a tool in its withdrawn
state – and presence-at-hand [Vorhandenheit] – a tool in a condition of
usable accessibility.
An aspect of overlap, perhaps,
between these two frameworks of thinking about technology, equipment and tools,
is the condition of flux they are in between usage as a particular piece of
technology, i.e., between its cryptic condition in withdrawal from employment
and its renewed presence through a distinct usage. For instance, a bayonet can
be thought of in one condition as a piece of ‘bad’ technology, present-at-hand as
a tool of war on the frontline of a conflict. After its withdrawal into a condition
of readiness-at-hand, slipping from existence as a forgotten object sitting in
the sheath attached to one’s belt, much like the way the floor-as-object or oxygen-as-object
slips from existence constantly for us, the bayonet regains a presence-at-hand
as a cooking implement around the campfire, or as an antique for aesthetic
show.
In this, any object, and thus by
extension any technology, is neither ‘good’, nor ‘bad’, nor ‘neutral’, but
intimately connected to the contextual milieu in which it is utilised,
withdrawn and redefined as present-at-hand – open always to the manner in which
an object may become present-at-hand in a state outside the purview
of its creator; open always to potentiality. This is known to any who have
attempted to use the handle of a screwdriver as a hammer, their teeth as
scissors, a calculator as a ruler, a boot as a drinking receptacle, or their Facebook
profile to incite political dissent.
Thus, through its emphasis on the
non-neutrality and a-morality of technology, what we can tease out from
Kranzberg’s first law is a certain social responsibility for the technology
that we forge. When technology is utilised outside of its initial context, in
an unforeseen and ‘regressive’ or ‘reactionary’ manner,[9]
it has surpassed the horizon of its creator, and at
this point becomes unanchored from its initial purpose or use as a piece of ‘good’
technology. In this vein, it is the responsibility of our
social and political associations to remain aware of this ever-immanent potential
shift.
To be clear, this is not to claim
that regulation must be all-encompassing, nor necessarily criminally enforceable.
Rather, the claim, as an extension of Kranzberg’s first law of
technology, is that we cannot assume that any mode or use of technology is
immune from moral or ethical inversion through a wholly novel usage of that
same object in a distinct context or environment that was not foreseen by its
creator. An appreciation of this condition may just make for the more
responsible public regulation of new technologies and a wider normative
appreciation of how any technology can be turned on its head for some gain by someone,
somewhere, for some purpose.
[1] Melvin Kranzberg (1986) ‘Technology
and History: “Kranzberg’s Law”’, Technology and Culture, 27(3):
544-560, pp. 545-548.
[2] E.g., Christopher Lasch (1984) The
Minimal Self: Psychic Survival in Troubled Times. New York: W.W. Norton and
Company.
[3] Quoted in Dennis H. Wrong (October
28th 1984) “The Case against Modernity”, New York Times Book Review.
Available at: https://www.nytimes.com/1984/10/28/books/the-case-against-modernity.html
(Accessed 22nd August 2022).
[4] Fernand Braudel (1981) Civilization
and Capitalism, 15th-18th Century – Volume 1 – The Structures
of Everyday Life. Berkeley, CA: University of California Press.
[5] Zeynep Tufekci (2017) Twitter
and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT:
Yale University Press. p. 124.
[6]
Jürgen Habermas (2015) The Structural
Transformation of the Public Sphere: An Inquiry Into a Category of Bourgeois
Society. Cambridge: Polity Press. p.59.
[7] Yilun Wang and Michal Kosinski (2018) ‘Deep Neural Networks Are More
Accurate than Humans at Detecting Sexual Orientation from Facial Images’,
Journal of Personality and Social Psychology, 114(2): 246-257. Discussed at
length in: David A. Kenny (2020) Interpersonal Perception: The Foundation of
Social Relationships. Second Edition. New York: Guilford Press. p.77.
[8] Martin Heidegger (2001) Being
and Time. Oxford: Blackwell; (2011) “The Question Concerning Technology”,
David Farrell Krell (Ed.), Basic Writings: From Being and Time (1927) to The
Task of Thinking (1964). Abingdon: Routledge. pp. 213-238. For an excellent
discussion of Heidegger’s ‘Tool analysis’ see the work of Speculative Realist
Graham Harman, a founder of Object-Oriented Ontology (OOO): Graham Harman
(2010) ‘Technology as Objects and Things in Heidegger’, Cambridge Journal of
Economics 34(1): 17-25; (2002) ‘Tool Being’: Heidegger and The
Metaphysics of Objects. Chicago, IL: Open Court.
[9] In contradistinction to the ‘progressive’
manner that most technology is created within, in relation to ‘necessity’ – see
Kranzberg’s second law.