
Relation Between Humans and Technology, Essay Example


Technological determinism is the theory that a society’s technology drives the development of both its social structures and its cultural values. The term is believed to have been coined by the American sociologist Thorstein Veblen (1857-1929). Technological momentum, a related theory describing the relationship between society and technology over time, was coined by Thomas P. Hughes, a historian of technology. Hughes uses two distinct models to examine how technology and society interact, and he argues that new, emerging technologies are the ones that shape society. These technologies become entrenched in ways that are lasting and difficult to reverse, and society has to change in order to survive.

Technology and the human community have grown together dramatically, since technology keeps the human community on its toes at every moment. Time is the unifying factor between social and technological determinism. Social determinism holds that societies have the power to control how a technology develops and is used, and this is true while the technology is young: society can still mould it to fit its needs. Once a technology has matured, however, it is society that must adjust to accommodate it. Current technology has markedly improved the education sector. According to Thomas Edison State College, “technology has improved the ability to do research and elevates our knowledge on contemporary problems and extends our ability to address those issues with scope and depth”. Educational research has become easier and more engaging with new technology, which makes it faster to acquire the materials needed for study. However, technology cannot work independently; it requires knowledge of cultural values to complement it (Murphie & Potts 19).

In relation to cultural values and history, technology was present even in ancient times, when hunting tools, sculptures and jewelry were shaped and decorated by carving or painting.

In leadership, technology has demonstrated its importance on numerous occasions, as in the case of the Second World War. Nazi Germany achieved technological innovations such as new rockets and jet planes, although these did not change the fate of the war; the outcome might have been different had the innovations come earlier and built greater momentum (Feenberg 2003). Technology has also improved communication standards, accountability and accessibility. Leaders can be reached easily through the Internet, telephones and fax machines. It is therefore imperative for leaders to gain knowledge and understanding of the vast technological advancements available and of how they aid managerial and decision-making processes.

Technology and human interaction need to be advocated in institutions and professions because they have changed thinking patterns and modes of presentation. This accommodates the needs and preferences of individuals in society, including the disabled and physically challenged. Technology’s influence on humanity has affected every sector of marketing (private, public, offline and online). This intersection has disrupted old business models, industrial theories and the systems of belief underpinning older knowledge and concepts (Croteau & Hoynes 305), pushing people up new learning curves and creating meaning and significance out of the changes emerging from technology and human interaction.

Technology, on the other hand, can also have negative effects, for example in war: the major technological powers of the Cold War, the Soviet Union and the United States, built destructive technologies but could not achieve clear victories, because using them would have destroyed everything and left nothing to win. In other cases, executives of a firm use these media (Croteau & Hoynes 306) to confront subordinate staff members and to sexually harass female colleagues. This has affected working conditions negatively, and production is hampered as a result.

Civilization and technology work hand in hand; for a civilization to advance, technology has to be in place first. Some historians believe that the more advanced a society’s technology, the more influential its civilization will be on neighboring cultures (Murphie & Potts 21). Technology has made life easier, as with the availability of ATMs, which give society faster and more convenient access to banking services. The changes currently seen in society arise from emerging technology. In ancient history, for example, communication took the form of signs (smoke, sounds and the like). Today, by contrast, people communicate through email, which negatively affects the social life of the human community since people rarely meet in person.

In conclusion, technology has brought both positive and negative effects to the human community. Although it has drawbacks, we cannot do without it, because it has become a necessity. In the current world, one can hardly live without a mobile phone; yet owning one also carries negative factors such as radiation, which can affect human health after prolonged use. This has in turn prompted further technology designed to reduce the radiation reaching the body, allowing people to use their phones more. This shows that technology and humanity co-exist with each other.

Works Cited

Croteau, D., and W. Hoynes. Media Society: Industries, Images and Audiences. Thousand Oaks: Pine Forge Press, 2003. Print.

Davies, F. “Technology and Business Ethics Theory.” Business Ethics: A European Review, vol. 6, no. 2, 2002, pp. 76-80. Print.

Feenberg, A. “What Is Philosophy of Technology?” 2003. Web. 26 Apr. 2012. <http://www.sfu.ca/~andrewf/books/What_is_Philosophy_of_Technology.pdf>

Murphie, A., and J. Potts. Culture and Technology. London: Palgrave, 2003. Print.



Our tools shape our selves

For Bernard Stiegler, a visionary philosopher of our digital age, technics is the defining feature of human experience.

by Bryan Norton

It has become almost impossible to separate the effects of digital technologies from our everyday experiences. Reality is parsed through glowing screens, unending data feeds, biometric feedback loops, digital prostheses and expanding networks that link our virtual selves to satellite arrays in geostationary orbit. Wristwatches interpret our physical condition by counting steps and heartbeats. Phones track how we spend our time online, map the geographic location of the places we visit and record our histories in digital archives. Social media platforms forge alliances and create new political possibilities. And vast wireless networks – connecting satellites, drones and ‘smart’ weapons – determine how the wars of our era are being waged. Our experiences of the world are soaked with digital technologies.

But for the French philosopher Bernard Stiegler, one of the earliest and foremost theorists of our digital age, understanding the world requires us to move beyond the standard view of technology. Stiegler believed that technology is not just about the effects of digital tools and the ways that they impact our lives. It is not just about how devices are created and wielded by powerful organisations, nation-states or individuals. Our relationship with technology is about something deeper and more fundamental. It is about technics.

According to Stiegler, technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible, whether that is a state-of-the-art brain-computer interface such as Neuralink, or a prehistoric flint axe used to clear a forest. But don’t be mistaken: ‘technics’ is not simply another word for ‘technology’. As Martin Heidegger wrote in his essay ‘The Question Concerning Technology’ (1954), which used the German term Technik instead of Technologie in the original title: the ‘essence of technology is by no means anything technological.’ This aligns with the history of the word: the etymology of ‘technics’ leads us back to something like the ancient Greek term for art – technē. The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.

This is Stiegler’s legacy. Throughout his life, he took this idea of technics, first explored while he was imprisoned for armed robbery, further than anyone else. But his ideas have often been overlooked and misunderstood, even before he died in 2020. Today, they are more necessary than ever. How else can we learn to disentangle the effects of digital technologies from our everyday experiences? How else can we begin to grasp the history of our strange reality?

Stiegler’s path to becoming the pre-eminent philosopher of our digital age was anything but straightforward. He was born in Villebon-sur-Yvette, south of Paris, in 1952, during a period of affluence and rejuvenation in France that followed the devastation of the Second World War. At 16, Stiegler participated in the revolutionary wave of 1968 (he would later become a member of the Communist Party), when a radical uprising of students and workers forced the president Charles de Gaulle to seek temporary refuge across the border in West Germany. However, after a new election was called and the barricades were dismantled, Stiegler became disenchanted with traditional Marxism, as well as the political trends circulating in France at the time. The Left in France seemed hopelessly torn between the postwar existentialism of Jean-Paul Sartre and the anti-humanism of Louis Althusser. While Sartre insisted on humans’ creative capacity to shape their own destiny, Althusser argued that the pervasiveness of ideology in capitalist society had left us helplessly entrenched in systems of power beyond our control. Neither of these options satisfied Stiegler because neither could account for the rapid rise of a new historical force: electronic technology. By the 1970s and ’80s, Stiegler sensed that this new technology was redefining our relationship to ourselves, to the world, and to each other. To account for these new conditions, he believed the history of philosophy would have to be rewritten from the ground up, from the perspective of technics. Neither existentialism nor Marxism nor any other school of philosophy had come close to acknowledging the fundamental link between human existence and the evolutionary history of tools.


In the decade after 1968, Stiegler opened a jazz club in Toulouse that was shut down by the police a few years later for illegal prostitution. Desperate to make ends meet, Stiegler turned to robbing banks to pay off his debts and feed his family. In 1978, he was arrested for armed robbery and sentenced to five years in prison. A high-school dropout who was never comfortable in institutional settings, Stiegler requested his own cell when he first arrived in prison, and went on a hunger strike until it was granted. After the warden finally acquiesced, Stiegler began taking note of how his relationship to the outside world was mediated through reading and writing. This would be a crucial realisation. Through books, paper and pencils, he was able to interface with people and places beyond the prison walls.

It was during his time behind bars that Stiegler began to study philosophy more intently, devouring any books he could get his hands on. In his philosophical memoir Acting Out (2009), Stiegler describes his time in prison as one of radical self-exploration and philosophical experimentation. He read classic works of Greek philosophy, studied English and memorised modern poetry, but the book that really drew his attention was Plato’s Phaedrus. In this dialogue between Socrates and Phaedrus, Plato outlines his concept of anamnesis, a theory of learning that states the acquisition of new knowledge is just a process of remembering what we once knew in a previous life. Caught in an endless cycle of death and rebirth, we forget what we know each time we are reborn. For Stiegler, this idea of learning as recollection would become less spiritual and more material: learning and memory are tied inextricably to technics. Through the tools we use – including books, writing, archives – we can store and preserve vast amounts of knowledge.

After an initial attempt at writing fiction in prison, Stiegler enrolled in a philosophy programme designed for inmates. While still serving his sentence, he finished a degree in philosophy and corresponded with prominent intellectuals such as the philosopher and translator Gérard Granel, who was a well-connected professor at the University of Toulouse-Le Mirail (later known as the University of Toulouse-Jean Jaurès). Granel introduced Stiegler to some of the most prominent figures in philosophy at the time, including Jean-François Lyotard and Jacques Derrida. Lyotard would oversee Stiegler’s master’s thesis after his eventual release; Derrida would supervise his doctoral dissertation, completed in 1993, which was reworked and published a year later as the first volume in his Technics and Time series. With the help of these philosophers and their novel ideas, Stiegler began to reshape his earlier political commitment to Marxist materialism, seeking to account for the ways that new technologies shape the world.

By the start of the 1970s, a growing number of philosophers and political theorists began calling into question the immediacy of our lived experience. The world around us was no longer seen by these thinkers as something that was simply given, as it had been for philosophers such as Immanuel Kant and Edmund Husserl. The world instead presented itself as a built environment composed of things such as roads, power plants and houses, all made possible by political institutions, cultural practices and social norms. And so, reality also appeared to be a construction, not a given.

One of the French philosophers who interrogated the immediacy of reality most closely was Louis Althusser. In his essay ‘Ideology and Ideological State Apparatuses’, published in 1970, years before Stiegler was taught by him, Althusser suggests that ideology is not something that an individual believes in, but something that goes far beyond the scale of a single person, or even a community. Just as we unthinkingly turn around when we hear our name shouted from behind, ideology has a hold on us that is both automatic and unconscious – it seeps in from outside. Michel Foucault, a former student of Althusser at the École Normale Supérieure in Paris, developed a theory of power that functions in a similar way. In Discipline and Punish (1975) and elsewhere, Foucault argues that social and political power is not concentrated in individuals but is produced by ‘discourses, institutions, architectural forms, regulatory decisions, laws, administrative measures, scientific statements, philosophical, moral and philanthropic propositions’. Foucault’s insight was to show how power shapes every facet of the world, from classroom interactions between a teacher and student to negotiations of a trade agreement between representatives of two different nations. From this perspective, power is constituted in and through material practices, rather than something possessed by individual subjects.


These are the foundations on which Stiegler assembled his idea of technics. Though he appreciated the ways that Foucault and Althusser had tried to account for technology, he remained dissatisfied by the lack of attention to particular types of technology – not to mention the fact that neither thinker had offered any real alternatives to the forms of power they described. In his book Taking Care of Youth and the Generations (2008), Stiegler explains that he was able to move beyond Foucault with the help of his mentor Derrida’s concept of the pharmakon. In his essay ‘Plato’s Pharmacy’ (1972), Derrida began developing the idea as he explored how our ability to write can create and undermine (‘cure’ and ‘poison’) an individual subject’s sense of identity. For Derrida, the act of writing – itself a kind of technology – has a Janus-faced relationship to individual memory. Though it allows us to store knowledge and experience across vast periods of time, writing disincentivises us from practising our own mental capacity for recollection. The written word short-circuits the immediate connection between lived experience and internal memory. It ‘cures’ our cognitive limits, but also ‘poisons’ our cognition by limiting our abilities.

In the late 20th century, Stiegler began applying this idea to new media technologies, such as television, which led to the development of a concept he called pharmacology – an idea that suggests we don’t simply ‘use’ our digital tools. Instead, they enter and pharmacologically change us, like medicinal drugs. Today, we can take this analogy even further. The internet presents us with a massive archive of formatted, readily accessible information. Sites such as Wikipedia contain terabytes of knowledge, accumulated and passed down over millennia. At the same time, this exchange of unprecedented amounts of information enables the dissemination of an unprecedented amount of misinformation, conspiracy theories, and other harmful content. The digital is both a poison and a cure, as Derrida would say.

This kind of polyvalence led Stiegler to think more deliberately about technics rather than technology. For Stiegler, there are inherent risks in thinking in terms of the latter: the more ubiquitous that digital technologies become in our lives, the easier it is to forget that these tools are social products that have been constructed by our fellow humans. How we consume music, the paths we take to get from point A to point B, how we share ourselves with others: all of these aspects of daily life have been reshaped by new technologies and the humans that produce them. Yet we rarely stop to reflect on what this means for us. Stiegler believed this act of forgetting creates a deep crisis for all facets of human experience. By forgetting, we lose our all-important capacity to imagine alternative ways of living. The future appears limited, even predetermined, by new technology.

In the English-speaking world, Stiegler is best known for his first book Technics and Time, 1: The Fault of Epimetheus (1994). In the first sentence, he highlights the vital link between our understanding of the technologies we use and our capacity to imagine the future. ‘The object of this work is technics,’ he writes, ‘apprehended as the horizon of all possibility to come and of all possibility of a future.’ He views our relationship with tools as the determining force for all future possibilities; technics is the defining feature of human experience, one that has been overlooked by philosophers from Plato and Aristotle down to the present. While René Descartes, Husserl and other thinkers asked important questions about consciousness and lived experience (phenomenology), and the nature of truth (metaphysics) or knowledge (epistemology), they failed to account for the ways that technologies help us find – or guide us toward – answers to these questions. In the history of philosophy, ‘Technics is the unthought,’ according to Stiegler.

To further stress the importance of technics, Stiegler turns to the creation myth told by the Greek poet Hesiod in Works and Days, written around 700 BCE. During the world’s creation, Zeus asks the Titan Epimetheus to distribute individual talents to each species. Epimetheus gives wings to birds so they can fly, and fins to fish so they can swim. By the time he gets to humans, however, Epimetheus has no talents left over. Epimetheus, whose name (according to Stiegler) means the ‘forgetful one’ in Greek, turns to his brother Prometheus for help. Prometheus then steals fire from the gods, presenting it to humans in place of a biological talent. Humans, once more, are born out of an act of forgetting, just like in Plato’s theory of anamnesis. The difference with Hesiod’s story is that technics here provides a material basis for human experience. Bereft of any physiological talents, Homo sapiens must survive by using tools, beginning with fire.


The pharmacology of technics, for Stiegler, presents opportunities for positive or negative relationships with tools. ‘But where the danger lies,’ writes the poet Friedrich Hölderlin in a quote Stiegler often turned to, ‘also grows the saving power.’ While Derrida focuses on the ability of the written word to subvert the sovereignty of the individual subject, Stiegler widens this understanding of pharmacology to include a variety of media and technologies. Not just writing, but factories, server farms and even psychotropic drugs possess the pharmacological capacity to poison or cure our world and, crucially, our understanding of it. Technological development can destroy our sense of ourselves as rational, coherent subjects, leading to widespread suffering and destruction. But tools can also provide us with a new sense of what it means to be human, leading to new modes of expression and cultural practices.

In Symbolic Misery, Volume 2: The Catastrophe of the Sensible (2015), Stiegler considers the effect that new technologies, especially those accompanying industrialisation, have had on art and music. Industry, defined by mass production and standardisation, is often regarded as antithetical to artistic freedom and expression. But Stiegler urges us to take a closer look at art history to see how artists responded to industrialisation. In response to the standardising effects of new machinery, for example, Marcel Duchamp and other members of the 20th-century avant-garde used industrial tools to invent novel forms of creative expression. In the painting Nude Descending a Staircase, No 2 (1912), Duchamp employed the new temporal perspectives made possible by photography and cinema to paint a radically different kind of portrait. Inspired by the camera’s ability to capture movement, frame by frame, Duchamp paints a nude model who appears in multiple instants at once, like a series of time-lapse photographs superimposed onto each other. The image became an immediate sensation, an icon of modernity and the resulting entanglement of art and industrial technology.

Technical innovations are never without political and social implications for Stiegler. The phonograph, for example, may have standardised classical musical performances after its invention in the late 1800s, but it also contributed to the development of jazz, a genre that was popular among musicians who were barred from accessing the elite world of classical music. Thanks to the gramophone, Black musicians such as the pianist and composer Duke Ellington were able to learn their instruments by ear, without first learning to read musical notation. The phonograph’s industrialisation of musical performance paradoxically led to the free-flowing improvisation of jazz performers.

Technics draws our attention to the world-making capabilities of our tools, while reminding us of the constructed nature of our technological reality. Stiegler’s capacious understanding of technics, encompassing everything from early agricultural tools to the television set, does not disregard new innovations, either. In 2006, Stiegler founded the Institute for Research and Innovation, an organisation at the Centre Pompidou in Paris devoted to exploring the impact digital technology has on contemporary society. Stiegler’s belief in the power of technology to shape the world around us has often led to the charge that he is a techno-determinist who believes the entire course of history is shaped by tools and machines. It’s true that Stiegler thinks technology defines who we are as humans, but this process does not always lock us into predetermined outcomes. Instead, it simultaneously provides us with a material horizon of possible experience. Stiegler’s theory of technics urges us to rethink the history of philosophy, art and politics in order that we might better understand how our world has been shaped by technology. And by acquiring this historical consciousness, he hopes that we will ultimately design better tools, using technology to improve our world in meaningful ways.

This doesn’t mean Stiegler is a techno-optimist, either, who blindly sees digital technology as a panacea for our problems. One particular concern he expresses about digital technology is its capacity to standardise the world we inhabit. Big data, for Stiegler, threatens to limit our sense of what is possible, rather than broadening our horizons and opening new opportunities for creative expression. Just as Hollywood films in the 20th century manufactured and distributed the ideology of consumer capitalism to the rest of the globe, Stiegler suggests that tech firms such as Google and Apple often disseminate values that are hidden from view. A potent example of this can be found in the first fully AI-judged beauty pageant. As discussed by the sociologist Ruha Benjamin in her book Race After Technology (2019), the developers of Beauty.AI advertised the contest as an opportunity for beauty to be judged in a way that was free of prejudice. What they found, however, was that the tool they had designed exhibited an overwhelming preference for white contestants.


In Automatic Society, Volume 1: The Future of Work (2016), Stiegler shows how big data can standardise our world by reorganising work and employment. Digital tools were first seen as a disruptive force that could break the monotonous rhythms of large industry, but the rise of flexible forms of employment in the gig economy has created a massive underclass. A new proletariat of Uber drivers and other precarious workers now labour under extremely unstable conditions. They are denied even the traditional protections of working-class employment. The digital economy doesn’t always offer desirable alternatives as former ways of working and living are destroyed.

A particularly pressing concern Stiegler took up before his untimely death in 2020 is the capacity of digital tools to surveil us. The rise of big tech firms such as Google and Amazon has meant the intrusion of surveillance tools into every aspect of our lives. Smart homes have round-the-clock video feeds, and marketing companies spend billions collecting data about everything we do online. In his last two books published in English, The Neganthropocene (2018) and The Age of Disruption: Technology and Madness in Computational Capitalism (2019), Stiegler suggests that the growth of widespread surveillance tools is at odds with the pharmacological promise of new technology. Though tracking tools can be useful by, for example, limiting the spread of harmful diseases, they are also used to deny us worlds of possible experience.

Technology, for better or worse, affects every aspect of our lives. Our very sense of who we are is shaped and reshaped by the tools we have at our disposal. The problem, for Stiegler, is that when we pay too much attention to our tools, rather than how they are developed and deployed, we fail to understand our reality. We become trapped, merely describing the technological world on its own terms and making it even harder to untangle the effects of digital technologies from our everyday experiences. By encouraging us to pay closer attention to this world-making capacity, with its potential to harm and heal, Stiegler is showing us what else is possible. There are other ways of living, of being, of evolving. It is technics, not technology, that will give the future its new face.


200-500 Word Example Essays about Technology

Got an essay assignment about technology? Check out these examples to inspire you.

Technology is a rapidly evolving field that has completely changed the way we live, work, and interact with one another. Technology has profoundly impacted our daily lives, from how we communicate with friends and family to how we access information and complete tasks. As a result, it's no surprise that technology is a popular topic for students writing essays.

But writing a technology essay can be challenging, especially for those needing more time or help with writer's block. This is where Jenni.ai comes in. Jenni.ai is an innovative AI tool explicitly designed for students who need help writing essays. With Jenni.ai, students can quickly and easily generate essays on various topics, including technology.

This blog post aims to provide readers with various example essays on technology, all generated by Jenni.ai. These essays will be a valuable resource for students looking for inspiration or guidance as they work on their essays. By reading through these example essays, students can better understand how technology can be approached and discussed in an essay.

Moreover, by signing up for a free trial with Jenni.ai, students can take advantage of this innovative tool and receive even more support as they work on their essays. Jenni.ai is designed to help students write essays faster and more efficiently, so they can focus on what truly matters – learning and growing as students. Whether you're a student who is struggling with writer's block or simply looking for a convenient way to generate essays on a wide range of topics, Jenni.ai is the perfect solution.

The Impact of Technology on Society and Culture

Introduction:

Technology has become an integral part of our daily lives and has dramatically impacted how we interact, communicate, and carry out various activities. Technological advancements have brought positive and negative changes to society and culture. In this article, we will explore the impact of technology on society and culture and how it has influenced different aspects of our lives.

Positive impact on communication:

Technology has dramatically improved communication and made it easier for people to connect from anywhere in the world. Social media platforms, instant messaging, and video conferencing have brought people closer, bridging geographical distances and cultural differences. This has made it easier for people to share information, exchange ideas, and collaborate on projects.

Positive impact on education:

Students and instructors now have access to a wealth of knowledge and resources because of the effect of technology on education. Students may now study at their own pace and from any location thanks to online learning platforms, educational applications, and digital textbooks.

Negative impact on critical thinking and creativity:

Technological advancements have resulted in a reduction in critical thinking and creativity. With so much information at our fingertips, individuals have become more passive in their learning, relying on the internet for solutions rather than logic and inventiveness. As a result, independent thinking and problem-solving abilities have declined.

Positive impact on entertainment:

Technology has transformed how we access and consume entertainment. People may now access a wide range of entertainment alternatives from the comfort of their own homes thanks to streaming services, gaming platforms, and online content makers. The entertainment business has entered a new age of creativity and invention as a result of this.

Negative impact on attention span:

However, the continual bombardment of information and technological stimulation has also reduced attention spans and the capacity to focus. People are easily distracted and struggle to stay focused on a single activity for long. This has hampered productivity and the ability to complete tasks.

The Ethics of Artificial Intelligence And Machine Learning

The development of artificial intelligence (AI) and machine learning (ML) technologies has been one of the most significant technological developments of the past several decades. These cutting-edge technologies have the potential to alter several sectors of society, including commerce, industry, healthcare, and entertainment. 

As with any new and quickly advancing technology, AI and ML ethics must be carefully studied. The usage of these technologies presents significant concerns around privacy, accountability, and control. As the use of AI and ML grows more ubiquitous, we must assess their possible influence on society and investigate the ethical issues that must be taken into account as these technologies continue to develop.

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence is the simulation of human intelligence in machines designed to think and act like humans. Machine learning is a subfield of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed.
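
To make that definition concrete, here is a minimal sketch of machine learning in practice, written in Python with the scikit-learn library (the library choice and the toy data are assumptions for illustration; the essay names no specific tools). The program is never told the rule mapping inputs to outputs; it estimates one from examples.

```python
# A minimal illustration of machine learning: the model is never given
# the rule y = 2x + 1 explicitly; it infers it from example data.
from sklearn.linear_model import LinearRegression

# Training data: inputs and the outputs observed for them.
X = [[1], [2], [3], [4], [5]]          # feature values
y = [3, 5, 7, 9, 11]                   # targets generated by y = 2x + 1

model = LinearRegression()
model.fit(X, y)                        # "learning": fit parameters to the data

# The model now generalizes to an input it has never seen.
print(model.predict([[10]]))           # ~[21.0]
print(model.coef_, model.intercept_)   # ~[2.0] and ~1.0: the recovered rule
```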

The impact of AI and ML on Society

The use of AI and ML in various industries, such as healthcare, finance, and retail, has brought many benefits. For example, AI-powered medical diagnosis systems can identify diseases faster and more accurately than human doctors. However, there are also concerns about job displacement and the potential for AI to perpetuate societal biases.

The Ethical Considerations of AI and ML

A. Bias in AI algorithms

One of the critical ethical concerns about AI and ML is the potential for algorithms to perpetuate existing biases. This can occur if the data used to train these algorithms reflects the preferences of the people who created it. As a result, AI systems can perpetuate these biases and discriminate against certain groups of people.
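
The following sketch illustrates that mechanism with a deliberately simple, entirely hypothetical "hiring" model in plain Python (all data is invented for the example). The model merely learns to repeat past decisions, so a skew in the historical labels reappears in its predictions, even though the algorithm itself contains no explicit rule about groups.

```python
# Hypothetical example: historical hiring decisions disfavored group "B".
# A model that simply learns past decisions reproduces that bias.
from collections import defaultdict

# Records of (group, qualified?, past decision). Qualified candidates from
# group B were rejected anyway -- the bias lives in these labels.
history = [
    ("A", True, "hire"), ("A", True, "hire"), ("A", False, "reject"),
    ("B", True, "reject"), ("B", True, "reject"), ("B", False, "reject"),
]

# "Training": count the decisions seen for each (group, qualified) pair.
counts = defaultdict(lambda: defaultdict(int))
for group, qualified, decision in history:
    counts[(group, qualified)][decision] += 1

def predict(group, qualified):
    """Predict by majority vote over the historical decisions."""
    seen = counts[(group, qualified)]
    return max(seen, key=seen.get)

print(predict("A", True))   # "hire"
print(predict("B", True))   # "reject" -- equally qualified, different outcome
```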

B. Responsibility for AI-generated decisions

Another ethical concern is the responsibility for decisions made by AI systems. For example, who is responsible for the damage if a self-driving car causes an accident? The manufacturer of the vehicle, the software developer, or the AI algorithm itself?

C. The potential for misuse of AI and ML

AI and ML can also be used for malicious purposes, such as cyberattacks and misinformation. The lack of regulation and oversight in the development and use of these technologies makes such misuse difficult to prevent.

The developments in AI and ML have given numerous benefits to humanity, but they also present significant ethical concerns that must be addressed. We must assess the repercussions of new technologies on society, implement methods to limit the associated dangers, and guarantee that they are utilized for the greater good. As AI and ML continue to play an ever-increasing role in our daily lives, we must engage in an open and frank discussion regarding their ethics.

The Future of Work And Automation

Rapid technological breakthroughs in recent years have brought about considerable changes in our way of life and work. Concerns regarding the influence of artificial intelligence and machine learning on the future of work and employment have increased alongside the development of these technologies. This article will examine the possible advantages and disadvantages of automation and its influence on the labor market, employees, and the economy.

The Advantages of Automation

Automation in the workplace offers various benefits, including higher efficiency and production, fewer mistakes, and enhanced precision. Automated processes may accomplish repetitive jobs quickly and precisely, allowing employees to concentrate on more complex and creative activities. Additionally, automation may save organizations money since it removes the need to pay for labor and minimizes the danger of workplace accidents.

The Potential Disadvantages of Automation

However, automation has significant disadvantages, including job loss and income stagnation. As robots and computers replace human labor in particular industries, there is a danger that many workers may lose their jobs, resulting in higher unemployment and more significant economic disparity. Moreover, if automation is not adequately regulated and managed, it might lead to stagnant wages and a deterioration in employees' standard of life.

The Future of Work and Automation

Despite these difficulties, automation will likely influence how labor is done. As a result, firms, employees, and governments must take early measures to solve possible issues and reap the rewards of automation. This might entail funding worker retraining programs, enhancing education and skill development, and implementing regulations that support equality and justice at work.

The Need for Ethical Considerations

We must consider the ethical ramifications of automation and its effects on society as technology develops. The impact on employees and their rights, possible hazards to privacy and security, and the duty of corporations and governments to ensure that automation is utilized responsibly and ethically are all factors to be taken into account.

Conclusion:

To summarise, the future of employment and automation will most certainly be defined by a complex interaction of technological advances, economic trends, and cultural ideals. All stakeholders must work together to handle the problems and possibilities presented by automation and ensure that technology is employed to benefit society as a whole.

The Role of Technology in Education

Introduction

Nearly every part of our lives has been transformed by technology, and education is no different. Today's students have greater access to knowledge, opportunities, and resources than ever before, and technology is becoming a more significant part of their educational experience. Technology is transforming how we think about education and creating new opportunities for learners of all ages, from online courses and virtual classrooms to instructional applications and augmented reality.

Technology's Benefits for Education

The capacity to tailor learning is one of technology's most significant benefits in education. Students may customize their education to meet their unique needs and interests since they can access online information and tools. 

For instance, people can enroll in online classes on topics they are interested in, get tailored feedback on their work, and engage in virtual discussions with peers and subject matter experts worldwide. As a result, pupils are better able to acquire and develop the abilities and information necessary for success.

Challenges and Concerns

Despite the numerous advantages of technology in education, there are also obstacles and considerations to consider. One issue is the growing reliance on technology and the possibility that pupils would become overly dependent on it. This might result in a lack of critical thinking and problem-solving abilities, as students may become passive learners who only follow instructions and rely on technology to complete their assignments.

Another obstacle is the digital divide between those who have access to technology and those who do not. This division can exacerbate the achievement gap between pupils and produce unequal opportunities for educational and professional growth. To reduce these consequences, all students must have access to the technology and resources necessary for success.

In conclusion, technology is rapidly becoming an integral part of the classroom experience and has the potential to radically alter the way we learn.

Technology can help students flourish and realize their full potential by giving them access to individualized instruction, tools, and opportunities. While the benefits of technology in the classroom are undeniable, it's crucial to be mindful of the risks and take precautions to guarantee that all kids have access to the tools they need to thrive.

The Influence of Technology On Personal Relationships And Communication 

Technological advancements have profoundly altered how individuals connect and exchange information. It has changed the world in many ways in only a few decades. Because of the rise of the internet and various social media sites, maintaining relationships with people from all walks of life is now simpler than ever. 

However, concerns about how these developments may affect interpersonal connections and dialogue are inevitable in an era of rapid technological growth. In this piece, we'll discuss how the prevalence of digital media has altered our interpersonal connections and the language we use to express ourselves.

The Effect on Face-to-Face Interaction:

The disruption of face-to-face communication is a particularly stark example of how technology has impacted human connections. The quality of interpersonal connections has suffered due to people's growing preference for digital over in-person communication. Technology has been shown to reduce the use of nonverbal signals such as facial expressions, tone of voice, and other indicators of emotional investment in a relationship.

Positive Impact on Long-Distance Relationships:

Yet there are positives to be found as well. Long-distance relationships have also benefited from technological advancements. The development of technologies such as video conferencing, instant messaging, and social media has made it possible for individuals to keep in touch with distant loved ones. It has become simpler for individuals to stay in touch and feel connected despite geographical distance.

The Effects of Social Media on Personal Connections:

The widespread use of social media has had far-reaching consequences, especially on the quality of interpersonal interactions. Social media has positive and harmful effects on relationships since it allows people to keep in touch and share life's milestones.

Unfortunately, social media has made it all too easy to compare oneself to others, which may lead to emotions of jealousy and a general decline in confidence. Furthermore, social media might cause people to have inflated expectations of themselves and their relationships.

A Personal Perspective on the Intersection of Technology and Romance

Technological advancements have also altered physical touch and closeness. Virtual reality and other technologies have allowed people to feel physical contact and familiarity in a digital setting. This might be a promising breakthrough, but it has some potential downsides. 

Experts are concerned that people's growing dependence on technology for intimacy may lead to less time spent communicating face-to-face and less emphasis on physical contact, both of which are important for maintaining good relationships.

In conclusion, technological advancements have significantly affected the quality of interpersonal connections and the exchange of information. Even though technology has made it simpler to maintain personal relationships, it has chilled interpersonal interactions between people. 

Keeping tabs on how technology is changing our lives and making adjustments as necessary is essential as we move forward. Setting boundaries and prioritizing in-person conversation and physical touch in close relationships may help reduce the harm technology causes.

The Security and Privacy Implications of Increased Technology Use and Data Collection

The fast development of technology over the past few decades has made its way into every aspect of our lives. Technology has improved many facets of our lives, from communication to commerce. However, significant privacy and security problems have emerged due to the broad adoption of technology. In this essay, we'll look at how the widespread use of technological solutions and the subsequent explosion in collected data affect our right to privacy and security.

The essay covers the following areas:

  • Data Mining and Privacy Concerns
  • Risk of Cyber Attacks and Data Loss
  • The Widespread Use of Encryption and Other Safety Mechanisms
  • The Privacy and Security of the Future in a Globalized Information Age

Obtaining and Using Individual Information

The acquisition and use of private information is a significant cause for privacy alarm in the digital age. Data about their customers' online habits, interests, and personal information is a valuable commodity for many internet firms. Besides tailored advertising, this information may be used for other, less desirable things like identity theft or cyber assaults.

Moreover, because of the lack of transparency around the gathering of personal information, many individuals are unaware of what data is being collected from them or how it is being used. Privacy and data security have become increasingly contentious as a result.

Data breaches and other forms of cyber-attack pose a severe risk.

The risk of cyber assaults and data breaches is another big issue of worry. More people are using more devices, which means more opportunities for cybercriminals to steal private information like credit card numbers and other identifying data. This may cause monetary damages and harm one's reputation or identity.

Many high-profile data breaches have occurred in recent years, exposing the personal information of millions of individuals and raising serious concerns about the safety of this information. Companies and governments have responded to this problem by adopting new security methods like encryption and multi-factor authentication.

Many businesses now use encryption and other security measures to protect themselves from cybercriminals and data thieves. Encryption keeps sensitive information hidden by encoding it so that only those possessing the corresponding key can decipher it. This prevents private information like bank account numbers or social security numbers from falling into the wrong hands.
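
As a concrete illustration of the symmetric encryption described above, here is a short sketch using Python's third-party cryptography package (an assumption for the example; the essay names no particular tool). Anyone may see the ciphertext, but only a holder of the key can recover the message.

```python
# A minimal sketch of symmetric encryption with the "cryptography" package:
# the ciphertext is unreadable without the secret key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key; must be stored securely
cipher = Fernet(key)

token = cipher.encrypt(b"account number: 12345678")
print(token)                       # opaque ciphertext, safe to transmit

print(cipher.decrypt(token))       # b'account number: 12345678'
# Decrypting with any other key raises InvalidToken -- the data stays hidden.
```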

Firewalls, virus scanners, and two-factor authentication are all additional security precautions that may be used alongside encryption. While these safeguards do much to stave off cyber attacks, they are not entirely impregnable, and data breaches are still possible.

The Future of Privacy and Security in a Technologically Advanced World

There's little doubt that concerns about privacy and security will persist even as technology improves. There must be strict safeguards to secure people's private information as more and more of it is transferred and kept digitally. To achieve this goal, it may be necessary to implement novel technologies and heightened levels of protection and to revise the rules and regulations regulating the collection and storage of private information.

Individuals and businesses are understandably concerned about the security and privacy consequences of widespread technological use and data collecting. There are numerous obstacles to overcome in a society where technology plays an increasingly important role, from acquiring and using personal data to the risk of cyber-attacks and data breaches. Companies and governments must keep spending money on security measures and working to educate people about the significance of privacy and security if personal data is to remain safe.

In conclusion, technology has profoundly impacted virtually every aspect of our lives, including society and culture, ethics, work, education, personal relationships, and security and privacy. The rise of artificial intelligence and machine learning has presented new ethical considerations, while automation is transforming the future of work. 

In education, technology has revolutionized the way we learn and access information. At the same time, our dependence on technology has brought new challenges in terms of personal relationships, communication, security, and privacy.

Jenni.ai is an AI tool that can help students write essays easily and quickly. Whether you're looking for example essays on any of these topics or seeking assistance in writing your own, Jenni.ai offers a convenient solution. Sign up for a free trial today and experience the benefits of AI-powered writing assistance for yourself.


How Is Technology Changing the World, and How Should the World Change Technology?

by Josephine Wolff

Josephine Wolff, "How Is Technology Changing the World, and How Should the World Change Technology?" Global Perspectives 1 February 2021; 2 (1): 27353. doi: https://doi.org/10.1525/gp.2021.27353

Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019).

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.

These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open source, open systems that are highly distributed and decentralized, to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.

Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942). But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more dangerous view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010). More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970).

At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one that believes humans use it as a tool to achieve greater goals, and a fundamentally pessimistic one that holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995). Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.

These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing technologists with mechanisms for thinking through the impacts of their work, as well as possible alternative paths to different outcomes (Verbeek 2006). And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.

Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009). In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after the fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.

In addition to this varied set of national regulatory approaches, a range of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.

At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology and for their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980). The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes that carry little real weight or influence.

Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have—often unwittingly—exacerbated, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019). But governments often have few incentives to engage in this area: setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging; enforcement of those rules can be a significant undertaking requiring considerable expertise; and the tech sector is a major source of jobs and revenue for many countries, which may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to engage more fully with technologists and to understand the implications of their decisions.

At the same time that tech companies seem unwilling or unable to rein in their creations, many fear that these companies wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and to control access to information, platforms, and audiences (Kilovaty 2020). Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves, for widespread misinformation campaigns that serve political purposes (Nye 2018).

Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, oversight of the biggest tech companies, which have more resources, reach, and power than many nations, can prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990). The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?

Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?

These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even on the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and processes for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?

These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.

Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.

Technology, Human Relationships, and Human Interaction, by Angela N. Bullock and Alex D. Colvin. Last modified: 27 April 2017. DOI: 10.1093/obo/9780195389678-0249

The utilization of technology to create and maintain relationships among people has become commonplace. According to the Pew Research Center, the percentage of American adults who own a tablet computer increased from 3 percent in 2010 to 45 percent in 2015, and the percentage of American adults who own a cell phone increased from 53 percent in 2000 to 92 percent in 2015. Furthermore, in 2015, 76 percent of online adults used some type of social networking site, compared to 8 percent in 2005. Technology is often introduced into a social system with the stated intention of making life easier for people. As technology becomes more pervasive in everyday life, the assessment of technology’s presence in relationships, and of its impact on how humans interact with one another, is an emerging area of study. There are many perspectives on the relationship between technology and human interactions and relationships. It is purported that the integration of technologies into everyday life can have profound effects on human relationships, in both positive and negative ways. Most notably, technologies bear on, or interfere with, how individuals engage in interpersonal relationships, behave within relationships, and project feelings and meanings, including displays of emotion and love. Essentially, the new technological landscape is now bound up with what it means to be human.

This section presents a sample of early works that guided research into the fostering of relationships and interpersonal interactions through technology. Kiesler, et al. 1984 looks beyond the efficiency and technical capabilities of computer communication technologies and provides insight into the psychological, social, and cultural significance of technology. Jones 1994 provides a comprehensive examination of the varying aspects of social relationships in cyberspace. Preliminary studies that provide best-practice recommendations for the adoption of technology-based intervention in social work practice include Pardeck and Schulte 1990; Cwikel and Cnaan 1991; Schopler, et al. 1998; and Gonchar and Adams 2000. Lea and Spears 1995; Kraut, et al. 1998; and Nie and Erbring 2000 offer early insight into how the Internet began to shape the way humans interact.

Cwikel, Julie, and Ram Cnaan. 1991. Ethical dilemmas in applying second-wave information technology to social work practice. Social Work 36.2: 114–120.

These authors consider ethical dilemmas brought about by the use of information technology in social work practice. They examine the effects on the client–worker relationship of the use of client databases, expert systems, therapeutic programs, and telecommunications.

Gonchar, Nancy, and Joan R. Adams. 2000. Living in cyberspace: Recognizing the importance of the virtual world in social work assessments. Journal of Social Work Education 36: 587–600.

Utilizing the person-in-environment approach, this source explores the opportunities online communication provides individuals in fostering relationships, either healthy or unhealthy.

Jones, Steve, ed. 1994. CyberSociety: Computer-mediated communication and community. Thousand Oaks, CA: SAGE.

Explores the construction, maintenance, and mediation of emerging cybersocieties. Aspects of social relationships generated by computer-mediated communication are discussed.

Kiesler, Sara, Jane Siegel, and Timothy W. McGuire. 1984. Social psychological aspects of computer-mediated communication. American Psychologist 39.10: 1123–1134.

DOI: 10.1037/0003-066X.39.10.1123

The authors present potential behavioral and social effects of computer-mediated communication.

Kraut, Robert, Michael Patterson, Vickie Lundmark, Sara Kiesler, Tridas Mukopadhyay, and William Scherlis. 1998. Internet paradox: A social technology that reduces social involvement and psychological well-being? American Psychologist 53.9: 1017–1031.

DOI: 10.1037/0003-066X.53.9.1017

This study examines the positive and negative impacts of the Internet on social relationships, participation in community life, and psychological well-being. The implications for research, policy, and technology development are discussed.

Lea, Martin, and Russell Spears. 1995. Love at first byte? Building personal relationships over computer networks. In Understudied relationships: Off the beaten track. Edited by J. T. Wood and S. Duck, 197–233. Thousand Oaks, CA: SAGE.

This chapter focuses on the connection between personal relationships and computer networks. Previous studies that examine dynamics of online relationships are reviewed.

Nie, Norman H., and Lutz Erbring. 2000. Internet and society: A preliminary report. Stanford, CA: Stanford Institute for the Quantitative Study of Society.

Presents the results of an early study exploring the sociological impact of information technology and the role of the Internet in shaping interpersonal relationships and interactions.

Pardeck, John T., and Ruth S. Schulte. 1990. Computers in social intervention: Implications for professional social work practice and education. Family Therapy 17.2: 109.

The authors discuss the impact of computer technology on aspects of social work intervention including inventory testing, client history, clinical assessment, computer-assisted therapy, and computerized therapy.

Schopler, Janice H., Melissa D. Abell, and Maeda J. Galinsky. 1998. Technology-based groups: A review and conceptual framework for practice. Social Work 43.3: 254–267.

DOI: 10.1093/sw/43.3.254

The authors examine studies of social work practice using telephone and computer groups. Social work practice guidelines for technology-based groups are discussed.

Turkle, Sherry. 1984. The second self: Computers and the human spirit. New York: Simon & Schuster.

Explores the use of computers not as tools but as part of our social and psychological lives and how computers affect our awareness of ourselves, of one another, and of our relationship with the world.

Weizenbaum, Joseph. 1976. Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.

Examines the sources of the computer’s power, including popular notions of computer brilliance, and offers evaluative explorations of computer power and human reason. The book presents common theoretical issues and applications of computer power such as computer models of psychology, natural language, and artificial intelligence.


Dialogues in Clinical Neuroscience, 22(2), June 2020

Going digital: how technology use may influence human brains and behavior


Margret R. Hoehe

Author affiliations: Department of Computational Molecular Biology, Max Planck Institute for Molecular Genetics, Berlin, Germany

Florence Thibaut

University Hospital Cochin - site Tarnier; University of Paris; INSERM U1266, Institute of Psychiatry and Neuroscience, Paris, France

The digital revolution has changed, and continues to change, our world and our lives. Currently, major aspects of our lives have moved online due to the coronavirus pandemic, and social distancing has necessitated virtual togetherness. In a synopsis of 10 articles we present ample evidence that the use of digital technology may influence human brains and behavior in both negative and positive ways. For instance, brain imaging techniques show concrete morphological alterations in early childhood and during adolescence that are associated with intensive digital media use. Technology use apparently affects brain functions, for example visual perception, language, and cognition. Extensive studies could not confirm common concerns that excessive screen time is linked to mental health problems, or the deterioration of well-being. Nevertheless, it is important to use digital technology consciously, creatively, and sensibly to improve personal and professional relationships. Digital technology has great potential for mental health assessment and treatment, and the improvement of personal mental performance.



The “Digital Revolution”: remaking the world


Within a few decades, digital technology has transformed our lives. At any time, we can access almost unlimited amounts of information, just as we can produce, process, and store colossal amounts of data. We can constantly interact, and connect, with each other by use of digital devices and social media. Coping with the daily demands of life as well as pursuing pleasure in recreational activities appears inconceivable without the use of smartphones, tablets, computers, and access to Internet platforms. Presently, over 4.57 billion people, 59% of the world population, use the Internet according to recent estimates (December 31st, 2019), ranging between 39% (Africa) and 95% (North America). [1] People are spending an enormous, “insane” amount of time online, according to the latest Digital 2019 report compiled by Ofcom: [2] on average 6 hours and 42 minutes (06:42) each day (between 03:45 in Japan and 10:02 in the Philippines), half of that on mobile devices, equating on average to more than 100 days per year for every Internet user. According to a landmark report on the impact of the “decade of the smartphone,” [3] the average person in the UK spends 24 hours a week online, with 20% of all adults spending as much as 40 hours, and those aged 16 to 24 spending on average 34.3 hours a week. Britons check their smartphones on average every 12 minutes. In the US, teen screen time averages over 7 hours a day, excluding time for homework. Digital technology has become ubiquitous and entwined with our modern lives. As Richard Hodson concluded in the 2018 Nature Outlook on the “Digital Revolution,” “an explosion in information technology is remaking the world, leaving few aspects of society untouched. In the space of 50 years, the digital world has grown to become crucial to the functioning of society.” [4] This period of societal transformation has been considered “the most recent long wave of humanity’s socio-economic evolution.” As a “meta-paradigm of societal modernization based on technological change” induced by the transformation of information, it supersedes earlier periods of technological revolution based on the transformation of material and energy, respectively, which together spanned over 2 million years (Hilbert, p 189 in this issue).
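
The report’s “more than 100 days per year” figure follows directly from the quoted daily average. A minimal arithmetic sketch in Python (illustrative only; the variable names are ours, and a 365-day year is assumed):

    # Accumulate 6 h 42 min of average daily online time over one year.
    daily_minutes = 6 * 60 + 42              # 402 minutes per day
    yearly_hours = daily_minutes * 365 / 60  # about 2,445 hours per year
    yearly_days = yearly_hours / 24          # about 102 full 24-hour days
    print(f"{yearly_hours:.0f} h/year, i.e. roughly {yearly_days:.0f} full days online")

This reproduces the report’s claim: just over 100 full days of online time per Internet user per year.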


In particular, the excessive use of digital technology during adolescence has given rise to grave concerns that this technology is harmful and damages the (developing) brain or may even cause mental health problems. Public concern culminated in Jean Twenge’s 2017 article “Have Smartphones Destroyed a Generation?,” [5] which linked the rise in suicide, depression, and anxiety among teens after 2012 to the appearance of smartphones. The pictures are all too familiar: parents and children, or couples, or friends, at the table, staring at their phones, texting; colleagues staring at screens, busy with emails; individuals, heads down, hooked on their phones, blind to their surroundings, wherever they are. Individuals interacting with their devices, not with each other. This is “the flight from conversation,” which may erode (close) human relationships and with them the capacity for empathy, introspection, creativity, and productivity – ultimately, the social fabric of our communities. Sherry Turkle, who has studied the relationship of humans with technology for decades, has articulated these concerns in Alone Together and Reclaiming Conversation. [6,7] Thus, “life offline” has become a consideration, and advice to limit screen time and practice digital minimalism has become popular. [8] The concerns about screen time, and efforts to keep us from staring at our devices and to detox our digital lives, came to a sudden end with the COVID-19 coronavirus pandemic. [9] Almost overnight, nearly all of our personal, professional, educational, cultural, and political activities moved online. The dictum of social distancing necessitated virtual togetherness.


Changing human brains and behavior?


The use of digital technology has changed, and continues to change, our lives. How could this affect human brains and behavior, in both negative and positive ways? Apparently, the ability of the human brain to adapt to any changes plays a key role in generating structural and/or functional changes induced by the usage of digital devices. The most direct evidence for an effect of frequent smartphone use on the brain is provided by the demonstration of changes in cortical activity (Korte, p 101 in this issue). Touching the screen repetitively – the average American user touches it 2176 times a day [10] – induces an increase of the cortical potentials allotted to the tactile receptors on the fingertips, leading to an enlargement, ie, reorganization, of the motor and sensory cortex. It remains to be determined whether this reshaping of cortical sensory representation occurs at the expense of other motor coordination skills. Processes of neuroplasticity are particularly active in the developing brain, especially during stages of dynamic brain growth in early childhood. For instance, as demonstrated by functional magnetic resonance imaging (fMRI), extensive childhood experience with the game “Pokémon” influences the organization of the visual cortex, with distinct effects on the perception of visual objects even decades later. Furthermore, as shown by diffusion tensor MRI, early extensive screen-based media use is significantly associated with lower microstructural integrity of brain white matter tracts supporting language and literacy skills in preschoolers. [11] Adolescence, too, is a time of significant development, with the brain areas involved in emotional and social behavior undergoing marked changes. Social media use can have a profound effect here; eg, the size of an adolescent’s online social network has been closely linked to alterations in brain anatomy, as demonstrated by structural MRI. The impact of digital technology use, both negative and positive, on these and many more brain-related phenomena is elaborated in the review by Korte, who provides a comprehensive overview of the field.


The most direct approach to assessing the effect of excessive digital media use on (adolescent) brains presently appears to be the analysis of the neurobiological mechanisms underlying Internet and Gaming Disorder (IGD) (Weinstein and Lejoyeux, p 113 in this issue). The authors thoroughly survey existing brain imaging studies, summarizing the effects of IGD on the resting state, the brain’s gray matter volume and white matter density, cortical thickness, functional connectivity, and brain activations, especially in regions related to reward and decision making, and neurotransmitter systems. Taken together, individuals with IGD share many typical neurobiological alterations with other forms of addiction, but also show unique patterns of activation, specifically in brain regions associated with cognitive, motor, and sensory function. The effects of the Internet on cognition have been comprehensively elaborated by Firth et al. [12] Examining psychological, psychiatric, and neuroimaging data, they provide evidence for both acute and sustained alterations in specific areas of cognition, which may reflect structural and functional changes in the brain. These affect: (i) attentional capacities, which are divided between multiple online sources at the cost of sustained concentration on a single task; (ii) memory processes – permanently accessible online information can change the ways in which we retrieve, store, recall, and even value knowledge; and (iii) social cognition: the prospects for social interactions and the contexts within which social relationships can happen have dramatically changed. A complementary contribution rounding out these reviews is provided by Small et al (p 179 in this issue). Among the possible harmful “brain health consequences,” these investigators emphasize attention problems and their potential link to symptoms of attention deficit-hyperactivity disorder (ADHD); the (paradoxical) association of excessive social media use with the perception of social isolation, observable at any age; impaired emotional and social intelligence; poorer cognitive, language, and brain development; and disrupted sleep. A substantial part of this review is devoted to the positive effects benefiting brain health in adults and the elderly, which are referred to below. Independent of ongoing research on the negative and positive implications of digital technology use, there remains a common feeling that there is something about the whole phenomenon that is just not “natural.” “We did not evolve to be staring at a screen for most of our waking hours. We evolved to be interacting with each other face-to-face, using our senses of smell and touch and taste – not just sight and sound… it cannot be healthy to stray so far from the activities for which nature has shaped our brains and our bodies.” Giedd (p 127 in this issue) challenges this notion in his fascinating review on “The natural allure of digital media,” putting the intensive digital media use during adolescence into a grand evolutionary perspective. He argues that the “desire for digital media is in fact exquisitely aligned with the biology of the teen brain and our evolutionary heritage,” with three features of adolescence being particularly relevant to this issue: (i) hunger for human connectedness; (ii) appetite for adventure; and (iii) desire for information.


Screen time: boon or bane?


As with any major innovation that has a profound impact on our lives, finding useful information and orientation means discerning scientific evidence from media narratives. Thus, synthesizing data from recent narrative reviews and meta-analyses including more than 50 studies, Odgers and Jensen (p 143 in this issue) could not confirm a strong linkage between the quantity of adolescents’ digital technology engagement and mental health problems. “There doesn’t seem to be an evidence base that would explain the level of panic and consternation around these issues,” said Odgers in the New York Times. [13] The authors point to significant limitations and foundational flaws in the existing knowledge base related to this topic: for instance, the nearly sole reliance on screen time metrics; the disregard of individual differences; and the circumstance that almost none of the study designs allowed causal inference. On the other hand, a highly robust finding across multiple studies was that offline vulnerabilities (such as risks present in low-income families, communities, etc) tend to mirror and shape online risks. The observed social and digital divides are presently being magnified by the coronavirus crisis and are likely to widen in the future, further amplifying existing inequalities in education, mental health, and prospects for youth. The authors strongly advocate the need, and the opportunities, to leverage digital technology to support youth in an increasingly digital, unequal society in an uncertain age; see their suggestions for parents, clinicians, educators, designers, and adolescents in Box 1. Similarly, in an in-depth overview of the existing literature, Dienlin and Johannes (p 135 in this issue) could not substantiate the common concerns that digital technology use has a negative impact on young (and adult) people’s mental well-being. Their findings imply that the general effects are in the negative spectrum but very small – potentially too small to matter. Importantly, different types of use have different effects: procrastination and passive use were related to more negative effects, and social and active use to more positive effects. Thus, “screen time” has different effects for different people. Digital technology use tends to exert short-term effects on well-being rather than long-lasting effects on life satisfaction. “The dose makes the poison”: both low and excessive use are related to decreased well-being, while moderate use increases well-being. With a strong sense for clear explanation, the authors introduce the concepts, terms, and definitions underlying this complex field – a most valuable primer for the interested reader – while also addressing the methodological shortcomings that contribute to the controversial experimental evidence.
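
The “dose makes the poison” pattern described above is, in effect, an inverted-U relationship between use and well-being. The toy Python sketch below illustrates the shape of such a relationship; the coefficients and the 3-hour peak are invented for demonstration and are not estimates from any study cited here:

    import numpy as np

    # Toy inverted-U ("dose makes the poison") model: well-being is lowest
    # at both very low and very high use, and highest at moderate use.
    # All numbers are illustrative assumptions, not empirical estimates.
    hours = np.linspace(0, 10, 101)                 # hypothetical daily screen time
    well_being = -0.08 * (hours - 3.0) ** 2 + 7.0   # peaks at moderate use (3 h here)

    best = hours[np.argmax(well_being)]
    print(f"Toy model: well-being peaks at about {best:.1f} h/day")

In such a model, both low and excessive use sit on the declining flanks of the curve, which is one way to see why a single linear “screen time” measure cannot capture the effect.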


Thus, contrary to common concerns, digital technology as such does not damage mental health or erode well-being. Its use can have both negative and positive consequences. Technology simply does not “happen” to people. Individuals can shape the experiences they have with technologies and the results of those experiences. Thus, it is important to shift the focus towards an active, conscious use of this technology, with the intention to improve our lives and meaningfully connect with each other. This has become more important than ever: “There is increased urgency, due to coronavirus, to use technology in ways that strengthen our relationships. Much of the world has been working, educating, and socializing online for months, and many important activities will remain virtual for the foreseeable future. This period of physical distancing has shed light on what we need from technology and each other…” Morris (p 151 in this issue) introduces her article on enhancing relationships through technology in the most timely manner, with a preface on “Connecting during COVID-19 and beyond.” In this synopsis, she sums up five directions to “build on as we connect during and after the pandemic.” Furthermore, in her review, she examines how technology can be shaped in positive ways by parents, caregivers, romantic partners, and clinicians, and illustrates, with real-life examples, creative and sensible ways to adapt technology to personal and relational goals (see also ref 14). Highlighting the importance of context, motivation, and the nuances of use, this review encourages people to understand how technologies can be optimally used to improve personal and clinical relationships.


Digital tools in diagnosis and therapy


The use of digital tools for practical clinical applications and improvement of mental health conditions is gaining increasing acceptance, especially due to smartphone accessibility. This could fill, at least in part, the treatment gap and lack of access to specialized (psychotherapeutic) care, particularly in developing countries. Even in countries with well-developed health care systems, only a minority of patients receives treatment in line with the recommendations provided by evidence-based treatment guidelines. Thus, as elaborated in a thorough, comprehensive review by Hegerl and Oehler (p 161 in this issue), web-based interventions, especially in the case of Major Depression (MD), a highly prevalent and severe disorder, promise to be a method that provides resource-efficient and widespread access to psychotherapeutic support. The authors provide detailed information on available tools for digital intervention and their core principles; these are mostly based on principles of cognitive behavioral therapy, but also include elements of other psychotherapeutic approaches. As evident from meta-analyses summarizing studies that use face-to-face psychotherapy as a comparator, digital interventions can have equivalent antidepressant efficacy. Importantly, web-based interventions are most efficient when accompanied by adequate professional guidance and, if well designed, can be successfully integrated into routine care. The authors also carefully address the risks, limitations, and unwanted effects of available digital interventions. Another powerful digital technology, virtual reality (VR), is gaining importance as a clinical tool in mental health research and practice. According to Valmaggia and collaborators (p 169 in this issue), “At any time or place, individuals can be transported into immersive and interactive virtual worlds that are in full control of the researcher or clinician. This capability is central to recent interest in how VR might be harnessed in both treatment and assessment of mental health conditions.” To date, VR exposure treatments have proven effective across a range of disorders including schizophrenia, anxiety, and panic disorders. In their review, the authors comprehensively summarize the advantages of using VR as a clinical assessment tool, which could “radically transform the landscape of assessment in mental health.” Thus, VR may overcome many of the limitations concerning the diagnosis of psychological phenomena through its ability to generate highly controlled environments that nonetheless correspond to real-world experiences. In addition to increasing ecological validity, VR enhances personalization; that is, VR experiences can be tailored to match individual needs, abilities, or preferences. Furthermore, VR enhances an individual’s engagement with the test or assessment. Additional advantages include the capture of real-time, automated data in real-world contexts. In sum, the authors have thoroughly addressed the opportunities and challenges of VR in all relevant aspects. Finally, to complement the applications of digital technology to improve mental health, Small et al (p 179 in this issue) provide, in the second part of their review, rich information about specific programs, videogames, and other online tools, particularly for the aging brain. These may provide mental exercises that activate neural circuitry, improve cognitive functioning, reduce anxiety, increase restful sleep, and offer many other brain health benefits.


Emerging key messages


Several key messages emerge from these reviews, which cover a substantial number of studies. First of all, scientific evidence does not support the common concern that excessive use of digital technology causes mental health problems and a deterioration of well-being. There is increasing consensus that the methodological foundation of many studies is weak, which partly explains the controversial results and small effect sizes obtained to date. Above all, it appears absurd to collapse, as was common practice, the highly complex interaction between “machine and man” into a uniform quantitative screen-time measure. Research, public policies, and interventions need to focus on the user, not on the extent of technology usage: who spends time with digital devices, and in what form, is what matters. This leads us to what should be the main subject of interest but has mostly, conceptually and factually, been disregarded: the human individual, with their motivation, intentions, goals, needs, predispositions, familial, educational, and social background, and support systems, or lack thereof. Needless to say, this calls for the consideration of individual differences in all aspects of research and application. Thus, digital technology is not intrinsically good or bad: it depends on the uses to which it is put, and it can be utilized by individuals in both negative and positive ways. Now more than ever, during and after the coronavirus pandemic, it is important that technology be used to improve communication and enhance personal, professional, and societal relationships, while guaranteeing equal opportunities for access and development for all.
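To make the measurement point concrete, consider how a uniform screen-time total erases exactly the distinctions these reviews care about. The toy Python sketch below is not drawn from any of the studies discussed; the activity categories and minute counts are invented purely for illustration.

```python
# Toy illustration of the point above: two users with identical total
# "screen time" but entirely different usage profiles. All category
# names and numbers are invented for this example.
from collections import Counter

# Minutes per activity on one hypothetical day.
user_a = {"video_calls_with_family": 60, "collaborative_work": 90,
          "online_course": 30}
user_b = {"passive_scrolling": 150, "late_night_gaming": 30}

def total_screen_time(log: dict) -> int:
    """The uniform quantitative measure criticized above: one number."""
    return sum(log.values())

def usage_profile(log: dict) -> Counter:
    """A user-centered breakdown that preserves who uses what, and how."""
    return Counter(log)

print(total_screen_time(user_a), total_screen_time(user_b))  # 180 180
print(usage_profile(user_a).most_common())
print(usage_profile(user_b).most_common())
```

On the collapsed measure the two users are indistinguishable; an analysis built on the profile, by contrast, can ask the questions the reviews recommend: who is using the device, for what, and in what context.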



Collection, 13 February 2023

Rethinking human-technology relations

Human-technology relations (HTR) are of crucial interest in the humanities and social sciences. In sociology, philosophy, cultural studies, media studies, and psychology, the implications, effects, and dynamics of human-technology relations are debated and analysed, often controversially.

This Article Collection aims to address these debates and provide a systematic overview of different disciplinary approaches to the topic of human-technology relations by shifting the focus from human-machine interactions to human-machine relations. On the one hand, we ask how relations, interactions, and relationships between humans and technical systems can be distinguished at the conceptual level and how these distinctions are reflected in practice. On the other hand, different modes of human-technology relations and their preconditions are addressed and analysed: How does, e.g., anthropomorphization influence our relationship to technical systems and "intelligent" environments? How do we need to understand and analyse processes of embodiment in human-technology relations? And how does the question of control and agency change in new HTR?

These questions will be discussed from the perspectives of philosophy, sociology, cultural studies, psychology, and law, thus providing a systematic and interdisciplinary overview of the sociopolitically pressing topic of human-technology relations.

The Article Collection is based on a workshop held at the Ludwig Maximilian University of Munich from March 29 to 31, 2023, organised by the Emmy Noether Research Group “The Phenomenon of Interaction in Human-Machine Interaction (HMI)” (https://interactionphilosophy.wordpress.com/), in cooperation with the Institute of Ethics, History and Theory of Medicine (LMU Munich).


Guest Editors: Orsolya Friedrich (FernUniversität in Hagen, Germany), Sebastian Schleidgen, and Johanna Seifert.


Articles in the collection include “Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication” by Aorigele Bao.


Essay on Technology

The word "technology" and its uses have changed immensely since the 20th century and continue to evolve. We are living in a world driven by technology. The advancement of technology has played an important role in the development of human civilization, along with cultural changes. Technology provides innovative ways of doing work through various smart means.

Electronic appliances, gadgets, faster modes of communication, and transport have added to the comfort factor in our lives. It has helped in improving the productivity of individuals and different business enterprises. Technology has brought a revolution in many operational fields. It has undoubtedly made a very important contribution to the progress that mankind has made over the years.

The Advancement of Technology:

Technology has reduced effort and time and increased efficiency in production across every field. It has made our lives easy, comfortable, healthy, and enjoyable. It has brought a revolution in transport and communication. The advancement of technology, along with science, has helped us become self-reliant in all spheres of life. Once a particular technology is innovated, it becomes part of society and, over time, integral to human lives.

Technology is Part of Our Lives:

Technology has changed our day-to-day lives and has brought the world closer and better connected. The days when only the rich could afford luxuries have passed. Because of the rise of globalisation and liberalisation, such luxuries are now within the reach of the average person. Today, an average middle-class family can afford a mobile phone, a television, a washing machine, a refrigerator, a computer, the Internet, and more. At the touch of a switch, anyone can witness an event happening in a far-off place.

Benefits of Technology in All Fields: 

We cannot escape technology; it has improved the quality of life and brought about revolutions in various fields of modern-day society, be it communication, transportation, education, or healthcare. Let us learn more about some of these fields.

Technology in Communication:

With the advent of technology in communication, which includes telephones, fax machines, cellular phones, the Internet, multimedia, and email, communication has become much faster and easier. It has transformed and influenced relationships in many ways. We no longer need to rely on sending physical letters and waiting for several days for a response. Technology has made communication so simple that you can connect with anyone from anywhere by calling them via mobile phone or messaging them using different messaging apps that are easy to download.

Innovation in communication technology has had an immense influence on social life. Socialising has become easier through social networking sites, and even dating and matrimonial services are now available on mobile applications and websites.

Today, the Internet is used for shopping, paying utility bills, credit card bills, admission fees, e-commerce, and online banking. In the world of marketing, many companies are marketing and selling their products and creating brands over the internet. 

In the field of travel, cities, towns, states, and countries are using the web to post detailed tourist and event information. Travellers across the globe can easily find information on tourism, sightseeing, places to stay, weather, maps, timings for events, transportation schedules, and buy tickets to various tourist spots and destinations.

Technology in the Office or Workplace:

Technology has increased efficiency and flexibility in the workplace. It has made remote work easy, which has increased the productivity of employees. External and internal communication has become faster through emails and apps. Automation has saved time and reduced redundancy in tasks. Robots are now used to manufacture products, consistently delivering identical items without defect until the robot itself fails. Artificial Intelligence and Machine Learning are innovations being deployed across industries to reap benefits.

Technology has wiped out the manual way of storing files. Files are now stored in the cloud, which can be accessed at any time and from anywhere. With technology, companies can make quick decisions, act faster on solutions, and remain adaptable. Technology has optimised the usage of resources and connected businesses worldwide. For example, a customer based in America can have services delivered from India, and the two parties can communicate with each other in an instant. Companies use business technology such as virtual meeting tools, corporate social networks, tablets, and smart customer relationship management applications that accelerate the movement of data and information.

Technology in Education:

Technology is helping the education sector improve over time. With technology, students and parents have a variety of learning tools at their fingertips. Teachers can coordinate with classrooms across the world and share ideas and resources online. Students can get immediate access to an abundance of good information on the Internet. Teachers and students can access the plentiful resources available on the web and use them for project work, research, and more. Online learning has changed our perception of education.

The COVID-19 pandemic brought a paradigm shift: school children continued their studies from home, and schools facilitated online teaching by their teachers. Students have learned to use 21st-century skills and tools, such as virtual classrooms, AR (Augmented Reality), and robots. All of these have increased communication and collaboration significantly.

Technology in Banking:

Technology and banking are now inseparable. Technology has boosted digital transformation in how the banking industry works and has vastly improved banking services for customers across the globe.

Technology has made banking operations very sophisticated and has reduced to almost nil the errors that were once prevalent in manual processes. Banks are adopting Artificial Intelligence (AI) to increase their efficiency and profits. With the emergence of Internet banking, self-service tools have replaced the traditional methods of banking.

You can now access your money, handle transactions like paying bills, money transfers, and online purchases from merchants, and monitor your bank statements anytime and from anywhere in the world. Technology has made banking more secure and safe. You do not need to carry cash in your pocket or wallet; the payments can be made digitally using e-wallets. Mobile banking, banking apps, and cybersecurity are changing the face of the banking industry.

Manufacturing and Production Industry Automation:

At present, manufacturing industries use the latest technologies, ranging from big data analytics to artificial intelligence. Big data, AR/VR (Augmented Reality and Virtual Reality), and IoT (Internet of Things) are the biggest drivers in the manufacturing industry. Automation has increased the level of productivity in various fields: it has reduced labour costs, increased efficiency, and reduced the cost of production.

For example, 3D printing is used to design and develop prototypes in the automobile industry. Repetitive work is being done easily with the help of robots without any waste of time. This has also reduced the cost of the products. 

Technology in the Healthcare Industry:

Technological advancements in the healthcare industry have not only improved our personal quality of life and longevity; they have also improved the lives of many medical professionals and students who are training to become medical experts. They have also allowed much faster access to each patient's medical records.

The Internet has drastically transformed patients' and doctors’ relationships. Everyone can stay up to date on the latest medical discoveries, share treatment information, and offer one another support when dealing with medical issues. Modern technology has allowed us to contact doctors from the comfort of our homes. There are many sites and apps through which we can contact doctors and get medical help. 

Breakthrough innovations in surgery, artificial organs, brain implants, and networked sensors are examples of transformative developments in the healthcare industry. Hospitals use different tools and applications to perform their administrative tasks and use digital marketing to promote their services.

Technology in Agriculture:

Today, farmers work very differently than they did decades ago. Data analytics and robotics have built a more productive food system. Digital innovations are used for plant breeding and harvesting equipment. Software and mobile devices help farmers harvest better. With the data and information now available to them, farmers can make better-informed decisions, for example by tracking the amount of carbon stored in soil, which helps address climate change.

Disadvantages of Technology:

People have become dependent on various gadgets and machines, resulting in a lack of physical activity and tempting people into an increasingly sedentary lifestyle. Moreover, even though technology has increased the productivity of individuals, organisations, and nations, machines cannot plan or think beyond the instructions fed into their systems. Technology alone is not enough for progress and prosperity; management is required, and management is a human act. Technology remains largely dependent on human intervention.

Computers and smartphones have led to an increase in social isolation. Young children spend more time surfing the internet and playing games, ignoring their real lives. Usage of technology also results in job losses and distracts students from learning. Technology has also enabled the production of weapons of destruction.

Dependency on technology likewise raises privacy concerns and enables cybercrime, giving way to hackers.


FAQs on Technology Essay

1. What is technology?

Technology refers to innovative ways of doing work through various smart means. The advancement of technology has played an important role in the development of human civilization. It has helped in improving the productivity of individuals and businesses.

2. How has technology changed the face of banking?

Technology has made banking operations very sophisticated. With the emergence of Internet banking, self-service tools have replaced the traditional methods of banking. You can now access your money, handle transactions, and monitor your bank statements anytime and from anywhere in the world. Technology has made banking more secure and safe.

3. How has technology brought a revolution in the medical field?

Patients and doctors keep each other up to date on the most recent medical discoveries, share treatment information, and offer each other support when dealing with medical issues. It has allowed much faster access to the medical records of each patient. Modern technology has allowed us to contact doctors from the comfort of our homes. There are many websites and mobile apps through which we can contact doctors and get medical help.

4. Are we dependent on technology?

Yes, today we are becoming increasingly dependent on technology. Computers, smartphones, and modern technology have helped humanity achieve success and progress. However, people need to consciously maintain a healthy lifestyle and address the personal problems that arise from technological advancement in different aspects of human life.

Essay on Technology – A Boon or Bane for Students

500+ Words Essay on Technology for Students

In this essay on technology, we are going to discuss what technology is, what its uses are, and what technology can do. First of all, technology refers to the use of technical and scientific knowledge to create, monitor, and design machinery. Technology also helps in making other goods that aid mankind.

Essay on Technology – A Boon or Bane?

Experts have debated this topic for years. Technology has come a long way in making human life easier, but its negative aspects cannot be ignored. Over the years, technological advancement has caused a severe rise in pollution, and pollution has become a major cause of many health issues. Besides, technology has cut people off from society rather than connecting them. Above all, it has taken away many jobs from the working class.


Relationship Between Technology and Science

Although they are completely different fields, they are interdependent. It is due to the contributions of science that we can create new innovations and build new technological tools. Apart from that, research conducted in laboratories contributes a lot to the development of technologies. On the other hand, technology extends the agenda of science.

A Vital Part of Our Lives

Regularly evolving technology has become an important part of our lives. Newer technologies take the market by storm, and people get used to them in no time. Above all, technological advancement has led to the growth and development of nations.

Negative Aspects of Technology

Although technology is a good thing, everything has two sides. Technology, too, has two sides: one good and the other bad. Here are some of the negative aspects of technology.


Pollution

With new technology, industrialisation increases, giving birth to many kinds of pollution: air, water, soil, and noise. These pollutions cause many health-related issues in animals, birds, and human beings.

Exhaustion of Natural Resources

New technology demands new resources, and meeting that demand eventually leads to the over-exploitation of natural resources, which ultimately disturbs the balance of nature.

Unemployment

A single machine can replace many workers, and machines can work at a constant pace for hours or days without stopping. Due to this, many workers lose their jobs, which ultimately increases unemployment.

Types of Technology

Generally, we judge technology on a single scale, but in reality, technology is divided into various types. These include information technology, industrial technology, architectural technology, creative technology, and many more. Let's discuss these technologies in brief.

Industrial Technology

This technology combines engineering and manufacturing technology for the production of machines. It makes the production process easier and more convenient.

Creative Technology

This field includes art, advertising, and product design made with the help of software. It comprises 3D printers, virtual reality, computer graphics, and other wearable technologies.

Information Technology

This technology involves the use of telecommunications and computers to send, receive, and store information. The Internet is the best example of information technology.


FAQs on Essay on Technology

Q.1 What is Information Technology?

A – It is a form of technology that uses telecommunication and computer systems to send, retrieve, and store data.

Q.2 Is technology harmful to humans?

A – No, technology is not harmful to human beings as long as it is used properly. But misuse of technology can be harmful and even deadly.


Philosophy of Technology

If philosophy is the attempt “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”, as Sellars (1962) put it, philosophy should not ignore technology. It is largely by technology that contemporary society hangs together. It is hugely important not only as an economic force but also as a cultural force. Indeed during the last two centuries, when it gradually emerged as a discipline, philosophy of technology has mostly been concerned with the meaning of technology for, and its impact on, society and culture, rather than with technology itself. Mitcham (1994) calls this type of philosophy of technology “humanities philosophy of technology” because it accepts “the primacy of the humanities over technologies” and is continuous with the overall perspective of the humanities (and some of the social sciences). Only recently a branch of the philosophy of technology has developed that is concerned with technology itself and that aims to understand both the practice of designing and creating artifacts (in a wide sense, including artificial processes and systems) and the nature of the things so created. This latter branch of the philosophy of technology seeks continuity with the philosophy of science and with several other fields in the analytic tradition in modern philosophy, such as the philosophy of action and decision-making, rather than with the humanities and social science.

The entry starts with a brief historical overview, then continues with a presentation of the themes on which modern analytic philosophy of technology focuses. This is followed by a discussion of the societal and ethical aspects of technology, in which some of the concerns of humanities philosophy of technology are addressed. This twofold presentation takes into consideration the development of technology as the outcome of a process originating within and guided by the practice of engineering, by standards on which only limited societal control is exercised, as well as the consequences for society of the implementation of the technology so created, which result from processes upon which only limited control can be exercised.

1.1 The Greeks
1.2 Later Developments; Humanities Philosophy of Technology
1.3 A Basic Ambiguity in the Meaning of Technology
2.1 Introduction: Science and Technology’s Different Relations to Philosophy
2.2 The Relationship Between Technology and Science
2.3 The Centrality of Design to Technology
2.4 Methodological Issues: Design as Decision Making
2.5 Metaphysical Issues: The Status and Characteristics of Artifacts
2.6 Other Topics
3.1 The Development of the Ethics of Technology
3.2.1 Cultural and Political Approaches
3.2.2 Engineering Ethics
3.2.3 Ethics of Specific Technologies
3.3.1 Neutrality versus Moral Agency
3.3.2 Responsibility
3.3.3 Design
3.3.4 Technological Risks

1. Historical Developments

Philosophical reflection on technology is about as old as philosophy itself. Our oldest testimony is from ancient Greece. There are four prominent themes. One early theme is the thesis that technology learns from or imitates nature (Plato, Laws X 899a ff.). According to Democritus, for example, house-building and weaving were first invented by imitating swallows and spiders building their nests and nets, respectively (Diels 1903 and Freeman 1948: 154). Perhaps the oldest extant source for the exemplary role of nature is Heraclitus (Diels 1903 and Freeman 1948: 112). Aristotle referred to this tradition by repeating Democritus’ examples, but he did not maintain that technology can only imitate nature: “generally technè in some cases completes what nature cannot bring to a finish, and in others imitates nature” ( Physics II.8, 199a15; see also Physics II.2, and see Schummer 2001 and this encyclopedia’s entry on episteme and techne for discussion).

A second theme is the thesis that there is a fundamental ontological distinction between natural things and artifacts. According to Aristotle ( Physics II.1), the former have their principles of generation and motion inside, whereas the latter, insofar as they are artifacts, are generated only by outward causes, namely human aims and forms in the human soul. Natural products (animals and their parts, plants, and the four elements) move, grow, change, and reproduce themselves by inner final causes; they are driven by purposes of nature. Artifacts, on the other hand, cannot reproduce themselves. Without human care and intervention, they vanish after some time by losing their artificial forms and decomposing into (natural) materials. For instance, if a wooden bed is buried, it decomposes to earth or changes back into its botanical nature by putting forth a shoot.

The thesis that there is a fundamental difference between man-made products and natural substances has had a long-lasting influence. In the Middle Ages, Avicenna criticized alchemy on the ground that it can never produce ‘genuine’ substances (Briffault 1930: 147). Even today, some still maintain that there is a difference between, for example, natural and synthetic vitamin C. The modern discussion of this theme is taken up in Section 2.5 .

Aristotle’s doctrine of the four causes—material, formal, efficient and final—can be regarded as a third early contribution to the philosophy of technology. Aristotle explained this doctrine by referring to technical artifacts such as houses and statues ( Physics II.3). The four causes are still very much present in modern discussions related to the metaphysics of artifacts. Discussions of the notion of function, for example, focus on its inherent teleological or ‘final’ character and the difficulties this presents to its use in biology. And the notorious case of the ship of Theseus—see this encyclopedia’s entries on material constitution , identity over time , relative identity , and sortals —was introduced in modern philosophy by Hobbes as showing a conflict between unity of matter and unity of form as principles of individuation. This conflict is seen by many as characteristic of artifacts. David Wiggins (1980: 89) takes it even to be the defining characteristic of artifacts.

A fourth point that deserves mentioning is the extensive employment of technological images by Plato and Aristotle. In his Timaeus , Plato described the world as the work of an Artisan, the Demiurge. His account of the details of creation is full of images drawn from carpentry, weaving, ceramics, metallurgy, and agricultural technology. Aristotle used comparisons drawn from the arts and crafts to illustrate how final causes are at work in natural processes. Despite their negative appreciation of the life led by artisans, who they considered too much occupied by the concerns of their profession and the need to earn a living to qualify as free individuals, both Plato and Aristotle found technological imagery indispensable for expressing their belief in the rational design of the universe (Lloyd 1973: 61).

Although there was much technological progress in the Roman empire and during the Middle Ages, philosophical reflection on technology did not grow at a corresponding rate. Comprehensive works such as Vitruvius’ De architectura (first century BC) and Agricola’s De re metallica (1556) paid much attention to practical aspects of technology but little to philosophy.

In the realm of scholastic philosophy, there was an emergent appreciation for the mechanical arts. They were generally considered to be born of—and limited to—the mimicry of nature. This view was challenged when alchemy was introduced in the Latin West around the mid-twelfth century. Some alchemical writers such as Roger Bacon were willing to argue that human art, even if learned by imitating natural processes, could successfully reproduce natural products or even surpass them (Newman 2004). The result was a philosophy of technology in which human art was raised to a level of appreciation not found in other writings until the Renaissance. However, the last three decades of the thirteenth century witnessed an increasingly hostile attitude by religious authorities toward alchemy that culminated eventually in the denunciation Contra alchymistas , written by the inquisitor Nicholas Eymeric in 1396 (Newman 2004).

The Renaissance led to a greater appreciation of human beings and their creative efforts, including technology. As a result, philosophical reflection on technology and its impact on society increased. Francis Bacon is generally regarded as the first modern author to put forward such reflection. His view, expressed in his fantasy New Atlantis (1627), was overwhelmingly positive. This positive attitude lasted well into the nineteenth century, incorporating the first half-century of the industrial revolution. Karl Marx, for example, did not condemn the steam engine or the spinning mill for the vices of the bourgeois mode of production; he believed that ongoing technological innovation allowed for the necessary steps toward the more blissful stages of socialism and communism of the future. A discussion of different views on the role of technology in Marx’s theory of historical development can be found in Bimber 1990. See Van der Pot 1985 [1994/2004] for an extensive historical overview of appreciations of the development of technology generally.

A turning point in the appreciation of technology as a socio-cultural phenomenon is marked by Samuel Butler’s Erewhon (1872), written under the influence of the Industrial Revolution, and Darwin’s On the Origin of Species (1859). Butler’s book gave an account of a fictional country where all machines are banned and the possession of a machine or the attempt to build one is a capital crime. The people of this country had become convinced by an argument that ongoing technical improvements are likely to lead to a ‘race’ of machines that will replace mankind as the dominant species on earth. This introduced a theme that has remained influential in the perception of technology ever since.

During the last quarter of the nineteenth century and most of the twentieth century a critical attitude predominated in philosophical reflection on technology. The representatives of this attitude were, overwhelmingly, schooled in the humanities or the social sciences and had virtually no first-hand knowledge of engineering practice. Whereas Bacon wrote extensively on the method of science and conducted physical experiments himself, Butler, being a clergyman, lacked such first-hand knowledge. Ernst Kapp, who was the first to use the term ‘philosophy of technology’ in his book Eine Philosophie der Technik (1877 [2018]), was a philologist and historian. Most of the authors who wrote critically about technology and its socio-cultural role during the twentieth century were philosophers of a general outlook, such as Martin Heidegger (1954 [1977]), Hans Jonas (1979 [1984]), Arnold Gehlen (1957 [1980]), Günther Anders (1956), and Andrew Feenberg (1999). Others had a background in one of the other humanities or in social science, such as literary criticism and social research in the case of Lewis Mumford (1934), law in the case of Jacques Ellul (1954 [1964]), political science in the case of Langdon Winner (1977, 1980, 1983) and literary studies in the case of Albert Borgmann (1984). The form of philosophy of technology constituted by the writings of these and others has been called by Carl Mitcham (1994) “humanities philosophy of technology”, because it takes its point of departure from the humanities and the social sciences rather than from the practices of science and engineering, and it approaches technology accepting “the primacy of the humanities over technologies” (1994: 39), since technology originates from the goals and values of humans.

Humanities philosophers of technology tend to take the phenomenon of technology itself largely for granted; they treat it as a ‘black box’, a given, a unitary, monolithic, inescapable phenomenon. Their interest is not so much to analyze and understand this phenomenon itself but to grasp its relations to morality (Jonas, Gehlen), politics (Winner), the structure of society (Mumford), human culture (Ellul), the human condition (Hannah Arendt), or metaphysics (Heidegger). In this, these philosophers are almost all openly critical of technology: all things considered, they tend to have a negative judgment of the way technology has affected human society and culture, or at least they single out for consideration the negative effects of technology on human society and culture. This does not necessarily mean that technology itself is pointed out as the principal cause of these negative developments. In the case of Heidegger, in particular, the paramount position of technology in modern society is rather a symptom of something more fundamental, namely a wrongheaded attitude towards Being which has been on the rise for almost 25 centuries. It is therefore questionable whether Heidegger should be considered as a philosopher of technology, although within the humanities view he is considered to be among the most important ones. Much the same could be said about Arendt, in particular her discussion of technology in The Human Condition (1958), although her position in the canon of humanities philosophy of technology is not as prominent as is Heidegger’s.

To be sure, the work of these founding figures of humanities philosophy of technology has been taken further by a second and third generation of scholars, for whom the work of Heidegger in particular remains an important source of inspiration, but who have adopted a more neutral, rather than overall negative, view of technology and its meaning for human life and culture. Notable examples are Ihde (1979, 1993) and Verbeek (2000 [2005]).

In its development, humanities philosophy of technology continues to be influenced not so much by developments in philosophy (e.g., philosophy of science, philosophy of action, philosophy of mind) as by developments in the social sciences and humanities. Although, for example, Ihde and those who take their point of departure with him position their work as phenomenological or postphenomenological, there does not seem to be much interest in either the past or the present of this diffuse notion in philosophy, and in particular not much interest in the far-from-easy question of the extent to which Heidegger can be considered a phenomenologist. Of particular significance has been the emergence of ‘Science and Technology Studies’ (STS) in the 1980s, which studies from a broad social-scientific perspective how social, political, and cultural values affect scientific research and technological innovation, and how these in turn affect society, politics, and culture. We discuss authors from humanities philosophy of technology in Section 3 on ‘Ethical and Social Aspects of Technology’, but do not present separately and in detail the wide variety of views existing in this field. For a detailed treatment, Mitcham’s 1994 book still provides an excellent overview. A recent coverage of humanities philosophy of technology is available in Coeckelbergh’s (2020a) textbook. Olsen, Selinger and Riis (2008) and Vallor (2022) offer wide-ranging collections of contributions; Scharff and Dusek (2003 [2014]) and Kaplan (2004 [2009]) present comprehensive anthologies of texts from this tradition.

Mitcham contrasts ‘humanities philosophy of technology’ to ‘engineering philosophy of technology’, where the latter refers to philosophical views developed by engineers or technologists as “attempts … to elaborate a technological philosophy” (1994: 17). Mitcham discusses only a handful of people as engineering philosophers of technology: Ernst Kapp, Peter Engelmeier, Friedrich Dessauer, and much more briefly Jacques Lafitte, Gilbert Simondon, Hendrik van Riessen, Juan David García Bacca, R. Buckminster Fuller and Mario Bunge. The label ‘engineering philosophy of technology’ raises serious questions: many of the persons discussed hardly classify as engineers or technologists. It is also not very clear how the notion of ‘a technological philosophy’ should be understood. As philosophers, these authors seem all to be rather isolated figures, whose work shows little overlap and who seem to be sharing mainly the absence of a ‘working relation’ with established philosophical disciplines. It is not so clear what sorts of questions and concerns underlie the notion of ‘engineering philosophy of technology’. A larger role for systematic philosophy could bring it quite close to some examples of humanities philosophy of technology, for instance the work of Jacques Ellul, where the analyses would be rather similar and the remaining differences would be ones of attitude or appreciation.

In the next section we discuss in more detail a form of philosophy of technology that we consider to occupy, currently, the position of alternative to the humanities philosophy of technology. It emerged in the 1960s and gained momentum in the past twenty to twenty-five years. This form of the philosophy of technology, which may be called ‘analytic’, is not primarily concerned with the relations between technology and society but with technology itself. It expressly does not look upon technology as a ‘black box’ but as a phenomenon that should be studied in detail. It does not regard technology as such as a practice but as something grounded in a practice, basically the practice of engineering. It analyses this practice, its goals, its concepts and its methods, and it relates its findings to various themes from philosophy.

In seeing technology as grounded in a practice sustained by engineers, similar to the way philosophy of science focuses on the practice of science as sustained by scientists, analytic philosophy of technology could be thought to amount to the philosophy of engineering. Indeed many of the issues related to design, discussed below in Sections 2.3 and 2.4 , could be singled out as forming the subject matter of a philosophy of engineering. The metaphysical issues discussed in Section 2.5 could not, however, and analytic philosophy of technology is therefore significantly broader than philosophy of engineering. The very title of Philosophy of Technology and Engineering Sciences (Meijers 2009), an extensive up-to-date overview, which contains contributions to all of the topics treated in the next section, suggests that technology and engineering do not coincide, but the book does not specifically address what distinguishes technology from engineering and how they are related. In fact, the existence of humanities philosophy of technology and analytic philosophy of technology next to one another reflects a basic ambiguity in the notion of technology that the philosophical work that has been going on has hardly succeeded in clarifying.

Technology can be said to have two aspects or dimensions, which can be referred to as instrumentality and productivity . Instrumentality covers the totality of human endeavours to control their lives and their environments by interfering with the world in an instrumental way, by using things in a purposeful and clever way. Productivity covers the totality of human endeavours to bring into existence new things through which certain things can be realized in a controlled and clever way. For the study of the dimension of instrumentality, it is in principle irrelevant whether the things that are made use of in controlling our lives and environments have been produced by us first; if we somehow could rely on natural objects to always be available to serve our purposes, the analysis of instrumentality and its consequences for how we live our lives would not necessarily be affected. Likewise, for the analysis of what is involved in the making of artifacts, and how the notion of artifact and of something new being brought into existence are to be understood, it is to a large extent irrelevant how human life, culture and society are changed as a result of the artifacts that are in fact produced. Notwithstanding its fundamental character, the ambiguity noted here seems hardly to be confronted directly in the literature. It is addressed by Lawson (2008, 2017) and by Franssen and Koller (2016).

Humanities philosophy of technology has been interested predominantly in the instrumentality dimension, whereas analytic philosophy of technology has focused on the productivity dimension. But technology, as one of the basic phenomena of modern society, if not the most basic one, is clearly constituted by processes centering on and involving both dimensions. It has proved difficult, however, to arrive at an overarching approach in which the interaction between these two dimensions of technology is adequately dealt with, no doubt partly due to the great differences in philosophical orientation and methodology associated with the two traditions and their separate foci. To improve this situation is arguably the most urgent challenge that the field of philosophy of technology as a whole is facing, since the continuation of the two orientations leading their separate lives threatens its unity and coherence as a discipline in the first place. Indeed, during the past ten to fifteen years the philosophy of engineering has established itself as a subdiscipline within the philosophy of technology, for which a comprehensive handbook was edited recently by Michelfelder and Doorn (2021).

After presenting the major issues of philosophical relevance in technology and engineering that are studied by analytic philosophers of technology in the next section, we discuss the problems and challenges that technology poses for the society in which it is practiced in the third and final section.

2. Analytic Philosophy of Technology

It may come as a surprise to those new to the topic that the fields of philosophy of science and philosophy of technology show such great differences, given that few practices in our society are as closely related as science and engineering. Experimental science is nowadays crucially dependent on technology for the realization of its research set-ups and for gathering and analyzing data. The phenomena that modern science seeks to study could never be discovered without producing them through technology.

Theoretical research within technology has come to be often indistinguishable from theoretical research in science, making engineering science largely continuous with ‘ordinary’ or ‘pure’ science. This is a relatively recent development, which started around the middle of the nineteenth century, and is responsible for great differences between modern technology and traditional, craft-like techniques. The educational training that aspiring scientists and engineers receive starts off being largely identical and only gradually diverges into a science or an engineering curriculum. Ever since the scientific revolution of the seventeenth century, characterized by its two major innovations, the experimental method and the mathematical articulation of scientific theories, philosophical reflection on science has focused on the method by which scientific knowledge is generated, on the reasons for thinking scientific theories to be true, or approximately true, and on the nature of evidence and the reasons for accepting one theory and rejecting another. Hardly ever have philosophers of science posed questions that did not have the community of scientists, their concerns, their aims, their intuitions, their arguments and choices, as a major target. In contrast it is only recently that the philosophy of technology has discovered the community of engineers.

It might be claimed that it is up to the philosophy of technology, and not the philosophy of science, to target first of all the impact of technology—and with it science—on society and culture, because science affects society only through being applied as technology. This, however, will not do. Right from the start of the scientific revolution, science affected human culture and thought fundamentally and directly, not with a detour through technology, and the same is true for later developments such as relativity, atomic physics and quantum mechanics, the theory of evolution, genetics, biochemistry, and the increasingly dominating scientific world view overall. All the same philosophers of science for a long time gave the impression that they left questions addressing the normative, social and cultural aspects of science gladly to other philosophical disciplines, or to historical studies. This has changed only during the past few decades, by scholars either focusing on these issues from the start (e.g. Longino 1990, 2002) or shifting their focus toward them (e.g. Kitcher 2001, 2011).

There is a major difference between the historical development of modern technology as compared to modern science which may at least partly explain this situation, which is that science emerged in the seventeenth century from philosophy itself. The answers that Galileo, Huygens, Newton, and others gave, by which they initiated the alliance of empiricism and mathematical description that is so characteristic of modern science, were answers to questions that had belonged to the core business of philosophy since antiquity. Science, therefore, kept the attention of philosophers. Philosophy of science can be seen as a transformation of epistemology in the light of the emergence of science. The foundational issues—the reality of atoms, the status of causality and probability, questions of space and time, the nature of the quantum world—that were so lively discussed during the end of the nineteenth and the beginning of the twentieth century are an illustration of this close relationship between scientists and philosophers. No such intimacy has ever existed between philosophers and engineers or technologists. Their worlds still barely touch. To be sure, a case can be made that, compared to the continuity existing between natural philosophy and science, a similar continuity exists between central questions in philosophy having to do with human action and practical rationality and the way technology approaches and systematizes the solution of practical problems. To investigate this connection may indeed be considered a major theme for philosophy of technology, and more is said on it in Sections 2.3 and 2.4 . This continuity appears only by hindsight, however, and dimly, as the historical development is at most a slow convening of various strands of philosophical thinking on action and rationality, not a development into variety from a single origin. Significantly it is only the academic outsider Ellul who has, in his idiosyncratic way, recognized in technology the emergent single dominant way of answering all questions concerning human action, comparable to science as the single dominant way of answering all questions concerning human knowledge (Ellul 1954 [1964]). But Ellul was not so much interested in investigating this relationship as in emphasizing and denouncing the social and cultural consequences as he saw them. It is all the more important to point out that humanities philosophy of technology cannot be differentiated from analytic philosophy of technology by claiming that only the former is interested in the social context of technology. There are studies which are rooted in analytic philosophy of science but address in particular the relation of technology to society and culture, and equally the relevance of social relations to technological practices, without taking an evaluative stand with respect to technology; an example is Preston 2012.

The close relationship between the practices of engineering and science may easily keep the important differences between technology and science from view. The predominant position of science in the philosophical field of vision made it difficult for philosophers to recognize that technology merits special attention for involving issues that do not emerge in science. The view resulting from this lack of recognition is often presented, perhaps somewhat dramatically, as coming down to the claim that technology is ‘merely’ applied science.

A questioning of the relation between science and technology was the central issue in one of the earliest discussions among analytic philosophers of technology. In 1966, in a special issue of the journal Technology and Culture , Henryk Skolimowski argued that technology is something quite different from science (Skolimowski 1966). As he phrased it, science concerns itself with what is, whereas technology concerns itself with what is to be. A few years later, in his well-known book The Sciences of the Artificial (1969), Herbert Simon emphasized this important distinction in almost the same words, stating that the scientist is concerned with how things are but the engineer with how things ought to be. Although it is difficult to imagine that earlier philosophers were blind to this difference in orientation, their inclination, in particular in the tradition of logical empiricism, to view knowledge as a system of statements may have led to a conviction that in technology no knowledge claims play a role that cannot also be found in science. The study of technology, therefore, was not expected to pose new challenges nor hold surprises regarding the interests of analytic philosophy.

In contrast, Mario Bunge (1966) defended the view that technology is applied science, but in a subtle way that does justice to the differences between science and technology. Bunge acknowledges that technology is about action, but an action heavily underpinned by theory—that is what distinguishes technology from the arts and crafts and puts it on a par with science. According to Bunge, theories in technology come in two types: substantive theories, which provide knowledge about the object of action, and operative theories, which are concerned with action itself. The substantive theories of technology are indeed largely applications of scientific theories. The operative theories, in contrast, are not preceded by scientific theories but are born in applied research itself. Still, as Bunge claims, operative theories show a dependence on science in that in such theories the method of science is employed. This includes such features as modeling and idealization, the use of theoretical concepts and abstractions, and the modification of theories by the absorption of empirical data through prediction and retrodiction.

In response to this discussion, Ian Jarvie (1966) proposed as important questions for a philosophy of technology what the epistemological status of technological statements is and how technological statements are to be demarcated from scientific statements. This suggests a thorough investigation of the various forms of knowledge occurring in either practice, in particular, since scientific knowledge has already been so extensively studied, of the forms of knowledge that are characteristic of technology and are lacking, or of much less prominence, in science. A distinction between ‘knowing that’—traditional propositional knowledge—and ‘knowing how’—non-articulated and even impossible-to-articulate knowledge—had been introduced by Gilbert Ryle (1949) in a different context. The notion of ‘knowing how’ was taken up by Michael Polanyi under the name of tacit knowledge and made a central characteristic of technology (Polanyi 1958); the current state of the philosophical discussion is presented in this encyclopedia’s entry on knowledge how . However, emphasizing too much the role of unarticulated knowledge, of ‘rules of thumb’ as they are often called, easily underplays the importance of rational methods in technology. An emphasis on tacit knowledge may also be ill-fit for distinguishing the practices of science and engineering because the role of tacit knowledge in science may well be more important than current philosophy of science acknowledges, for example in concluding causal relationships on the basis of empirical evidence. This was also an important theme in the writings of Thomas Kuhn on theory change in science (Kuhn 1962).

To claim, with Skolimowski and Simon, that technology is about what is to be or what ought to be rather than what is may serve to distinguish it from science but will hardly make it understandable why so much philosophical reflection on technology has taken the form of socio-cultural critique. Technology is an ongoing attempt to bring the world closer to the way one wishes it to be. Whereas science aims to understand the world as it is, technology aims to change the world. These are abstractions, of course. For one, whose wishes concerning what the world should be like are realized in technology? Unlike scientists, who are often personally motivated in their attempts at describing and understanding the world, engineers are seen, not least by engineers themselves, as undertaking their attempts to change the world as a service to the public. The ideas on what is to be or what ought to be are seen as originating outside of technology itself; engineers then take it upon themselves to realize these ideas. This view is a major source of the widespread picture of technology as being instrumental, as delivering instruments ordered from ‘elsewhere’, as means to ends specified outside of engineering, a picture that has served further to support the claim that technology is neutral with respect to values, discussed in Section 3.3.1. This view involves a considerable distortion of reality, however. Many engineers are intrinsically motivated to change the world, in particular the world as shaped by past technologies. As a result, much technological development is ‘technology-driven’.

To understand where technology ‘comes from’, what drives the innovation process, is of importance not only to those who are curious to understand the phenomenon of technology itself but also to those who are concerned about its role in society. Technology or engineering as a practice is concerned with the creation of artifacts and, of increasing importance, artifact-based services. The design process, the structured process leading toward that goal, forms the core of the practice of engineering.

In the engineering literature, the design process is commonly represented as consisting of a series of translational steps; see for this, e.g., Suh 2001. At the start are the customer’s needs or wishes. In the first step these are translated into a list of functional requirements, which then define the design task an engineer, or a team of engineers, has to accomplish. The functional requirements specify as precisely as possible what the device to be designed must be able to do. This step is required because customers usually focus on just one or two features and are unable to articulate the requirements that are necessary to support the functionality they desire. In the second step, the functional requirements are translated into design specifications, which set out the exact physical parameters of crucial components by which the functional requirements are going to be met. The design parameters chosen to satisfy these requirements are combined and made more precise so that a blueprint of the device results. The blueprint contains all the details that must be known so that the final step, the actual manufacturing of the device, can take place. It is tempting to consider the blueprint as the end result of a design process, instead of a finished copy being this result. However, actual copies of a device are crucial for the purposes of prototyping and testing. Prototyping and testing imply that the sequence of steps making up the design process can, and often will, contain iterations, leading to revisions of the design parameters and/or the functional requirements.

Even though, certainly for mass-produced items, the manufacture of a product for delivery to its customers or to the market comes after the closure of the design phase, the manufacturing process is often reflected in the functional requirements of a device, for example in putting restrictions on the number of different components of which the device consists. The complexity of a device will affect how difficult it will be to maintain or repair it, and ease of maintenance or low repair costs are often functional requirements. An important modern development is that the complete life cycle of an artifact is now considered to be the designing engineer’s concern, up to the final stages of the recycling and disposal of its components and materials, and the functional requirements of any device should reflect this. From this point of view, neither a blueprint nor a prototype can be considered the end product of engineering design.
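The translational scheme just described (needs, then functional requirements, then design specifications, then a tested prototype, with iteration back to earlier steps) can be made concrete in a short sketch. The following Python fragment is purely illustrative: every class and function name in it is invented for this example, and the toy 'bracket' requirement stands in for whatever real requirements a design task would involve.

```python
# A minimal, illustrative model of the translational design process
# described above. All class and function names are invented for this
# sketch; none of them come from Suh (2001) or any other cited source.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FunctionalRequirement:
    """What the device must be able to do, stated so that it can be tested."""
    description: str
    is_met: Callable[[Dict[str, float]], bool]  # test against measured parameters

@dataclass
class DesignSpec:
    """Physical parameters chosen to meet the functional requirements."""
    parameters: Dict[str, float]

def build_prototype(spec: DesignSpec) -> Dict[str, float]:
    """Stand-in for manufacturing and measuring an actual copy of the device."""
    return dict(spec.parameters)

def design_loop(requirements: List[FunctionalRequirement],
                spec: DesignSpec,
                revise: Callable[[DesignSpec, List[FunctionalRequirement]], DesignSpec],
                max_iterations: int = 10) -> DesignSpec:
    """Iterate: prototype, test against the requirements, revise the specs."""
    for _ in range(max_iterations):
        measured = build_prototype(spec)
        failed = [r for r in requirements if not r.is_met(measured)]
        if not failed:
            return spec  # a spec meeting all requirements: the 'blueprint'
        spec = revise(spec, failed)
    raise RuntimeError("no satisfying design found; requirements may need revision")

# Toy example: a bracket that must bear at least 50 kg, where load-bearing
# capacity is (fictitiously) 12 kg per millimetre of thickness.
requirements = [FunctionalRequirement(
    "bears at least 50 kg",
    lambda p: p["thickness_mm"] * 12 >= 50)]

initial = DesignSpec({"thickness_mm": 2.0})

def thicken(spec: DesignSpec, _failed) -> DesignSpec:
    """A crude revision rule: add a millimetre of thickness and try again."""
    return DesignSpec({**spec.parameters,
                       "thickness_mm": spec.parameters["thickness_mm"] + 1.0})

final = design_loop(requirements, initial, thicken)
print(final.parameters)  # {'thickness_mm': 5.0}
```

The point of the loop is the one made in the text: the blueprint is not reached in a single pass but by iterating between specifications and tested copies, and a failure to converge signals that the requirements themselves may need revision.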

The biggest idealization that this scheme of the design process contains is arguably located at the start. Only in a minority of cases does a design task originate in a customer need or wish for a particular artifact. First of all, as already suggested, many design tasks are defined by engineers themselves, for instance, by noticing something to be improved in existing products. Nevertheless, design often starts with a problem pointed out by some societal agent, which engineers are then invited to solve. Many such problems, however, are ill-defined or wicked problems, meaning that it is not at all clear what the problem is exactly and what a solution to the problem would consist in. The ‘problem’ is a situation that people—not necessarily the people ‘in’ the situation—find unsatisfactory, but typically without being able to specify a situation that they find more satisfactory in other terms than as one in which the problem has been solved. In particular it is not obvious that a solution to the problem would consist in some artifact, or some artifactual system or process, being made available or installed. Engineering departments all over the world advertise that engineering is problem solving, and engineers easily seem confident that they are best qualified to solve a problem when they are asked to, whatever the nature of the problem. This has led to the phenomenon of a technological fix, the attempt to solve a problem by technical means, that is, by the delivery of an artifact or artifactual process, where it is questionable, to say the least, whether this solves the problem or whether it was the best way of handling the problem.

A candidate example of a technological fix for the problem of global warming would be the currently much debated option of injecting sulfate aerosols into the stratosphere to offset the warming effect of greenhouse gases such as carbon dioxide and methane. Such schemes of geoengineering would allow us to avoid facing the—in all likelihood painful—choices that will lead to a reduction of the emission of greenhouse gases into the atmosphere, but would at the same time allow the depletion of the Earth’s reservoir of fossil fuels to continue. See for a discussion of technological fixing, e.g., Volti 2009: 26–32. Given this situation, and its hazards, the notion of a problem and a taxonomy of problems deserve more philosophical attention than they have hitherto received.

These wicked problems are often broadly social problems, which would best be met by some form of ‘social action’, which would result in people changing their behavior or acting differently in such a way that the problem would be mitigated or even disappear completely. In defense of the engineering view, it could perhaps be said that the repertoire of ‘proven’ forms of social action is meager. The temptation of technical fixes could be overcome—at least that is how an engineer might see it—by the inclusion of the social sciences in the systematic development and application of knowledge to the solution of human problems. This, however, is a controversial view. Social engineering is to many a specter to be kept at as large a distance as possible, instead of an ideal to be pursued. Karl Popper referred to acceptable forms of implementing social change as ‘piecemeal social engineering’ and contrasted it to the revolutionary but completely unfounded schemes advocated by, e.g., Marxism. In the entry on Karl Popper, however, his choice of words is called ‘rather unfortunate’. The notion of social engineering, and its cogency, deserves more attention than it is currently receiving.

An important input for the design process is scientific knowledge: knowledge about the behavior of components and the materials they are composed of in specific circumstances. This is the point where science is applied. However, much of this knowledge is not directly available from the sciences, since it often concerns extremely detailed behavior in very specific circumstances. This scientific knowledge is therefore often generated within technology, by the engineering sciences. But apart from this very specific scientific knowledge, engineering design involves various other sorts of knowledge. In his book What Engineers Know and How They Know It (Vincenti 1990), the aeronautical engineer Walter Vincenti gave a six-fold categorization of engineering design knowledge (leaving aside production and operation as the other two basic constituents of engineering practice). Vincenti distinguishes

  • Fundamental design concepts, including primarily the operational principle and the normal configuration of a particular device;
  • Criteria and specifications;
  • Theoretical tools;
  • Quantitative data;
  • Practical considerations;
  • Design instrumentalities.

The fourth category concerns the quantitative knowledge just referred to, and the third the theoretical tools used to acquire it. These two categories can be assumed to match Bunge’s notion of substantive technological theories. The status of the remaining four categories is much less clear, however, partly because they are less familiar, if familiar at all, from the well-explored context of science. Vincenti claims that these four categories represent prescriptive forms of knowledge rather than descriptive ones. Here, the activity of design introduces an element of normativity, which is absent from scientific knowledge. Take such a basic notion as ‘operational principle’, which refers to the way in which the function of a device is realized, or, in short, how it works. This is still a purely descriptive notion. Subsequently, however, it plays a role in arguments that seek to prescribe a course of action to someone who has a goal that could be realized by the operation of such a device. At this stage, the issue changes from a descriptive to a prescriptive or normative one. An extensive discussion of the various kinds of knowledge relevant to technology is offered by Houkes (2009).

Although the notion of an operational principle—a term that seems to originate with Polanyi (1958)—is central to engineering design, no single clear-cut definition of it seems to exist. The task of disentangling descriptive from prescriptive aspects in an analysis of technical actions and their constituents has therefore hardly begun. This task requires a clear view on the extent and scope of technology. If one follows Joseph Pitt in his book Thinking About Technology (1999) and defines technology broadly as ‘humanity at work’, then to distinguish between technical action and action in general becomes difficult, and the study of action in technology must absorb all descriptive and normative theories of action, including the theory of practical rationality, and much of theoretical economics in its wake. There have indeed been attempts at such an encompassing account of human action, for example Tadeusz Kotarbinski’s Praxiology (1965), but a perspective of such generality makes it difficult to arrive at results of sufficient depth. It would be a challenge for philosophy to specify the differences among the forms of action, and the reasoning grounding them, in three prominent fields of study: technology, organization and management, and economics.

A more restricted attempt at such an approach is Ilkka Niiniluoto’s (1993). According to Niiniluoto, the theoretical framework of technology, as an activity that is concerned with what the world should be like rather than with what it is, the framework that forms the counterpoint to the descriptive framework of science, is design science. The content of design science, the counterpoint to the theories and explanations that form the content of descriptive science, would then be formed by technical norms, statements of the form ‘If one wants to achieve X, one should do Y’. The notion of a technical norm derives from Georg Henrik von Wright’s Norm and Action (1963). Technical norms need to be distinguished from anankastic statements expressing natural necessity, of the form ‘If X is to be achieved, Y needs to be done’; the latter have a truth value but the former do not. Von Wright himself, however, wrote that he did not understand the mutual relations between these statements. Zwart, Franssen and Kroes (2018) present a detailed discussion. Ideas on what design science is and can and should be are evidently related to the broad problem area of practical rationality—see this encyclopedia’s entries on practical reason and instrumental rationality—and also to means-ends reasoning, discussed in the next section.
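
The contrast can be displayed schematically. The notation below is only an informal gloss, not von Wright’s own formalism:

    \text{technical norm:} \quad \mathrm{Want}(X) \;\Rightarrow\; \mathrm{Ought}(\mathrm{do}\,Y) \qquad \text{(neither true nor false)}
    \text{anankastic statement:} \quad \Box\,\bigl(X \rightarrow \mathrm{done}(Y)\bigr) \qquad \text{(true or false)}

Here \Box marks natural necessity: in every naturally possible course of events in which X is achieved, Y has been done. The technical norm, by contrast, issues a recommendation and so is not a candidate for truth or falsity.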

Design is an activity that is subject to rational scrutiny but in which creativity is considered to play an important role as well. Since design is a form of action, a structured series of decisions to proceed in one way rather than another, the form of rationality that is relevant to it is practical rationality, the rationality incorporating the criteria on how to act, given particular circumstances. This suggests a clear division of labor between the part to be played by rational scrutiny and the part to be played by creativity. Theories of rational action generally conceive their problem situation as one involving a choice among various courses of action open to the agent. Rationality then concerns the question how to decide among given options, whereas creativity concerns the generation of these options. This distinction is similar to the distinction between the context of justification and the context of discovery in science. The suggestion that is associated with this distinction, however, that rational scrutiny only applies in the context of justification, is difficult to uphold for technological design. If the initial creative phase of option generation is conducted sloppily, the result of the design task can hardly be satisfactory. Unlike the case of science, where the practical consequences of entertaining a particular theory are not taken into consideration, the context of discovery in technology is governed by severe constraints of time and money, and an analysis of the problem how best to proceed certainly seems in order. There has been little philosophical work done in this direction; an overview of the issues is given in Kroes, Franssen, and Bucciarelli (2009).

The ideas of Herbert Simon on bounded rationality (see, e.g., Simon 1982) are relevant here, since decisions on when to stop generating options, and when to stop gathering information about these options and about the consequences of adopting them, are crucial in decision making if informational overload and calculative intractability are to be avoided. However, it has proved difficult to develop Simon’s ideas on bounded rationality further since their conception in the 1950s. Another notion that is relevant here is means-ends reasoning. In order to be of any help here, theories of means-ends reasoning should concern not just the evaluation of given means with respect to their ability to achieve given ends, but also the generation or construction of means for given ends. A comprehensive theory of means-ends reasoning, however, is not yet available; for a proposal on how to develop means-ends reasoning in the context of technical artifacts, see Hughes, Kroes, and Zwart 2007. In the practice of engineering, alternative proposals for the realization of particular functions are usually taken from ‘catalogs’ of existing and proven realizations. These catalogs are extended by ongoing research in technology rather than under the urge of particular design tasks.
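
A minimal sketch of the stopping rule at the heart of Simon’s picture, commonly called satisficing, may make the point concrete. The Python below is illustrative only; the function name, the aspiration level and the option generator are hypothetical:

    # Satisficing: stop generating and evaluating options as soon as one
    # meets a preset aspiration level, instead of searching for the optimum.
    from typing import Callable, Iterable, Optional, TypeVar

    Option = TypeVar("Option")

    def satisfice(options: Iterable[Option],
                  value: Callable[[Option], float],
                  aspiration: float) -> Optional[Option]:
        """Return the first option whose value meets the aspiration level."""
        for option in options:       # options are generated one by one
            if value(option) >= aspiration:
                return option        # 'good enough' ends the search here
        return None                  # no option met the aspiration level

The point of the rule is that evaluation stops early, so that informational overload and the intractable comparison of all conceivable options are avoided.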

When engineering design is conceived as a process of decision making, governed by considerations of practical rationality, the next step is to specify these considerations. Almost all theories of practical rationality conceive of it as a reasoning process where a match between beliefs and desires or goals is sought. The desires or goals are represented by their value or utility for the decision maker, and the decision maker’s problem is to choose an action that realizes a situation that, ideally, has maximal value or utility among all the situations that could be realized. If there is uncertainty concerning the situations that will be realized by a particular action, then the problem is conceived as aiming for maximal expected value or utility. Now the instrumental perspective on technology implies that the value that is at issue in the design process, viewed as a process of rational decision making, is not the value of the artifacts that are created. Those values are the domain of the users of the technology so created. They are supposed to be represented in the functional requirements defining the design task. Instead the value to be maximized is the extent to which a particular design meets the functional requirements defining the design task. It is in this sense that engineers share an overall perspective on engineering design as an exercise in optimization. But although optimization is a value-oriented notion, it is not itself perceived as a value driving engineering design.
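
Stated only as a reminder of the standard decision-theoretic form assumed here, the rational decision maker chooses

    a^* \;=\; \arg\max_{a \in A} \; \sum_{s \in S} P(s \mid a)\, u(s),

where A is the set of available actions, S the set of situations that might be realized, P(s | a) the probability that action a realizes situation s, and u the decision maker’s value or utility function; under certainty the sum collapses to the utility of the unique outcome of a.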

The functional requirements that define most design problems do not prescribe explicitly what should be optimized; usually they set levels to be attained minimally. It is then up to the engineer to choose how far to go beyond meeting the requirements in this minimal sense. Efficiency, in energy consumption and use of materials first of all, is then often a prime value. Under the pressure of society, other values have come to be incorporated, in particular safety and, more recently, sustainability. Sometimes it is claimed that what engineers aim to maximize is just one factor, namely market success. Market success, however, can only be assessed after the fact. The engineer’s maximization effort will instead be directed at what are considered the predictors of market success. Meeting the functional requirements and being relatively efficient and safe are plausible candidates as such predictors, but additional methods, informed by market research, may introduce additional factors or may lead to a hierarchy among the factors.

Choosing the design option that maximally meets all the functional requirements (which may but need not originate with the prospective user) and all other considerations and criteria that are taken to be relevant, then becomes the practical decision-making problem to be solved in a particular engineering-design task. This creates several methodological problems. Most important of these is that the engineer is facing a multi-criteria decision problem. The various requirements come with their own operationalizations in terms of design parameters and measurement procedures for assessing their performance. This results in a number of rank orders or quantitative scales which represent the various options out of which a choice is to be made. The task is to come up with a final score in which all these results are ‘adequately’ represented, such that the option that scores best can be considered the optimal solution to the design problem. Engineers describe this situation as one where trade-offs have to be made: in judging the merit of one option relative to other options, a relatively bad performance on one criterion can be balanced by a relatively good performance on another criterion. An important problem is whether a rational method for doing this can be formulated. It has been argued by Franssen (2005) that this problem is structurally similar to the well-known problem of social choice, for which Kenneth Arrow proved his notorious impossibility theorem in 1950. As a consequence, as long as we require of a solution method to this problem that it meet certain requirements spelling out its generality and rationality, no such method exists. In technical design, the role that individual voters play in situations of social choice is played by the various design criteria, which each have a say in what the resulting product comes to look like. This poses serious problems for the claims of engineers that their designs are optimal solutions in the sense of satisfying the totality of the design criteria best, since Arrow’s theorem implies that in most multi-criteria problems this notion of ‘optimal’ cannot be rigorously defined, just as in most multi-voter situations the notion of a best or even adequate representation of what the voters jointly want cannot be rigorously defined.
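
The structural analogy with voting can be made vivid with a toy example. The Python below is purely illustrative, with made-up options and criteria: treating each criterion as a ‘voter’ and comparing options pairwise, as in majority voting, produces a cycle, so that no option qualifies as best overall.

    # Three design options (A, B, C) ranked by three criteria (best first).
    rankings = {
        "weight": ["A", "B", "C"],
        "cost":   ["B", "C", "A"],
        "safety": ["C", "A", "B"],
    }

    def beats(x: str, y: str) -> bool:
        """x beats y if a majority of criteria rank x above y."""
        wins = sum(r.index(x) < r.index(y) for r in rankings.values())
        return wins > len(rankings) / 2

    print(beats("A", "B"), beats("B", "C"), beats("C", "A"))  # True True True

Since A beats B, B beats C, and C beats A, any aggregation method that respects pairwise majorities among the criteria fails to single out an optimal design; this is the multi-criteria analogue of the voting paradox underlying Arrow’s theorem.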

This result seems to exempt a crucial aspect of engineering activity from philosophical scrutiny, and it could be used to defend the opinion that engineering is at least partly an art, not a science. Instead of surrendering to the result, however, which has a significance that extends far beyond engineering and even beyond decision making in general, we should perhaps conclude that there is still a lot of work to be done on what might be termed, provisionally, ‘approximative’ forms of reasoning. One form of reasoning to be included here is Herbert Simon’s bounded rationality, plus the related notion of ‘satisficing’. Since their introduction in the 1950s (Simon 1957) these two terms have found wide usage, but we are still lacking a general theory of bounded rationality. It may be in the nature of forms of approximative reasoning such as bounded rationality that a general theory cannot be had, but even a systematic treatment from which such an insight could emerge seems to be lacking.

Another problem for the decision-making view of engineering design is that in modern technology almost all design is done by teams. Such teams are composed of experts from many different disciplines. Each discipline has its own theories, its own models of interdependencies, its own assessment criteria, and so forth, and the professionals belonging to these disciplines must be considered as inhabitants of different object worlds, as Louis Bucciarelli (1994) phrases it. The different team members are, therefore, likely to disagree on the relative rankings and evaluations of the various design options under discussion. Agreement on one option as the overall best one can here be arrived at even less by an algorithmic method exemplifying engineering rationality. Instead, models of social interaction, such as bargaining and strategic thinking, are relevant here. An example of such an approach to an (abstract) design problem is presented by Franssen and Bucciarelli (2004).

To look in this way at technological design as a decision-making process is to view it normatively from the point of view of practical or instrumental rationality. At the same time it is descriptive in that it is a description of how engineering methodology generally presents the issue of how to solve design problems. From that somewhat higher perspective there is room for all kinds of normative questions that are not addressed here, such as whether the functional requirements defining a design problem can be seen as an adequate representation of the values of the prospective users of an artifact or a technology, or by which methods values such as safety and sustainability can best be elicited and represented in the design process. These issues will be taken up in Section 3.

Understanding the process of designing artifacts is the theme in philosophy of technology that most directly touches on the interests of engineering practice. This is hardly true for another issue of central concern to analytic philosophy of technology, which is the status and the character of artifacts. This is perhaps not unlike the situation in the philosophy of science, where working scientists seem also to be much less interested in investigating the status and character of models and theories than philosophers are.

Artifacts are man-made objects: they have an author (see Hilpinen 1992 and Hilpinen’s article artifact in this encyclopedia). The artifacts that are of relevance to technology are, additionally, made to serve a purpose. This excludes, within the set of all man-made objects, byproducts and waste products and equally, though controversially, works of art. Byproducts and waste products result from an intentional act to make something, but they are not themselves precisely what was intended, although their maker may be well aware of their creation. Works of art result from an intention directed at their creation (although in exceptional cases of conceptual art, this directedness may involve many intermediate steps), but it is contested whether artists include in their intentions concerning their work an intention that the work serves some purpose. Nevertheless, most philosophers of technology who discuss the metaphysics of artifacts exclude artworks from their analyses. A further discussion of this aspect belongs to the philosophy of art. An interesting general account which does not do so has been presented by Dipert (1993).

Technical artifacts, then, are made to serve some purpose, generally to be used for something or to act as a component in a larger artifact, which in its turn is either something to be used or again a component. Whether end product or component, an artifact is ‘for something’, and what it is for is called the artifact’s function . Several researchers have emphasized that an adequate description of artifacts must refer both to their status as tangible physical objects and to the intentions of the people engaged with them. Kroes and Meijers (2006) have dubbed this view “the dual nature of technical artifacts”; its most mature formulation is Kroes 2012. They suggest that the two aspects are ‘tied up’, so to speak, in the notion of artifact function. This gives rise to several problems. One, which will be passed over quickly because little philosophical work seems to have been done concerning it, is that structure and function mutually constrain each other, but the constraining is only partial. It is unclear whether a general account of this relation is possible and what problems need to be solved to arrive there. There may be interesting connections with the issue of multiple realizability in the philosophy of mind and with accounts of reduction in science; an example where this is explored is Mahner and Bunge 2001.

It is equally problematic whether a unified account of the notion of function as such is possible, but this issue has received considerably more philosophical attention. The notion of function is of paramount importance for characterizing artifacts, but the notion is used much more widely. The notion of an artifact’s function seems to refer necessarily to human intentions. Function is also a key concept in biology, however, where no intentionality plays a role, and it is a key concept in cognitive science and the philosophy of mind, where it is crucial in grounding intentionality in non-intentional, structural and physical properties. Up to now there is no accepted general account of function that covers both the intentionality-based notion of artifact function and the non-intentional notion of biological function—not to speak of other areas where the concept plays a role, such as the social sciences. The most comprehensive theory, which has the ambition to account for the biological, the cognitive, and the intentional notion, is Ruth Millikan’s (1984); for criticisms and replies, see Preston 1998, 2003; Millikan 1999; Vermaas & Houkes 2003; and Houkes & Vermaas 2010. The collection of essays edited by Ariew, Cummins and Perlman (2002) presents an introduction to the topic of characterizing the notion of function, although the emphasis is on biological functions. This emphasis remains very strong in the literature, as can be judged from the most recent critical overview (Garson 2016), which explicitly refrains from discussing artifact functions.

Against the view that, at least in the case of artifacts, the notion of function refers necessarily to intentionality, it could be argued that in discussing the functions of the components of a larger device, and the interrelations between these functions, the intentional ‘side’ of these functions is of secondary importance only. This, however, would be to ignore the possibility of the malfunctioning of such components. This notion seems to be definable only in terms of a mismatch between actual behavior and intended behavior. The notion of malfunction also sharpens an ambiguity in the general reference to intentions when characterizing technical artifacts. These artifacts usually engage many people, and the intentions of these people may not all pull in the same direction. A major distinction can be drawn between the intentions of the actual user of an artifact for a particular purpose and the intentions of the artifact’s designer. Since an artifact may be used for a purpose different from the one for which its designer intended it to be used, and since people may also use natural objects for some purpose or other, one is invited to allow that artifacts can have multiple functions, or to enforce a hierarchy among all relevant intentions in determining the function of an artifact, or to introduce a classification of functions in terms of the sorts of determining intentions. In the latter case, which is a sort of middle way between the two other options, one commonly distinguishes between the proper function of an artifact as the one intended by its designer and the accidental function of the artifact as the one given to it by some user on private considerations. Accidental use can become so common, however, that the original function drops out of memory.

Closely related to this issue of the extent to which use and design determine the function of an artifact is the problem of characterizing artifact kinds. It may seem that we use functions to classify artifacts: an object is a knife because it has the function of cutting, or more precisely, of enabling us to cut. On closer inspection, however, the link between function and kind-membership seems much less straightforward. The basic kinds in technology are, for example, ‘knife’, ‘aircraft’ and ‘piston’. The members of these kinds have been designed in order to be used to cut something with, to transport something through the air and to generate mechanical movement through thermodynamic expansion, respectively. However, one cannot create a particular kind of artifact just by designing something with the intention that it be used for some particular purpose: a member of the kind so created must actually be useful for that purpose. Despite innumerable design attempts and claims, the perpetual motion machine is not a kind of artifact. A kind like ‘knife’ is defined, therefore, not only by the intentions of the designers of its members that they each be useful for cutting but also by a shared operational principle known to these designers, and on which they based their design. This is, in a different setting, also defended by Thomasson, who in her characterization of what she in general calls an artifactual kind says that such a kind is defined by the designer’s intention to make something of that kind, by a substantive idea that the designer has of how this can be achieved, and by his or her largely successful achievement of it (Thomasson 2003, 2007). Qua sorts of kinds in which artifacts can be grouped, a distinction must therefore be made between a kind like ‘knife’ and a corresponding but different kind ‘cutter’. A ‘knife’ indicates a particular way a ‘cutter’ can be made. One can also cut, however, with a thread or line, a welding torch, a water jet, and undoubtedly by other sorts of means that have not yet been thought of. A ‘cutter’ would then refer to a truly functional kind. As such, it is subject to the conflict between use and design: one could mean by ‘cutter’ anything that can be used for cutting or anything that has been designed to be used for cutting, by the application of whatever operational principle, presently known or unknown.

This distinction between artifact kinds and functional kinds is relevant for the status of such kinds in comparison to other notions of kinds. Philosophy of science has emphasized that the concept of natural kind, such as exemplified by ‘water’ or ‘atom’, lies at the basis of science. On the other hand it is generally taken for granted that there are no regularities that all knives or airplanes or pistons answer to. This, however, is loosely based on considerations of multiple realizability that fully apply only to functional kinds, not to artifact kinds. Artifact kinds share an operational principle that gives them some commonality in physical features, and this commonality becomes stronger once a particular artifact kind is subdivided into narrower kinds. Since these kinds are specified in terms of physical and geometrical parameters, they are much closer to the natural kinds of science, in that they support law-like regularities; see Soavi 2009 for a defense of this position. A recent collection of essays that discuss the metaphysics of artifacts and artifact kinds is Franssen, Kroes, Reydon and Vermaas 2014.

There is at least one additional technology-related topic that ought to be mentioned because it has created a good deal of analytic philosophical literature, namely Artificial Intelligence and related areas. A full discussion of this vast field is beyond the scope of this entry, however. Information is to be found in the entries on Turing machines , the Church-Turing thesis , computability and complexity , the Turing test , the Chinese room argument , the computational theory of mind , functionalism , multiple realizability , and the philosophy of computer science .

3. Ethical and Social Aspects of Technology

It was not until the twentieth century that the development of the ethics of technology as a systematic and more or less independent subdiscipline of philosophy started. This late development may seem surprising given the large impact that technology has had on society, especially since the industrial revolution.

A plausible reason for this late development of ethics of technology is the instrumental perspective on technology that was mentioned in Section 2.2. This perspective implies, basically, a positive ethical assessment of technology: technology increases the possibilities and capabilities of humans, which seems in general desirable. Of course, since antiquity, it has been recognized that the new capabilities may be put to bad use or lead to human hubris. Often, however, these undesirable consequences are attributed to the users of technology, rather than the technology itself, or its developers. This vision is known as the instrumental vision of technology, and it results in the so-called neutrality thesis. The neutrality thesis holds that technology is a neutral instrument that can be put to good or bad use by its users. During the twentieth century, this neutrality thesis met with severe critique, most prominently by Heidegger and Ellul, who have been mentioned in this context in Section 2, but also by philosophers from the Frankfurt School, such as Horkheimer and Adorno (1947 [2002]), Marcuse (1964), and Habermas (1968 [1970]).

The scope and the agenda for ethics of technology to a large extent depend on how technology is conceptualized. The second half of the twentieth century has witnessed a richer variety of conceptualizations of technology that move beyond the conceptualization of technology as a neutral tool, as a world view or as a historical necessity. This includes conceptualizations of technology as a political phenomenon (Winner, Feenberg, Sclove), as a social activity (Latour, Callon, Bijker and others in the area of science and technology studies), as a cultural phenomenon (Ihde, Borgmann), as a professional activity (engineering ethics, e.g., Davis), and as a cognitive activity (Bunge, Vincenti). Despite this diversity, the development in the second half of the twentieth century is characterized by two general trends. One is a move away from technological determinism and the assumption that technology is a given self-contained phenomenon which develops autonomously to an emphasis on technological development being the result of choices (although not necessarily the intended result). The other is a move away from ethical reflection on technology as such to ethical reflection on specific technologies and on specific phases in the development of technology. Both trends together have resulted in an enormous increase in the number and scope of ethical questions that are asked about technology. The developments also imply that ethics of technology is to be adequately empirically informed, not only about the exact consequences of specific technologies but also about the actions of engineers and the process of technological development. This has also opened the way to the involvement of other disciplines in ethical reflections on technology, such as Science and Technology Studies (STS) and Technology Assessment (TA).

3.2 Approaches in the Ethics of Technology

Not only is the ethics of technology characterized by a diversity of approaches, it might even be doubted whether something like a subdiscipline of ethics of technology, in the sense of a community of scholars working on a common set of problems, exists. The scholars studying ethical issues in technology have diverse backgrounds (e.g., philosophy, STS, TA, law, political science, and STEM disciplines) and they do not always consider themselves (primarily) ethicists of technology. To give the reader an overview of the field, three basic approaches or strands that might be distinguished in the ethics of technology will be discussed.

Both cultural and political approaches build on the traditional philosophy and ethics of technology of the first half of the twentieth century. Whereas cultural approaches conceive of technology as a cultural phenomenon that influences our perception of the world, political approaches conceive of technology as a political phenomenon, i.e., as a phenomenon that is ruled by and embodies institutional power relations between people.

Cultural approaches are often phenomenological in nature or at least position themselves in relation to phenomenology as post-phenomenology. Examples of philosophers in this tradition are Don Ihde, Albert Borgmann, Peter-Paul Verbeek and Evan Selinger (e.g., Borgmann 1984; Ihde 1990; Verbeek 2000 [2005], 2011). The approaches are usually influenced by developments in STS, especially the idea that technologies contain a script that influences not only people’s perception of the world but also human behavior, and the idea of the absence of a fundamental distinction between humans and non-humans, including technological artifacts (Akrich 1992; Latour 1992, 1993; Ihde & Selinger 2003). The combination of both ideas has led some to claim that technology has (moral) agency, a claim that is discussed below in Section 3.3.1 .

Political approaches to technology mostly go back to Marx, who assumed that the material structure of production in society, in which technology is obviously a major factor, determined the economic and social structure of that society. Similarly, Langdon Winner has argued that technologies can embody specific forms of power and authority (Winner 1980). According to him, some technologies are inherently normative in the sense that they require or are strongly compatible with certain social and political relations. Railroads, for example, seem to require a certain authoritative management structure. In other cases, technologies may be political due to the particular way they have been designed. Some political approaches to technology are inspired by (American) pragmatism and, to a lesser extent, discourse ethics. A number of philosophers, for example, have pleaded for a democratization of technological development and the inclusion of ordinary people in the shaping of technology (Winner 1983; Sclove 1995; Feenberg 1999). Such ideas are also echoed in recent interdisciplinary approaches, such as Responsible Research and Innovation (RRI), that aim at opening up the innovation process to a broader range of stakeholders and concerns (Owen et al. 2013).

Although political approaches have obviously ethical ramifications, many philosophers who initially adopted such approaches do not engage in explicit ethical reflection. Also in political philosophy, technology does not seem to have been taken up as an important topic. Nevertheless, particularly in relation to digital technologies such as social media, algorithms and more generally Artificial Intelligence (AI), a range of political themes has recently been discussed, such as threats to democracy (from e.g. social media), the power of Big Tech companies, and new forms of exploitation, domination and colonialism that may come with AI (e.g., Coeckelbergh 2022; Susskind 2022; Zuboff 2017; Adams 2021). An important emerging theme is also justice, which does not just encompass distributive justice (Rawls 1999), but also recognition justice (Fraser and Honneth 2003) and procedural justice. Questions about justice have not only been raised by digital technologies, but also by climate change and energy technologies, leading to the coinage of new notions like climate justice (Caney 2014) and energy justice (Jenkins et al. 2016).

Engineering ethics started off in the 1980s in the United States, primarily as an educational effort. Engineering ethics is concerned with “the actions and decisions made by persons, individually or collectively, who belong to the profession of engineering” (Baum 1980: 1). According to this approach, engineering is a profession, in the same way as medicine is a profession.

Although there is no agreement on how a profession exactly should be defined, the following characteristics are often mentioned:

  • A profession relies on specialized knowledge and skills that require a long period of study;
  • The occupational group has a monopoly on the carrying out of the occupation;
  • The assessment of whether the professional work is carried out in a competent way is done by, and it is accepted that this can only be done by, professional peers;
  • A profession provides society with products, services or values that are useful or worthwhile for society, and is characterized by an ideal of serving society;
  • The daily practice of professional work is regulated by ethical standards, which are derived from or relate to the society-serving ideal of the profession.

Typical ethical issues that are discussed in engineering ethics are professional obligations of engineers as exemplified in, for example, codes of ethics of engineers, the role of engineers versus managers, competence, honesty, whistle-blowing, concern for safety and conflicts of interest (Davis 1998, 2005). Over the years, the scope of engineering ethics has been broadened. Whereas it initially often focused on decisions of individual engineers and on questions like whistle-blowing and loyalty, textbooks now also discuss the wider context in which such decisions are made and pay attention to, for example, the so-called problem of many hands (van de Poel and Royakkers 2011; Peterson 2020) (see also Section 3.3.2). Initially, the focus was often primarily on safety concerns and issues like competence and conflicts of interests, but nowadays issues of sustainability, social justice, privacy, global issues and the role of technology in society are also discussed (Harris, Pritchard, and Rabins 2014; Martin and Schinzinger 2022; Taebi 2021; Peterson 2020; van de Poel and Royakkers 2011).

The last decades have witnessed an enormous increase in ethical inquiries into specific technologies. This may now be the largest of the three strands discussed, especially given the rapid growth in technology-specific ethical inquiries in the last two decades. One of the most visible new fields nowadays is digital ethics, which evolved from computer ethics (e.g., Moor 1985; Floridi 2010; Johnson 2009; Weckert 2007; van den Hoven & Weckert 2008), with more recently a focus on robotics, artificial intelligence, machine ethics, and the ethics of algorithms (Lin, Abney, & Jenkins 2017; Di Nucci & Santoni de Sio 2016; Mittelstadt et al. 2016; Bostrom & Yudkowsky 2014; Wallach & Allen 2009; Coeckelbergh 2020b). Other technologies like biotechnology have also spurred dedicated ethical investigations (e.g., Sherlock & Morrey 2002; P. Thompson 2007). More traditional fields like architecture and urban planning have also attracted specific ethical attention (Fox 2000). Nanotechnology and so-called converging technologies have led to the establishment of what is called nanoethics (Allhoff et al. 2007). Other examples are the ethics of nuclear deterrence (Finnis et al. 1988), nuclear energy (Taebi & Roeser 2015) and geoengineering (Christopher Preston 2016).

Obviously the establishment of such new fields of ethical reflection is a response to social and technological developments. Still, the question can be asked whether the social demand is best met by establishing new fields of applied ethics. This issue is in fact regularly discussed as new fields emerge. Several authors have for example argued that there is no need for nanoethics because nanotechnology does not raise any really new ethical issues (e.g., McGinn 2010). The alleged absence of newness here is supported by the claim that the ethical issues raised by nanotechnology are a variation on, and sometimes an intensification of, existing ethical issues, but hardly really new, and by the claim that these issues can be dealt with by means of existing theories and concepts from moral philosophy. For an earlier, similar discussion concerning the supposed new character of ethical issues in computer engineering, see Tavani 2002.

The new fields of ethical reflection are often characterized as applied ethics, that is, as applications of theories, normative standards, concepts and methods developed in moral philosophy. For each of these elements, however, application is usually not straightforward but requires a further specification or revision. This is the case because general moral standards, concepts and methods are often not specific enough to be applicable in any direct sense to specific moral problems. ‘Application’ therefore often leads to new insights which might well result in the reformulation or at least refinement of existing normative standards, concepts and methods. In some cases, ethical issues in a specific field might require new standards, concepts or methods. Beauchamp and Childress for example have proposed a number of general ethical principles for biomedical ethics (Beauchamp & Childress 2001). These principles are more specific than general normative standards, but still so general and abstract that they apply to different issues in biomedical ethics. In computer ethics, existing moral concepts relating to, for example, privacy and ownership have been redefined and adapted to deal with issues which are typical for the computer age (Johnson 2003). An example is Nissenbaum’s proposal to understand privacy in terms of contextual integrity (Nissenbaum 2010). New fields of ethical application might also require new methods for, for example, discerning ethical issues that take into account relevant empirical facts about these fields, like the fact that technological research and development usually takes place in networks of people rather than by individuals (Zwart et al. 2006). Another more general issue that applies to many new technologies is how to deal with the uncertainties about (potential) social and ethical impacts that typically surround new emerging technologies. Brey’s (2012) proposal for an anticipatory ethics may be seen as a reply to this challenge. The issue of anticipation is also one of the central concerns in the more recent interdisciplinary field of Responsible Research and Innovation (RRI) (e.g., Owen et al. 2013).

Although different fields of ethical reflection on specific technologies might well raise their own philosophical and ethical issues, it can be questioned whether this justifies the development of separate subfields or even subdisciplines. One obvious argument might be that in order to say something ethically meaningful about new technologies, one needs specialized and detailed knowledge of a specific technology. Moreover, such subfields allow interaction with relevant non-philosophical experts in, for example, law, psychology, economy, science and technology studies (STS) or technology assessment (TA), as well as the relevant STEM (Science, Technology, Engineering, Medicine) disciplines. On the other hand, it could also be argued that a lot can be learned from interaction and discussion between ethicists specializing in different technologies, and from a fruitful interaction with the two other strands discussed above (cultural and political approaches and engineering ethics). In particular, more political approaches to technology can be complementary to approaches that focus on ethical issues of specific technologies (such as AI) by drawing attention to justice issues, power differences and the role of larger institutional and international contexts. Currently, such interaction in many cases seems absent, although there are of course exceptions.

3.3 Some Recurrent Themes in the Ethics of Technology

We now turn to the description of some specific themes in the ethics of technology. We focus on a number of general themes that provide an illustration of general issues in the ethics of technology and the way these are treated.

One important general theme in the ethics of technology is the question whether technology is value-laden. Some authors have maintained that technology is value-neutral, in the sense that technology is just a neutral means to an end, and accordingly can be put to good or bad use (e.g., Pitt 2000). This view might have some plausibility in as far as technology is considered to be just a bare physical structure. Most philosophers of technology, however, agree that technological development is a goal-oriented process and that technological artifacts by definition have certain functions, so that they can be used for certain goals but not, or only with far more difficulty or much less effectively, for other goals. This conceptual connection between technological artifacts, functions and goals makes it hard to maintain that technology is value-neutral. Even if this point is granted, the value-ladenness of technology can be construed in a host of different ways. Some authors have maintained that technology can have moral agency. This claim suggests that technologies can autonomously and freely ‘act’ in a moral sense and can be held morally responsible for their actions.

The debate whether technologies can have moral agency started off in computer ethics (Bechtel 1985; Snapper 1985; Dennett 1997; Floridi & Sanders 2004) but has since broadened. Typically, the authors who claim that technologies (can) have moral agency often redefine the notion of agency or its connection to human will and freedom (e.g., Latour 1993; Floridi & Sanders 2004; Verbeek 2011). A disadvantage of this strategy is that it tends to blur the morally relevant distinctions between people and technological artifacts. More generally, the claim that technologies have moral agency sometimes seems to have become shorthand for claiming that technology is morally relevant. This, however, overlooks the fact that technologies can be value-laden in other ways than by having moral agency (see, e.g., Johnson 2006; Radder 2009; Illies & Meijers 2009; Peterson & Spahn 2011; Miller 2020; Klenk 2021). One might, for example, claim that technology enables (or even invites) and constrains (or even inhibits) certain human actions and the attainment of certain human goals and therefore is to some extent value-laden, without claiming moral agency for technological artifacts. A good overview of the debate can be found in Kroes and Verbeek 2014.

The debate about moral agency and technology is now particularly salient with respect to the design of intelligent artificial agents. James Moor (2006) has distinguished between four ways in which artificial agents may be or become moral agents:

  • Ethical impact agents are robots and computer systems that ethically impact their environment; this is probably true of all artificial agents.
  • Implicit ethical agents are artificial agents that have been programmed to act according to certain values.
  • Explicit ethical agents are machines that can represent ethical categories and that can ‘reason’ (in machine language) about these.
  • Full ethical agents in addition also possess some characteristics we often consider crucial for human agency, like consciousness, free will and intentionality.

It might perhaps never be possible to technologically design full ethical agents, and if it were to become possible it might be questionable whether it is morally desirable to do so (Bostrom & Yudkowsky 2014; van Wynsberghe and Robbins 2019). As Wallach and Allen (2009) have pointed out, the main problem might not be to design artificial agents that can function autonomously and that can adapt themselves in interaction with the environment, but rather to build enough, and the right kind of, ethical sensitivity into such machines.

Apart from the question whether intelligent artificial agents can have moral agency, there are (broader) questions about their moral status; for example, would they, and if so under what conditions, qualify as moral patients, toward which humans have certain moral obligations? Traditionally, moral status is connected to consciousness, but a number of authors have proposed more minimal criteria for moral status, particularly for (social) robots. For example, Danaher (2020) has suggested that behaviouristic criteria might suffice, whereas Coeckelbergh (2014) and Gunkel (2018) have suggested a relational approach. Mosakas (2021) has argued that such approaches do not ground moral status, and hence that humans have no direct moral duties towards social robots (although the robots may still be morally relevant in other ways). Others have suggested that social robots may sometimes deceive us into believing that they have certain cognitive and emotional capabilities (which might also give them moral status) while they do not (Sharkey and Sharkey 2021).

Responsibility has always been a central theme in the ethics of technology. The traditional philosophy and ethics of technology, however, tended to discuss responsibility in rather general terms and were rather pessimistic about the possibility of engineers to assume responsibility for the technologies they developed. Ellul, for example, has characterized engineers as the high priests of technology, who cherish technology but cannot steer it. Hans Jonas (1979 [1984]) has argued that technology requires an ethics in which responsibility is the central imperative because for the first time in history we are able to destroy the earth and humanity.

In engineering ethics, the responsibility of engineers is often discussed in relation to codes of ethics that articulate specific responsibilities of engineers. Such codes of ethics stress three types of responsibilities of engineers: (1) conducting the profession with integrity and honesty and in a competent way, (2) responsibilities towards employers and clients and (3) responsibility towards the public and society. With respect to the latter, most US codes of ethics maintain that engineers ‘should hold paramount the safety, health and welfare of the public’.

As has been pointed out by several authors (Nissenbaum 1996; Johnson & Powers 2005; Swierstra & Jelsma 2006), it may be hard to pinpoint individual responsibility in engineering. The reason is that the conditions for the proper attribution of individual responsibility that have been discussed in the philosophical literature (like freedom to act, knowledge, and causality) are often not met by individual engineers. For example, engineers may feel compelled to act in a certain way due to hierarchical or market constraints, and negative consequences may be very hard or impossible to predict beforehand. The causality condition is often difficult to meet as well, due to the long chain from the research and development of a technology to its use and the many people involved in this chain. Davis (2012) nevertheless maintains that despite such difficulties individual engineers can and do take responsibility.

One issue that is at stake in this debate is the notion of responsibility. Davis (2012), and also for example Ladd (1991), argue for a notion of responsibility that focuses less on blame and stresses the forward-looking or virtuous character of assuming responsibility. But many others focus on backward-looking notions of responsibility that stress accountability, blameworthiness or liability. Zandvoort (2000), for example, has pleaded for a notion of responsibility in engineering that is more like the legal notion of strict liability, in which the knowledge condition for responsibility is seriously weakened. Doorn (2012) compares three perspectives on responsibility ascription in engineering—a merit-based, a rights-based and a consequentialist perspective—and argues that the consequentialist perspective, which applies a forward-looking notion of responsibility, is most powerful in influencing engineering practice.

The difficulty of attributing individual responsibility may lead to the Problem of Many Hands (PMH). The term was first coined by Dennis Thompson (1980) in an article about the responsibility of public officials. The term is used to describe problems with the ascription of individual responsibility in collective settings. Doorn (2010) has proposed a procedural approach, based on Rawls’ reflective equilibrium model, to deal with the PMH; other ways of dealing with the PMH include the design of institutions that help to avoid it or an emphasis on virtuous behavior in organizations (van de Poel, Royakkers, & Zwart 2015).

Whereas the PMH refers to the problem of attributing responsibility among a collective of human agents, technological developments have also made it possible to allocate tasks to self-learning and intelligent systems. Such systems may function and learn in ways that are hard for humans to understand, predict and control, leading to so-called ‘responsibility gaps’ (Matthias 2004). Since knowledge and control are usually seen as (essential) preconditions for responsibility, the lack thereof may make it increasingly difficult to hold humans responsible for the actions and consequences of intelligent systems.

Initially, such responsibility gaps were mainly discussed in relation to autonomous weapon systems and self-driving cars (Sparrow 2007; Danaher 2016). As a possible solution, the notion of meaningful human control has been proposed as a precondition for the development and employment of such systems, to ensure that humans can retain control, and hence responsibility, over these systems (Santoni de Sio and van den Hoven 2018). Nyholm (2018) has argued that many alleged cases of responsibility gaps are better understood in terms of collaborative human-technology agency (with humans in a supervising role) rather than the technology taking over control. While responsibility gaps may not be impossible, the more difficult issue may be to attribute responsibility to the various humans involved (which brings the PMH back to the table).

More recently, responsibility gaps have become a more general concern in relation to AI. Due to the advance of machine learning, AI systems may learn in ways that are hard, or almost impossible, for humans to understand. Initially, the dominant notion of responsibility addressed in the literature on responsibility gaps was blameworthiness or culpability, but Santoni de Sio and Mecacci (2021) have recently proposed to distinguish between what they call culpability gaps, moral accountability gaps, public accountability gaps, and active responsibility gaps.

In the last decades, increasing attention has been paid not only to ethical issues that arise during the use of a technology, but also to those that arise during the design phase. An important consideration behind this development is the thought that during the design phase technologies, and their social consequences, are still malleable, whereas during the use phase technologies are more or less given, and negative social consequences may be harder to avoid or positive effects harder to achieve.

In computer ethics, an approach known as Value Sensitive Design (VSD) has been developed to explicitly address the ethical nature of design. VSD aims at integrating values of ethical importance in engineering design in a systematic way (Friedman & Hendry 2019). The approach combines conceptual, empirical and technical investigations. There is also a range of other approaches aimed at including values in design. ‘Design for X’ approaches in engineering aim at including instrumental values (like maintainability, reliability and costs) but they also include design for sustainability, inclusive design, and affective design (Holt & Barnes 2010). Inclusive design aims at making designs accessible to the whole population including, for example, handicapped people and the elderly (Erlandson 2008). Affective design aims at designs that evoke positive emotions with the users and so contribute to human well-being. Van den Hoven, Vermaas, and van de Poel 2015 gives a good overview of the state of the art of value sensitive design for various values and application domains.

If one tries to integrate values into design, one may run into the problem of conflicting values. The safest car is, due to its weight, not likely to be the most sustainable one: here safety and sustainability conflict in the design of cars. Traditional methods by which engineers deal with such conflicts and make trade-offs between different design requirements include cost-benefit analysis and multiple criteria analysis. Such methods are, however, beset with methodological problems like those discussed in Section 2.4 (Franssen 2005; Hansson 2007). Van de Poel (2009) discusses various alternatives for dealing with value conflicts in design, including the setting of thresholds (satisficing), reasoning about values, innovation, and diversity.
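
As a rough illustration of the difference between these approaches, the following toy Python sketch contrasts a weighted multiple-criteria score with threshold-setting (satisficing). It is a minimal sketch, not anything from the cited literature: the design options, scores, weights, and thresholds are all invented for the example.

```python
# Toy illustration of the safety/sustainability conflict discussed above.
# All designs, scores, weights and thresholds are invented assumptions.

designs = {                        # (safety, sustainability), each scored 0-10
    "heavy car": (9, 3),
    "light car": (5, 8),
    "mid-weight car": (7, 6),
}

# Multiple criteria analysis: collapse both values into a single number,
# implicitly trading safety off against sustainability.
weights = {"safety": 0.7, "sustainability": 0.3}

def weighted_score(scores):
    safety, sustainability = scores
    return weights["safety"] * safety + weights["sustainability"] * sustainability

best_by_score = max(designs, key=lambda d: weighted_score(designs[d]))

# Satisficing: treat each value as a threshold that must be met, rather
# than allowing a surplus on one value to offset a deficit on another.
def satisfices(scores, min_safety=6, min_sustainability=5):
    safety, sustainability = scores
    return safety >= min_safety and sustainability >= min_sustainability

acceptable = [d for d in designs if satisfices(designs[d])]

print(best_by_score)  # 'heavy car': wins the weighted trade-off on safety alone
print(acceptable)     # ['mid-weight car']: the only design meeting both thresholds
```

The two procedures can recommend different designs from the same scores, which is one reason the choice between them is itself a value-laden decision.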

The risks of technology are one of the traditional ethical concerns in the ethics of technology. Risks raise not only ethical issues but other philosophical issues, such as epistemological and decision-theoretical issues as well (Roeser et al. 2012).

Risk is usually defined as the product of the probability of an undesirable event and the effect of that event, although other definitions are also in circulation (Hansson 2004b). In general it seems desirable to keep technological risks as small as possible: the larger the risk, the larger either the likelihood or the impact of an undesirable event. Risk reduction is therefore an important goal in technological development, and engineering codes of ethics often attribute to engineers a responsibility for reducing risks and designing safe products. Still, risk reduction is not always feasible or desirable. It is sometimes not feasible because there are no absolutely safe products and technologies. But even if risk reduction is feasible, it may not be acceptable from a moral point of view. Reducing risk often comes at a cost: safer products may be more difficult to use, more expensive, or less sustainable. So sooner or later one is confronted with the question: what is safe enough? What makes a risk (un)acceptable?
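
A minimal Python sketch (not part of the entry; the probabilities and harm figures are invented) makes the product definition concrete and shows one of its much-discussed features: a frequent minor harm and a rare severe harm can come out as exactly the same risk.

```python
# Risk as the product of the probability of an undesirable event
# and the (severity of the) effect of that event. Numbers are illustrative.

def risk(probability: float, harm: float) -> float:
    """Expected harm of a single undesirable event."""
    return probability * harm

frequent_minor = risk(probability=0.10, harm=2.0)     # 0.2
rare_severe    = risk(probability=0.001, harm=200.0)  # 0.2

# Both options carry the same risk on this definition, even though many
# people would evaluate them very differently -- one reason alternative
# definitions of risk are also in circulation (Hansson 2004b).
print(frequent_minor, rare_severe)
```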

The process of dealing with risks is often divided into three stages: risk assessment, risk evaluation and risk management. Of these, the second is most obviously ethically relevant. However, risk assessment already involves value judgments, for example about which risks should be assessed in the first place (Shrader-Frechette 1991). Another important, and morally relevant, issue is the degree of evidence that is needed to establish a risk. In establishing a risk on the basis of a body of empirical data, one can make two kinds of mistakes: one can establish a risk where there actually is none (type I error), or one can mistakenly conclude that there is no risk when there actually is one (type II error). Science traditionally aims at avoiding type I errors. Several authors have argued that in the specific context of risk assessment it is often more important to avoid type II errors (Cranor 1990; Shrader-Frechette 1991). The reason for this is that risk assessment does not just aim at establishing scientific truth but has a practical aim, i.e., to provide the knowledge on the basis of which decisions can be made about whether it is desirable to reduce or avoid certain technological risks in order to protect users or the public.
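
The trade-off between the two error types can be made vivid with a toy simulation. The sketch below is purely illustrative (the noise model, effect size, and evidence thresholds are assumptions, not drawn from the risk-assessment literature): demanding stronger evidence before declaring a risk lowers the type I rate but raises the type II rate.

```python
# Toy simulation of type I vs. type II errors in establishing a risk.
# Model and numbers are illustrative assumptions only.
import random

random.seed(0)

def study(substance_harmful: bool, n: int = 100) -> float:
    """Observed effect size from one noisy study (toy model)."""
    true_effect = 0.3 if substance_harmful else 0.0
    noise = sum(random.gauss(0, 1) for _ in range(n)) / n
    return true_effect + noise

def error_rates(threshold: float, trials: int = 2000):
    type1 = sum(study(False) > threshold for _ in range(trials)) / trials
    type2 = sum(study(True) <= threshold for _ in range(trials)) / trials
    return type1, type2

for threshold in (0.1, 0.2, 0.3):
    t1, t2 = error_rates(threshold)
    print(f"evidence threshold {threshold}: type I = {t1:.2f}, type II = {t2:.2f}")

# Raising the threshold (demanding more evidence) suppresses false alarms
# (type I) at the price of missing real hazards (type II) -- which is why
# the choice of evidence threshold is itself a morally relevant decision.
```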

Risk evaluation is carried out in a number of ways (see, e.g., Shrader-Frechette 1985). One possible approach is to judge the acceptability of risks by comparing them to other risks or to certain standards. One could, for example, compare technological risks with naturally occurring risks. This approach, however, runs the danger of committing a naturalistic fallacy: naturally occurring risks may (sometimes) be unavoidable but that does not necessarily make them morally acceptable. More generally, it is often dubious to judge the acceptability of the risk of technology A by comparing it to the risk of technology B if A and B are not alternatives in a decision (for this and other fallacies in reasoning about risks, see Hansson 2004a).

A second approach to risk evaluation is risk-cost-benefit analysis, which is based on weighing the risks against the benefits of an activity. Different decision criteria can be applied when such an analysis is carried out (Kneese, Ben-David, and Schulze 1983). According to Hansson (2003: 306), the following criterion is usually applied:

… a risk is acceptable if and only if the total benefits that the exposure gives rise to outweigh the total risks, measured as the probability-weighted disutility of outcomes.
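
Read as a decision rule, the quoted criterion can be sketched in a few lines of Python. The outcomes, probabilities, disutilities, and benefit figure below are illustrative assumptions, not real data.

```python
# Hansson's usual criterion, sketched: a risk is acceptable iff the total
# benefits outweigh the total probability-weighted disutility of outcomes.
# All figures below are invented for illustration.

outcomes = [           # (probability, disutility) of each undesirable outcome
    (0.01, 50.0),      # e.g. a moderate harm
    (0.0001, 5000.0),  # e.g. a severe but very unlikely harm
]

total_benefit = 1.2    # total benefits the exposure gives rise to

expected_disutility = sum(p * d for p, d in outcomes)  # 0.5 + 0.5 = 1.0

acceptable = total_benefit > expected_disutility
print(expected_disutility, acceptable)  # 1.0 True
```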

A third approach is to base risk acceptance on the consent of people who suffer the risks after they have been informed about these risks (informed consent). A problem of this approach is that technological risks usually affect a large number of people at once. Informed consent may therefore lead to a “society of stalemates” (Hansson 2003: 300).

Several authors have proposed alternatives to the traditional approaches of risk evaluation on the basis of philosophical and ethical arguments. Shrader-Frechette (1991) has proposed a number of reforms in risk assessment and evaluation procedures on the basis of a philosophical critique of current practices. Roeser (2012) argues for a role of emotions in judging the acceptability of risks. Hansson has proposed the following alternative principle for risk evaluation:

Exposure of a person to a risk is acceptable if and only if this exposure is part of an equitable social system of risk-taking that works to her advantage. (Hansson 2003: 305)

Hansson’s proposal introduces a number of moral considerations into risk evaluation that are traditionally not addressed, or only marginally so: whether the individuals exposed to a risk themselves profit from the risky activity, and whether the distribution of risks and benefits is fair.

Questions about acceptable risk may also be framed in terms of risk imposition. The question is then under what conditions it is acceptable for some agent A to impose a risk on some other agent B. The criteria for acceptable risk imposition are in part similar to the ones discussed above. A risk imposition may, for example, be (more) acceptable if agent B gave their informed consent, or if the risky activity that generates the risk is beneficial for agent B. However, other considerations come in as well, like the relation between agent A and agent B. It might perhaps be acceptable for parents to impose certain risks on their children, while it would be improper for the government to impose such risks on children.

Risk impositions may be particularly problematic if they lead to domination or domination-like effects (Maheshwari and Nyholm 2022). Domination is here understood in the republican sense proposed by philosophers like Pettit (2012). Freedom from domination does not just require people to have different options to choose from, but also to be free from (potential) arbitrary interference in the (availability of) these options by others. Non-domination thus requires that others do not have the power to arbitrarily interfere with one’s options (whether that power is exercised or not). Risk imposition may lead to domination (or at least domination-like effects) if agent A (the risk imposer), by imposing a risk on agent B (the risk bearer), can arbitrarily affect the range of safe options available to agent B.

Some authors have criticized the focus on risks in the ethics of technology. One strand of criticism argues that we often lack the knowledge to reliably assess the risks of a new technology before it has come into use. We often do not know the probability that something might go wrong, and sometimes we do not even know, or at least not fully, what might go wrong and what the possible negative consequences may be. To deal with this, some authors have proposed to conceive of the introduction of new technology in society as a social experiment and have urged reflection on the conditions under which such experiments are morally acceptable (Martin & Schinzinger 2022; van de Poel 2016). Another strand of criticism states that the focus on risks has narrowed the range of impacts of technology that are considered (Swierstra & te Molder 2012). Only impacts related to safety and health, which can be calculated as risks, are considered, whereas ‘soft’ impacts, for example of a social or psychological nature, are neglected, thereby impoverishing the moral evaluation of new technologies.

  • Adams, Rachel, 2021, “Can artificial intelligence be decolonized?”, Interdisciplinary Science Reviews , 46(1–2): 176–197. doi: 10.1080/03080188.2020.1840225.
  • Agricola, Georgius, 1556 [1912], De re metallica , Translated and edited by Herbert Clark Hoover and Lou Henry Hoover, London: The Mining Magazine, 1912. [ Agricola 1556 [1912] available online ]
  • Akrich, Madeleine, 1992, “The Description of Technical Objects”, in Bijker and Law (eds) 1992: 205–224.
  • Allhoff, Fritz, Patrick Lin, James H. Moor and John Weckert (eds), 2007, Nanoethics: The Ethical and Social Implications of Nanotechnology , Hoboken, NJ: Wiley-Interscience.
  • Anders, Günther, 1956, Die Antiquiertheit des Menschen (Volume I: Über die Seele im Zeitalter der zweiten industriellen Revolution ; Volume II: Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution ), München: C.H. Beck.
  • Arendt, Hannah, 1958, The Human Condition , Chicago: University of Chicago Press.
  • Ariew, Andrew, Robert Cummins and Mark Perlman (eds), 2002, Functions: New Essays in the Philosophy of Psychology and Biology , New York and Oxford: Oxford University Press.
  • Aristotle, Physics , Translated in The Complete Works of Aristotle, Volume 1 , The Revised Oxford Translation 2014, edited by Jonathan Barnes.
  • Bacon, Francis, 1627, New Atlantis: A Worke Vnfinished , in his Sylva Sylvarum: or a Naturall Historie, in Ten Centuries , London: William Lee.
  • Baum, Robert J., 1980, Ethics and Engineering Curricula , Hastings-on-Hudson: The Hastings Center.
  • Beauchamp, Tom L., 2003, “The Nature of Applied Ethics”, in Frey and Wellman (eds) 2003: 1–16. doi:10.1002/9780470996621.ch1
  • Beauchamp, Tom L., and James F. Childress, 2001, Principles of Biomedical Ethics , fifth edition, Oxford and New York: Oxford University Press.
  • Bechtel, William, 1985, “Attributing Responsibility to Computer Systems”, Metaphilosophy , 16(4): 296–306. doi:10.1111/j.1467-9973.1985.tb00176.x
  • Bijker, Wiebe E., and John Law (eds), 1992, Shaping Technology/Building Society: Studies in Sociotechnical Change , Cambridge, MA: MIT Press.
  • Bimber, Bruce, 1990, “Karl Marx and the Three Faces of Technological Determinism”, Social Studies of Science , 20(2): 333–351. doi:10.1177/030631290020002006
  • Borgmann, Albert, 1984, Technology and the Character of Contemporary Life: A Philosophical Inquiry , Chicago and London: University of Chicago Press.
  • Bostrom, Nick, and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , edited by Keith Frankish and William M Ramsey, Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020
  • Brey, Philip A.E., 2012, “Anticipatory Ethics for Emerging Technologies”, NanoEthics , 6(1): 1–13. doi:10.1007/s11569-012-0141-7
  • Briffault, R., 1930, Rational Evolution (The Making of Humanity) , New York: The Macmillan Company.
  • Bucciarelli, Louis L., 1994, Designing Engineers , Cambridge, MA: MIT Press.
  • Bunge, Mario, 1966, “Technology as Applied Science”, Technology and Culture , 7(3): 329–347. doi:10.2307/3101932
  • Butler, Samuel, 1872, Erewhon , London: Trubner and Co. [ Butler 1872 available online ]
  • Callon, Michel, 1986, “The Sociology of an Actor-Network: the Case of the Electric Vehicle”, in Mapping the Dynamics of Science and Technology: Sociology of Science in the Real World , Michel Callon, John Law and Arie Rip (eds.), London: Macmillan, pp. 19–34.
  • Caney, Simon, 2014, “Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens”, Journal of Political Philosophy 22(2): 125–149. doi:10.1111/jopp.12030
  • Coeckelbergh, Mark, 2014, “The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics”, Philosophy & Technology 27(1): 61–77. doi: 10.1007/s13347-013-0133-8.
  • –––, 2020a, Introduction to Philosophy of Technology , Oxford and New York: Oxford University Press.
  • –––, 2020b, AI Ethics , Cambridge, MA: MIT Press.
  • –––, 2022, The Political Philosophy of AI: An Introduction , Cambridge: Polity.
  • Cranor, Carl F., 1990, “Some Moral Issues in Risk Assessment”, Ethics , 101(1): 123–143. doi:10.1086/293263
  • Danaher, John, 2016, “Robots, Law and the Retribution Gap”, Ethics and Information Technology 18(4): 299–309. doi: 10.1007/s10676-016-9403-3.
  • –––, 2020, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics 26(4): 2023–2049. doi: 10.1007/s11948-019-00119-x.
  • Darwin, Charles R., 1859, On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life , London: John Murray.
  • Davis, Michael, 1998, Thinking Like an Engineer: Studies in the Ethics of a Profession , New York and Oxford: Oxford University Press.
  • –––, 2005, Engineering Ethics , Aldershot/Burlington, VT: Ashgate.
  • –––, 2012, “‘Ain’t No One Here But Us Social Forces’: Constructing the Professional Responsibility of Engineers”, Science and Engineering Ethics , 18(1): 13–34. doi:10.1007/s11948-010-9225-3
  • Dennett, Daniel C., 1997, “When HAL kills, who’s to blame? Computer ethics”, in HAL’s Legacy: 2001’s Computer as Dream and Reality , edited by David G. Stork. Cambridge, MA: MIT Press, pp. 351–365.
  • Di Nucci, Ezio, and Filippo Santoni de Sio, 2016, Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons , Milton Park: Routledge.
  • Diels, Hermann, 1903, Die Fragmente der Vorsokratiker , Berlin: Weidmann.
  • Dipert, Randall R., 1993, Artifacts, Art Works, and Agency , Philadelphia: Temple University Press.
  • Doorn, Neelke, 2010, “A Rawlsian Approach to Distribute Responsibilities in Networks”, Science and Engineering Ethics , 16(2): 221–249. doi:10.1007/s11948-009-9155-0
  • –––, 2012, “Responsibility Ascriptions in Technology Development and Engineering: Three Perspectives”, Science and Engineering Ethics , 18(1): 69–90. doi:10.1007/s11948-009-9189-3
  • Ellul, Jacques, 1954 [1964], La technique ou L’enjeu du siècle , Paris: Armand Colin. Translated as The Technological Society , by John Wilkinson, New York: Alfred A. Knopf, 1964.
  • Erlandson, Robert F., 2008, Universal and Accessible Design for Products, Services, and Processes , Boca Raton, LA: CRC Press.
  • Feenberg, Andrew, 1999, Questioning Technology , London and New York: Routledge.
  • Finnis, John, Joseph Boyle and Germain Grisez, 1988, Nuclear Deterrence, Morality and Realism , Oxford: Oxford University Press.
  • Floridi, Luciano, 2010, The Cambridge Handbook of Information and Computer Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511845239
  • Floridi, Luciano, and J.W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Fox, Warwick, 2000, Ethics and the Built Environment , (Professional Ethics), London and New York: Routledge.
  • Franssen, Maarten, 2005, “Arrow’s Theorem, Multi-Criteria Decision Problems and Multi-Attribute Preferences in Engineering Design”, Research in Engineering Design , 16(1–2): 42–56. doi:10.1007/s00163-004-0057-5
  • Franssen, Maarten, and Louis L. Bucciarelli, 2004, “On Rationality in Engineering Design”, Journal of Mechanical Design , 126(6): 945–949. doi:10.1115/1.1803850
  • Franssen, Maarten, and Stefan Koller, 2016, “Philosophy of Technology as a Serious Branch of Philosophy: The Empirical Turn as a Starting Point”, in Philosophy of Technology after the Empirical Turn , edited by Maarten Franssen, Pieter E. Vermaas, Peter Kroes, and Anthonie W.M. Meijers, Cham: Springer, 31–61. doi:10.1007/978-3-319-33717-3_3
  • Franssen, Maarten, Peter Kroes, Thomas A.C. Reydon and Pieter E. Vermaas (eds), 2014, Artefact Kinds: Ontology and the Human-Made World , Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-00801-1
  • Fraser, Nancy, and Axel Honneth, 2003, Redistribution or Recognition?: A Political-Philosophical Exchange , London and New York: Verso.
  • Freeman, K., 1948, Ancilla to the Pre-Socratic Philosophers ( A complete translation of the Fragments in Diels, Fragmente der Vorsokratiker ), Cambridge, MA: Harvard University Press.
  • Frey, R. G., and Christopher Heath Wellman (eds), 2003, A Companion to Applied Ethics , Oxford and Malden, MA: Blackwell. doi:10.1002/9780470996621
  • Friedman, Batya, and David Hendry, 2019, Value Sensitive Design: Shaping Technology with Moral Imagination , Cambridge, MA: MIT Press.
  • Garson, Justin, 2016, A Critical Overview of Biological Functions , (SpringerBriefs in Philosophy), Cham: Springer International Publishing. doi:10.1007/978-3-319-32020-5.
  • Gehlen, Arnold, 1957, Die Seele im technischen Zeitalter , Hamburg: Rowohlt. Translated as Man in the Age of Technology , by Patricia Lipscomb, New York: Columbia University Press, 1980.
  • Gunkel, David J., 2018, Robot Rights , Cambridge, MA: MIT Press.
  • Habermas, Jürgen, 1968 [1970], “Technik und Wissenschaft als ‘Ideologie’”, in an anthology of the same name, Frankfurt: Suhrkamp Verlag. Translated as “Technology and Science as ‘Ideology’”, in Toward a Rational Society: Student Protest, Science, and Politics , by Jeremy J. Shapiro, Boston, MA: Beacon Press, pp. 81–122.
  • Hansson, Sven Ove, 2003, “Ethical Criteria of Risk Acceptance”, Erkenntnis , 59(3): 291–309. doi:10.1023/A:1026005915919
  • –––, 2004a, “Fallacies of Risk”, Journal of Risk Research , 7(3): 353–360. doi:10.1080/1366987042000176262
  • –––, 2004b, “Philosophical Perspectives on Risk”, Techné , 8(1): 10–35. doi:10.5840/techne2004818
  • –––, 2007, “Philosophical Problems in Cost-Benefit Analysis”, Economics and Philosophy , 23(2): 163–183. doi:10.1017/S0266267107001356
  • Harris, Charles E., Michael S. Pritchard and Michael J. Rabins, 2014, Engineering Ethics: Concepts and Cases , fifth edition, Belmont, CA: Wadsworth.
  • Heidegger, Martin, 1954 [1977], “Die Frage nach der Technik”, in Vorträge und Aufsätze , Pfullingen: Günther Neske. Translated as “The Question concerning Technology”, in The Question Concerning Technology and Other Essays , by William Lovitt, New York: Harper and Row, 1977, pp. 3–35.
  • Herkert, Joseph R., 2001, “Future Directions in Engineering Ethics Research: Microethics, Macroethics and the Role of Professional Societies”, Science and Engineering Ethics , 7(3): 403–414. doi:10.1007/s11948-001-0062-2
  • Hilpinen, Risto, 1992, “Artifacts and Works of Art”, Theoria , 58(1): 58–82. doi:10.1111/j.1755-2567.1992.tb01155.x
  • Holt, Raymond, and Catherine Barnes, 2010, “Towards an Integrated Approach to ‘Design for X’: An Agenda for Decision-Based DFX Research”, Research in Engineering Design , 21(2): 123–136. doi:10.1007/s00163-009-0081-6
  • Horkheimer, Max, and Theodor W. Adorno, 1947 [2002], Dialektik der Aufklärung: Philosophische Fragmente , Amsterdam: Querido Verlag. Translated as Dialectic of Enlightenment: Philosophical Fragments , by Edmund Jephcott, and edited by Gunzelin Schmid Noerr, Stanford, CA: Stanford University Press, 2002.
  • Houkes, Wybo, 2009, “The Nature of Technological Knowledge”, in Meijers 2009: 309–350. doi:10.1016/B978-0-444-51667-1.50016-1
  • Houkes, Wybo, and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , Dordrecht/Heidelberg/London /New York: Springer. doi:10.1007/978-90-481-3900-2
  • Hughes, Jesse, Peter Kroes, and Sjoerd Zwart, 2007, “A Semantics for Means-End Relations”, Synthese , 158(2): 207–231. doi:10.1007/s11229-006-9036-x
  • Ihde, Don, 1979, Technics and Praxis , Dordrecht/Boston/Lancaster: D. Reidel.
  • –––, 1990, Technology and the Lifeworld: from Garden to Earth , Bloomington: Indiana University Press.
  • –––, 1993, Philosophy of Technology: An Introduction , New York: Paragon.
  • Ihde, Don, and Evan Selinger, 2003, Chasing Technoscience: Matrix for Materiality , Bloomington: Indiana University Press.
  • Illies, Christian, and Anthonie Meijers, 2009, “Artefacts Without Agency”, The Monist , 92(3): 420–440. doi:10.5840/monist200992324
  • Jarvie, Ian C., 1966, “The Social Character of Technological Problems: Comments on Skolimowski’s Paper”, Technology and Culture , 7(3): 384–390. doi:10.2307/3101936
  • Jenkins, Kirsten, Darren McCauley, Raphael Heffron, Hannes Stephan and Robert Rehner, 2016, “Energy Justice: A Conceptual Review”, Energy Research & Social Science 11: 174–182. doi:10.1016/j.erss.2015.10.004
  • Johnson, Deborah G., 2003, “Computer Ethics”, in Frey and Wellman 2003: 608–619. doi:10.1002/9780470996621.ch45
  • –––, 2006, “Computer Systems: Moral Entities But Not Moral Agents”, Ethics and Information Technology , 8(4): 195–205. doi:10.1007/s10676-006-9111-5
  • –––, 2009, Computer Ethics , fourth edition. Upper Saddle River, NJ: Prentice Hall.
  • Johnson, Deborah G., and Thomas M. Powers, 2005, “Computer Systems and Responsibility: A Normative Look at Technological Complexity”, Ethics and Information Technology , 7(2): 99–107. doi:10.1007/s10676-005-4585-0
  • Jonas, Hans, 1979 [1984], Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation , Frankfurt/Main: Suhrkamp; extended English edition The Imperative of Responsibility: in Search of An Ethics for the Technological Age , Chicago and London: University of Chicago Press, 1984.
  • Kaplan, David M. (ed.), 2004 [2009], Readings in the Philosophy of Technology , Lanham, MD and Oxford: Rowman and Littlefield, first edition 2004, second revised edition 2009.
  • Kapp, Ernst, 1877 [2018], Grundlinien Einer Philosophie Der Technik: Zur Entstehungsgeschichte Der Cultur Aus Neuen Gesichtspunkten , Braunschweig: Westermann [ Kapp 1877 available online ]. Translated as Elements of a Philosophy of Technology: On the Evolutionary History of Culture , by Lauren K. Wolfe, and edited by Jeffrey West Kirkwood and Leif Weatherby, Minneapolis, MN: University of Minnesota Press, 2018.
  • Kitcher, Philip, 2001, Science, Truth, and Democracy , Oxford and New York: Oxford University Press.
  • –––, 2011. The Ethical Project , Cambridge, MA: Harvard University Press.
  • Klenk, Michael, 2021, “How Do Technological Artefacts Embody Moral Values?”, Philosophy & Technology 34(3): 525–544. doi: 10.1007/s13347-020-00401-y.
  • Kneese, Allen V., Shaul Ben-David and William D. Schulze, 1983, “The Ethical Foundations of Benefit-Cost Analysis”, in Energy and the Future , edited by Douglas E. MacLean and Peter G. Brown, Totowa, NJ: Rowman and Littefield, pp. 59–74.
  • Kotarbinski, Tadeusz, 1965, Praxiology: An Introduction to the Sciences of Efficient Action , Oxford: Pergamon Press.
  • Kroes, Peter, 2012, Technical Artefacts: Creations of Mind and Matter , Dordrecht/Heidelberg/New York/London: Springer. doi:10.1007/978-94-007-3940-6
  • Kroes, Peter, and Anthonie Meijers (eds), 2006, “The Dual Nature of Technical Artifacts”, Special issue of Studies in History and Philosophy of Science , 37(1): 1–158. doi:10.1016/j.shpsa.2005.12.001
  • Kroes, Peter, Maarten Franssen and Louis Bucciarelli, 2009, “Rationality in Design”, in Meijers (ed.) 2009: 565–600. doi:10.1016/B978-0-444-51667-1.50025-2
  • Kroes, Peter, and Peter-Paul Verbeek (eds), 2014, The Moral Status of Technical Artefacts , Dordrecht: Springer. doi:10.1007/978-94-007-7914-3
  • Kuhn, Thomas S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press.
  • Ladd, John, 1991, “Bhopal: An Essay on Moral Responsibility and Civic Virtue”, Journal of Social Philosophy , 22(1): 73–91. doi:10.1111/j.1467-9833.1991.tb00022.x
  • Latour, Bruno, 1992, “Where Are the Missing Masses?”, in Bijker and Law (eds) 1992: 225–258.
  • –––, 1993, We Have Never Been Modern , New York: Harvester Wheatsheaf.
  • –––, 2005, Reassembling the Social: An Introduction to Actor-Network-Theory , Oxford and New York: Oxford University Press.
  • Lawson, Clive, 2008, “An Ontology of Technology: Artefacts, Relations and Functions”, Technè , 12(1): 48–64. doi:10.5840/techne200812114
  • –––, 2017, Technology and Isolation , Cambridge and New York: Cambridge University Press. doi:10.1017/9781316848319
  • Lin, Patrick, Keith Abney and Ryan Jenkins (eds), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , Oxford/New York: Oxford University Press.
  • Lloyd, G.E.R., 1973, “Analogy in Early Greek Thought”, in The Dictionary of the History of Ideas , edited by Philip P. Wiener, New York: Charles Scribner’s Sons, vol. 1 pp. 60–64. [ Lloyd 1973 available online ]
  • Lloyd, Peter A., and Jerry A. Busby, 2003, “‘Things that Went Well—No Serious Injuries or Deaths’: Ethical Reasoning in a Normal Engineering Design Process”, Science and Engineering Ethics , 9(4): 503–516. doi:10.1007/s11948-003-0047-4
  • Longino, Helen, 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry , Princeton: Princeton University Press.
  • –––, 2002, The Fate of Knowledge , Princeton: Princeton University Press.
  • Maheshwari, Kritika, and Sven Nyholm, 2022, “Dominating Risk Impositions”, The Journal of Ethics . doi: 10.1007/s10892-022-09407-4.
  • Mahner, Martin, and Mario Bunge, 2001, “Function and Functionalism: A Synthetic Perspective”, Philosophy of Science , 68(1): 73–94. doi:10.1086/392867
  • Marcuse, Herbert, 1964, One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society , New York: Beacon Press, and London: Routledge and Kegan Paul.
  • Martin, Miles W., and Roland Schinzinger, 2022, Ethics in Engineering , fifth edition, Boston, MA: McGraw-Hill.
  • Matthias, Andreas, 2004, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata”, Ethics and Information Technology 6(3): 175–183. doi: 10.1007/s10676-004-3422-1.
  • McGinn, Robert E., 2010, “What’s Different, Ethically, About Nanotechnology? Foundational Questions and Answers”, NanoEthics , 4(2): 115–128. doi:10.1007/s11569-010-0089-4
  • Meijers, Anthonie (ed.), 2009, Philosophy of Technology and Engineering Sciences , (Handbook of the Philosophy of Science, volume 9), Amsterdam: North-Holland.
  • Michelfelder, Diane P., and Neelke Doorn (eds), 2021, The Routledge Handbook of the Philosophy of Engineering , New York and Milton Park, UK: Routledge.
  • Miller, Boaz, 2020, “Is Technology Value-Neutral?”, Science, Technology & Human Values 46(1): 53–80. doi: 10.1177/0162243919900965.
  • Millikan, Ruth Garrett, 1999, “Wings, Spoons, Pills, and Quills: A Pluralist Theory of Function”, The Journal of Philosophy , 96(4): 191–206. doi:10.5840/jphil199996428
  • Mitcham, Carl, 1994, Thinking Through Technology: The Path Between Engineering and Philosophy , Chicago: University of Chicago Press.
  • Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi, 2016, “The Ethics of Algorithms: Mapping the Debate”, Big Data & Society , 3(2): 1–21. doi:10.1177/2053951716679679
  • Moor, James H., 1985, “What is Computer Ethics?” Metaphilosophy , 16(4): 266–275. doi:10.1111/j.1467-9973.1985.tb00173.x
  • –––, 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Mosakas, Kestutis, 2021, “On the Moral Status of Social Robots: Considering the Consciousness Criterion”, AI & Society 36(2): 429–443. doi: 10.1007/s00146-020-01002-1.
  • Mumford, Lewis, 1934, Technics and Civilization , New York: Harcourt, Brace and Company, and London: Routledge and Kegan Paul.
  • Newman, William R., 2004, Promethean Ambitions: Alchemy and the Quest to Perfect Nature , Chicago: University of Chicago Press.
  • Niiniluoto, Ilkka, 1993, “The Aim and Structure of Applied Research”, Erkenntnis , 38(1): 1–21. doi:10.1007/BF01129020
  • Nissenbaum, Helen, 1996, “Accountability in a Computerized Society”, Science and Engineering Ethics , 2(1): 25–42. doi:10.1007/BF02639315
  • –––, 2010, Privacy in Context: Technology, Policy, and the Integrity of Social Life , Stanford, CA: Stanford Law Books.
  • Nyholm, Sven, 2018, “Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics 24(4): 1201–1219. doi: 10.1007/s11948-017-9943-x.
  • Olsen, Jan Kyrre Berg, Evan Selinger and Søren Riis (eds), 2009, New Waves in Philosophy of Technology , Basingstoke and New York: Palgrave Macmillan. doi:10.1057/9780230227279
  • Owen, Richard, John Bessant, and Maggy Heintz, 2013, Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society , Chichester: John Wiley. doi:10.1002/9781118551424
  • Peterson, Martin, 2020, Ethics for Engineers , New York: Oxford University Press.
  • Peterson, Martin, and Andreas Spahn, 2011, “Can Technological Artefacts be Moral Agents?” Science and Engineering Ethics , 17(3): 411–424. doi:10.1007/s11948-010-9241-3
  • Pettit, Philip, 2012, On the People’s Terms: A Republican Theory and Model of Democracy , The Seeley lectures, Cambridge and New York: Cambridge University Press.
  • Pitt, Joseph C., 1999, Thinking About Technology: Foundations of the Philosophy of Technology , New York: Seven Bridges Press.
  • Plato, Laws , 2016, M. Schofield (ed.), T. Griffith (tr.), Cambridge: Cambridge University Press.
  • –––, Timaeus and Critias , 2008, R. Waterfield (tr.), with introduction and notes by A. Gregory, Oxford: Oxford University Press.
  • Polanyi, Michael, 1958, Personal Knowledge: Towards a Post-Critical Philosophy , London: Routledge and Kegan Paul.
  • Preston, Beth, 1998, “Why is a Wing Like a Spoon? A Pluralist Theory of Function”, The Journal of Philosophy , 95(5): 215–254. doi:10.2307/2564689
  • –––, 2003, “Of Marigold Beer: A Reply to Vermaas and Houkes”, British Journal for the Philosophy of Science , 54(4): 601–612. doi:10.1093/bjps/54.4.601
  • –––, 2012, A Philosophy of Material Culture: Action, Function, and Mind , New York and Milton Park, UK: Routledge.
  • Preston, Christopher J. (ed.), 2016, Climate Justice and Geoengineering: Ethics and Policy in the Atmospheric Anthropocene , London/New York: Rowman & Littlefield International.
  • Radder, Hans, 2009, “Why Technologies Are Inherently Normative”, in Meijers (ed.) 2009: 887–921. doi:10.1016/B978-0-444-51667-1.50037-9
  • Rawls, John, 1999, A Theory of Justice , Revised Edition, Cambridge, MA: The Belknap Press of Harvard University Press.
  • Roeser, Sabine, 2012, “Moral Emotions as Guide to Acceptable Risk”, in Roeser et al. 2012: 819–832. doi:10.1007/978-94-007-1433-5_32
  • Roeser, Sabine, Rafaela Hillerbrand, Per Sandin and Martin Peterson (eds), 2012, Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk , Dordrecht/Heidelberg/London/New York: Springer. doi:10.1007/978-94-007-1433-5
  • Ryle, Gilbert, 1949, The Concept of Mind , London: Hutchinson.
  • Santoni de Sio, Filippo, and Giulio Mecacci, 2021, “Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them”, Philosophy & Technology 34(4): 1057–1084. doi: 10.1007/s13347-021-00450-x.
  • Santoni de Sio, Filippo, and Jeroen van den Hoven, 2018. “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI . doi: 10.3389/frobt.2018.00015.
  • Scharff, Robert C., and Val Dusek (eds), 2003 [2014], Philosophy of Technology: The Technological Condition , Malden, MA and Oxford: Blackwell, first edition 2003, second [revised] edition 2014.
  • Schummer, Joachim, 2001, “Aristotle on Technology and Nature”, Philosophia Naturalis , 38: 105–120.
  • Sclove, Richard E., 1995, Democracy and Technology , New York: The Guilford Press.
  • Sellars, Wilfrid, 1962, “Philosophy and the Scientific Image of Man”, in Frontiers of Science and Philosophy , edited by R. Colodny, Pittsburgh: University of Pittsburgh Press, pp. 35–78.
  • Sharkey, Amanda, and Noel Sharkey, 2021, “We Need to Talk about Deception in Social Robotics!”, Ethics and Information Technology 23(3): 309–316. doi: 10.1007/s10676-020-09573-9.
  • Sherlock, Richard, and John D. Morrey (eds), 2002, Ethical Issues in Biotechnology , Lanham, MD: Rowman and Littlefield.
  • Shrader-Frechette, Kristen S., 1985, Risk Analysis and Scientific Method: Methodological and Ethical Problems with Evaluating Societal Hazards , Dordrecht and Boston: D. Reidel.
  • –––, 1991, Risk and Rationality: Philosophical Foundations for Populist Reform , Berkeley etc.: University of California Press.
  • Simon, Herbert A., 1957, Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting , New York: John Wiley.
  • –––, 1969, The Sciences of the Artificial , Cambridge, MA and London: MIT Press.
  • –––, 1982, Models of Bounded Rationality , Cambridge, MA and London: MIT Press.
  • Skolimowski, Henryk, 1966, “The Structure of Thinking in Technology”, Technology and Culture , 7(3): 371–383. doi:10.2307/3101935
  • Snapper, John W., 1985, “Responsibility for Computer-Based Errors”, Metaphilosophy , 16(4): 289–295. doi:10.1111/j.1467-9973.1985.tb00175.x
  • Soavi, Marzia, 2009, “Realism and Artifact Kinds”, in Functions in Biological and Artificial Worlds: Comparative Philosophical Perspectives , edited by Ulrich Krohs and Peter Kroes. Cambridge, MA: MIT Press, pp. 185–202. doi:10.7551/mitpress/9780262113212.003.0011
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • Suh, Nam Pyo, 2001, Axiomatic Design: Advances and Applications , Oxford and New York: Oxford University Press.
  • Susskind, Jamie, 2022, The Digital Republic: On Freedom and Democracy in the 21st Century , London: Bloomsbury.
  • Swierstra, Tsjalling, and Jaap Jelsma, 2006, “Responsibility Without Moralism in Technoscientific Design Practice”, Science, Technology & Human Values , 31(1): 309–332. doi:10.1177/0162243905285844
  • Swierstra, Tsjalling, and Hedwig te Molder, 2012, “Risk and Soft Impacts”, in Roeser et al. (eds) 2012: 1049–1066. doi:10.1007/978-94-007-1433-5_42
  • Taebi, Behnam, 2021, Ethics and Engineering: An Introduction , Cambridge Applied Ethics series, Cambridge and New York: Cambridge University Press.
  • Taebi, Behnam, and Sabine Roeser (eds), 2015, The Ethics of Nuclear Energy: Risk, Justice, and Democracy in the Post-Fukushima Era , Cambridge: Cambridge University Press. doi:10.1017/CBO9781107294905
  • Tavani, Herman T., 2002, “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue, and Why Does it Matter?” Ethics and Information Technology , 4(1): 37–54. doi:10.1023/A:1015283808882
  • Thomasson, Amie L., 2003, “Realism and Human Kinds”, Philosophy and Phenomenological Research , 67(3): 580–609. doi:10.1111/j.1933-1592.2003.tb00309.x
  • –––, 2007, “Artifacts and Human Concepts”, in Creations of the Mind: Essays on Artifacts and Their Representation , edited by Eric Margolis and Stephen Laurence, Oxford: Oxford University Press, pp. 52–73.
  • Thompson, Dennis F., 1980, “Moral Responsibility and Public Officials: The Problem of Many Hands”, American Political Science Review , 74(4): 905–916. doi:10.2307/1954312
  • Thompson, Paul B., 2007, Food Biotechnology in Ethical Perspective , second edition, Dordrecht: Springer. doi:10.1007/1-4020-5791-1
  • Vallor, Shannon (ed.), 2022, The Oxford Handbook of Philosophy of Technology , Oxford and New York: Oxford University Press.
  • van den Hoven, Jeroen, and John Weckert (eds), 2008, Information Technology and Moral Philosophy , Cambridge and New York: Cambridge University Press.
  • van den Hoven, Jeroen, Pieter E. Vermaas and Ibo van de Poel (eds), 2015, Handbook of Ethics and Values in Technological Design: Sources, Theory, Values and Application Domains , Dordrecht: Springer. doi:10.1007/978-94-007-6994-6
  • van de Poel, Ibo, 2009, “Values in Engineering Design”, in Meijers (ed.) 2009: 973–1006. doi:10.1016/B978-0-444-51667-1.50040-9
  • –––, 2016, “An Ethical Framework for Evaluating Experimental Technology”, Science and Engineering Ethics , 22(3): 667–686. doi:10.1007/s11948-015-9724-3
  • van de Poel, Ibo, and Lambèr Royakkers, 2011, Ethics, Technology and Engineering , Oxford: Wiley-Blackwell.
  • van de Poel, Ibo, Lambèr Royakkers and Sjoerd D. Zwart, 2015, Moral Responsibility and the Problem of Many Hands , London: Routledge.
  • van der Pot, Johan Hendrik Jacob, 1985 [1994/2004], Die Bewertung des technischen Fortschritts: eine systematische Übersicht der Theorien , 2 volumes, Assen/Maastricht: Van Gorcum. Translated as Steward or Sorcerer’s Apprentice? the Evaluation of Technical Progress: A Systematic Overview of Theories and Opinions , by Chris Turner, 2 volumes., Delft: Eburon, 1994, second edition, 2004, under the title Encyclopedia of Technological Progress: A Systematic Overview of Theories and Opinions .
  • van Wynsberghe, Aimee, and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics 25(3): 719–735. doi: 10.1007/s11948-018-0030-8.
  • Verbeek, Peter-Paul, 2000 [2005], De daadkracht der Dingen: Over Techniek, Filosofie En Vormgeving , Amsterdam: Boom. Translated as What Things Do: Philosophical Reflections on Technology, Agency, and Design , by Robert P. Crease, University Park, PA: Penn State University Press, 2005.
  • –––, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago and London: The University of Chicago Press.
  • Vermaas, Pieter E., and Wybo Houkes, 2003, “Ascribing Functions to Technical Artefacts: A Challenge to Etiological Accounts of Functions”, British Journal for the Philosophy of Science , 54(2): 261–289. doi:10.1093/bjps/54.2.261
  • Vincenti, Walter A., 1990, What Engineers Know and How They Know It: Analytical Studies from Aeronautical History , Baltimore, MD and London: Johns Hopkins University Press.
  • Vitruvius, De architectura , translated as The Ten Books on Architecture , by Morris H. Morgan. Cambridge, MA: Harvard University Press, 1914. [ Vitruvius Morgan’s translation 1914 available online ]
  • Volti, Rudi, 2009, Society and Technological Change , sixth edition, New York: Worth Publications.
  • Von Wright, Georg Henrik, 1963, Norm and Action: A Logical Enquiry , London: Routledge and Kegan Paul.
  • Wallach, Wendell, and Colin Allen, 2009, Moral Machines: Teaching Robots Right from Wrong , Oxford and New York: Oxford University Press. doi:10.1093/acprof:oso/9780195374049.001.0001
  • Weckert, John, 2007, Computer Ethics , Aldershot and Burlington, VT: Ashgate.
  • Wiggins, David, 1980, Sameness and Substance , Oxford: Blackwell.
  • Winner, Langdon, 1977, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought , Cambridge, MA and London: MIT Press.
  • –––, 1980, “Do Artifacts Have Politics?” Daedalus , 109(1): 121–136.
  • –––, 1983, “Techné and Politeia: The Technical Constitution of Society”, in Philosophy and Technology , edited by Paul T. Durbin and Friedrich Rapp, Dordrecht/Boston/Lancaster: D. Reidel, pp. 97–111. doi:10.1007/978-94-009-7124-0_7
  • Zandvoort, H., 2000, “Codes of Conduct, the Law, and Technological Design and Development”, in The Empirical Turn in the Philosophy of Technology , edited by Peter Kroes and Anthonie Meijers, Amsterdam: JAI/Elsevier, pp. 193–205.
  • Zuboff, Shoshana, 2017, The Age of Surveillance Capitalism , New York: Public Affairs.
  • Zwart, Sjoerd, Maarten Franssen and Peter Kroes, 2018, “Practical Inference—A Formal Approach”, in The Future of Engineering: Philosophical Foundations, Ethical Problems and Application Cases , edited by Albrecht Fritzsche and Sascha Julian Oks, Cham: Springer, pp. 33–52. doi:10.1007/978-3-319-91029-1_3
  • Zwart, Sjoerd, Ibo van de Poel, Harald van Mil and Michiel Brumsen, 2006, “A Network Approach for Distinguishing Ethical Issues in Research and Development”, Science and Engineering Ethics , 12(4): 663–684. doi:10.1007/s11948-006-0063-2
  • Philosophy & Technology
  • Techné: Research in Philosophy and Technology
  • Science and Engineering Ethics
  • Science, Technology & Human Values
  • Ethics and Information Technology
  • Neuroethics
  • Encyclopedia of Science, Technology, and Ethics , 4 volumes, Carl Mitcham (ed.), Macmillan, 2005.
  • Encyclopedia of Applied Ethics , second edition, 4 volumes, Ruth Chadwick (editor-in-chief), Elsevier, 2012.
  • Society for Philosophy and Technology
  • Forum for Philosophy, Engineering & Technology
  • Online Ethics Center
  • 4TU. Centre for Ethics and Technology
  • Oxford Uehiro Centre for Practical Ethics

Aristotle, Special Topics: causality | artifact | artificial intelligence: ethics of | Bacon, Francis | Chinese room argument | Church-Turing Thesis | computability and complexity | computer science, philosophy of | computing: and moral responsibility | episteme and techne [= scientific knowledge and expertise] | functionalism | identity: over time | identity: relative | information technology: and moral values | justice | justice: climate | knowledge how | material constitution | mind: computational theory of | moral responsibility | multiple realizability | Popper, Karl | practical reason | rationality: instrumental | responsibility: collective | risk | sortals | Turing machines | Turing test

Copyright © 2023 by Maarten Franssen <m.p.m.franssen@tudelft.nl>, Gert-Jan Lokhorst <g.j.c.lokhorst@tudelft.nl>, and Ibo van de Poel <I.R.vandepoel@tudelft.nl>


Essay on How Technology Changed Our Lives

Students are often asked to write an essay on How Technology Changed Our Lives in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on How Technology Changed Our Lives

The Advent of Technology

Technology has revolutionized our lives in many ways. It has made tasks easier, faster, and more efficient. We use technology in our daily activities, from cooking to communicating.

Communication and Technology

Technology has drastically changed the way we communicate. With the advent of smartphones and the internet, we can now connect with anyone, anywhere, anytime.

Education and Technology

Technology has also transformed education. It has made learning more interactive and accessible. With online classes, students can learn from home.

Healthcare and Technology

In healthcare, technology has improved diagnosis and treatment. It has made healthcare more effective and convenient.

In conclusion, technology has greatly changed our lives. It has made our lives easier, faster, and more efficient.

250 Words Essay on How Technology Changed Our Lives

Technology has revolutionized our world, transforming every aspect of our lives. It has brought about a digital revolution, making tasks easier, faster, and more efficient. From communication to transportation, health to education, technology has permeated every sphere of human life.

Impact on Communication

The advent of smartphones and the internet has revolutionized communication. We can now connect with anyone, anywhere, anytime, breaking geographical boundaries. Social media platforms, video conferencing, and instant messaging apps have not only made communication instantaneous but also fostered global connections and collaborations.

Transformation in Transportation

Technology has also drastically changed transportation. With GPS technology, navigation has become easier and more precise. Electric cars and autonomous vehicles are on the rise, promising a future of sustainable and self-driving transportation.

Healthcare Advancements

In healthcare, technology has brought about advancements like telemedicine, wearable devices, and AI-driven diagnostics. These innovations have improved patient care, made health monitoring easier, and increased the accuracy of diagnoses.

Educational Innovations

The education sector has also seen significant changes with e-learning platforms, virtual classrooms, and digital resources. This has made education more accessible, interactive, and personalized.

In conclusion, technology has indeed transformed our lives, making them more connected, efficient, and innovative. As we move forward, it will continue to shape our future, opening up new possibilities and challenges. It is up to us to harness its potential responsibly, ensuring it serves as a tool for progress and prosperity.

500 Words Essay on How Technology Changed Our Lives

The advent of technology has revolutionized human life, transforming the world into a global village. It has impacted every facet of our existence, from communication to transportation, health to education, and entertainment to business.

Revolutionizing Communication

One of the most profound changes brought about by technology is in the field of communication. The invention of the internet and smartphones has made it possible to connect with anyone, anywhere, at any time. Social media platforms, emails, and video calls have removed geographical barriers, fostering global collaboration and understanding.

Transforming Transportation

The transportation industry has also been profoundly impacted by technology. Innovations like GPS and satellite technology have made navigation easier and more accurate. Electric and self-driving cars, still in their nascent stages, promise to revolutionize our commute, making it safer and more sustainable.

Advancements in Health and Medicine

In the field of health and medicine, technology has been a game-changer. Advanced diagnostic tools, telemedicine, robotic surgeries, and personalized medicine have improved patient care and outcomes. Additionally, wearable technology and health apps have empowered individuals to take charge of their health.

Revamping Education

Education is another sector where technology has left an indelible mark. Online learning platforms, digital classrooms, and educational apps have democratized education, making it accessible to all. The recent pandemic has underscored the importance of technology in education, with schools and universities worldwide transitioning to remote learning.

Entertainment and Leisure

The way we consume entertainment and leisure activities has also been transformed by technology. Streaming platforms, virtual reality, and online gaming have changed our entertainment landscape, offering personalized, on-demand, and immersive experiences.

Impacting Business and Economy

Lastly, technology has significantly influenced business operations and the global economy. E-commerce, digital marketing, and remote work have redefined traditional business models, promoting efficiency and inclusivity.

In conclusion, technology has dramatically altered our lives, reshaping the way we communicate, travel, learn, stay healthy, entertain ourselves, and conduct business. While it presents challenges, such as privacy concerns and the digital divide, the benefits it offers are immense. As we move forward, it is essential to harness technology responsibly and ethically, ensuring it serves as a tool for progress and inclusivity.

That’s it! I hope the essay helped you.

If you’re looking for more, here are essays on other interesting topics:

  • Essay on Benefits of Modern Technology
  • Essay on Pros and Cons of Technology
  • Essay on Importance of Technology

Apart from these, you can look at all the essays by clicking here.

Happy studying!


