Promises and Pitfalls of Technology

Josephine Wolff. “How Is Technology Changing the World, and How Should the World Change Technology?” Global Perspectives, 1 February 2021; 2 (1): 27353. https://doi.org/10.1525/gp.2021.27353

Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019).

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.

These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open-source, open systems that are highly distributed and decentralized to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.

Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942). But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more hazardous view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010). More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970).

At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one that believes humans use it as a tool to achieve greater goals, and a fundamentally pessimistic one that holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995). Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.

These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing technologists with mechanisms for thinking through the impacts of their work, as well as possible paths to different outcomes (Verbeek 2006). And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.

Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009). In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after the fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.

In addition to this varied set of national regulatory approaches, a variety of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.

At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology and for their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980). The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes that carry little real weight or influence.

Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have exacerbated, often unwittingly, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019). But governments often have few incentives to engage in this area: setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging, enforcing those rules can be a significant undertaking requiring considerable expertise, and the tech sector is a major source of jobs and revenue for many countries that may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to engage fully with technologists and understand the implications of their decisions.

At the same time that tech companies seem unwilling or unable to rein in their creations, many also fear that these companies wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and to control access to information, platforms, and audiences (Kilovaty 2020). Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves for widespread misinformation campaigns that serve political purposes (Nye 2018).

Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, oversight of the biggest tech companies, which have more resources, reach, and power than many nations, can prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990). The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?

Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?

These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even on the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and processes for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as the politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?

These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.

Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.



Revue d'économie industrielle


New quantitative methods for science and technology analysis

Context and aims of the special issue.

Recent advances in access to data and information processing techniques are revolutionizing data-based methods in social sciences and humanities. Among the most advanced techniques are machine learning, text analysis (Natural Language Processing—NLP), image and graph analysis. These methods make it possible not only to process very large data sets (Big Data) but also to use new, unstructured sources of information.

In the field of economics of innovation and industrial organization, patent and scientific publication data have been used for several decades in empirical research. Their advantages and disadvantages are now well known. With the new methods available, researchers can now overcome several limitations and thus improve existing indicators (e.g., technology diffusion), design new ones (e.g., novelty) and supplement data with alternative sources, often through web scraping techniques.
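To make this concrete, the short Python sketch below illustrates one very simple text-based indicator of the kind mentioned above: scoring the textual "novelty" of a patent abstract as one minus its highest TF-IDF cosine similarity to a set of earlier abstracts. It is a minimal sketch under stated assumptions only: the abstracts are invented placeholders, scikit-learn is assumed to be available, and published work in this area (e.g., Feng, 2020) relies on far richer text representations and full-scale patent corpora.

```python
# Toy illustration (not from the call for papers): a crude text-based
# "novelty" score for a new patent abstract, computed as one minus its
# maximum TF-IDF cosine similarity to earlier abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_abstracts = [
    "A lithium-ion battery electrode with a silicon coating.",
    "A method for wireless charging of electric vehicles.",
    "A machine learning model for detecting credit card fraud.",
]
new_abstract = "A solid-state battery electrolyte based on a ceramic composite."

vectorizer = TfidfVectorizer(stop_words="english")
# Fit on prior art plus the new document so they share one vocabulary.
matrix = vectorizer.fit_transform(prior_abstracts + [new_abstract])

n = len(prior_abstracts)
similarities = cosine_similarity(matrix[n], matrix[:n])

# Higher score = less textual overlap with prior art = "more novel" (crudely).
novelty_score = 1.0 - similarities.max()
print(f"Toy novelty score: {novelty_score:.2f}")
```

In practice, richer representations (for instance, neural text embeddings) and much larger corpora would replace this toy setup.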

Data-driven approaches for science and technology analysis can be extremely powerful, but they need to be well understood if their potential is to be fully exploited and their pitfalls avoided. For that purpose, more experience needs to be accumulated and the results shared within the research community.

In this context, the objective of this special issue is to bring together a selection of studies using new quantitative methods for exploiting patent data, scientific publications, and other sources. Articles can reflect either methodological research on new quantitative techniques or applications of these techniques to specific issues in economics, management, or other approaches to science and technology.

Due to the novelty of these techniques, we welcome early, exploratory research, as well as interdisciplinary research conducted with, for instance, linguists, computer scientists, engineers, etc.

Suggested topics

Submitted papers should deal with the following topics (list not exhaustive, other topics are welcome):

Methodological issues related to the use of machine learning, NLP, or graph analysis with patent data, scientific publication data, or other sources (Aristodemou & Tietze, 2018; Balsmeier et al., 2018);

How new quantitative methods impact technology forecasting (Lee et al., 2018) and/or market sector dynamics forecasting (von Hippel & Kaulartz, 2020);

How new quantitative methods improve our understanding of the innovation process (Cockburn et al., 2018; Feng, 2020; Guerzoni et al., 2020; von Hippel & Cann, 2020);

How new quantitative methods impact science and its relations with industry (Bianchini et al., 2020);

How new quantitative methods contribute to the design of new indicators and improve the measurement of innovation (Fredström et al., 2021);

How new quantitative methods can improve the design, monitoring, and evaluation of public policy.

Timing and submission process

October 31, 2021: Submission of article proposals (full versions)

January 2022: Return of the first evaluation

May 2022: Submission of the modified versions

End of 2022: Publication of the special issue

Authors are asked to send a notice of their intention to submit (title and abstract of the proposal) to the editors of the special issue before June 18, 2021 (contact: [email protected])

Proposals for articles should be submitted on the platform:

https://journals.sfu.ca/rei/index.php/rei

Authors must select the tab «New quantitative methods for science and technology analysis» as the journal section’s choice (step 1 of the submission process).

Articles must be submitted in English.

Bibliography

Aristodemou, L., & Tietze, F. (2018). The state-of-the-art on Intellectual Property Analytics (IPA): A literature review on artificial intelligence, machine learning and deep learning methods for analysing intellectual property (IP) data. World Patent Information, 55, 37-51.

Balsmeier, B., Assaf, M., Chesebro, T., Fierro, G., Johnson, K., Johnson, S., ... & Fleming, L. (2018). Machine learning and natural language processing on the patent corpus: Data, tools, and new measures. Journal of Economics & Management Strategy, 27(3), 535-553.

Bianchini, S., Müller, M., & Pelletier, P. (2020). Deep Learning in Science. arXiv preprint arXiv:2009.01575.

Cockburn, I. M., Henderson, R., & Stern, S. (2018). The impact of artificial intelligence on innovation (No. w24449). National Bureau of Economic Research.

Feng, S. (2020). The proximity of ideas: An analysis of patent text using machine learning. PLoS ONE, 15(7), e0234880.

Fredström, A., Wincent, J., Sjödin, D., Oghazi, P., & Parida, V. (2021). Tracking innovation diffusion: AI analysis of large-scale patent data towards an agenda for further research. Technological Forecasting and Social Change, 165, 120524.

Guerzoni, M., Nava, C. R., & Nuccio, M. (2020). Start-ups survival through a crisis. Combining machine learning with econometrics to measure innovation. Economics of Innovation and New Technology, 1-26.

Lee, C., Kwon, O., Kim, M., & Kwon, D. (2018). Early identification of emerging technologies: A machine learning approach using multiple patent indicators. Technological Forecasting and Social Change, 127, 291-303.

von Hippel, E., & Kaulartz, S. (2020). Next-generation consumer innovation search: Identifying early-stage need-solution pairs on the web. Research Policy, 104056.

von Hippel, C. D., & Cann, A. B. (2020). Behavioral innovation: Pilot study and new big data analysis approach in household sector user innovation. Research Policy, 103992.




Handbook of Research Methods in Health Social Sciences, pp. 27–49

Quantitative Research

  • Leigh A. Wilson
  • Reference work entry
  • First Online: 13 January 2019


Quantitative research methods are concerned with the planning, design, and implementation of strategies to collect and analyze data. Descartes, the seventeenth-century philosopher, suggested that how the results are achieved is often more important than the results themselves, as the journey taken along the research path is a journey of discovery. High-quality quantitative research is characterized by the attention given to the methods and the reliability of the tools used to collect the data. The ability to critique research in a systematic way is an essential component of a health professional’s role in order to deliver high-quality, evidence-based healthcare. This chapter is intended to provide a simple overview of the way new researchers and health practitioners can understand and employ quantitative methods. The chapter offers practical, realistic guidance in a learner-friendly way and uses a logical sequence to understand the process of hypothesis development, study design, data collection and handling, and finally data analysis and interpretation.
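Because the chapter's final steps concern data analysis and interpretation, a generic worked example may help orient new researchers. The Python sketch below is not taken from the chapter: the group values are invented, SciPy is assumed to be available, and an independent-samples t-test is only one of many analyses a study might use.

```python
# A minimal, generic illustration of the "data analysis and interpretation"
# step: comparing two groups with an independent-samples t-test.
# The numbers are invented placeholders; scipy is assumed to be installed.
from scipy import stats

control   = [72, 75, 71, 78, 74, 73, 77, 70]   # e.g., resting heart rate
treatment = [68, 70, 66, 71, 69, 67, 72, 65]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Interpretation: a small p-value (conventionally < 0.05) suggests the group
# means differ by more than chance alone would readily explain.
```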

  • Quantitative
  • Epidemiology
  • Data analysis
  • Methodology
  • Interpretation


References

Babbie ER. The practice of social research. 14th ed. Belmont: Wadsworth Cengage; 2016.

Descartes. Cited in Halverston, W. (1976). In: A concise introduction to philosophy, 3rd ed. New York: Random House; 1637.

Doll R, Hill AB. The mortality of doctors in relation to their smoking habits. BMJ. 1954;328(7455):1529–33. https://doi.org/10.1136/bmj.328.7455.1529.

Liamputtong P. Research methods in health: foundations for evidence-based practice. 3rd ed. Melbourne: Oxford University Press; 2017.

McNabb DE. Research methods in public administration and nonprofit management: quantitative and qualitative approaches. 2nd ed. New York: Armonk; 2007.

Merriam-Webster. Dictionary. http://www.merriam-webster.com. Accessed 20 December 2017.

Olesen Larsen P, von Ins M. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics. 2010;84(3):575–603.

Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–25. https://doi.org/10.1097/PRS.0b013e3181de24bc.

Petrie A, Sabin C. Medical statistics at a glance. 2nd ed. London: Blackwell Publishing; 2005.

Portney LG, Watkins MP. Foundations of clinical research: applications to practice. 3rd ed. New Jersey: Pearson Publishing; 2009.

Sheehan J. Aspects of research methodology. Nurse Educ Today. 1986;6:193–203.

Wilson LA, Black DA. Health, science research and research methods. Sydney: McGraw Hill; 2013.

Author information

Leigh A. Wilson
School of Science and Health, Western Sydney University, Penrith, NSW, Australia
Faculty of Health Science, Discipline of Behavioural and Social Sciences in Health, University of Sydney, Lidcombe, NSW, Australia

Correspondence to Leigh A. Wilson.

Editor information

Pranee Liamputtong


Cite this entry

Wilson, L.A. (2019). Quantitative Research. In: Liamputtong, P. (ed.) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_54

Published: 13 January 2019. Print ISBN: 978-981-10-5250-7. Online ISBN: 978-981-10-5251-4.


Sensors (Basel)

Study and Investigation on 5G Technology: A Systematic Review

Ramraj Dangi

1 School of Computing Science and Engineering, VIT University Bhopal, Bhopal 466114, India; [email protected] (R.D.); [email protected] (P.L.)

Praveen Lalwani

Gaurav Choudhary

2 Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Lyngby, Denmark; [email protected]

3 Department of Information Security Engineering, Soonchunhyang University, Asan-si 31538, Korea

Giovanni Pau

4 Faculty of Engineering and Architecture, Kore University of Enna, 94100 Enna, Italy; [email protected]

Associated Data

Not applicable.

In wireless communication, Fifth Generation (5G) technology is the most recent generation of mobile networks. In this paper, the evolution of mobile communication technology is presented. Each generation faced multiple challenges that were addressed with the help of the next generation of mobile networks. Among all previously existing mobile networks, 5G provides a high-speed internet facility, anytime, anywhere, for everyone. 5G differs from its predecessors through novel features such as interconnecting people and controlling devices, objects, and machines. The 5G mobile system will bring diverse levels of performance and capability, which will serve new user experiences and connect new enterprises. Therefore, it is essential to know where enterprises can utilize the benefits of 5G. In this research article, extensive research and analysis unfold different aspects, namely millimeter wave (mmWave), massive multiple-input and multiple-output (Massive MIMO), small cells, mobile edge computing (MEC), beamforming, different antenna technologies, etc. This article’s main aim is to highlight some of the most recent enhancements made to the 5G mobile system and discuss its future research objectives.

1. Introduction

Over the most recent three decades, rapid growth has marked the field of wireless communication through the transition from 1G to 4G [1,2]. The main motivation behind this evolution was the requirement for high bandwidth and very low latency. 5G provides a high data rate, improved quality of service (QoS), low latency, high coverage, high reliability, and economically affordable services. 5G delivers services grouped into three categories: (1) Extreme mobile broadband (eMBB), a nonstandalone architecture that offers high-speed internet connectivity, greater bandwidth, moderate latency, UltraHD streaming video, virtual reality and augmented reality (AR/VR) media, and much more. (2) Massive machine-type communication (eMTC), released by 3GPP in its Release 13 specification, which provides long-range and broadband machine-type communication at a very cost-effective price with less power consumption. eMTC brings high data rate service, low power, and extended coverage via low device complexity through mobile carriers for IoT applications. (3) Ultra-reliable low-latency communication (URLLC), which offers low latency, ultra-high reliability, and rich quality of service (QoS) that is not possible with traditional mobile network architecture. URLLC is designed for on-demand real-time interaction such as remote surgery, vehicle-to-vehicle (V2V) communication, Industry 4.0, smart grids, intelligent transport systems, etc. [3].
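To keep these three service classes straight, the following minimal Python sketch organizes them as a small data structure. It is illustrative only: the traits and example applications are taken from the paragraph above, not from a 3GPP specification, and the class and field names are invented for this sketch.

```python
# Illustrative only: the three 5G service categories described above,
# organized as a small data structure. Traits and examples come from the
# surrounding text, not from 3GPP; all names here are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceClass:
    name: str
    emphasis: str
    examples: tuple

SERVICE_CLASSES = (
    ServiceClass("eMBB", "high-speed connectivity, greater bandwidth, moderate latency",
                 ("UltraHD streaming", "AR/VR media")),
    ServiceClass("eMTC", "long range, low power consumption, low device complexity",
                 ("large-scale IoT applications",)),
    ServiceClass("URLLC", "ultra-high reliability, very low latency",
                 ("remote surgery", "V2V communication", "smart grids")),
)

for sc in SERVICE_CLASSES:
    print(f"{sc.name}: {sc.emphasis} (e.g., {', '.join(sc.examples)})")
```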

1.1. Evolution from 1G to 5G

First Generation (1G): 1G cell phones were launched between the 1970s and 1980s. Based on analog technology, they worked much like landline phones. 1G suffered in various ways, such as poor battery life, poor voice quality, and dropped calls. In 1G, the maximum achievable speed was 2.4 Kbps.

Second Generation (2G): 2G, the first digital system, was offered in 1991, providing improved mobile voice communication over 1G. In addition, Code-Division Multiple Access (CDMA) and Global System for Mobile Communications (GSM) concepts were introduced. In 2G, the maximum achievable speed was 1 Mbps.

Third Generation (3G): When technology ventured from 2G GSM frameworks into the 3G Universal Mobile Telecommunications System (UMTS) framework, users encountered higher system speeds and quicker downloads, making real-time video calls possible. 3G was the first mobile broadband system, formed to provide voice along with some multimedia. The technology behind 3G was High-Speed Packet Access (HSPA/HSPA+). 3G used MIMO to multiply the capacity of the wireless network, and it also used packet switching for fast data transmission.

Fourth Generation (4G): 4G is a purely mobile broadband standard. In digital mobile communication, observed information rates upgraded from 20 to 60 Mbps in 4G [4]. It works on LTE and WiMAX technologies and provides wider bandwidth, up to 100 MHz. It was launched in 2010.

Fourth Generation LTE-A (4.5G): LTE-A is an advanced version of standard 4G LTE. LTE-A uses MIMO technology to combine multiple antennas at both the transmitter and the receiver. Using MIMO, multiple signals and multiple antennas can work simultaneously, making LTE-A three times faster than standard 4G. LTE-A offered improved system capacity, decreased delay in the application server, and wireless access to triple traffic (data, voice, and video) at any time, anywhere in the world. LTE-A delivers speeds of over 42 Mbps and up to 90 Mbps.

Fifth Generation (5G): 5G is a pillar of digital transformation and a real improvement on all previous mobile generation networks. 5G brings three different services for end users: the extreme mobile broadband (eMBB), massive machine-type communication (eMTC), and ultra-reliable low-latency communication (URLLC) services already described above. 5G is faster than 4G and offers remote-controlled operation over a reliable network with virtually zero delay. It provides a downlink maximum throughput of up to 20 Gbps. In addition, 5G also supports the 4G WWWW (4th Generation World Wide Wireless Web) [5] and is based on the Internet Protocol version 6 (IPv6) protocol. 5G provides unlimited internet connectivity at your convenience, anytime, anywhere, with extremely high speed, high throughput, low latency, higher reliability and scalability, and energy-efficient mobile communication technology [6]. 5G is mainly divided into two parts: 6 GHz 5G and millimeter-wave (mmWave) 5G.

6 GHz is a mid-frequency band that works as a midpoint between capacity and coverage, offering an ideal environment for 5G connectivity. The 6 GHz spectrum will provide high bandwidth with improved network performance. It offers contiguous channels that will reduce the need for network densification when mid-band spectrum is not available, and it makes 5G connectivity affordable anytime, anywhere, for everyone.

mmWave is an essential technology of the 5G network used to build high-performance networks. 5G mmWave offers diverse services, which is why all network providers should include this technology in their 5G deployment planning. Many service providers have deployed 5G mmWave, and their results show that mmWave is a far less used spectrum. It provides very high-speed wireless communication, and it also offers ultra-wide bandwidth for next-generation mobile networks.
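One way to see why ultra-wide mmWave bandwidth translates into very high data rates is Shannon's capacity formula, C = B log2(1 + SNR). The sketch below is illustrative and not from this article: the 100 MHz and 400 MHz carrier bandwidths and the 20 dB SNR are assumed values, and real 5G throughput also depends on MIMO layers, carrier aggregation, coding, and protocol overhead.

```python
# Illustrative back-of-the-envelope Shannon capacity calculation showing why
# wide mmWave carriers support multi-gigabit rates. The bandwidth and SNR
# values are assumptions for illustration, not figures from this article.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 100 MHz mid-band carrier vs. a 400 MHz mmWave carrier.
for label, bw in [("100 MHz mid-band", 100e6), ("400 MHz mmWave", 400e6)]:
    capacity = shannon_capacity_bps(bw, snr_db=20.0)
    print(f"{label}: ~{capacity / 1e9:.2f} Gbps per carrier (single antenna stream)")
```

Aggregating several such carriers and adding MIMO spatial streams is how peak rates on the order of the 20 Gbps figure quoted in this article become plausible.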

The evolution of wireless mobile technologies is summarized in Table 1. The abbreviations used in this paper are listed in Table 2.

Table 1. Summary of Mobile Technology.

Table 2. Table of Notations and Abbreviations.

1.2. Key Contributions

The objective of this survey is to provide researchers with a detailed guide to key 5G technologies and methods, and to help with understanding how recent works have addressed 5G problems and developed solutions to 5G challenges; that is, what new methods must be applied and how they can solve these problems. Highlights of the research article are as follows.

  • This survey focuses on recent trends and developments in the 5G era and novel contributions by the research community, and discusses technical details on essential aspects of 5G advancement.
  • In this paper, the evolution of the mobile network from 1G to 5G is presented. In addition, the growth of mobile communication under different attributes is also discussed.
  • This paper covers the emerging applications and research groups working on 5G and the different research areas in the 5G wireless communication network, with a descriptive taxonomy.
  • This survey discusses the current vision of 5G networks, advantages, applications, key technologies, and key features. Furthermore, machine learning prospects are explored along with the emerging requirements of the 5G era. The article also focuses on the technical aspects of 5G IoT-based approaches and optimization techniques for 5G.
  • We provide an extensive overview of recent advancements in emerging 5G mobile network technologies, namely MIMO, Non-Orthogonal Multiple Access (NOMA), mmWave, the Internet of Things (IoT), machine learning (ML), and optimization. A technical summary is also provided, highlighting the context of current approaches and the corresponding challenges.
  • Security challenges and considerations in developing 5G technology are discussed.
  • Finally, the paper concludes with future directions.

Existing surveys have focused on architecture, key concepts, and implementation challenges and issues. In contrast, this survey covers state-of-the-art techniques as well as corresponding recent novel developments by researchers. Various significant recent papers are discussed along with the key technologies accelerating the development and production of 5G products.

2. Existing Surveys and Their Applicability

In this paper, a detailed survey on various technologies of 5G networks is presented. Various researchers have worked on different technologies of 5G networks. In this section, Table 3 gives a tabular representation of existing surveys of 5G networks. Massive MIMO, NOMA, small cell, mmWave, beamforming, and MEC are the six main pillars that helped to implement 5G networks in real life.

Table 3. A comparative overview of existing surveys on different technologies of 5G networks.

2.1. Limitations of Existing Surveys

As noted above, existing surveys have focused on architecture, key concepts, and implementation challenges and issues. Numerous current surveys address various 5G technologies with different parameters, but their authors did not cover all the technologies of the 5G network in detail, with challenges and recent advancements. A few authors worked on MIMO, Non-Orthogonal Multiple Access (NOMA), MEC, and small cell technologies, while others worked on beamforming and millimeter wave (mmWave). However, the existing surveys did not cover all the technologies of the 5G network from a research and advancement perspective. No detailed survey is available that covers all the 5G network technologies and the trade-offs in currently published research. Our main aim is therefore to give a detailed study of all the technologies at work in the 5G network. This survey covers state-of-the-art techniques as well as corresponding recent novel developments by researchers. Various significant recent papers are discussed along with the key technologies accelerating the development and production of 5G products. This survey article collects key information about 5G technology and recent advancements, and it can serve as a guide for the reader. It provides an umbrella approach, bringing multiple solutions and recent improvements together in a single place to accelerate 5G research with the latest key enabling solutions and reviews. A systematic layout of the survey is shown in Figure 1. We provide a state-of-the-art comparative overview of the existing surveys on different technologies of 5G networks in Table 3.

Figure 1. Systematic layout representation of the survey.

2.2. Article Organization

This article is organized as follows. Section 2 presents existing surveys and their applicability. In Section 3, the preliminaries of 5G technology are presented. In Section 4, recent advances of 5G technology based on Massive MIMO, NOMA, millimeter wave, 5G with IoT, machine learning for 5G, and optimization in 5G are provided. In Section 5, a description of novel 5G features over 4G is provided. Section 6 covers the security concerns of the 5G network. Section 7 summarizes 5G technology with respect to the above-stated challenges in tabular form. Finally, Section 8 and Section 9 conclude the study and pave the path for future research.

3. Preliminary Section

3.1. Emerging 5G Paradigms and Their Features

5G provides very high speed, low latency, and highly scalable connectivity between multiple devices and the IoT worldwide. 5G will provide a very flexible model for developing a modern generation of applications and meeting industry goals [26,27]. The many services offered by the 5G network architecture are stated below:

Massive machine-to-machine communications: 5G offers novel, massive machine-to-machine communications [28], also known as the IoT [29], that provide connectivity between large numbers of machines without any human involvement. This service enhances the applications of 5G and provides connectivity in agriculture, construction, and industry [30].

Ultra-reliable low-latency communications (URLLC): This service offers real-time management of machines, high-speed vehicle-to-vehicle connectivity, industrial connectivity and security principles, highly secure transport systems, and multiple autonomous actions. Low-latency communications also open up different areas where remote medical care, procedures, and operations all become achievable [31].

Enhanced mobile broadband: Enhanced mobile broadband is an important use case of the 5G system, which uses massive MIMO antennas, mmWave, and beamforming techniques to offer very high-speed connectivity across a wide range of areas [32].

For communities: 5G provides a very flexible internet connection between large numbers of machines to make smart homes, smart schools, smart laboratories, safer and smarter automobiles, and good health care centers [33].

For businesses and industry: 5G works on higher spectrum, ranging from 24 to 100 GHz. This higher frequency range provides secure, low-latency communication and high-speed wireless connectivity between IoT devices and Industry 4.0, which opens a market for end users to enhance their business models [34].

New and emerging technologies: As 5G came up with many new technologies, such as beamforming, massive MIMO, mmWave, small cells, NOMA, MEC, and network slicing, it introduced many new features to the market. With virtual reality (VR), for example, users can experience the presence of people who are thousands of kilometers away from them. Many new applications, such as smart homes, smart workplaces, smart schools, and smart sports academies, also came to the market with this 5G mobile network model [35].

3.2. Commercial Service Providers of 5G

5G provides high-speed internet browsing, streaming, and downloading with very high reliability and low latency. The 5G network will change working styles, create new business opportunities, and provide innovations that we cannot yet imagine. This section covers the top service providers of the 5G network [36,37].

Ericsson: Ericsson is a Swedish multinational networking and telecommunications company, investing around 25.62 billion USD in the 5G network, which makes it the biggest telecommunication company. It claims that it is the only company working on all continents to make the 5G network a global standard for next-generation wireless communication. Ericsson developed the first 5G radio prototype that enables operators to set up live field trials in their networks, which helps operators understand how 5G behaves. It plays a vital role in the development of 5G hardware. It currently provides 5G services in over 27 countries with content providers such as China Mobile, GCI, LGU+, AT&T, Rogers, and many more. It had 100 commercial agreements with different operators as of 2020.

Verizon: Verizon is an American multinational telecommunications company founded in 1983. Verizon started offering 5G services in April 2020, and by December 2020 it was actively providing 5G services in 30 cities of the USA, with plans to deploy 5G in 30 more cities by the end of 2021. Verizon deployed its 5G network on mmWave, a very high-band spectrum between 30 and 300 GHz. As this is a significantly less used spectrum, it provides very high-speed wireless communication and offers ultra-wide bandwidth for next-generation mobile networks, although this faster, high-band spectrum has a limited range. Verizon planned to increase its number of 5G cells by 500% in 2020. Verizon also has a flagship ultra-wideband 5G service, its premier 5G offering, which increases the market value of Verizon.

Nokia: Nokia is a Finnish multinational telecommunications company founded in 1865. Nokia is one of the companies that adopted 5G technology very early. It is developing, researching, and building partnerships with various 5G vendors to offer 5G communication as soon as possible. Nokia collaborated with Deutsche Telekom and the Hamburg Port Authority on an 8000-hectare site for their 5G MoNArch project. Nokia is the only company that supplies 5G technology to all the operators of different countries, such as AT&T, Sprint, T-Mobile US, and Verizon in the USA; Korea Telecom, LG U+, and SK Telecom in South Korea; and NTT DOCOMO, KDDI, and SoftBank in Japan. Presently, Nokia has around 150+ agreements and 29 live networks all over the world. Nokia is continuously working hard on 5G technology to expand 5G networks all over the globe.

AT&T: AT&T is an American multinational company that was the first to deploy a real 5G network, in 2018. To achieve this, it built gigabit 5G network connections in Waco, TX, Kalamazoo, MI, and South Bend. It was the first company to achieve 1–2 gigabit-per-second speeds, in 2019. AT&T claims that it provides 5G network connections to 225 million people worldwide using a 6 GHz spectrum band.

T-Mobile: T-Mobile US (TMUS) is an American wireless network operator and was the first service provider to offer a real nationwide 5G network. The company knew that high-band 5G was not feasible nationwide, so it used 600 MHz spectrum to build a significant portion of its 5G network. TMUS plans that by 2024 it will double the total capacity and triple the full 5G capacity of T-Mobile and Sprint combined. The Sprint buyout is helping move the company’s current market price forward to 129.98 USD.

Samsung: Samsung started their research in 5G technology in 2011. In 2013, Samsung successfully developed the world’s first adaptive array transceiver technology operating in the millimeter-wave Ka bands for cellular communications. Samsung provides several hundred times faster data transmission than standard 4G for core 5G mobile communication systems. The company achieved a lot of success in the next generation of technology, and it is considered one of the leading companies in the 5G domain.

Qualcomm: Qualcomm is an American multinational corporation based in San Diego, California. It is also one of the leading companies working on 5G chips. Qualcomm’s first 5G modem chip was announced in October 2016, and a prototype was demonstrated in October 2017. While other companies talk about 5G, Qualcomm mainly focuses on building the technologies and products. According to one magazine, Qualcomm was working on three main areas of 5G networks: first, radios that would use bandwidth from any network they have access to; second, creating more extensive ranges of spectrum by combining smaller pieces; and third, a set of services for internet applications.

ZTE Corporation: ZTE Corporation was founded in 1985. It is a partially Chinese state-owned technology company that works in telecommunications. It was a leading company in 4G LTE, and it is still maintaining its position by researching and testing 5G. It was the first company to propose Pre5G technology along with a series of solutions.

NEC Corporation: NEC Corporation is a Japanese multinational information technology and electronics corporation headquartered in Minato, Tokyo. NEC has also started 5G research and introduced a new business concept around it. NEC's main aim is to develop 5G NR for the global mobile system and to create secure and intelligent technologies that realize 5G services.

Cisco: Cisco is a US networking hardware company that is also gearing up for the 5G network. Cisco's primary focus is to support 5G in three ways: services—enabling 5G services faster so that service providers can grow their business; infrastructure—building 5G-oriented infrastructure to implement 5G more quickly; and automation—making the 5G network more scalable, flexible, and reliable. The company understands the importance of 5G and wants to connect more than 30 billion devices in the next couple of years. Cisco also intends to work on network hardening, which is a vital part of any 5G network, and has used AI with deep learning to develop a 5G security architecture enabling secure network transformation.

3.3. 5G Research Groups

Many research groups from all over the world are working on the 5G wireless mobile network [ 38 ]. These groups are continuously working on various aspects of 5G. The list of these research groups is as follows: 5GNOW (5th Generation Non-Orthogonal Waveform for Asynchronous Signaling), NEWCOM (Network of Excellence in Wireless Communication), 5GIC (5G Innovation Center), NYU (New York University) Wireless, 5GPPP (5G Infrastructure Public-Private Partnership), EMPHATIC (Enhanced Multi-carrier Technology for Professional Adhoc and Cell-Based Communication), ETRI (Electronics and Telecommunication Research Institute), and METIS (Mobile and wireless communication Enablers for the Twenty-twenty Information Society) [ 39 ]. The research groups, along with their research areas, are presented in Table 4 .

Research groups working on 5G mobile networks.

3.4. 5G Applications

5G is faster than 4G and offers remote-controlled operation over a reliable network with almost zero delay. It provides a maximum downlink throughput of up to 20 Gbps. In addition, 5G also supports the 4G WWWW (4th Generation World Wide Wireless Web) [ 5 ] and is based on the Internet Protocol version 6 (IPv6). 5G aims to provide internet connectivity anytime, anywhere, with extremely high speed, high throughput, low latency, higher reliability, greater scalability, and energy-efficient mobile communication [ 6 ].

Some of the main applications of the 5G mobile network are as follows:

  • High-speed mobile network: 5G is an advancement on all previous mobile network technologies, offering very high download speeds of up to 10–20 Gbps. The 5G wireless network works like a fiber-optic internet connection. 5G differs from all conventional mobile transmission technologies in that it offers both voice and high-speed data connectivity efficiently. 5G offers very low-latency communication of less than a millisecond, useful for autonomous driving and mission-critical applications. 5G will use millimeter waves for data transmission, providing higher bandwidth and much higher data rates than the lower LTE bands. As 5G is a fast mobile network technology, it will enable virtual access to high processing power and secure, safe access to cloud services and enterprise applications. The small cell is one of the best features of 5G, bringing advantages such as high coverage, high-speed data transfer, power saving, and easy and fast cloud access [ 40 ].
  • Entertainment and multimedia: One analysis in 2015 found that more than 50 percent of mobile internet traffic was used for video downloading. This trend will surely increase in the future, making video streaming more common. 5G will offer high-speed streaming of 4K video with crystal-clear audio and will bring a high-definition virtual world to mobile devices. 5G will benefit the entertainment industry as it offers 120 frames per second with high-resolution, higher-dynamic-range video streaming, and HD TV channels can be accessed on mobile devices without interruption. 5G provides low-latency, high-definition communication, so augmented reality (AR) and virtual reality (VR) will be easy to implement in the future. Virtual reality games are popular these days, and many companies are investing in HD virtual reality games; the 5G network will offer high-speed internet connectivity with a better gaming experience [ 41 ].
  • Smart homes: Smart home appliances and products are in demand these days. The 5G network makes smart homes more practical, as it offers high-speed connectivity and monitoring of smart appliances. Smart home appliances can easily be accessed and configured from remote locations using the 5G network, as it offers very high-speed, low-latency communication.
  • Smart cities: The 5G wireless network also helps develop smart city applications such as automatic traffic management, weather updates, local area broadcasting, energy saving, efficient power supply, smart lighting systems, water resource management, crowd management, and emergency control.
  • Industrial IoT: 5G wireless technology will provide many features for future industries, such as safety, process tracking, smart packing, shipping, energy efficiency, automation of equipment, predictive maintenance, and logistics. 5G smart sensor technology also offers smarter, safer, cost-effective, and energy-saving industrial IoT operations.
  • Smart farming: 5G technology will play a crucial role in agriculture and smart farming. 5G sensors and GPS technology will help farmers track live attacks on crops and manage them quickly. These smart sensors can also be used for irrigation, pest, insect, and electricity control.
  • Autonomous driving: The 5G wireless network offers very low-latency, high-speed communication, which is significant for autonomous driving; it means self-driving cars will soon become a reality with 5G wireless networks. Using 5G, autonomous cars can easily communicate with smart traffic signs, objects, and other vehicles on the road. 5G's low-latency feature makes self-driving more realistic, as every millisecond is essential for autonomous vehicles and decisions are made in microseconds to avoid accidents.
  • Healthcare and mission-critical applications: 5G technology will bring modernization to medicine, where doctors and practitioners can perform advanced medical procedures. The 5G network will provide connectivity between classrooms, so attending seminars and lectures will be easier, and through 5G technology patients can connect with doctors and take their advice. Scientists are building smart medical devices that can help people with chronic medical conditions. The 5G network will boost the healthcare industry with smart devices, the internet of medical things, smart sensors, HD medical imaging technologies, and smart analytics systems. 5G will ease access to cloud storage, so accessing healthcare data will be very easy from any location worldwide, and doctors and medical practitioners can store and share large files such as MRI reports within seconds.
  • Satellite internet: In many remote areas, ground base stations are not available, so 5G will play a crucial role in providing connectivity there. The 5G network will provide connectivity using satellite systems, where a constellation of many small satellites provides connectivity in urban and rural areas across the world.

4. 5G Technologies

This section describes recent advances in 5G massive MIMO, 5G NOMA, 5G millimeter wave, 5G IoT, 5G with machine learning, and 5G optimization-based approaches. A summary is also presented at the end of each subsection to point researchers toward future research directions.

4.1. 5G Massive MIMO

Multiple-input multiple-output (MIMO) is a very important technology for wireless systems. It is used for sending and receiving multiple signals simultaneously over the same radio channel. MIMO plays a very big role in Wi-Fi, 3G, 4G, and 4G LTE-A networks. MIMO is mainly used to achieve high spectral efficiency and energy efficiency, but conventional MIMO fell short of expectations, providing low throughput and less reliable connectivity. To resolve this, MIMO variants such as single-user MIMO (SU-MIMO), multiuser MIMO (MU-MIMO), and network MIMO were used; however, these still did not fulfill the demands of end users. Massive MIMO is an advancement of MIMO technology used in the 5G network in which hundreds or even thousands of antennas are attached to the base station to increase throughput and spectral efficiency. Multiple transmit and receive antennas are used in massive MIMO to increase the transmission rate and spectral efficiency, and when multiple UEs generate downlink traffic simultaneously, massive MIMO gains higher capacity. Massive MIMO uses the extra antennas to focus energy into smaller regions of space, increasing spectral efficiency and throughput [ 43 ]. In traditional systems, collecting data from smart sensors is a complex task, with increased latency, reduced data rates, and reduced reliability, whereas massive MIMO with beamforming and large-scale multiplexing can collect data from different sensors with low latency, high data rates, and higher reliability. Massive MIMO will therefore help transmit data collected from different sensors in real time to central monitoring locations for smart sensor applications such as self-driving cars, healthcare centers, smart grids, smart cities, smart highways, smart homes, and smart enterprises [ 44 ].
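To make the antenna-scaling argument concrete, the following is a minimal, illustrative sketch (not taken from any of the surveyed papers) that estimates the uplink sum rate of a single cell with maximum-ratio combining under i.i.d. Rayleigh fading and perfect channel knowledge; the user count, SNR, and trial count are arbitrary assumptions chosen only to show how spectral efficiency grows as base-station antennas are added.

```python
import numpy as np

def mrc_uplink_sum_rate(num_antennas, num_users=20, snr_db=0.0, trials=200):
    """Toy estimate of uplink sum rate (bit/s/Hz) with maximum-ratio combining.

    Assumes i.i.d. Rayleigh fading and perfect CSI at the base station --
    a simplification used only to illustrate antenna scaling.
    """
    snr = 10 ** (snr_db / 10)
    rates = []
    for _ in range(trials):
        # Channel matrix: num_antennas x num_users, unit-variance complex entries.
        H = (np.random.randn(num_antennas, num_users)
             + 1j * np.random.randn(num_antennas, num_users)) / np.sqrt(2)
        rate = 0.0
        for k in range(num_users):
            h_k = H[:, k]
            # The MRC combiner for user k is h_k itself.
            signal = snr * np.abs(h_k.conj() @ h_k) ** 2
            interference = snr * sum(
                np.abs(h_k.conj() @ H[:, j]) ** 2 for j in range(num_users) if j != k)
            noise = np.linalg.norm(h_k) ** 2
            rate += np.log2(1 + signal / (interference + noise))
        rates.append(rate)
    return np.mean(rates)

for m in (8, 64, 256):
    print(m, "antennas ->", round(mrc_uplink_sum_rate(m), 1), "bit/s/Hz")
```

With more antennas, the coherent signal gain grows faster than the interference terms, so the printed sum rate increases, which is the qualitative effect the paragraph above describes.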

Highlights of 5G Massive MIMO technology are as follows:

  • Data rate: Massive MIMO is regarded as one of the dominant technologies for providing wireless communication at high speed and high data rates, on the order of gigabits per second.
  • The relationship between wave frequency and antenna size: the two are inversely proportional, meaning that lower-frequency signals need a bigger antenna and vice versa.

Figure 2. Pictorial representation of multiple-input multiple-output (MIMO).

  • MIMO role in 5G: Massive MIMO will play a crucial role in the deployment of future 5G mobile communication as greater spectral and energy efficiency could be enabled.

State-of-the-Art Approaches

Plenty of approaches were proposed to resolve the issues of conventional MIMO [ 7 ].

A MIMO multirate, feed-forward controller was suggested by Mae et al. [ 46 ]. In simulation, the proposed model generates smooth control inputs, unlike the conventional MIMO approach, which generates oscillating control inputs, and it also performs better in terms of error rate. However, a combination of multirate and single-rate control could be used for better results.

The performance of stand-alone MIMO and distributed MIMO with and without cooperation was investigated by Panzner et al. [ 47 ]. In addition, an idea for integrating large-scale MIMO into 5G technology was presented. In the experimental analysis, different MIMO configurations were considered, with the ratio of overall transmit antennas to spatial streams varied step-wise from one to ten.

The simulation of noncooperative and cooperative massive MIMO systems for downlink behavior was performed by He et al. [ 48 ]. It builds on present LTE systems, which deal with multiple antennas in the base station setup. It was observed that cooperation between different BSs improves system behavior, although throughput is reduced slightly in this approach. A new method could therefore be developed that enhances both system behavior and throughput.

In [ 8 ], different approaches that increase the energy efficiency gains provided by massive MIMO were presented. The authors analyzed massive MIMO technology and described a detailed energy consumption model for massive MIMO systems. The article explored several techniques to enhance the energy efficiency (EE) gains of massive MIMO systems and reviewed standard EE-maximization approaches for conventional massive MIMO systems, namely scaling the number of antennas, implementing low-complexity operations at the base station (BS) in real time, minimizing power amplifier losses, and minimizing radio frequency (RF) chain requirements. Open research directions were also identified.

In [ 49 ], various existing approaches based on antenna selection and scheduling, user selection and scheduling, and joint antenna and user scheduling methods adopted in massive MIMO systems were presented. The objective of this survey article was to raise awareness of current research and future research directions in massive MIMO systems. The authors found that full utilization of resources and bandwidth is the most crucial factor in enhancing the sum rate.

In [ 50 ], the authors discussed the development of various techniques for handling pilot contamination. To calculate the impact of pilot contamination in time division duplex (TDD) massive MIMO systems, both TDD and frequency division duplex (FDD) patterns in massive MIMO techniques are used. They discussed different pilot contamination issues in TDD massive MIMO systems along with possible future research directions, and they classified various techniques for generating channel information for both pilot-based and subspace-based approaches.

In [ 19 ], the authors described the uplink and downlink services for a massive MIMO system and maintained a performance matrix that measures the impact of pilot contamination on different aspects of performance. They also examined various applications of massive MIMO, such as small cells, orthogonal frequency-division multiplexing (OFDM) schemes, massive MIMO in IEEE 802, 3rd Generation Partnership Project (3GPP) specifications, and higher frequency bands. They considered their work crucial for cutting-edge massive MIMO and covered many issues, such as system throughput performance and channel state acquisition at higher frequencies.

In [ 13 ], various approaches were suggested for MIMO in future-generation wireless communication. The authors made a comparative study based on performance indicators such as peak data rate, energy efficiency, latency, and throughput. The key findings of this survey are as follows: (1) spatial multiplexing improves energy efficiency; (2) MIMO design plays a vital role in enhancing throughput; (3) enhancements to mMIMO focus on energy and spectral performance; and (4) future challenges for improving system design are discussed.

In [ 51 ], a study of large-scale MIMO systems for an energy-efficient resource-sharing method was presented. For resource allocation, circuit energy and transmit energy expenditures were taken into consideration, and optimization techniques were applied to the energy-efficient resource-sharing system to enlarge energy efficiency under individual QoS and energy constraints. The authors also examined BS configurations with homogeneous and heterogeneous UEs. Their simulations showed that the total number of transmit antennas plays a vital role in boosting energy efficiency, with the highest energy efficiency obtained when the BS was equipped with 100 antennas serving 20 UEs.

This section has covered various works on 5G MIMO technology by different authors. Table 5 shows how these authors worked on improving various parameters such as throughput, latency, energy efficiency, and spectral efficiency with 5G MIMO technology.

Summary of massive MIMO-based approaches in 5G technology.

4.2. 5G Non-Orthogonal Multiple Access (NOMA)

NOMA is a very important radio access technology for next-generation wireless communication. Compared with previous orthogonal multiple access (OMA) techniques, NOMA offers many benefits, such as high spectrum efficiency, low latency with high reliability, and high-speed massive connectivity. NOMA's basic principle is to serve multiple users with the same resources in terms of time, space, and frequency. NOMA is mainly divided into two categories: code-domain NOMA and power-domain NOMA. Code-domain NOMA can improve the spectral efficiency of mMIMO, which improves connectivity in 5G wireless communication; it is further divided into multiple access techniques such as sparse code multiple access, lattice-partition multiple access, multi-user shared access, and pattern-division multiple access [ 52 ]. Power-domain NOMA is widely used in 5G wireless networks, as it performs well with various wireless communication techniques such as MIMO, beamforming, space-time coding, network coding, full-duplex, and cooperative communication [ 53 ]. The conventional orthogonal frequency-division multiple access (OFDMA) used by 3GPP in the 4G LTE network provides very low spectral efficiency when bandwidth resources are allocated to users with poor channel state information (CSI). NOMA resolves this issue by enabling users to access all the subcarrier channels, so bandwidth resources allocated to users with weak CSI can still be accessed by users with strong CSI, which increases spectral efficiency. The 5G network will support a heterogeneous architecture in which small cells and macro base stations share spectrum. NOMA is a key technology of the 5G wireless system and is very helpful for such heterogeneous networks, as multiple users can share data in a small cell using the NOMA principle. NOMA is also helpful in various applications such as ultra-dense networks (UDN), machine-to-machine (M2M) communication, and massive machine-type communication (mMTC). Although NOMA provides many features, it also has challenges: it needs substantial computational power to run successive interference cancellation (SIC) algorithms for a large number of users at high data rates, and when users move across networks, managing power allocation optimization is a challenging task [ 54 ]. Hybrid NOMA (HNOMA) is a combination of power-domain and code-domain NOMA that uses both power differences and orthogonal resources for transmission among multiple users. Because it uses both domains, HNOMA can achieve higher spectral efficiency than power-domain NOMA or code-domain NOMA alone. In HNOMA, multiple groups can transmit signals simultaneously, using a message-passing algorithm (MPA) and SIC-based detection at the base station for these groups [ 55 ].
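As a concrete illustration of power-domain NOMA with SIC, the short sketch below (a toy example, not drawn from any cited work) computes the achievable rates of a two-user downlink pair under superposition coding and compares them with an orthogonal time-sharing baseline; the channel gains, noise level, and power split are arbitrary assumptions.

```python
import numpy as np

def noma_two_user_rates(p_total=1.0, noise=1.0, g_near=10.0, g_far=1.0, alpha_far=0.8):
    """Toy downlink power-domain NOMA rates (bit/s/Hz) for a near and a far user.

    The far (weak) user gets the larger power share alpha_far; the near
    (strong) user removes the far user's signal with SIC before decoding
    its own. Channel gains and the power split are illustrative only.
    """
    p_far, p_near = alpha_far * p_total, (1 - alpha_far) * p_total
    # Far user decodes its own signal, treating the near user's as noise.
    r_far = np.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near user cancels the far user's signal (SIC), then decodes its own.
    r_near = np.log2(1 + p_near * g_near / noise)
    return r_near, r_far

def oma_two_user_rates(p_total=1.0, noise=1.0, g_near=10.0, g_far=1.0):
    """Orthogonal baseline: each user gets half the time with full power."""
    r_near = 0.5 * np.log2(1 + p_total * g_near / noise)
    r_far = 0.5 * np.log2(1 + p_total * g_far / noise)
    return r_near, r_far

print("NOMA (near, far):", noma_two_user_rates())
print("OMA  (near, far):", oma_two_user_rates())
```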

Highlights of 5G NOMA technology are as follows:

Figure 3. Pictorial representation of orthogonal and non-orthogonal multiple access (NOMA).

  • NOMA provides higher data rates and resolves many of the shortcomings of OMA, making the 5G mobile network more scalable and reliable.
  • Because multiple users use the same frequency band simultaneously, the performance of the whole network increases.
  • NOMA deliberately uses nonorthogonal transmission at the transmitter end, accepting intracell and intercell interference that is then resolved at the receiver.
  • The fundamental principle of NOMA is to improve spectral efficiency at the cost of increased receiver complexity.

State-of-the-Art Approaches

Plenty of approaches have been developed to address the various issues in NOMA.

A novel approach to handling multiple signals received at the same frequency is proposed in [ 22 ]. In NOMA, multiple users use the same subcarrier, which improves the fairness and throughput of the system. As a nonorthogonal method is used among multiple users, joint processing is required when retrieving each user's signal at the receiver. The authors proposed solutions to optimize the receiver and the radio resource allocation of uplink NOMA. First, they proposed an iterative MUDD that utilizes the information produced by the channel decoder to improve the performance of the multiuser detector. They then suggested a novel subcarrier and power allocation scheme that enhances the users' weighted sum rate for the NOMA scheme. Their proposed model showed that NOMA performs well compared with OFDM in terms of fairness and efficiency.

In [ 53 ], the authors reviewed power-domain NOMA, which uses superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receiver. Several analyses described how NOMA effectively satisfies user data rate demands and the network-level requirements of 5G technologies. The paper presented a complete review of recent advances in the 5G NOMA system, with comparative analysis of allocation procedures, user fairness, state-of-the-art efficiency evaluation, user pairing patterns, etc. The study also analyzed NOMA's behavior when working with other wireless communication techniques, namely beamforming, MIMO, cooperative communication, network coding, and space-time coding.

In [ 9 ], the authors proposed NOMA with MEC, which improves QoS and reduces the latency of the 5G wireless network. The model improves uplink NOMA by decreasing the users' uplink energy consumption. They formulated an optimized NOMA framework that reduces the energy consumption of MEC through computing and communication resource allocation, user clustering, and transmit power control.

In [ 10 ], the authors proposed a model that investigates outage probability under average channel state information (CSI) and data rate under full CSI to solve the problem of optimal power allocation in the NOMA downlink among users. They developed simple low-complexity algorithms to provide the optimal solution. The simulation results showed NOMA's efficiency, achieving higher performance fairness compared with TDMA configurations. It was observed that NOMA, using appropriate power amplifiers (PA), ensures the high performance-fairness requirement of future 5G wireless communication networks.

In [ 56 ], researchers discussed how NOMA technology and waveform modulation techniques have been used in the 5G mobile network. This research gave a detailed survey of non-orthogonal waveform modulation techniques and NOMA schemes for next-generation mobile networks. By analyzing and comparing multiple access technologies, they considered the future evolution of these technologies for 5G mobile communication.

In [ 57 ], the authors surveyed non-orthogonal multiple access (NOMA) from its development phase to recent advances. They also compared NOMA techniques with traditional OMA techniques from an information-theoretic perspective. The authors discussed the NOMA schemes categorically as power-domain and code-domain, including their design principles, operating principles, and features, with comparisons based on system performance, spectral efficiency, and receiver complexity. Also discussed are the future challenges, open issues, and expectations of NOMA, and how it will support the key requirements of 5G mobile communication systems with massive connectivity and low latency.

In [ 17 ], the authors present the first review of an elementary NOMA model with two users, clarifying its central principles. A general design with multicarrier support and a random number of users on each subcarrier is then analyzed. In the performance evaluation against existing approaches, resource sharing and multiple-input multiple-output NOMA are examined. Furthermore, the key elements of NOMA and its open research questions are identified. Finally, they reviewed the two-user SC-NOMA design and a multi-user MC-NOMA design to highlight NOMA's basic approaches and conventions, and presented a research study on performance examination, resource assignment, and MIMO in NOMA.

In this section, various works by different authors on 5G NOMA technology have been covered. Table 6 shows how these authors worked on improving various parameters such as spectral efficiency, fairness, and computing capacity with 5G NOMA technology.

Summary of NOMA-based approaches in 5G technology.

4.3. 5G Millimeter Wave (mmWave)

Millimeter wave (mmWave) refers to an extremely high frequency band that is very useful for 5G wireless networks. MmWave uses the 30 GHz to 300 GHz spectrum for transmission; this band is known as mmWave because the corresponding wavelengths lie between 1 and 10 mm. Until recently, mmWave was used mainly by radar systems and satellites, since these very high frequency bands enable very high-speed wireless communication, and many mobile network providers have now started using mmWave for transmitting data between base stations. The speed of data transmission can be improved in two ways: by increasing spectrum utilization or by increasing spectrum bandwidth, and of the two, increasing bandwidth is easier and more effective. The frequency bands below 5 GHz are very crowded, as many technologies use them, so to boost data transmission rates the 5G wireless network uses mmWave technology, which increases the spectrum bandwidth rather than the spectrum utilization [ 58 ]. Because the usable signal bandwidth is roughly proportional to the carrier frequency (typically around 5% of it), raising the carrier frequency directly increases the available signal bandwidth. The frequency bands between 28 GHz and 60 GHz are therefore very useful for 5G wireless communication: the 28 GHz band offers up to 1 GHz of spectrum bandwidth, and the 60 GHz band offers 2 GHz. By contrast, a 4G LTE carrier around 2 GHz offers only about 100 MHz of spectrum bandwidth. The use of mmWave thus increases the spectrum bandwidth roughly tenfold, which leads to better transmission speeds [ 59 , 60 ].
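The bandwidth/range trade-off described above can be made concrete with a small back-of-the-envelope sketch: it applies the roughly 5% rule of thumb for usable bandwidth and the standard Friis free-space path-loss formula (ignoring rain, foliage, and blockage) at a few carrier frequencies; the 100 m distance is an arbitrary assumption used only for illustration.

```python
import math

C = 3e8  # speed of light, m/s

def usable_bandwidth(carrier_hz, fraction=0.05):
    """Rule-of-thumb usable bandwidth as ~5% of the carrier frequency."""
    return fraction * carrier_hz

def free_space_path_loss_db(distance_m, carrier_hz):
    """Friis free-space path loss in dB (no blockage, rain, or foliage)."""
    return (20 * math.log10(distance_m) + 20 * math.log10(carrier_hz)
            + 20 * math.log10(4 * math.pi / C))

for f in (2e9, 28e9, 60e9):
    bw_mhz = usable_bandwidth(f) / 1e6
    loss = free_space_path_loss_db(100, f)
    print(f"{f/1e9:>4.0f} GHz: ~{bw_mhz:6.0f} MHz bandwidth, {loss:5.1f} dB loss at 100 m")
```

The output shows both sides of the trade-off: moving from 2 GHz to 28 GHz multiplies the rule-of-thumb bandwidth by 14 while adding roughly 23 dB of free-space path loss, which is why mmWave needs dense small cells.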

Highlights of 5G mmWave are as follows:

Figure 4. Pictorial representation of millimeter wave.

  • The 5G mmWave offers three advantages: (1) mmWave is a new and much less used band; (2) mmWave signals carry more data than lower-frequency waves; and (3) mmWave can be incorporated with MIMO antennas, potentially offering capacity an order of magnitude higher than current communication systems.

In [ 11 ], the authors presented a survey of mmWave communications for 5G. The advantage of mmWave communications is adaptability, i.e., support for upgrading architectures and protocols, including integrated circuits and systems. The authors overviewed the present solutions and examined them with respect to effectiveness, performance, and complexity. They also discussed the open research issues of mmWave communications in 5G concerning software-defined network (SDN) architecture, network state information, efficient regulation techniques, and heterogeneous systems.

In [ 61 ], the authors presented recent work by investigators in 5G, discussing the design issues and demands of mmWave 5G antennas for cellular handsets. They then designed a compact, low-profile 60 GHz antenna array containing 3D planar mesh-grid antenna elements. As a future prospect, a framework is designed in which these antenna components operate in cellular handsets for mmWave 5G smartphones. In addition, they cross-checked the mesh-grid antenna array with a polarized beam against upcoming hardware challenges.

In [ 12 ], the authors considered the suitability of the mmWave band for 5G cellular systems. They suggested a resource allocation system for concurrent D2D communications in mmWave 5G cellular systems that improves network efficiency and maintains network connectivity. This research article can serve as guidance for simulating D2D communications in mmWave 5G cellular systems. Massive numbers of mmWave BSs may be deployed to obtain a high delivery rate and aggregate efficiency; consequently, wireless users may hand off frequently between mmWave base stations, which creates the need to search for the neighbor with better network connectivity.

In [ 62 ], the authors provided a brief description of the cellular spectrum, which ranges from 1 GHz to 3 GHz and is very crowded. In addition, they presented various noteworthy factors for setting up mmWave communications in 5G, namely channel characteristics regarding mmWave signal attenuation due to free-space propagation, atmospheric gases, and rain. Hybrid beamforming architecture for the mmWave technique is also analyzed, and they suggested methods for handling the blockage effect in mmWave communications due to penetration loss. Finally, the authors studied the design of mmWave transmission with small beams in nonorthogonal device-to-device communication.

This section has covered various works on 5G mmWave technology. Table 7 shows how different authors worked on improving various parameters, i.e., transmission rate, coverage, and cost, with 5G mmWave technology.

Summary of existing mmWave-based approaches in 5G technology.

4.4. 5G IoT Based Approaches

The 5G mobile network plays a big role in developing the Internet of Things (IoT). IoT will connect many things to the internet, such as appliances, sensors, devices, objects, and applications, and these applications will collect large amounts of data from different devices and sensors. 5G will provide very high-speed internet connectivity for data collection, transmission, control, and processing. 5G is a flexible network with unused spectrum availability, and it offers very low-cost deployment, which is why it is the most efficient technology for IoT [ 63 ]. 5G benefits IoT in many areas, for example:

Smart homes: Smart home appliances and products are in demand these days. The 5G network makes smart homes more practical, as it offers high-speed connectivity and monitoring of smart appliances. Smart home appliances can easily be accessed and configured from remote locations using the 5G network, as it offers very high-speed, low-latency communication.

Smart cities: The 5G wireless network also helps in developing smart city applications such as automatic traffic management, weather updates, local area broadcasting, energy saving, efficient power supply, smart lighting systems, water resource management, crowd management, and emergency control.

Industrial IoT: 5G wireless technology will provide many features for future industries, such as safety, process tracking, smart packing, shipping, energy efficiency, automation of equipment, predictive maintenance, and logistics. 5G smart sensor technology also offers smarter, safer, cost-effective, and energy-saving operation for industrial IoT.

Smart farming: 5G technology will play a crucial role in agriculture and smart farming. 5G sensors and GPS technology will help farmers track live attacks on crops and manage them quickly. These smart sensors can also be used for irrigation control, pest control, insect control, and electricity control.

Autonomous driving: The 5G wireless network offers very low-latency, high-speed communication, which is very significant for autonomous driving; it means self-driving cars will soon become a reality with 5G wireless networks. Using 5G, autonomous cars can easily communicate with smart traffic signs, objects, and other vehicles on the road. 5G's low-latency feature makes self-driving more realistic, as every millisecond is important for autonomous vehicles and decisions are made in microseconds to avoid accidents [ 64 ].

Highlights of 5G IoT are as follows:

Figure 5. Pictorial representation of IoT with 5G.

  • 5G with IoT is a new feature of next-generation mobile communication, providing a high-speed internet connection between connected devices. 5G IoT also offers smart homes, smart devices, sensors, smart transportation systems, smart industries, etc., to make end-users' environments smarter.
  • IoT deals with devices connected through the internet. The IoT approach has driven research into wearables, smartphones, sensors, smart transportation systems, smart devices, washing machines, tablets, etc., with these diverse systems connected to a common interface with the intelligence to communicate.
  • Significant IoT applications include private healthcare systems, traffic management, industrial management, and the tactile internet.

Plenty of approaches have been devised to address the issues of IoT [ 14 , 65 , 66 ].

In [ 65 ], the paper focuses on 5G mobile systems in light of emerging trends and developing technologies, which result in exponential traffic growth in IoT. The authors surveyed the challenges and demands of deploying massive IoT applications, with the main focus on mobile networking. They reviewed the features of standard IoT infrastructure, along with cellular-based low-power wide-area (LPWA) technologies such as eMTC and extended coverage (EC)-GSM-IoT, as well as noncellular LPWA technologies such as SigFox and LoRa.

In [ 14 ], the authors presented how 5G technology copes with the various issues of IoT today and provided a brief review of existing and emerging 5G architectures. The survey indicates the role of 5G in the foundation of the IoT ecosystem. IoT and 5G can easily combine with improved wireless technologies to set up a shared ecosystem that can fulfill the current requirements of IoT devices. 5G can be transformative and will help expand the development of IoT devices. As the rollout of 5G unfolds, global associations will find it essential to set up cross-industry engagement in defining and enlarging the 5G system.

In [ 66 ], the authors introduced an IoT authentication scheme for the 5G network with greater reliability and dynamics. The scheme proposed a privacy-protected procedure for selecting slices and provided an additional fog node for proper data transmission and subscriber service types, along with service-oriented authentication and key agreement to maintain the secrecy and precision of users and the confidentiality of service factors. Users anonymously identify the IoT servers and establish a key channel for service accessibility and for data cached on local fog nodes and remote IoT servers. The authors performed a simulation to demonstrate the security and privacy preservation of the user over the network.

This section has covered various works on 5G IoT by multiple authors. Table 8 shows how different authors worked on improving numerous parameters, i.e., data rate, security requirements, and performance, with 5G IoT.

Summary of IoT-based approaches in 5G technology.

4.5. Machine Learning Techniques for 5G

Various machine learning (ML) techniques have been applied in 5G networks and mobile communication. They provide solutions to multiple complex problems that would otherwise require a lot of hand-tuning. ML techniques can be broadly classified as supervised, unsupervised, and reinforcement learning. Each learning technique, and where it impacts the 5G network, is discussed separately below.

In supervised learning, the user works with labeled data, and some 5G network problems can be categorized as classification or regression problems. Regression problems such as scheduling nodes in 5G and predicting energy availability can be handled using a Linear Regression (LR) algorithm, while Statistical Logistic Regression (SLR) is applied to accurately predict bandwidth and frequency allocation. Supervised classifiers are applied to predict network demand and allocate network resources based on connectivity performance, which determines the topology setup and bit rates. Support Vector Machine (SVM) and NN-based approximation algorithms are used for channel learning based on observable channel state information. Deep Neural Networks (DNN) are also employed to predict beamforming vectors at the BSs by taking mapping functions and uplink pilot signals into consideration.
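As a minimal illustration of the regression-style prediction mentioned above, the sketch below fits a scikit-learn linear regression to synthetic hourly cell-load data and predicts the next day; the dataset, features, and numbers are invented for illustration and are not taken from any surveyed paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic example: hourly traffic load (Mbps) of one cell over two weeks.
hours = np.arange(24 * 14)
load = 200 + 150 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 20, hours.size)

# Simple features: hour-of-day encoded as sine/cosine plus a weekend flag.
X = np.column_stack([
    np.sin(2 * np.pi * (hours % 24) / 24),
    np.cos(2 * np.pi * (hours % 24) / 24),
    ((hours // 24) % 7 >= 5).astype(float),
])

model = LinearRegression().fit(X[:-24], load[:-24])   # train on all but the last day
pred = model.predict(X[-24:])                         # predict the last day
print("mean absolute error (Mbps):", np.abs(pred - load[-24:]).mean().round(1))
```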

In unsupervised learning, the user works with unlabeled data, and various clustering techniques are applied to enhance network performance and uninterrupted connectivity. K-means clustering reduces data travel by grouping data-center content into clusters, and it optimizes handover estimation based on mobility patterns and the selection of relay nodes in V2V networks. Hierarchical clustering reduces network failure by detecting intrusions in the mobile wireless network, and unsupervised soft clustering helps reduce latency by clustering fog nodes. The nonparametric Bayesian unsupervised learning technique reduces network traffic by proactively serving users' requests and demands. Other unsupervised learning techniques, such as Adversarial Auto Encoders (AAE) and Affinity Propagation Clustering, detect irregular behavior in the wireless spectrum and manage resources for ultradense small cells, respectively.
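A small sketch of the clustering idea follows: K-means groups synthetic user locations into hotspots whose centroids could serve as candidate fog-node or edge-cache sites; the coordinates and cluster count are arbitrary assumptions, not results from the cited studies.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic user locations (x, y) in metres within one macro-cell area.
users = np.vstack([
    rng.normal((200, 300), 40, size=(150, 2)),
    rng.normal((700, 650), 60, size=(200, 2)),
    rng.normal((450, 100), 30, size=(100, 2)),
])

# Group users into k clusters; each centroid is a candidate fog/edge-cache site.
k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users)
for i, centre in enumerate(kmeans.cluster_centers_):
    print(f"cluster {i}: {np.sum(kmeans.labels_ == i)} users, candidate site ~{centre.round(1)}")
```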

In the case of an uncertain environment in the 5G wireless network, reinforcement learning (RL) techniques are employed to solve some problems. Actor-critic reinforcement learning is used for user scheduling and resource allocation in the network. The Markov decision process (MDP) and Partially Observable MDP (POMDP) are used for Quality of Experience (QoE)-based handover decision-making in HetNets, for controlling packet call admission in HetNets, and for the channel access process of secondary users in a Cognitive Radio Network (CRN). Deep RL is applied to decide the communication channel and mobility, and it speeds up the secondary user's learning rate using an antijamming strategy. Deep RL is also employed for various 5G network parameters such as resource allocation and security [ 67 ]. Table 9 shows state-of-the-art ML-based solutions for the 5G network.
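The reinforcement-learning loop can be illustrated with a deliberately simplified, stateless (bandit-style) Q-learning sketch that learns which of a few channels is least likely to suffer interference; the channel statistics, learning rate, and exploration rate are made-up values used only to show the update rule, not a method from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

n_channels = 4
# Unknown (to the learner) probability that each channel is interference-free.
p_clear = np.array([0.2, 0.5, 0.9, 0.6])

q = np.zeros(n_channels)      # Q-value per channel (stateless, bandit form)
alpha, epsilon = 0.1, 0.1     # learning rate and exploration rate

for step in range(5000):
    # Epsilon-greedy channel selection.
    a = rng.integers(n_channels) if rng.random() < epsilon else int(np.argmax(q))
    reward = 1.0 if rng.random() < p_clear[a] else 0.0   # 1 if transmission succeeds
    q[a] += alpha * (reward - q[a])                      # value update (no next state)

print("learned Q-values:", q.round(2), "-> best channel:", int(np.argmax(q)))
```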

The state-of-the-art ML-based solution for 5G network.

Highlights of machine learning techniques for 5G are as follows:

Figure 6. Pictorial representation of machine learning (ML) in 5G.

  • In ML, a model is defined that fulfills the desired requirements and produces the desired results; at a later stage, its accuracy is examined against the obtained results.
  • ML plays a vital role in 5G network analysis for threat detection, network load prediction, network arrangement, and network formation. It helps in searching for a better balance between power, antenna length, coverage area, and network density, matched against the spontaneous use of services by the universe of individual users and device types.

In [ 79 ], the authors first describe the demands on traditional authentication procedures and the benefits of intelligent authentication. The intelligent authentication method was established to improve security practice in 5G-and-beyond wireless communication systems. The machine learning paradigms for intelligent authentication were then organized into parametric and nonparametric methods, as well as supervised, unsupervised, and reinforcement learning approaches. As an outcome, machine learning techniques provide a new paradigm for authentication under diverse network conditions and unstable dynamics, adding prompt intelligence to security management to obtain cost-effective, more reliable, model-free, continuous, and situation-aware authentication.

In [ 68 ], the authors proposed a machine learning-based model to predict the traffic load at a particular location. They used a mobile network traffic dataset to train a model that can calculate the total number of user requests at a time, so that access and mobility management function (AMF) instances can be launched according to requirements; without predictions of user requests, performance automatically degrades because the AMF cannot handle all of these requests at once. Earlier, threshold-based techniques were used to predict the traffic load, but that approach took too much time; therefore, the authors proposed RNN-based ML to predict the traffic load, which gives efficient results.

In [ 15 ], the authors discussed the issues of network slice admission, resource allocation among subscribers, and how to maximize the profit of infrastructure providers. They proposed a network slice admission control algorithm based on an SMDP (a semi-Markov decision process) that guarantees the best acceptance policies and satisfiability for subscribers (tenants). They also suggested N3AC, a novel neural network-based algorithm that optimizes performance under various configurations and significantly outperforms simple, practical approaches.

This section has covered various works on machine learning for 5G by different authors. Table 10 shows the state-of-the-art work on improving various parameters such as energy efficiency, Quality of Service (QoS), and latency with ML in 5G.

The state-of-the-art ML-based approaches in 5G technology.

4.6. Optimization Techniques for 5G

Optimization techniques may be applied to tackle NP-complete or NP-hard problems in 5G technology. This section briefly describes various research works on 5G technology based on optimization techniques.

In [ 80 ], massive MIMO technology is used in the 5G mobile network to make it more flexible and scalable. However, implementing MIMO in 5G requires a significant number of radio frequency (RF) chains in the RF circuit, which increases the cost and energy consumption of the 5G network. This paper provides a solution that increases cost efficiency and energy efficiency for a 5G wireless communication network with many RF chains. The authors give an optimized, energy-efficient technique for a 5G mobile communication network based on MIMO antennas and mmWave technologies, proposing an Energy Efficient Hybrid Precoding (EEHP) algorithm to increase the energy efficiency of the 5G wireless network. The algorithm minimizes the cost of the RF circuit with a large number of RF chains.

In [ 16 ], the authors discussed the growing demand for energy efficiency in next-generation networks. Over the last decade, developments in wireless transmission have marked a shift toward pursuing green communication for next-generation systems. The importance of adopting the correct EE metric was also reviewed. Further, they worked through the different approaches that could be applied in the future to increase the network's energy efficiency and summarized previous work on enhancing the energy productivity of the network using these capabilities. A system design for EE improvement using relay selection was also characterized, along with an overview of distinct algorithms applied for EE in relay-based ecosystems.

In [ 81 ], the authors presented how an AI-based approach is used to set up Self-Organizing Network (SON) functionalities for radio access network (RAN) design and optimization. They used a machine learning approach to predict the results of 5G SON functionalities. First, the input was taken from various sources; then, prediction- and clustering-based machine learning models were applied to produce the results. Multiple AI-based techniques were used to extract the knowledge needed to execute SON functionalities smoothly. Based on the results, they tested how self-optimization, self-testing, and self-designing are done for SON. The authors also describe how the proposed mechanism is classified into different orders.

In [ 82 ], investigators examined the working of OFDM in various channel environments and the changes in frame duration of the 5G TDD frame design. Wider subcarrier spacing is beneficial for obtaining a short frame length with low control overhead. They provided various techniques to reduce the growing guard period (GP) and cyclic prefix (CP), such as full utilization of multiple subcarrier spacings, handling of the management and data parts of the frame at the receiver end, various uses of timing advance (TA), and full control of a flexible CP size.

This section has covered various works on 5G optimization by different authors. Table 11 shows how these authors worked on improving multiple parameters such as energy efficiency, power optimization, and latency with 5G optimization.

Summary of optimization-based approaches in 5G technology.

5. Description of Novel 5G Features over 4G

This section presents descriptions of various novel features of 5G, namely, the concept of small cell, beamforming, and MEC.

5.1. Small Cell

Small cells are low-powered cellular radio access nodes that operate over ranges from 10 meters to a few kilometers, and they play a very important role in the implementation of the 5G wireless network. Small cells are low-power base stations that cover small areas. They are broadly similar to the cells used in previous wireless networks, but they have the advantage of working at low power while still supporting high data rates. Small cells help roll out the 5G network with ultra-high-speed, low-latency communication, and in 5G they use new technologies such as MIMO, beamforming, and mmWave for high-speed data transmission. The design of small-cell hardware is very simple, so its implementation is easier and faster. Three types of small cell towers are available in the market: femtocells, picocells, and microcells, as shown in Table 12 [ 83 ].

Types of Small cells.

MmWave is a very high-band spectrum between 30 and 300 GHz. As it is a significantly less used spectrum, it provides very high-speed wireless communication and offers ultra-wide bandwidth for next-generation mobile networks. MmWave has many advantages but also some disadvantages: mmWave signals are very high-frequency signals, so they collide more with obstacles in the air, which causes them to lose energy quickly, and buildings and trees also block mmWave signals, so they cover a shorter distance. To resolve these issues, multiple small cell stations are installed to bridge the gap between the end-user and the base station [ 18 ]. A small cell covers a very short range, so small-cell placement depends on the population of a particular area; in a populated place, the distance between small cells generally varies from 10 to 90 meters. In the survey [ 20 ], various authors implemented small cells together with massive MIMO. They also reviewed multiple technologies used in 5G, such as beamforming, small cells, massive MIMO, NOMA, and device-to-device (D2D) communication, and discussed problems such as interference management, spectral efficiency, resource management, energy efficiency, and backhauling, along with the issues that arise while implementing small cells with various 5G technologies. As shown in Figure 7, an mmWave signal traveling a longer range is easily blocked by obstacles (Figure 7a); this is one of the key concerns of millimeter-wave signal transmission. To solve this issue, small cells can be placed at short distances so that signals are transmitted easily, as shown in Figure 7b.

Figure 7. Pictorial representation of communication with and without small cells.

5.2. Beamforming

Beamforming is a key wireless network technology that transmits signals in a directional manner. 5G beamforming creates a strong wireless connection toward the receiving end. In conventional systems, when small cells do not use beamforming, steering signals to particular areas is quite difficult; with beamforming, small cells can transmit signals in a particular direction toward a device such as a mobile phone, laptop, autonomous vehicle, or IoT device. Beamforming improves the efficiency and saves the energy of the 5G network. Beamforming is broadly divided into three categories: digital beamforming, analog beamforming, and hybrid beamforming. Digital beamforming: multiuser MIMO is equivalent to digital beamforming, which is mainly used in LTE Advanced Pro and in 5G NR. In digital beamforming, the same frequency or time resources can be used to transmit data to multiple users simultaneously, which improves the cell capacity of wireless networks. Analog beamforming: in the mmWave frequency range of 5G NR, analog beamforming is a very important approach that improves coverage. With digital beamforming alone, high path loss is likely in mmWave because only one beam per set of antennas is formed, whereas analog beamforming mitigates the high path loss in mmWave. Hybrid beamforming: hybrid beamforming is a combination of analog and digital beamforming and will be used in the implementation of mmWave in the 5G network [ 84 ].
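To illustrate how beamforming steers energy in a chosen direction, the sketch below computes the array factor of a uniform linear array with half-wavelength spacing steered to 30 degrees; the element count and steering angle are arbitrary illustrative choices, not parameters from the surveyed systems.

```python
import numpy as np

def array_factor_db(n_elements=16, spacing_wavelengths=0.5, steer_deg=30.0):
    """Normalized array factor (dB) of a uniform linear array steered to steer_deg."""
    angles = np.linspace(-90, 90, 721)            # observation angles in degrees
    k_d = 2 * np.pi * spacing_wavelengths         # phase per element per sin(theta)
    n = np.arange(n_elements)
    # Steering weights: conjugate phases that align the array toward steer_deg.
    weights = np.exp(-1j * k_d * n * np.sin(np.deg2rad(steer_deg)))
    af = np.array([np.abs(weights @ np.exp(1j * k_d * n * np.sin(np.deg2rad(a))))
                   for a in angles]) / n_elements
    return angles, 20 * np.log10(np.maximum(af, 1e-6))

angles, af_db = array_factor_db()
peak = angles[np.argmax(af_db)]
print(f"main beam points at ~{peak:.1f} degrees (steered to 30.0)")
```

The printed peak coincides with the steering angle, and increasing the element count narrows the main lobe, which is the directional, laser-like behavior described below.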

Wireless signals in the 4G network spread over large areas and are omnidirectional by nature; thus, energy depletes rapidly, and users accessing these signals also face interference problems. The beamforming technique is used in the 5G network to resolve this issue. With beamforming, signals are directional and move like a laser beam from the base station to the user, as if traveling through an invisible cable. Beamforming helps achieve faster data rates; because the signals are directional, it leads to less energy consumption and less interference. In [ 21 ], investigators developed techniques that reduce interference and increase the system efficiency of the 5G mobile network. In this survey article, the authors covered various challenges faced while designing an optimized beamforming algorithm, focusing mainly on design parameters such as performance evaluation and power consumption. In addition, they described various issues related to beamforming, such as CSI, computational complexity, and antenna correlation, and covered research on how beamforming helps implement MIMO in next-generation mobile networks [ 85 ]. Figure 8 shows a pictorial representation of communication with and without beamforming.

Figure 8. Pictorial representation of communication with and without using beamforming.

5.3. Mobile Edge Computing

Mobile Edge Computing (MEC) [ 24 ] is an extended version of cloud computing that brings cloud resources closer to the end-user. When we talk about computing, the first thing that comes to mind is cloud computing, a well-known technology that offers many services to end-users. Still, cloud computing has drawbacks: the services available in the cloud are far from end-users, which creates latency, and the cloud user needs to download a complete application before use, which also burdens the device [ 86 ]. MEC creates an edge between the end-user and the cloud server, bringing cloud computing closer to the end-user. All the services, such as video conferencing and virtual software, are then offered by this edge, which improves cloud computing performance. Another essential feature of MEC is that an application can be split into two parts, the first available at the cloud server and the second on the user's device; the user therefore does not need to download the complete application, which improves the performance of the end-user's device. Furthermore, MEC provides cloud services with very low latency and less bandwidth. In [ 23 , 87 ], the authors' investigations showed that successful deployment of MEC in the 5G network increases the overall performance of the 5G architecture. A graphical comparison between cloud computing and mobile edge computing is presented in Figure 9 .
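A toy latency model can show why moving computation from a distant cloud to a nearby edge server reduces end-to-end delay, as argued above; every number below (task size, CPU cycles, link rate, round-trip times, server speeds) is an assumption chosen only for illustration.

```python
def task_latency_ms(task_bits, cpu_cycles, link_rate_bps, rtt_ms, server_cycles_per_s):
    """End-to-end latency: upload the task, process it remotely; the small result
    download is ignored for simplicity."""
    upload_ms = task_bits / link_rate_bps * 1000
    compute_ms = cpu_cycles / server_cycles_per_s * 1000
    return rtt_ms + upload_ms + compute_ms

task_bits, cpu_cycles = 2e6, 5e8          # a 250 kB frame needing 0.5 Gcycles of work

local = cpu_cycles / 1e9 * 1000           # 1 GHz handset CPU, no network involved
edge = task_latency_ms(task_bits, cpu_cycles, 100e6, 2, 10e9)    # nearby MEC: 2 ms RTT
cloud = task_latency_ms(task_bits, cpu_cycles, 100e6, 60, 10e9)  # remote cloud: 60 ms RTT

print(f"local: {local:.0f} ms, edge: {edge:.0f} ms, cloud: {cloud:.0f} ms")
```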

Figure 9. Pictorial representation of cloud computing vs. mobile edge computing.

6. 5G Security

Security is a key feature in the telecommunication network industry and is necessary at various layers to handle 5G network security in applications such as IoT, digital forensics, and intrusion detection systems (IDS) [ 88 , 89 ]. The authors of [ 90 ] discussed the background of 5G and its security concerns, challenges, and future directions; they also introduced blockchain technology, which can be incorporated with IoT to overcome its challenges. The paper aims to create a security framework that can be incorporated with the LTE-Advanced network and is effective in terms of cost, deployment, and QoS. In [ 91 ], the authors surveyed various forms of attack, security challenges, and security solutions with respect to the affected technologies, such as SDN, network function virtualization (NFV), mobile clouds, and MEC, as well as 5G security standardization efforts, i.e., 3GPP, 5GPPP, the Internet Engineering Task Force (IETF), Next Generation Mobile Networks (NGMN), and the European Telecommunications Standards Institute (ETSI). In [ 92 ], the authors elaborated on various technological aspects, security issues, and their existing solutions, and also described new emerging technological paradigms for 5G security, such as blockchain, quantum cryptography, AI, SDN, CPS, MEC, and D2D; they aim to create new security frameworks for 5G for the further use of this technology in the development of smart cities, transportation, and healthcare. In [ 93 ], the authors analyzed threats and dark threats, security aspects of SDN and NFV, Cisco's 5G security vision, and new security innovations with respect to the evolving 5G architectures [ 94 ].

Authentication: The identification of the user in any network is made with the help of authentication. The different mobile network generations from 1G to 5G have used multiple techniques for user authentication. 5G utilizes the 5G Authentication and Key Agreement (5G-AKA) method, which relies on a cryptographic key shared between the user equipment (UE) and its home network and establishes mutual authentication between the two [ 95 ].
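The sketch below is only a generic challenge-response illustration of mutual authentication from a pre-shared key using HMAC from the Python standard library; it is not the actual 5G-AKA message flow, key hierarchy, or algorithms, which are specified by 3GPP.

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # long-term secret provisioned in both device and network

def prove(key, challenge):
    """Return an HMAC over the challenge as proof of knowing the shared key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The network challenges the device...
net_challenge = os.urandom(16)
ue_response = prove(SHARED_KEY, net_challenge)
network_accepts_ue = hmac.compare_digest(ue_response, prove(SHARED_KEY, net_challenge))

# ...and the device challenges the network, giving mutual authentication.
ue_challenge = os.urandom(16)
net_response = prove(SHARED_KEY, ue_challenge)
ue_accepts_network = hmac.compare_digest(net_response, prove(SHARED_KEY, ue_challenge))

print("mutual authentication succeeded:", network_accepts_ue and ue_accepts_network)
```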

Access Control: To restrict accessibility in the network, 5G supports access control mechanisms, controlled by network providers, that provide a secure and safe environment for users. 5G uses simple public key infrastructure (PKI) certificates for authenticating access in the 5G network. PKI puts forward a secure and dynamic environment for the 5G network, and the simple PKI technique provides flexibility, allowing the network to scale up and down according to user traffic [ 96 , 97 ].

Communication Security: 5G aims to provide high data bandwidth, low latency, and better signal coverage, so secure communication is a key concern in the 5G network. UEs, mobile operators, the core network, and access networks are the main focal points for attackers in 5G communication. Some common attacks on communication at various segments are botnets, message insertion, micro-cell attacks, distributed denial of service (DDoS), and transport layer security (TLS)/secure sockets layer (SSL) attacks [ 98 , 99 ].

Encryption: The confidentiality of the user and the network is protected using encryption techniques. As 5G offers multiple services, end-to-end (E2E) encryption is the most suitable technique applied over various segments of the 5G network. Encryption forbids unauthorized access to the network and maintains the data privacy of the user. To encrypt the radio traffic at the Packet Data Convergence Protocol (PDCP) layer, three 128-bit keys are applied at the user plane, nonaccess stratum (NAS), and access stratum (AS) [ 100 ].

7. Summary of 5G Technology Based on Above-Stated Challenges

In this section, various issues addressed by investigators in 5G technologies are presented in Table 13 . Different parameters are considered, such as throughput, latency, energy efficiency, data rate, spectral efficiency, fairness and computing capacity, transmission rate, coverage, cost, security requirements, performance, QoS, and power optimization, indexed from R1 to R13.

Summary of 5G technology for the above-stated challenges (R1: Throughput, R2: Latency, R3: Energy Efficiency, R4: Data Rate, R5: Spectral Efficiency, R6: Fairness & Computing Capacity, R7: Transmission Rate, R8: Coverage, R9: Cost, R10: Security Requirement, R11: Performance, R12: Quality of Service (QoS), R13: Power Optimization).

8. Conclusions

This survey article illustrates the emergence of 5G, its evolution from 1G to 5G mobile networks, its applications, the different research groups and their work, and the key features of 5G. 5G is not just a mobile broadband network; unlike all previous mobile network generations, it offers services such as IoT, V2X, and Industry 4.0. This paper covers a detailed survey from multiple authors on the different technologies in 5G, such as massive MIMO, non-orthogonal multiple access (NOMA), millimeter wave, small cells, mobile edge computing (MEC), beamforming, optimization, and machine learning in 5G. Each section is followed by a tabular comparison covering the state of the research on these technologies. This survey also shows the importance of these newly added technologies in building a flexible, scalable, and reliable 5G network.

9. Future Findings

This article presents a detailed survey of the 5G mobile network and its features. These features make 5G more reliable, scalable, and efficient at affordable rates. As discussed in the sections above, numerous technical challenges arise when implementing those features or providing services over a 5G mobile network. For future research directions, the research community can work to overcome these challenges while implementing these technologies (MIMO, NOMA, small cells, mmWave, beamforming, MEC) over a 5G network. 5G communication will bring new improvements over existing systems; still, current solutions will not be able to fulfill the requirements of autonomous systems and future intelligent engineering a decade from now. There is no question that 5G will provide better QoS and new features compared with 4G, but there is always room for improvement: with the considerable growth of centralized data and the autonomous industry, 5G wireless networks will not be capable of fulfilling these demands in the future. We therefore need to move to a new wireless network technology, named 6G. The 6G wireless network will take mobile generations to new heights, as it includes (i) massive human-to-machine communication, (ii) ubiquitous connectivity between local devices and cloud servers, (iii) data fusion technology for various mixed-reality experiences and multiverse maps, and (iv) a focus on sensing and actuation to control networks across the entire world. The 6G mobile network will offer new services together with other technologies, including 3D mapping, mixed-reality devices, smart homes, smart wearables, autonomous vehicles, artificial intelligence, and sensing. It is expected that 6G will provide ultra-long-range communication with a very low latency of 1 ms. The per-user bit rate in a 6G wireless network will be approximately 1 Tbps, and it will provide wireless communication that is 1000 times faster than 5G networks.

Acknowledgments

Author contributions.

Conceptualization: R.D., I.Y., G.C., P.L.; data gathering: R.D., G.C., P.L., I.Y.; funding acquisition: I.Y.; investigation: I.Y., G.C., G.P.; methodology: R.D., I.Y., G.C., P.L., G.P.; survey: I.Y., G.C., P.L., G.P., R.D.; supervision: G.C., I.Y., G.P.; validation: I.Y., G.P.; visualization: R.D., I.Y., G.C., P.L.; writing, original draft: R.D., I.Y., G.C., P.L., G.P.; writing, review, and editing: I.Y., G.C., G.P. All authors have read and agreed to the published version of the manuscript.

This paper was supported by Soonchunhyang University.

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


IEEE Enacts Ban on 'Lenna' Image from Playboy in Research Papers to Foster Inclusivity

The IEEE Computer Society announced to its members on Wednesday that, effective April 1, it will no longer accept papers containing the commonly used image of Lena Forsén, a Playboy model from 1972. Known as the “Lenna image,” this image has been utilized in image processing research since 1973 but has faced criticism for contributing to a sense of exclusion among women in the field.

In an email sent to members, Terry Benzel, the Vice President of Technical & Conference Activities at the IEEE Computer Society, stated, “IEEE’s diversity statement and supporting policies such as the IEEE Code of Ethics speak to IEEE’s commitment to promoting an inclusive and equitable culture that welcomes all. In alignment with this culture and with respect to the wishes of the subject of the image, Lena Forsén, IEEE will no longer accept submitted papers which include the ‘Lena image.'”

Originally appearing as a 512×512-pixel test image in the December 1972 issue of Playboy Magazine, the uncropped version served as the centerfold picture. The use of the Lenna image in image processing began around June or July 1973, when Alexander Sawchuck, an assistant professor, and a graduate student at the University of Southern California Signal and Image Processing Institute scanned a square portion of the centerfold image using a primitive drum scanner, excluding nudity. This scan was initially done for a colleague’s conference paper, subsequently leading to widespread adoption of the image by others in the field.

Throughout the 1970s, 80s, and 90s, the image gained traction in various papers, drawing the attention of Playboy, though the company chose to overlook copyright violations. In 1997, Playboy facilitated the location of Forsén, who made an appearance at the 50th Annual Conference of the Society for Imaging Science and Technology, signing autographs for admirers. “They must be so tired of me … looking at the same picture for all these years!” she quipped at the time. Playboy’s Vice President of new media, Eileen Kent, told Wired, “We decided we should exploit this, because it is a phenomenon.”

The image, featuring Forsén’s face and bare shoulder beneath a hat sporting a purple feather, reportedly served as an excellent test subject for early digital image technology due to its high contrast and intricate detail. However, it also presented a sexually suggestive portrayal of an attractive woman. Its persistent use by men in the computing domain has faced critique over the years, particularly from female scientists and engineers, who argue that the image, especially its association with the Playboy brand, objectifies women and fosters an academic environment where they feel unwelcome.

In response to longstanding criticism, dating back to at least 1996, the journal Nature took the step to prohibit the use of the Lena image in paper submissions in 2018.

According to the comp.compression Usenet newsgroup FAQ document, in 1988, a Swedish publication approached Forsén to inquire about her thoughts on her image being used in computer science, to which she reportedly reacted with amusement. However, in a 2019 Wired article by Linda Kinstler, Forsén expressed no resentment toward the image but voiced regret over not being fairly compensated initially. “I’m really proud of that picture,” she told Kinstler at the time.

However, it appears Forsén has since changed her stance. In 2019, Creatable and Code Like a Girl produced an advertising documentary titled “Losing Lena” as part of a campaign aimed at eliminating the use of the Lena image in technology and image processing fields. In a press release for the campaign and film, Forsén is quoted as saying, “I retired from modelling a long time ago. It’s time I retired from tech, too. We can make a simple change today that creates a lasting change for tomorrow. Let’s commit to losing me.”

Relevant articles:

– Playboy image from 1972 gets ban from IEEE computer journals

– Journal publisher bans Playboy centerfold Lena’s image from research papers, NewsBytes, 29 March 2024

– It’s time to retire Lena from computer science, Pursuit, 13 December 2019

– The Playboy Centerfold That Helped Create the JPEG, The Atlantic, 9 February 2016


Open access | Published: 26 March 2024

Predicting and improving complex beer flavor through machine learning

  • Michiel Schreurs   ORCID: orcid.org/0000-0002-9449-5619 1 , 2 , 3   na1 ,
  • Supinya Piampongsant 1 , 2 , 3   na1 ,
  • Miguel Roncoroni   ORCID: orcid.org/0000-0001-7461-1427 1 , 2 , 3   na1 ,
  • Lloyd Cool   ORCID: orcid.org/0000-0001-9936-3124 1 , 2 , 3 , 4 ,
  • Beatriz Herrera-Malaver   ORCID: orcid.org/0000-0002-5096-9974 1 , 2 , 3 ,
  • Christophe Vanderaa   ORCID: orcid.org/0000-0001-7443-5427 4 ,
  • Florian A. Theßeling 1 , 2 , 3 ,
  • Łukasz Kreft   ORCID: orcid.org/0000-0001-7620-4657 5 ,
  • Alexander Botzki   ORCID: orcid.org/0000-0001-6691-4233 5 ,
  • Philippe Malcorps 6 ,
  • Luk Daenen 6 ,
  • Tom Wenseleers   ORCID: orcid.org/0000-0002-1434-861X 4 &
  • Kevin J. Verstrepen   ORCID: orcid.org/0000-0002-3077-6219 1 , 2 , 3  

Nature Communications volume 15, Article number: 2368 (2024)


  • Chemical engineering
  • Gas chromatography
  • Machine learning
  • Metabolomics
  • Taste receptors

The perception and appreciation of food flavor depends on many interacting chemical compounds and external factors, and therefore proves challenging to understand and predict. Here, we combine extensive chemical and sensory analyses of 250 different beers to train machine learning models that allow predicting flavor and consumer appreciation. For each beer, we measure over 200 chemical properties, perform quantitative descriptive sensory analysis with a trained tasting panel and map data from over 180,000 consumer reviews to train 10 different machine learning models. The best-performing algorithm, Gradient Boosting, yields models that significantly outperform predictions based on conventional statistics and accurately predict complex food features and consumer appreciation from chemical profiles. Model dissection allows identifying specific and unexpected compounds as drivers of beer flavor and appreciation. Adding these compounds results in variants of commercial alcoholic and non-alcoholic beers with improved consumer appreciation. Together, our study reveals how big data and machine learning uncover complex links between food chemistry, flavor and consumer perception, and lays the foundation to develop novel, tailored foods with superior flavors.


Introduction

Predicting and understanding food perception and appreciation is one of the major challenges in food science. Accurate modeling of food flavor and appreciation could yield important opportunities for both producers and consumers, including quality control, product fingerprinting, counterfeit detection, spoilage detection, and the development of new products and product combinations (food pairing) 1 , 2 , 3 , 4 , 5 , 6 . Accurate models for flavor and consumer appreciation would contribute greatly to our scientific understanding of how humans perceive and appreciate flavor. Moreover, accurate predictive models would also facilitate and standardize existing food assessment methods and could supplement or replace assessments by trained and consumer tasting panels, which are variable, expensive and time-consuming 7 , 8 , 9 . Lastly, apart from providing objective, quantitative, accurate and contextual information that can help producers, models can also guide consumers in understanding their personal preferences 10 .

Despite the myriad of applications, predicting food flavor and appreciation from its chemical properties remains a largely elusive goal in sensory science, especially for complex food and beverages 11 , 12 . A key obstacle is the immense number of flavor-active chemicals underlying food flavor. Flavor compounds can vary widely in chemical structure and concentration, making them technically challenging and labor-intensive to quantify, even in the face of innovations in metabolomics, such as non-targeted metabolic fingerprinting 13 , 14 . Moreover, sensory analysis is perhaps even more complicated. Flavor perception is highly complex, resulting from hundreds of different molecules interacting at the physiochemical and sensorial level. Sensory perception is often non-linear, characterized by complex and concentration-dependent synergistic and antagonistic effects 15 , 16 , 17 , 18 , 19 , 20 , 21 that are further convoluted by the genetics, environment, culture and psychology of consumers 22 , 23 , 24 . Perceived flavor is therefore difficult to measure, with problems of sensitivity, accuracy, and reproducibility that can only be resolved by gathering sufficiently large datasets 25 . Trained tasting panels are considered the prime source of quality sensory data, but require meticulous training, are low throughput and high cost. Public databases containing consumer reviews of food products could provide a valuable alternative, especially for studying appreciation scores, which do not require formal training 25 . Public databases offer the advantage of amassing large amounts of data, increasing the statistical power to identify potential drivers of appreciation. However, public datasets suffer from biases, including a bias in the volunteers that contribute to the database, as well as confounding factors such as price, cult status and psychological conformity towards previous ratings of the product.

Classical multivariate statistics and machine learning methods have been used to predict flavor of specific compounds by, for example, linking structural properties of a compound to its potential biological activities or linking concentrations of specific compounds to sensory profiles 1 , 26 . Importantly, most previous studies focused on predicting organoleptic properties of single compounds (often based on their chemical structure) 27 , 28 , 29 , 30 , 31 , 32 , 33 , thus ignoring the fact that these compounds are present in a complex matrix in food or beverages and excluding complex interactions between compounds. Moreover, the classical statistics commonly used in sensory science 34 , 35 , 36 , 37 , 38 , 39 require a large sample size and sufficient variance amongst predictors to create accurate models. They are not fit for studying an extensive set of hundreds of interacting flavor compounds, since they are sensitive to outliers, have a high tendency to overfit and are less suited for non-linear and discontinuous relationships 40 .

In this study, we combine extensive chemical analyses and sensory data of a set of different commercial beers with machine learning approaches to develop models that predict taste, smell, mouthfeel and appreciation from compound concentrations. Beer is particularly suited to model the relationship between chemistry, flavor and appreciation. First, beer is a complex product, consisting of thousands of flavor compounds that partake in complex sensory interactions 41 , 42 , 43 . This chemical diversity arises from the raw materials (malt, yeast, hops, water and spices) and biochemical conversions during the brewing process (kilning, mashing, boiling, fermentation, maturation and aging) 44 , 45 . Second, the advent of the internet saw beer consumers embrace online review platforms, such as RateBeer (ZX Ventures, Anheuser-Busch InBev SA/NV) and BeerAdvocate (Next Glass, inc.). In this way, the beer community provides massive data sets of beer flavor and appreciation scores, creating extraordinarily large sensory databases to complement the analyses of our professional sensory panel. Specifically, we characterize over 200 chemical properties of 250 commercial beers, spread across 22 beer styles, and link these to the descriptive sensory profiling data of a 16-person in-house trained tasting panel and data acquired from over 180,000 public consumer reviews. These unique and extensive datasets enable us to train a suite of machine learning models to predict flavor and appreciation from a beer’s chemical profile. Dissection of the best-performing models allows us to pinpoint specific compounds as potential drivers of beer flavor and appreciation. Follow-up experiments confirm the importance of these compounds and ultimately allow us to significantly improve the flavor and appreciation of selected commercial beers. Together, our study represents a significant step towards understanding complex flavors and reinforces the value of machine learning to develop and refine complex foods. In this way, it represents a stepping stone for further computer-aided food engineering applications 46 .

To generate a comprehensive dataset on beer flavor, we selected 250 commercial Belgian beers across 22 different beer styles (Supplementary Fig.  S1 ). Beers with ≤ 4.2% alcohol by volume (ABV) were classified as non-alcoholic and low-alcoholic. Blonds and Tripels constitute a significant portion of the dataset (12.4% and 11.2%, respectively) reflecting their presence on the Belgian beer market and the heterogeneity of beers within these styles. By contrast, lager beers are less diverse and dominated by a handful of brands. Rare styles such as Brut or Faro make up only a small fraction of the dataset (2% and 1%, respectively) because fewer of these beers are produced and because they are dominated by distinct characteristics in terms of flavor and chemical composition.

Extensive analysis identifies relationships between chemical compounds in beer

For each beer, we measured 226 different chemical properties, including common brewing parameters such as alcohol content, iso-alpha acids, pH, sugar concentration 47 , and over 200 flavor compounds (Methods, Supplementary Table  S1 ). A large portion (37.2%) are terpenoids arising from hopping, responsible for herbal and fruity flavors 16 , 48 . A second major category are yeast metabolites, such as esters and alcohols, that result in fruity and solvent notes 48 , 49 , 50 . Other measured compounds are primarily derived from malt, or other microbes such as non- Saccharomyces yeasts and bacteria (‘wild flora’). Compounds that arise from spices or staling are labeled under ‘Others’. Five attributes (caloric value, total acids and total ester, hop aroma and sulfur compounds) are calculated from multiple individually measured compounds.

As a first step in identifying relationships between chemical properties, we determined correlations between the concentrations of the compounds (Fig.  1 , upper panel, Supplementary Data  1 and 2 , and Supplementary Fig.  S2 . For the sake of clarity, only a subset of the measured compounds is shown in Fig.  1 ). Compounds of the same origin typically show a positive correlation, while absence of correlation hints at parameters varying independently. For example, the hop aroma compounds citronellol, and alpha-terpineol show moderate correlations with each other (Spearman’s rho=0.39 and 0.57), but not with the bittering hop component iso-alpha acids (Spearman’s rho=0.16 and −0.07). This illustrates how brewers can independently modify hop aroma and bitterness by selecting hop varieties and dosage time. If hops are added early in the boiling phase, chemical conversions increase bitterness while aromas evaporate, conversely, late addition of hops preserves aroma but limits bitterness 51 . Similarly, hop-derived iso-alpha acids show a strong anti-correlation with lactic acid and acetic acid, likely reflecting growth inhibition of lactic acid and acetic acid bacteria, or the consequent use of fewer hops in sour beer styles, such as West Flanders ales and Fruit beers, that rely on these bacteria for their distinct flavors 52 . Finally, yeast-derived esters (ethyl acetate, ethyl decanoate, ethyl hexanoate, ethyl octanoate) and alcohols (ethanol, isoamyl alcohol, isobutanol, and glycerol), correlate with Spearman coefficients above 0.5, suggesting that these secondary metabolites are correlated with the yeast genetic background and/or fermentation parameters and may be difficult to influence individually, although the choice of yeast strain may offer some control 53 .

figure 1

Spearman rank correlations are shown. Descriptors are grouped according to their origin (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)), and sensory aspect (aroma, taste, palate, and overall appreciation). Please note that for the chemical compounds, for the sake of clarity, only a subset of the total number of measured compounds is shown, with an emphasis on the key compounds for each source. For more details, see the main text and Methods section. Chemical data can be found in Supplementary Data  1 , correlations between all chemical compounds are depicted in Supplementary Fig.  S2 and correlation values can be found in Supplementary Data  2 . See Supplementary Data  4 for sensory panel assessments and Supplementary Data  5 for correlation values between all sensory descriptors.
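For readers who want to reproduce this kind of correlation screen, a minimal sketch is given below, assuming the chemical measurements are available as a table with one row per beer; the file name and compound column names are hypothetical, and the full analysis covers all 226 measured parameters.

```python
# Sketch: pairwise Spearman correlations between compound concentrations.
# The file name and column names are hypothetical; the paper's Fig. 1 covers
# a much larger set of the 226 measured parameters.
import pandas as pd

chem = pd.read_csv("beer_chemistry.csv", index_col="beer_id")  # hypothetical file
subset = ["citronellol", "alpha_terpineol", "iso_alpha_acids",
          "lactic_acid", "acetic_acid", "ethyl_acetate", "ethanol", "glycerol"]
spearman = chem[subset].corr(method="spearman")
print(spearman.round(2))  # compare with the pairwise values quoted in the text
```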

Interestingly, different beer styles show distinct patterns for some flavor compounds (Supplementary Fig.  S3 ). These observations agree with expectations for key beer styles, and serve as a control for our measurements. For instance, Stouts generally show high values for color (darker), while hoppy beers contain elevated levels of iso-alpha acids, compounds associated with bitter hop taste. Acetic and lactic acid are not prevalent in most beers, with notable exceptions such as Kriek, Lambic, Faro, West Flanders ales and Flanders Old Brown, which use acid-producing bacteria ( Lactobacillus and Pediococcus ) or unconventional yeast ( Brettanomyces ) 54 , 55 . Glycerol, ethanol and esters show similar distributions across all beer styles, reflecting their common origin as products of yeast metabolism during fermentation 45 , 53 . Finally, low/no-alcohol beers contain low concentrations of glycerol and esters. This is in line with the production process for most of the low/no-alcohol beers in our dataset, which are produced through limiting fermentation or by stripping away alcohol via evaporation or dialysis, with both methods having the unintended side-effect of reducing the amount of flavor compounds in the final beer 56 , 57 .

Besides expected associations, our data also reveals less trivial associations between beer styles and specific parameters. For example, geraniol and citronellol, two monoterpenoids responsible for citrus, floral and rose flavors and characteristic of Citra hops, are found in relatively high amounts in Christmas, Saison, and Brett/co-fermented beers, where they may originate from terpenoid-rich spices such as coriander seeds instead of hops 58 .

Tasting panel assessments reveal sensorial relationships in beer

To assess the sensory profile of each beer, a trained tasting panel evaluated each of the 250 beers for 50 sensory attributes, including different hop, malt and yeast flavors, off-flavors and spices. Panelists used a tasting sheet (Supplementary Data  3 ) to score the different attributes. Panel consistency was evaluated by repeating 12 samples across different sessions and performing ANOVA. In 95% of cases no significant difference was found across sessions ( p  > 0.05), indicating good panel consistency (Supplementary Table  S2 ).
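A minimal sketch of such a consistency check is shown below, assuming the panel scores are stored in long format with one row per beer, session, and attribute; the file name, column names, and repeated-sample IDs are placeholders.

```python
# Sketch of the panel-consistency check: for each repeated sample and attribute,
# a one-way ANOVA tests whether scores differ across sessions (p > 0.05 taken as
# consistent). File name, columns and repeated-sample IDs are hypothetical.
import pandas as pd
from scipy.stats import f_oneway

panel = pd.read_csv("panel_scores.csv")  # columns: beer_id, session, attribute, score
REPEATED_IDS = [1, 17, 42]               # placeholder IDs of the 12 repeated samples

repeated = panel[panel["beer_id"].isin(REPEATED_IDS)]
for (beer, attr), grp in repeated.groupby(["beer_id", "attribute"]):
    per_session = [g["score"].to_numpy() for _, g in grp.groupby("session")]
    if len(per_session) > 1:
        _, p = f_oneway(*per_session)
        print(f"beer {beer}, {attr}: p = {p:.3f} ->",
              "consistent" if p > 0.05 else "inconsistent")
```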

Aroma and taste perception reported by the trained panel are often linked (Fig.  1 , bottom left panel and Supplementary Data  4 and 5 ), with high correlations between hops aroma and taste (Spearman’s rho=0.83). Bitter taste was found to correlate with hop aroma and taste in general (Spearman’s rho=0.80 and 0.69), and particularly with “grassy” noble hops (Spearman’s rho=0.75). Barnyard flavor, most often associated with sour beers, is identified together with stale hops (Spearman’s rho=0.97) that are used in these beers. Lactic and acetic acid, which often co-occur, are correlated (Spearman’s rho=0.66). Interestingly, sweetness and bitterness are anti-correlated (Spearman’s rho = −0.48), confirming the hypothesis that they mask each other 59 , 60 . Beer body is highly correlated with alcohol (Spearman’s rho = 0.79), and overall appreciation is found to correlate with multiple aspects that describe beer mouthfeel (alcohol, carbonation; Spearman’s rho= 0.32, 0.39), as well as with hop and ester aroma intensity (Spearman’s rho=0.39 and 0.35).

Similar to the chemical analyses, sensorial analyses confirmed typical features of specific beer styles (Supplementary Fig.  S4 ). For example, sour beers (Faro, Flanders Old Brown, Fruit beer, Kriek, Lambic, West Flanders ale) were rated acidic, with flavors of both acetic and lactic acid. Hoppy beers were found to be bitter and showed hop-associated aromas like citrus and tropical fruit. Malt taste is most detected among scotch, stout/porters, and strong ales, while low/no-alcohol beers, which often have a reputation for being ‘worty’ (reminiscent of unfermented, sweet malt extract) appear in the middle. Unsurprisingly, hop aromas are most strongly detected among hoppy beers. Like its chemical counterpart (Supplementary Fig.  S3 ), acidity shows a right-skewed distribution, with the most acidic beers being Krieks, Lambics, and West Flanders ales.

Tasting panel assessments of specific flavors correlate with chemical composition

We find that the concentrations of several chemical compounds strongly correlate with specific aroma or taste, as evaluated by the tasting panel (Fig.  2 , Supplementary Fig.  S5 , Supplementary Data  6 ). In some cases, these correlations confirm expectations and serve as a useful control for data quality. For example, iso-alpha acids, the bittering compounds in hops, strongly correlate with bitterness (Spearman’s rho=0.68), while ethanol and glycerol correlate with tasters’ perceptions of alcohol and body, the mouthfeel sensation of fullness (Spearman’s rho=0.82/0.62 and 0.72/0.57 respectively) and darker color from roasted malts is a good indication of malt perception (Spearman’s rho=0.54).

figure 2

Heatmap colors indicate Spearman’s Rho. Axes are organized according to sensory categories (aroma, taste, mouthfeel, overall), chemical categories and chemical sources in beer (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)). See Supplementary Data  6 for all correlation values.

Interestingly, for some relationships between chemical compounds and perceived flavor, correlations are weaker than expected. For example, the rose-smelling phenethyl acetate only weakly correlates with floral aroma. This hints at more complex relationships and interactions between compounds and suggests a need for a more complex model than simple correlations. Lastly, we uncovered unexpected correlations. For instance, the esters ethyl decanoate and ethyl octanoate appear to correlate slightly with hop perception and bitterness, possibly due to their fruity flavor. Iron is anti-correlated with hop aromas and bitterness, most likely because it is also anti-correlated with iso-alpha acids. This could be a sign of metal chelation of hop acids 61 , given that our analyses measure unbound hop acids and total iron content, or could result from the higher iron content in dark and Fruit beers, which typically have less hoppy and bitter flavors 62 .

Public consumer reviews complement expert panel data

To complement and expand the sensory data of our trained tasting panel, we collected 180,000 reviews of our 250 beers from the online consumer review platform RateBeer. This provided numerical scores for beer appearance, aroma, taste, palate, overall quality as well as the average overall score.

Public datasets are known to suffer from biases, such as price, cult status and psychological conformity towards previous ratings of a product. For example, prices correlate with appreciation scores for these online consumer reviews (rho=0.49, Supplementary Fig.  S6 ), but not for our trained tasting panel (rho=0.19). This suggests that prices affect consumer appreciation, which has been reported in wine 63 , while blind tastings are unaffected. Moreover, we observe that some beer styles, like lagers and non-alcoholic beers, generally receive lower scores, reflecting that online reviewers are mostly beer aficionados with a preference for specialty beers over lager beers. In general, we find a modest correlation between our trained panel’s overall appreciation score and the online consumer appreciation scores (Fig.  3 , rho=0.29). Apart from the aforementioned biases in the online datasets, serving temperature, sample freshness and surroundings, which are all tightly controlled during the tasting panel sessions, can vary tremendously across online consumers and can further contribute to (among others, appreciation) differences between the two categories of tasters. Importantly, in contrast to the overall appreciation scores, for many sensory aspects the results from the professional panel correlated well with results obtained from RateBeer reviews. Correlations were highest for features that are relatively easy to recognize even for untrained tasters, like bitterness, sweetness, alcohol and malt aroma (Fig.  3 and below).

figure 3

RateBeer text mining results can be found in Supplementary Data  7 . Rho values shown are Spearman correlation values, with asterisks indicating significant correlations ( p  < 0.05, two-sided). All p values were smaller than 0.001, except for Esters aroma (0.0553), Esters taste (0.3275), Esters aroma—banana (0.0019), Coriander (0.0508) and Diacetyl (0.0134).

Besides collecting consumer appreciation from these online reviews, we developed automated text analysis tools to gather additional data from review texts (Supplementary Data  7 ). Processing review texts on the RateBeer database yielded comparable results to the scores given by the trained panel for many common sensory aspects, including acidity, bitterness, sweetness, alcohol, malt, and hop tastes (Fig.  3 ). This is in line with what would be expected, since these attributes require less training for accurate assessment and are less influenced by environmental factors such as temperature, serving glass and odors in the environment. Consumer reviews also correlate well with our trained panel for 4-vinyl guaiacol, a compound associated with a very characteristic aroma. By contrast, correlations for more specific aromas like ester, coriander or diacetyl are underrepresented in the online reviews, underscoring the importance of using a trained tasting panel and standardized tasting sheets with explicit factors to be scored for evaluating specific aspects of a beer. Taken together, our results suggest that public reviews are trustworthy for some, but not all, flavor features and can complement or substitute taste panel data for these sensory aspects.
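The sketch below illustrates, in much simplified form, the kind of text mining described here: scoring each beer by the fraction of its reviews that mention a handful of flavor keywords. The actual pipeline is more elaborate; the file layout and keyword lists are assumptions.

```python
# Much-simplified sketch of mining flavor mentions from review texts.
# File name, columns and keyword lists are assumptions.
import pandas as pd

KEYWORDS = {
    "bitter": r"bitter|hoppy",
    "sweet": r"sweet|sugary",
    "acidic": r"sour|acidic|tart",
    "malty": r"malt|bready|caramel",
}

reviews = pd.read_csv("ratebeer_reviews.csv")  # hypothetical: beer_id, review_text

def mention_rate(texts: pd.Series, pattern: str) -> float:
    """Fraction of a beer's reviews that mention any keyword in the pattern."""
    return texts.str.contains(pattern, case=False, regex=True).mean()

grouped = reviews.groupby("beer_id")["review_text"]
text_scores = pd.DataFrame(
    {name: grouped.apply(mention_rate, pattern) for name, pattern in KEYWORDS.items()})
print(text_scores.head())
```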

Models can predict beer sensory profiles from chemical data

The rich datasets of chemical analyses, tasting panel assessments and public reviews gathered in the first part of this study provided us with a unique opportunity to develop predictive models that link chemical data to sensorial features. Given the complexity of beer flavor, basic statistical tools such as correlations or linear regression may not always be the most suitable for making accurate predictions. Instead, we applied different machine learning models that can model both simple linear and complex interactive relationships. Specifically, we constructed a set of regression models to predict (a) trained panel scores for beer flavor and quality and (b) public reviews’ appreciation scores from beer chemical profiles. We trained and tested 10 different models (Methods), 3 linear regression-based models (simple linear regression with first-order interactions (LR), lasso regression with first-order interactions (Lasso), partial least squares regressor (PLSR)), 5 decision tree models (AdaBoost regressor (ABR), extra trees (ET), gradient boosting regressor (GBR), random forest (RF) and XGBoost regressor (XGBR)), 1 support vector regression (SVR), and 1 artificial neural network (ANN) model.

To compare the performance of our machine learning models, the dataset was randomly split into a training and test set, stratified by beer style. After a model was trained on data in the training set, its performance was evaluated on its ability to predict the test dataset obtained from multi-output models (based on the coefficient of determination, see Methods). Additionally, individual-attribute models were ranked per descriptor and the average rank was calculated, as proposed by Korneva et al. 64 . Importantly, both ways of evaluating the models’ performance agreed in general. Performance of the different models varied (Table  1 ). It should be noted that all models perform better at predicting RateBeer results than results from our trained tasting panel. One reason could be that sensory data is inherently variable, and this variability is averaged out with the large number of public reviews from RateBeer. Additionally, all tree-based models perform better at predicting taste than aroma. Linear models (LR) performed particularly poorly, with negative R 2 values, due to severe overfitting (training set R 2  = 1). Overfitting is a common issue in linear models with many parameters and limited samples, especially with interaction terms further amplifying the number of parameters. L1 regularization (Lasso) successfully overcomes this overfitting, out-competing multiple tree-based models on the RateBeer dataset. Similarly, the dimensionality reduction of PLSR avoids overfitting and improves performance, to some extent. Still, tree-based models (ABR, ET, GBR, RF and XGBR) show the best performance, out-competing the linear models (LR, Lasso, PLSR) commonly used in sensory science 65 .
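A condensed sketch of this setup, using standard scikit-learn components, is shown below for a handful of the model families named above; the data files, column layout, and hyperparameters are assumptions, and the paper's full pipeline covers all 10 model types and both target datasets.

```python
# Sketch of the model comparison with scikit-learn. Data files, column layout and
# hyperparameters are assumptions; interaction terms for the linear models and the
# remaining model families (PLSR, SVR, ANN, AdaBoost, XGBoost) are omitted for brevity.
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

chem = pd.read_csv("beer_chemistry.csv", index_col="beer_id")      # hypothetical
targets = pd.read_csv("sensory_targets.csv", index_col="beer_id")  # hypothetical
styles = chem.pop("style")

# Hold out a test set, stratified by beer style (each style needs >= 2 samples).
X_train, X_test, y_train, y_test = train_test_split(
    chem, targets, test_size=0.2, stratify=styles, random_state=0)

models = {
    "Lasso": MultiOutputRegressor(Lasso(alpha=0.01, max_iter=10_000)),
    "RF": RandomForestRegressor(random_state=0),
    "ET": ExtraTreesRegressor(random_state=0),
    "GBR": MultiOutputRegressor(GradientBoostingRegressor(random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(r2_score(y_test, model.predict(X_test)), 3))  # multi-output R^2
```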

GBR models showed the best overall performance in predicting sensory responses from chemical information, with R 2 values up to 0.75 depending on the predicted sensory feature (Supplementary Table  S4 ). The GBR models predict consumer appreciation (RateBeer) better than our trained panel’s appreciation (R 2 value of 0.67 compared to R 2 value of 0.09) (Supplementary Table  S3 and Supplementary Table  S4 ). ANN models showed intermediate performance, likely because neural networks typically perform best with larger datasets 66 . The SVR shows intermediate performance, mostly due to the weak predictions of specific attributes that lower the overall performance (Supplementary Table  S4 ).

Model dissection identifies specific, unexpected compounds as drivers of consumer appreciation

Next, we leveraged our models to infer important contributors to sensory perception and consumer appreciation. Consumer preference is a crucial sensory aspect, because a product that shows low consumer appreciation scores often does not succeed commercially 25 . Additionally, the requirement for a large number of representative evaluators makes consumer trials one of the more costly and time-consuming aspects of product development. Hence, a model for predicting chemical drivers of overall appreciation would be a welcome addition to the available toolbox for food development and optimization.

Since GBR models on our RateBeer dataset showed the best overall performance, we focused on these models. Specifically, we used two approaches to identify important contributors. First, rankings of the most important predictors for each sensorial trait in the GBR models were obtained based on impurity-based feature importance (mean decrease in impurity). High-ranked parameters were hypothesized to be either the true causal chemical properties underlying the trait, to correlate with the actual causal properties, or to take part in sensory interactions affecting the trait 67 (Fig.  4A ). In a second approach, we used SHAP 68 to determine which parameters contributed most to the model for making predictions of consumer appreciation (Fig.  4B ). SHAP calculates parameter contributions to model predictions on a per-sample basis, which can be aggregated into an importance score.

figure 4

A The impurity-based feature importance (mean deviance in impurity, MDI) calculated from the Gradient Boosting Regression (GBR) model predicting RateBeer appreciation scores. The top 15 highest ranked chemical properties are shown. B SHAP summary plot for the top 15 parameters contributing to our GBR model. Each point on the graph represents a sample from our dataset. The color represents the concentration of that parameter, with bluer colors representing low values and redder colors representing higher values. Greater absolute values on the horizontal axis indicate a higher impact of the parameter on the prediction of the model. C Spearman correlations between the 15 most important chemical properties and consumer overall appreciation. Numbers indicate the Spearman Rho correlation coefficient, and the rank of this correlation compared to all other correlations. The top 15 important compounds were determined using SHAP (panel B).
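The two dissection approaches can be sketched as follows, assuming `gbr` is a gradient boosting model already fitted on the RateBeer appreciation scores and `X_test` is the held-out chemical data as a DataFrame; the exact settings used by the authors may differ.

```python
# Assumptions: `gbr` is a GradientBoostingRegressor fitted on RateBeer appreciation,
# `X_test` is the held-out chemical DataFrame. Requires the `shap` package.
import numpy as np
import pandas as pd
import shap

# 1) Impurity-based importance (mean decrease in impurity) from the trained model.
mdi = pd.Series(gbr.feature_importances_, index=X_test.columns)
print(mdi.sort_values(ascending=False).head(15))

# 2) SHAP values: per-sample contributions, aggregated into a global ranking.
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X_test)
shap_rank = pd.Series(np.abs(shap_values).mean(axis=0), index=X_test.columns)
print(shap_rank.sort_values(ascending=False).head(15))
shap.summary_plot(shap_values, X_test)  # beeswarm plot in the style of Fig. 4B
```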

Both approaches identified ethyl acetate as the most predictive parameter for beer appreciation (Fig.  4 ). Ethyl acetate is the most abundant ester in beer with a typical ‘fruity’, ‘solvent’ and ‘alcoholic’ flavor, but is often considered less important than other esters like isoamyl acetate. The second most important parameter identified by SHAP is ethanol, the most abundant beer compound after water. Apart from directly contributing to beer flavor and mouthfeel, ethanol drastically influences the physical properties of beer, dictating how easily volatile compounds escape the beer matrix to contribute to beer aroma 69 . Importantly, it should also be noted that the importance of ethanol for appreciation is likely inflated by the very low appreciation scores of non-alcoholic beers (Supplementary Fig.  S4 ). Despite not often being considered a driver of beer appreciation, protein level also ranks highly in both approaches, possibly due to its effect on mouthfeel and body 70 . Lactic acid, which contributes to the tart taste of sour beers, is the fourth most important parameter identified by SHAP, possibly due to the generally high appreciation of sour beers in our dataset.

Interestingly, some of the most important predictive parameters for our model are not well-established as beer flavors or are even commonly regarded as being negative for beer quality. For example, our models identify methanethiol and ethyl phenyl acetate, an ester commonly linked to beer staling 71 , as key factors contributing to beer appreciation. Although there is no doubt that high concentrations of these compounds are considered unpleasant, the positive effects of modest concentrations are not yet known 72 , 73 .

To compare our approach to conventional statistics, we evaluated how well the 15 most important SHAP-derived parameters correlate with consumer appreciation (Fig.  4C ). Interestingly, only 6 of the properties derived by SHAP rank amongst the top 15 most correlated parameters. For some chemical compounds, the correlations are so low that they would have likely been considered unimportant. For example, lactic acid, the fourth most important parameter, shows a bimodal distribution for appreciation, with sour beers forming a separate cluster, that is missed entirely by the Spearman correlation. Additionally, the correlation plots reveal outliers, emphasizing the need for robust analysis tools. Together, this highlights the need for alternative models, like the Gradient Boosting model, that better grasp the complexity of (beer) flavor.

Finally, to observe the relationships between these chemical properties and their predicted targets, partial dependence plots were constructed for the six most important predictors of consumer appreciation 74 , 75 , 76 (Supplementary Fig.  S7 ). One-way partial dependence plots show how a change in concentration affects the predicted appreciation. These plots reveal an important limitation of our models: appreciation predictions remain constant at ever-increasing concentrations. This implies that once a threshold concentration is reached, further increasing the concentration does not affect appreciation. This is false, as it is well-documented that certain compounds become unpleasant at high concentrations, including ethyl acetate (‘nail polish’) 77 and methanethiol (‘sulfury’ and ‘rotten cabbage’) 78 . The inability of our models to grasp that flavor compounds have optimal levels, above which they become negative, is a consequence of working with commercial beer brands where (off-)flavors are rarely too high to negatively impact the product. The two-way partial dependence plots show how changing the concentration of two compounds influences predicted appreciation, visualizing their interactions (Supplementary Fig.  S7 ). In our case, the top 5 parameters are dominated by additive or synergistic interactions, with high concentrations for both compounds resulting in the highest predicted appreciation.
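Partial dependence plots of the kind described here can be produced with scikit-learn's inspection module, as in the sketch below; the fitted model `gbr`, the DataFrame `X_test`, and the feature names are assumptions carried over from the previous sketches.

```python
# Assumptions: `gbr` and `X_test` as in the previous sketch; feature names are
# hypothetical column names for the compounds discussed in the text.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

top_features = ["ethyl_acetate", "ethanol", "lactic_acid", "ethyl_phenyl_acetate"]

# One-way partial dependence: predicted appreciation as one concentration varies.
PartialDependenceDisplay.from_estimator(gbr, X_test, features=top_features)

# Two-way partial dependence: interaction between two top predictors.
PartialDependenceDisplay.from_estimator(gbr, X_test, features=[("ethyl_acetate", "ethanol")])
plt.show()
```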

To assess the robustness of our best-performing models and model predictions, we performed 100 iterations of the GBR, RF and ET models. In general, all iterations of the models yielded similar performance (Supplementary Fig.  S8 ). Moreover, the main predictors (including the top predictors ethanol and ethyl acetate) remained virtually the same, especially for GBR and RF. For the iterations of the ET model, we did observe more variation in the top predictors, which is likely a consequence of the model’s inherent random architecture in combination with co-correlations between certain predictors. However, even in this case, several of the top predictors (ethanol and ethyl acetate) remain unchanged, although their rank in importance changes (Supplementary Fig.  S8 ).
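One way to run such a robustness check is sketched below: the appreciation model is refitted over 100 random resamplings and the recurrence of top-ranked features is counted. The exact resampling scheme used by the authors is not specified here, so this is an illustrative assumption; `X` and `y_appr` denote the chemical matrix and the RateBeer appreciation scores.

```python
# Assumptions: `X` is the chemical DataFrame, `y_appr` the RateBeer appreciation
# scores; the resampling scheme (fresh 80/20 split per seed) is an illustrative choice.
from collections import Counter

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

top5_counts = Counter()
for seed in range(100):
    X_tr, _, y_tr, _ = train_test_split(X, y_appr, test_size=0.2, random_state=seed)
    model = GradientBoostingRegressor(random_state=seed).fit(X_tr, y_tr)
    top5 = np.argsort(model.feature_importances_)[::-1][:5]
    top5_counts.update(X_tr.columns[top5])

print(top5_counts.most_common(10))  # stable drivers recur across (nearly) all runs
```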

Next, we investigated if a combination of RateBeer and trained panel data into one consolidated dataset would lead to stronger models, under the hypothesis that such a model would suffer less from bias in the datasets. A GBR model was trained to predict appreciation on the combined dataset. This model underperformed compared to the RateBeer model, both in the native case and when including a dataset identifier (R 2  = 0.67, 0.26 and 0.42 respectively). For the latter, the dataset identifier is the most important feature (Supplementary Fig.  S9 ), while most of the feature importance remains unchanged, with ethyl acetate and ethanol ranking highest, like in the original model trained only on RateBeer data. It seems that the large variation in the panel dataset introduces noise, weakening the models’ performances and reliability. In addition, it seems reasonable to assume that both datasets are fundamentally different, with the panel dataset obtained by blind tastings by a trained professional panel.

Lastly, we evaluated whether beer style identifiers would further enhance the model’s performance. A GBR model was trained with parameters that explicitly encoded the styles of the samples. This did not improve model performance (R 2  = 0.66 with style information vs. R 2  = 0.67 without). The most important chemical features are consistent with the model trained without style information (e.g., ethanol and ethyl acetate), and with the exception of the most preferred (strong ale) and least preferred (low/no-alcohol) styles, none of the styles were among the most important features (Supplementary Fig.  S9 , Supplementary Table  S5 and S6 ). This is likely due to a combination of style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original models, as well as the low number of samples belonging to some styles, making it difficult for the model to learn style-specific patterns. Moreover, beer styles are not rigorously defined, with some styles overlapping in features and some beers being misattributed to a specific style, all of which leads to more noise in models that use style parameters.

Model validation

To test if our predictive models give insight into beer appreciation, we set up experiments aimed at improving existing commercial beers. We specifically selected overall appreciation as the trait to be examined because of its complexity and commercial relevance. Beer flavor comprises a complex bouquet rather than single aromas and tastes 53 . Hence, adding a single compound to the extent that a difference is noticeable may lead to an unbalanced, artificial flavor. Therefore, we evaluated the effect of combinations of compounds. Because Blond beers represent the most extensive style in our dataset, we selected a beer from this style as the starting material for these experiments (Beer 64 in Supplementary Data  1 ).

In the first set of experiments, we adjusted the concentrations of compounds that made up the most important predictors of overall appreciation (ethyl acetate, ethanol, lactic acid, ethyl phenyl acetate) together with correlated compounds (ethyl hexanoate, isoamyl acetate, glycerol), bringing them up to 95th percentile ethanol-normalized concentrations (Methods) within the Blond group (‘Spiked’ concentration in Fig.  5A ). Compared to controls, the spiked beers were found to have significantly improved overall appreciation among trained panelists, with panelists noting increased intensity of ester flavors, sweetness, alcohol, and body fullness (Fig.  5B ). To disentangle the contribution of ethanol to these results, a second experiment was performed without the addition of ethanol. This resulted in a similar outcome, including increased perception of alcohol and overall appreciation.

figure 5

Adding the top chemical compounds, identified as best predictors of appreciation by our model, into poorly appreciated beers results in increased appreciation from our trained panel. Results of sensory tests between base beers and those spiked with compounds identified as the best predictors by the model. A Blond and Non/Low-alcohol (0.0% ABV) base beers were brought up to 95th-percentile ethanol-normalized concentrations within each style. B For each sensory attribute, tasters indicated the more intense sample and selected the sample they preferred. The numbers above the bars correspond to the p values that indicate significant changes in perceived flavor (two-sided binomial test: alpha 0.05, n  = 20 or 13).
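One possible way to derive such spiking targets is sketched below: within the Blond style, each driver compound is normalized to ethanol, its 95th percentile is taken, and the difference to the base beer gives the amount to add. This is an interpretation of the procedure described in the Methods, not the authors' exact script; the data file, column names, and base-beer identifier are hypothetical.

```python
# Sketch (assumptions: `beer_chemistry.csv` has one row per beer with a `style`
# column, an `ethanol` column, and one column per compound; beer 64 is the base).
import pandas as pd

chem = pd.read_csv("beer_chemistry.csv", index_col="beer_id")   # hypothetical file
BASE_BEER_ID = 64                                               # Beer 64 in Supplementary Data 1
drivers = ["ethyl_acetate", "lactic_acid", "ethyl_phenyl_acetate",
           "ethyl_hexanoate", "isoamyl_acetate", "glycerol"]

blond = chem[chem["style"] == "Blond"]
normalized = blond[drivers].div(blond["ethanol"], axis=0)        # ethanol-normalized levels
targets = normalized.quantile(0.95) * chem.loc[BASE_BEER_ID, "ethanol"]
additions = (targets - chem.loc[BASE_BEER_ID, drivers]).clip(lower=0)
print(additions)  # amount of each compound to spike into the base beer
```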

In a last experiment, we tested whether using the model’s predictions can boost the appreciation of a non-alcoholic beer (beer 223 in Supplementary Data  1 ). Again, the addition of a mixture of predicted compounds (omitting ethanol, in this case) resulted in a significant increase in appreciation, body, ester flavor and sweetness.

Predicting flavor and consumer appreciation from chemical composition is one of the ultimate goals of sensory science. A reliable, systematic and unbiased way to link chemical profiles to flavor and food appreciation would be a significant asset to the food and beverage industry. Such tools would substantially aid in quality control and recipe development, offer an efficient and cost-effective alternative to pilot studies and consumer trials and would ultimately allow food manufacturers to produce superior, tailor-made products that better meet the demands of specific consumer groups more efficiently.

A limited set of studies have previously tried, to varying degrees of success, to predict beer flavor and beer popularity based on (a limited set of) chemical compounds and flavors 79 , 80 . Current sensitive, high-throughput technologies allow measuring an unprecedented number of chemical compounds and properties in a large set of samples, yielding a dataset that can train models that help close the gaps between chemistry and flavor, even for a complex natural product like beer. To our knowledge, no previous research gathered data at this scale (250 samples, 226 chemical parameters, 50 sensory attributes and 5 consumer scores) to disentangle and validate the chemical aspects driving beer preference using various machine-learning techniques. We find that modern machine learning models outperform conventional statistical tools, such as correlations and linear models, and can successfully predict flavor appreciation from chemical composition. This could be attributed to the natural incorporation of interactions and non-linear or discontinuous effects in machine learning models, which are not easily grasped by the linear model architecture. While linear models and partial least squares regression represent the most widespread statistical approaches in sensory science, in part because they allow interpretation 65 , 81 , 82 , modern machine learning methods allow for building better predictive models while preserving the possibility to dissect and exploit the underlying patterns. Of the 10 different models we trained, tree-based models, such as our best performing GBR, showed the best overall performance in predicting sensory responses from chemical information, outcompeting artificial neural networks. This agrees with previous reports for models trained on tabular data 83 . Our results are in line with the findings of Colantonio et al. who also identified the gradient boosting architecture as performing best at predicting appreciation and flavor (of tomatoes and blueberries, in their specific study) 26 . Importantly, besides our larger experimental scale, we were able to directly confirm our models’ predictions in vivo.

Our study confirms that flavor compound concentration does not always correlate with perception, suggesting complex interactions that are often missed by more conventional statistics and simple models. Specifically, we find that tree-based algorithms may perform best in developing models that link complex food chemistry with aroma. Furthermore, we show that massive datasets of untrained consumer reviews provide a valuable source of data, that can complement or even replace trained tasting panels, especially for appreciation and basic flavors, such as sweetness and bitterness. This holds despite biases that are known to occur in such datasets, such as price or conformity bias. Moreover, GBR models predict taste better than aroma. This is likely because taste (e.g. bitterness) often directly relates to the corresponding chemical measurements (e.g., iso-alpha acids), whereas such a link is less clear for aromas, which often result from the interplay between multiple volatile compounds. We also find that our models are best at predicting acidity and alcohol, likely because there is a direct relation between the measured chemical compounds (acids and ethanol) and the corresponding perceived sensorial attribute (acidity and alcohol), and because even untrained consumers are generally able to recognize these flavors and aromas.

The predictions of our final models, trained on review data, hold even for blind tastings with small groups of trained tasters, as demonstrated by our ability to validate specific compounds as drivers of beer flavor and appreciation. Since adding a single compound to the extent of a noticeable difference may result in an unbalanced flavor profile, we specifically tested our identified key drivers as a combination of compounds. While this approach does not allow us to validate if a particular single compound would affect flavor and/or appreciation, our experiments do show that this combination of compounds increases consumer appreciation.

It is important to stress that, while it represents an important step forward, our approach still has several major limitations. A key weakness of the GBR model architecture is that amongst co-correlating variables, the largest main effect is consistently preferred for model building. As a result, co-correlating variables often have artificially low importance scores, both for impurity and SHAP-based methods, like we observed in the comparison to the more randomized Extra Trees models. This implies that chemicals identified as key drivers of a specific sensory feature by GBR might not be the true causative compounds, but rather co-correlate with the actual causative chemical. For example, the high importance of ethyl acetate could be (partially) attributed to the total ester content, ethanol or ethyl hexanoate (rho=0.77, rho=0.72 and rho=0.68), while ethyl phenylacetate could hide the importance of prenyl isobutyrate and ethyl benzoate (rho=0.77 and rho=0.76). Expanding our GBR model to include beer style as a parameter did not yield additional power or insight. This is likely due to style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original model, as well as the smaller sample size per style, limiting the power to uncover style-specific patterns. This can be partly attributed to the curse of dimensionality, where the high number of parameters results in the models mainly incorporating single parameter effects, rather than complex interactions such as style-dependent effects 67 . A larger number of samples may overcome some of these limitations and offer more insight into style-specific effects. On the other hand, beer style is not a rigid scientific classification, and beers within one style often differ a lot, which further complicates the analysis of style as a model factor.

Our study is limited to beers from Belgian breweries. Although these beers cover a large portion of the beer styles available globally, some beer styles and consumer patterns may be missing, while other features might be overrepresented. For example, many Belgian ales exhibit yeast-driven flavor profiles, which is reflected in the chemical drivers of appreciation discovered by this study. In future work, expanding the scope to include diverse markets and beer styles could lead to the identification of even more drivers of appreciation and better models for special niche products that were not present in our beer set.

In addition to inherent limitations of GBR models, there are also some limitations associated with studying food aroma. Even if our chemical analyses measured most of the known aroma compounds, the total number of flavor compounds in complex foods like beer is still larger than the subset we were able to measure in this study. For example, hop-derived thiols, that influence flavor at very low concentrations, are notoriously difficult to measure in a high-throughput experiment. Moreover, consumer perception remains subjective and prone to biases that are difficult to avoid. It is also important to stress that the models are still immature and that more extensive datasets will be crucial for developing more complete models in the future. Besides more samples and parameters, our dataset does not include any demographic information about the tasters. Including such data could lead to better models that grasp external factors like age and culture. Another limitation is that our set of beers consists of high-quality end-products and lacks beers that are unfit for sale, which limits the current model in accurately predicting products that are appreciated very badly. Finally, while models could be readily applied in quality control, their use in sensory science and product development is restrained by their inability to discern causal relationships. Given that the models cannot distinguish compounds that genuinely drive consumer perception from those that merely correlate, validation experiments are essential to identify true causative compounds.

Despite the inherent limitations, dissection of our models enabled us to pinpoint specific molecules as potential drivers of beer aroma and consumer appreciation, including compounds that were unexpected and would not have been identified using standard approaches. Important drivers of beer appreciation uncovered by our models include protein levels, ethyl acetate, ethyl phenyl acetate and lactic acid. Currently, many brewers already use lactic acid to acidify their brewing water and ensure optimal pH for enzymatic activity during the mashing process. Our results suggest that adding lactic acid can also improve beer appreciation, although its individual effect remains to be tested. Interestingly, ethanol appears to be unnecessary to improve beer appreciation, both for blond beer and alcohol-free beer. Given the growing consumer interest in alcohol-free beer, with a predicted annual market growth of >7% 84 , it is relevant for brewers to know what compounds can further increase consumer appreciation of these beers. Hence, our model may readily provide avenues to further improve the flavor and consumer appreciation of both alcoholic and non-alcoholic beers, which is generally considered one of the key challenges for future beer production.

Whereas we see a direct implementation of our results for the development of superior alcohol-free beverages and other food products, our study can also serve as a stepping stone for the development of novel alcohol-containing beverages. We want to echo the growing body of scientific evidence for the negative effects of alcohol consumption, both on the individual level by the mutagenic, teratogenic and carcinogenic effects of ethanol 85 , 86 , as well as the burden on society caused by alcohol abuse and addiction. We encourage the use of our results for the production of healthier, tastier products, including novel and improved beverages with lower alcohol contents. Furthermore, we strongly discourage the use of these technologies to improve the appreciation or addictive properties of harmful substances.

The present work demonstrates that despite some important remaining hurdles, combining the latest developments in chemical analyses, sensory analysis and modern machine learning methods offers exciting avenues for food chemistry and engineering. Soon, these tools may provide solutions in quality control and recipe development, as well as new approaches to sensory science and flavor research.

Beer selection

A total of 250 commercial Belgian beers were selected to cover the broad diversity of beer styles and the corresponding diversity in chemical composition and aroma (see Supplementary Fig. S1).

Chemical dataset

Sample preparation.

Beers within their expiration date were purchased from commercial retailers. Samples were prepared in biological duplicates at room temperature, unless explicitly stated otherwise. Bottle pressure was measured with a manual pressure device (Steinfurth Mess-Systeme GmbH) and used to calculate the CO 2 concentration. The beer was poured through two filter papers (Macherey-Nagel, 500713032 MN 713 ¼) to remove carbon dioxide and prevent spontaneous foaming. Samples were then prepared for measurement by targeted Headspace-Gas Chromatography-Flame Ionization Detector/Flame Photometric Detector (HS-GC-FID/FPD), Headspace-Solid Phase Microextraction-Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS), colorimetric analysis, enzymatic analysis and Near-Infrared (NIR) analysis, as described in the sections below. The mean values of biological duplicates are reported for each compound.

HS-GC-FID/FPD

HS-GC-FID/FPD (Shimadzu GC 2010 Plus) was used to measure higher alcohols, acetaldehyde, esters, 4-vinyl guaiacol, and sulfur compounds. Each measurement comprised 5 ml of sample pipetted into a 20 ml glass vial containing 1.75 g NaCl (VWR, 27810.295). 100 µl of 2-heptanol (Sigma-Aldrich, H3003) (internal standard) solution in ethanol (Fisher Chemical, E/0650DF/C17) was added for a final concentration of 2.44 mg/L. Samples were flushed with nitrogen for 10 s, sealed with a silicone septum, stored at −80 °C and analyzed in batches of 20.

The GC was equipped with a DB-WAXetr column (length, 30 m; internal diameter, 0.32 mm; layer thickness, 0.50 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FID and an HP-5 column (length, 30 m; internal diameter, 0.25 mm; layer thickness, 0.25 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FPD. N 2 was used as the carrier gas. Samples were incubated for 20 min at 70 °C in the headspace autosampler (flow rate, 35 cm/s; injection volume, 1000 µL; injection mode, split; Combi PAL autosampler, CTC analytics, Switzerland). The injector, FID and FPD temperatures were kept at 250 °C. The GC oven temperature was first held at 50 °C for 5 min, then raised to 80 °C at a rate of 5 °C/min, followed by a second ramp of 4 °C/min until 200 °C, held for 3 min, and a final ramp of 4 °C/min until 230 °C, held for 1 min. Results were analyzed with the GCSolution software version 2.4 (Shimadzu, Kyoto, Japan). The GC was calibrated with a 5% EtOH solution (VWR International) containing the volatiles under study (Supplementary Table S7).

HS-SPME-GC-MS

HS-SPME-GC-MS (Shimadzu GCMS-QP-2010 Ultra) was used to measure additional volatile compounds, mainly comprising terpenoids and esters. Samples were analyzed by HS-SPME using a triphase DVB/Carboxen/PDMS 50/30 μm SPME fiber (Supelco Co., Bellefonte, PA, USA) followed by gas chromatography (Thermo Fisher Scientific Trace 1300 series, USA) coupled to a mass spectrometer (Thermo Fisher Scientific ISQ series MS) equipped with a TriPlus RSH autosampler. 5 ml of degassed beer sample was placed in 20 ml vials containing 1.75 g NaCl (VWR, 27810.295). 5 µl internal standard mix was added, containing 2-heptanol (1 g/L) (Sigma-Aldrich, H3003), 4-fluorobenzaldehyde (1 g/L) (Sigma-Aldrich, 128376), 2,3-hexanedione (1 g/L) (Sigma-Aldrich, 144169) and guaiacol (1 g/L) (Sigma-Aldrich, W253200) in ethanol (Fisher Chemical, E/0650DF/C17). Each sample was incubated at 60 °C in the autosampler oven with constant agitation. After 5 min equilibration, the SPME fiber was exposed to the sample headspace for 30 min. The compounds trapped on the fiber were thermally desorbed in the injection port of the chromatograph by heating the fiber for 15 min at 270 °C.

The GC-MS was equipped with a low-polarity RXi-5Sil MS column (length, 20 m; internal diameter, 0.18 mm; layer thickness, 0.18 µm; Restek, Bellefonte, PA, USA). Injection was performed in splitless mode at 320 °C, with a split flow of 9 ml/min, a purge flow of 5 ml/min and an open valve time of 3 min. To obtain a pulsed injection, a programmed gas flow was used whereby the helium gas flow was set at 2.7 mL/min for 0.1 min, followed by a decrease in flow of 20 ml/min to the normal 0.9 mL/min. The temperature was first held at 30 °C for 3 min, then raised to 80 °C at a rate of 7 °C/min, followed by a second ramp of 2 °C/min until 125 °C and a final ramp of 8 °C/min to a final temperature of 270 °C.

Mass acquisition range was 33 to 550 amu at a scan rate of 5 scans/s. Electron impact ionization energy was 70 eV. The interface and ion source were kept at 275 °C and 250 °C, respectively. A mix of linear n-alkanes (from C7 to C40, Supelco Co.) was injected into the GC-MS under identical conditions to serve as external retention index markers. Identification and quantification of the compounds were performed using an in-house developed R script as described in Goelen et al. and Reher et al. 87 , 88 (for package information, see Supplementary Table S8). Briefly, chromatograms were analyzed using AMDIS (v2.71) 89 to separate overlapping peaks and obtain pure compound spectra. The NIST MS Search software (v2.0 g) in combination with the NIST2017, FFNSC3 and Adams4 libraries was used to manually identify the empirical spectra, taking into account the expected retention time. After background subtraction and correction for retention time shifts between samples run on different days based on alkane ladders, compound elution profiles were extracted and integrated using a file with 284 target compounds of interest, which were either recovered in our identified AMDIS list of spectra or known to occur in beer. Compound elution profiles were estimated for every peak in every chromatogram over a time-restricted window using weighted non-negative least squares analysis, after which peak areas were integrated 87 , 88 . Batch effect correction was performed by normalizing against the most stable internal standard compound, 4-fluorobenzaldehyde. Out of all 284 target compounds that were analyzed, 167 were visually judged to have reliable elution profiles and were used for the final analysis.
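
To make the normalization step concrete, the following is a minimal sketch in Python (the study used an in-house R script); the file name and table layout are assumptions, with one row per sample and one column per integrated compound peak area.

    import pandas as pd

    # Hypothetical input: integrated peak areas, one row per sample, one column per compound,
    # including a column for the internal standard 4-fluorobenzaldehyde
    areas = pd.read_csv("peak_areas.csv", index_col=0)

    # Batch-effect correction: divide every peak area in a sample by that sample's
    # internal-standard peak area
    normalized = areas.div(areas["4-fluorobenzaldehyde"], axis=0)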

Discrete photometric and enzymatic analysis

Discrete photometric and enzymatic analysis (Thermo Scientific Gallery Plus Beermaster Discrete Analyzer) was used to measure acetic acid, ammonia, beta-glucan, iso-alpha acids, color, sugars, glycerol, iron, pH, protein, and sulfite. 2 ml of sample volume was used for the analyses. Information regarding the reagents and standard solutions used for analyses and calibrations is included in Supplementary Table S7 and Supplementary Table S9.

NIR analyses

NIR analysis (Anton Paar Alcolyzer Beer ME System) was used to measure ethanol. Measurements comprised 50 ml of sample, and a 10% EtOH solution was used for calibration.

Correlation calculations

Pairwise Spearman Rank correlations were calculated between all chemical properties.
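
As a minimal illustration of this step (not the authors' code), pandas can compute the full Spearman correlation matrix directly; the input file name and column names below are hypothetical.

    import pandas as pd

    # Hypothetical table: one row per beer, one column per measured chemical property
    chem = pd.read_csv("chemical_measurements.csv", index_col=0)

    # Pairwise Spearman rank correlations between all chemical properties
    rho = chem.corr(method="spearman")

    # Example lookup for one pair of compounds (column names are assumptions)
    print(rho.loc["ethyl acetate", "ethanol"])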

Sensory dataset

Trained panel.

Our trained tasting panel consisted of volunteers who gave prior verbal informed consent. All compounds used for the validation experiment were of food-grade quality. The tasting sessions were approved by the Social and Societal Ethics Committee of the KU Leuven (G-2022-5677-R2(MAR)). All online reviewers agreed to the Terms and Conditions of the RateBeer website.

Sensory analysis was performed according to the American Society of Brewing Chemists (ASBC) Sensory Analysis Methods 90 . 30 volunteers were screened through a series of triangle tests. The sixteen most sensitive and consistent tasters were retained as taste panel members. The resulting panel was diverse in age [22–42, mean: 29], sex [56% male] and nationality [7 different countries]. The panel developed a consensus vocabulary to describe beer aroma, taste and mouthfeel. Panelists were trained to identify and score 50 different attributes, using a 7-point scale to rate attribute intensity. The scoring sheet is included as Supplementary Data 3 . Sensory assessments took place between 10 a.m. and 12 p.m. The beers were served in black-colored glasses. Per session, between 5 and 12 beers of the same style were tasted at 12 °C to 16 °C. Two reference beers were added to each set and indicated as ‘Reference 1 & 2’, allowing panel members to calibrate their ratings. Not all panelists were present at every tasting. Scores were scaled by standard deviation and mean-centered per taster. Values are represented as z-scores and clustered by Euclidean distance. Pairwise Spearman correlations were calculated between taste and aroma sensory attributes. Panel consistency was evaluated by repeating samples across different sessions and performing ANOVA to identify differences, using the ‘stats’ package (v4.2.2) in R (for package information, see Supplementary Table S8).
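
The sketch below illustrates the per-taster standardization and the consistency check. It is written in Python rather than the R ‘stats’ package used in the study, and the column names, reference beer and attribute are illustrative assumptions.

    import pandas as pd
    from scipy.stats import f_oneway

    # Hypothetical long-format panel data with columns
    # ['taster', 'session', 'beer', 'attribute', 'score']
    panel = pd.read_csv("panel_scores.csv")

    # Scale by standard deviation and mean-center per taster (z-scores)
    panel["z"] = panel.groupby("taster")["score"].transform(
        lambda s: (s - s.mean()) / s.std()
    )

    # Consistency check on one repeated sample: one-way ANOVA across sessions
    repeated = panel[(panel["beer"] == "Reference 1") & (panel["attribute"] == "bitter")]
    groups = [g["z"].to_numpy() for _, g in repeated.groupby("session")]
    print(f_oneway(*groups))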

Online reviews from a public database

The ‘scrapy’ package in Python (v3.6) (for package information, see Supplementary Table S8) was used to collect 232,288 online reviews (mean=922, min=6, max=5343) from RateBeer, an online beer review database. Each review entry comprised 5 numerical scores (appearance, aroma, taste, palate and overall quality) and an optional review text. The total number of reviews per reviewer was collected separately. Numerical scores were scaled and centered per rater, and mean scores were calculated per beer.

For the review texts, the language was estimated using the packages ‘langdetect’ and ‘langid’ in Python. Reviews that were classified as English by both packages were kept. Reviewers with fewer than 100 entries overall were discarded, leaving 181,025 reviews from >6000 reviewers from >40 countries. Text processing was done using the ‘nltk’ package in Python. Texts were corrected for slang and misspellings; proper nouns and rare words that are relevant to the beer context were specified and kept as-is (‘Chimay’, ‘Lambic’, etc.). A dictionary of semantically similar sensorial terms, for example ‘floral’ and ‘flower’, was created, and such terms were collapsed into a single term. Words were stemmed and lemmatized to avoid identifying words such as ‘acid’ and ‘acidity’ as separate terms. Numbers and punctuation were removed.
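
The following sketch illustrates the kind of language filtering and token normalization described above; it is not the authors' script, and the keep-as-is and synonym dictionaries are reduced to toy examples.

    import langid
    from langdetect import detect
    from nltk.stem import PorterStemmer, WordNetLemmatizer  # requires the NLTK 'wordnet' data

    def is_english(text):
        # Keep a review only if both detectors agree that it is English
        try:
            return detect(text) == "en" and langid.classify(text)[0] == "en"
        except Exception:
            return False

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()
    keep_as_is = {"chimay", "lambic"}    # toy stand-in for the curated proper-noun list
    synonyms = {"flower": "floral"}      # toy stand-in for the sensorial-term dictionary

    def normalize(tokens):
        out = []
        for tok in tokens:
            tok = tok.lower()
            if tok in keep_as_is:
                out.append(tok)
                continue
            tok = synonyms.get(tok, tok)  # collapse semantically similar terms
            out.append(stemmer.stem(lemmatizer.lemmatize(tok)))
        return out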

Sentences from up to 50 randomly chosen reviews per beer were manually categorized according to the aspect of beer they describe (appearance, aroma, taste, palate, overall quality—not to be confused with the 5 numerical scores described above) or flagged as irrelevant if they contained no useful information. If a beer contained fewer than 50 reviews, all reviews were manually classified. This labeled data set was used to train a model that classified the rest of the sentences for all beers 91 . Sentences describing taste and aroma were extracted, and term frequency–inverse document frequency (TFIDF) was implemented to calculate enrichment scores for sensorial words per beer.
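
A minimal sketch of the TF-IDF step is shown below, using scikit-learn's TfidfVectorizer on hypothetical per-beer text; in the study, enrichment scores were computed on the classified taste and aroma sentences.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical input: per-beer concatenation of sentences classified as taste/aroma
    taste_aroma_sentences = {
        "Beer A": "fruity banana ester aroma slight clove",
        "Beer B": "roasted coffee bitter chocolate finish",
    }

    vectorizer = TfidfVectorizer()  # one "document" per beer
    tfidf = vectorizer.fit_transform(taste_aroma_sentences.values())

    scores = pd.DataFrame(
        tfidf.toarray(),
        index=list(taste_aroma_sentences),
        columns=vectorizer.get_feature_names_out(),
    )
    # Most enriched sensorial terms for one beer
    print(scores.loc["Beer A"].sort_values(ascending=False).head(5))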

The sex of the tasting subject was not considered when building our sensory database. Instead, results from different panelists were averaged, both for our trained panel (56% male, 44% female) and the RateBeer reviews (70% male, 30% female for RateBeer as a whole).

Beer price collection and processing

Beer prices were collected from the following stores: Colruyt, Delhaize, Total Wine, BeerHawk, The Belgian Beer Shop, The Belgian Shop, and Beer of Belgium. Where applicable, prices were converted to Euros and normalized per liter. Spearman correlations were calculated between these prices and mean overall appreciation scores from RateBeer and the taste panel, respectively.

Pairwise Spearman Rank correlations were calculated between all sensory properties.

Machine learning models

Predictive modeling of sensory profiles from chemical data.

Regression models were constructed to predict (a) trained panel scores for beer flavors and quality from beer chemical profiles and (b) public reviews’ appreciation scores from beer chemical profiles. Z-scores were used to represent sensory attributes in both data sets. Chemical properties with log-normal distributions (Shapiro-Wilk test, p < 0.05) were log-transformed. Missing chemical measurements (0.1% of all data) were replaced with mean values per attribute. Observations from 250 beers were randomly separated into a training set (70%, 175 beers) and a test set (30%, 75 beers), stratified per beer style. Chemical measurements (p = 231) were normalized based on the training set average and standard deviation. In total, ten models were trained: three linear regression-based models, namely linear regression with first-order interaction terms (LR), lasso regression with first-order interaction terms (Lasso) and partial least squares regression (PLSR); five decision tree-based models, namely the AdaBoost regressor (ABR), Extra Trees (ET), the Gradient Boosting regressor (GBR), Random Forest (RF) and the XGBoost regressor (XGBR); one support vector machine model (SVR); and one artificial neural network model (ANN). The models were implemented using the ‘scikit-learn’ package (v1.2.2) and ‘xgboost’ package (v1.7.3) in Python (v3.9.16). Models were trained, and hyperparameters optimized, using five-fold cross-validated grid search with the coefficient of determination (R²) as the evaluation metric. The ANN (scikit-learn’s MLPRegressor) was optimized using Bayesian Tree-Structured Parzen Estimator optimization with the ‘Optuna’ Python package (v3.2.0). Individual models were trained per attribute, and a multi-output model was trained on all attributes simultaneously.
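
To make the setup concrete, the sketch below shows one way to train and tune a GBR model for a single sensory attribute with five-fold cross-validated grid search and R² scoring, as described above. The data are random placeholders and the hyperparameter grid is illustrative, not the grid used in the study.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV, train_test_split

    # Placeholder data standing in for the normalized chemical measurements (X),
    # one z-scored sensory attribute (y) and the beer-style labels used for stratification
    rng = np.random.default_rng(0)
    X = rng.normal(size=(250, 231))
    y = rng.normal(size=250)
    styles = rng.integers(0, 5, size=250)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=styles, random_state=0
    )

    grid = GridSearchCV(
        GradientBoostingRegressor(random_state=0),
        param_grid={                      # illustrative grid only
            "n_estimators": [100, 500],
            "max_depth": [3, 5],
            "learning_rate": [0.01, 0.1],
        },
        cv=5,                             # five-fold cross-validation
        scoring="r2",                     # coefficient of determination
    )
    grid.fit(X_train, y_train)
    gbr = grid.best_estimator_
    print(grid.best_params_, gbr.score(X_test, y_test))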

Model dissection

GBR was found to outperform the other methods, resulting in models with the highest average R² values in both the trained panel and public review data sets. Impurity-based rankings of the most important predictors for each predicted sensorial trait were obtained using the ‘scikit-learn’ package. To observe the relationships between these chemical properties and their predicted targets, partial dependence plots (PDP) were constructed for the six most important predictors of consumer appreciation 74 , 75 .
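
Continuing from the fitted placeholder model in the previous sketch, the impurity-based ranking and partial dependence plots could be obtained as follows; the choice of six predictors follows the text.

    import numpy as np
    from sklearn.inspection import PartialDependenceDisplay

    # `gbr` and `X_train` are taken from the previous sketch
    importances = gbr.feature_importances_
    top6 = np.argsort(importances)[::-1][:6]  # six most important predictors
    print(top6, importances[top6])

    # Partial dependence of the predicted attribute on each of the top predictors
    PartialDependenceDisplay.from_estimator(gbr, X_train, features=list(top6))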

The ‘SHAP’ package in Python (v0.41.0) was implemented to provide an alternative ranking of predictor importance and to visualize the predictors’ effects as a function of their concentration 68 .
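
The corresponding SHAP analysis can be sketched as follows, again on the placeholder model from the sketches above; TreeExplainer and summary_plot are used here as a generic illustration of the package.

    import shap

    # `gbr` and `X_test` are taken from the earlier sketch
    explainer = shap.TreeExplainer(gbr)
    shap_values = explainer.shap_values(X_test)

    # Global ranking of predictors and their effect as a function of their value
    shap.summary_plot(shap_values, X_test)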

Validation of causal chemical properties

To validate the effects of the most important model features on predicted sensory attributes, beers were spiked with the chemical compounds identified by the models and descriptive sensory analyses were carried out according to the American Society of Brewing Chemists (ASBC) protocol 90 .

Compound spiking was done 30 min before tasting. Compounds were spiked into fresh beer bottles, which were immediately resealed and inverted three times. Fresh bottles of beer were opened for the same duration, resealed, and inverted thrice to serve as controls. Pairs of spiked samples and controls were served simultaneously, chilled and in dark glasses, as outlined in the Trained panel section above. Tasters were instructed to select the glass with the higher flavor intensity for each attribute (directional difference test 92 ) and to select the glass they preferred.

The final concentration after spiking was equal to the within-style average, after normalizing by ethanol concentration. This was done to ensure balanced flavor profiles in the final spiked beer. The same methods were applied to improve a non-alcoholic beer. The spiked compounds were the following: ethyl acetate (Merck KGaA, W241415), ethyl hexanoate (Merck KGaA, W243906), isoamyl acetate (Merck KGaA, W205508), phenethyl acetate (Merck KGaA, W285706), ethanol (96%, Colruyt), glycerol (Merck KGaA, W252506) and lactic acid (Merck KGaA, 261106).

Significant differences in preference or perceived intensity were determined by performing a two-sided binomial test on each attribute.
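
As an illustration, the test for a single attribute could look as follows; the counts are hypothetical.

    from scipy.stats import binomtest

    # Hypothetical outcome: number of tasters preferring the spiked sample out of all tasters
    n_prefer_spiked, n_tasters = 12, 16
    result = binomtest(n_prefer_spiked, n_tasters, p=0.5, alternative="two-sided")
    print(result.pvalue)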

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The data that support the findings of this work are available in the Supplementary Data files and have been deposited to Zenodo under accession code 10653704 93 . The RateBeer score data are under restricted access; they are not publicly available, as they are the property of RateBeer (ZX Ventures, USA). Access can be obtained from the authors upon reasonable request and with permission of RateBeer (ZX Ventures, USA). Source data are provided with this paper.

Code availability

The code for training the machine learning models, analyzing the models, and generating the figures has been deposited to Zenodo under accession code 10653704 93 .

References

Tieman, D. et al. A chemical genetic roadmap to improved tomato flavor. Science 355 , 391–394 (2017).

Plutowska, B. & Wardencki, W. Application of gas chromatography–olfactometry (GC–O) in analysis and quality assessment of alcoholic beverages – A review. Food Chem. 107 , 449–463 (2008).

Legin, A., Rudnitskaya, A., Seleznev, B. & Vlasov, Y. Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie. Anal. Chim. Acta 534 , 129–135 (2005).

Loutfi, A., Coradeschi, S., Mani, G. K., Shankar, P. & Rayappan, J. B. B. Electronic noses for food quality: A review. J. Food Eng. 144 , 103–111 (2015).

Ahn, Y.-Y., Ahnert, S. E., Bagrow, J. P. & Barabási, A.-L. Flavor network and the principles of food pairing. Sci. Rep. 1 , 196 (2011).

Bartoshuk, L. M. & Klee, H. J. Better fruits and vegetables through sensory analysis. Curr. Biol. 23 , R374–R378 (2013).

Piggott, J. R. Design questions in sensory and consumer science. Food Qual. Prefer. 3293 , 217–220 (1995).

Kermit, M. & Lengard, V. Assessing the performance of a sensory panel-panellist monitoring and tracking. J. Chemom. 19 , 154–161 (2005).

Cook, D. J., Hollowood, T. A., Linforth, R. S. T. & Taylor, A. J. Correlating instrumental measurements of texture and flavour release with human perception. Int. J. Food Sci. Technol. 40 , 631–641 (2005).

Chinchanachokchai, S., Thontirawong, P. & Chinchanachokchai, P. A tale of two recommender systems: The moderating role of consumer expertise on artificial intelligence based product recommendations. J. Retail. Consum. Serv. 61 , 1–12 (2021).

Ross, C. F. Sensory science at the human-machine interface. Trends Food Sci. Technol. 20 , 63–72 (2009).

Chambers, E. IV & Koppel, K. Associations of volatile compounds with sensory aroma and flavor: The complex nature of flavor. Molecules 18 , 4887–4905 (2013).

Pinu, F. R. Metabolomics—The new frontier in food safety and quality research. Food Res. Int. 72 , 80–81 (2015).

Danezis, G. P., Tsagkaris, A. S., Brusic, V. & Georgiou, C. A. Food authentication: state of the art and prospects. Curr. Opin. Food Sci. 10 , 22–31 (2016).

Shepherd, G. M. Smell images and the flavour system in the human brain. Nature 444 , 316–321 (2006).

Meilgaard, M. C. Prediction of flavor differences between beers from their chemical composition. J. Agric. Food Chem. 30 , 1009–1017 (1982).

Xu, L. et al. Widespread receptor-driven modulation in peripheral olfactory coding. Science 368 , eaaz5390 (2020).

Kupferschmidt, K. Following the flavor. Science 340 , 808–809 (2013).

Billesbølle, C. B. et al. Structural basis of odorant recognition by a human odorant receptor. Nature 615 , 742–749 (2023).

Smith, B. Perspective: Complexities of flavour. Nature 486 , S6–S6 (2012).

Pfister, P. et al. Odorant receptor inhibition is fundamental to odor encoding. Curr. Biol. 30 , 2574–2587 (2020).

Moskowitz, H. W., Kumaraiah, V., Sharma, K. N., Jacobs, H. L. & Sharma, S. D. Cross-cultural differences in simple taste preferences. Science 190 , 1217–1218 (1975).

Eriksson, N. et al. A genetic variant near olfactory receptor genes influences cilantro preference. Flavour 1 , 22 (2012).

Ferdenzi, C. et al. Variability of affective responses to odors: Culture, gender, and olfactory knowledge. Chem. Senses 38 , 175–186 (2013).

Lawless, H. T. & Heymann, H. Sensory evaluation of food: Principles and practices. (Springer, New York, NY). https://doi.org/10.1007/978-1-4419-6488-5 (2010).

Colantonio, V. et al. Metabolomic selection for enhanced fruit flavor. Proc. Natl. Acad. Sci. 119 , e2115865119 (2022).

Fritz, F., Preissner, R. & Banerjee, P. VirtualTaste: a web server for the prediction of organoleptic properties of chemical compounds. Nucleic Acids Res 49 , W679–W684 (2021).

Tuwani, R., Wadhwa, S. & Bagler, G. BitterSweet: Building machine learning models for predicting the bitter and sweet taste of small molecules. Sci. Rep. 9 , 1–13 (2019).

Dagan-Wiener, A. et al. Bitter or not? BitterPredict, a tool for predicting taste from chemical structure. Sci. Rep. 7 , 1–13 (2017).

Pallante, L. et al. Toward a general and interpretable umami taste predictor using a multi-objective machine learning approach. Sci. Rep. 12 , 1–11 (2022).

Malavolta, M. et al. A survey on computational taste predictors. Eur. Food Res. Technol. 248 , 2215–2235 (2022).

Lee, B. K. et al. A principal odor map unifies diverse tasks in olfactory perception. Science 381 , 999–1006 (2023).

Mayhew, E. J. et al. Transport features predict if a molecule is odorous. Proc. Natl. Acad. Sci. 119 , e2116576119 (2022).

Niu, Y. et al. Sensory evaluation of the synergism among ester odorants in light aroma-type liquor by odor threshold, aroma intensity and flash GC electronic nose. Food Res. Int. 113 , 102–114 (2018).

Yu, P., Low, M. Y. & Zhou, W. Design of experiments and regression modelling in food flavour and sensory analysis: A review. Trends Food Sci. Technol. 71 , 202–215 (2018).

Oladokun, O. et al. The impact of hop bitter acid and polyphenol profiles on the perceived bitterness of beer. Food Chem. 205 , 212–220 (2016).

Linforth, R., Cabannes, M., Hewson, L., Yang, N. & Taylor, A. Effect of fat content on flavor delivery during consumption: An in vivo model. J. Agric. Food Chem. 58 , 6905–6911 (2010).

Guo, S., Na Jom, K. & Ge, Y. Influence of roasting condition on flavor profile of sunflower seeds: A flavoromics approach. Sci. Rep. 9 , 11295 (2019).

Ren, Q. et al. The changes of microbial community and flavor compound in the fermentation process of Chinese rice wine using Fagopyrum tataricum grain as feedstock. Sci. Rep. 9 , 3365 (2019).

Hastie, T., Friedman, J. & Tibshirani, R. The Elements of Statistical Learning. (Springer, New York, NY). https://doi.org/10.1007/978-0-387-21606-5 (2001).

Dietz, C., Cook, D., Huismann, M., Wilson, C. & Ford, R. The multisensory perception of hop essential oil: a review. J. Inst. Brew. 126 , 320–342 (2020).

Roncoroni, M. & Verstrepen, K. J. Belgian Beer: Tested and Tasted. (Lannoo, 2018).

Meilgaard, M. C. Flavor chemistry of beer: Part II: Flavor and threshold of 239 aroma volatiles. (1975).

Bokulich, N. A. & Bamforth, C. W. The microbiology of malting and brewing. Microbiol. Mol. Biol. Rev. MMBR 77 , 157–172 (2013).

Dzialo, M. C., Park, R., Steensels, J., Lievens, B. & Verstrepen, K. J. Physiology, ecology and industrial applications of aroma formation in yeast. FEMS Microbiol. Rev. 41 , S95–S128 (2017).

Datta, A. et al. Computer-aided food engineering. Nat. Food 3 , 894–904 (2022).

American Society of Brewing Chemists. Beer Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A.).

Olaniran, A. O., Hiralal, L., Mokoena, M. P. & Pillay, B. Flavour-active volatile compounds in beer: production, regulation and control. J. Inst. Brew. 123 , 13–23 (2017).

Verstrepen, K. J. et al. Flavor-active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Meilgaard, M. C. Flavour chemistry of beer. Part I: Flavour interaction between principal volatiles. Master Brew. Assoc. Am. Tech. Q 12 , 107–117 (1975).

Briggs, D. E., Boulton, C. A., Brookes, P. A. & Stevens, R. Brewing 227–254. (Woodhead Publishing). https://doi.org/10.1533/9781855739062.227 (2004).

Bossaert, S., Crauwels, S., De Rouck, G. & Lievens, B. The power of sour - A review: Old traditions, new opportunities. BrewingScience 72 , 78–88 (2019).

Verstrepen, K. J. et al. Flavor active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Snauwaert, I. et al. Microbial diversity and metabolite composition of Belgian red-brown acidic ales. Int. J. Food Microbiol. 221 , 1–11 (2016).

Spitaels, F. et al. The microbial diversity of traditional spontaneously fermented lambic beer. PLoS ONE 9 , e95384 (2014).

Blanco, C. A., Andrés-Iglesias, C. & Montero, O. Low-alcohol Beers: Flavor Compounds, Defects, and Improvement Strategies. Crit. Rev. Food Sci. Nutr. 56 , 1379–1388 (2016).

Jackowski, M. & Trusek, A. Non-alcoholic beer production – an overview. Pol. J. Chem. Technol. 20 , 32–38 (2018).

Takoi, K. et al. The contribution of geraniol metabolism to the citrus flavour of beer: Synergy of geraniol and β-citronellol under coexistence with excess linalool. J. Inst. Brew. 116 , 251–260 (2010).

Kroeze, J. H. & Bartoshuk, L. M. Bitterness suppression as revealed by split-tongue taste stimulation in humans. Physiol. Behav. 35 , 779–783 (1985).

Mennella, J. A. et al. “A spoonful of sugar helps the medicine go down”: Bitter masking by sucrose among children and adults. Chem. Senses 40 , 17–25 (2015).

Wietstock, P., Kunz, T., Perreira, F. & Methner, F.-J. Metal chelation behavior of hop acids in buffered model systems. BrewingScience 69 , 56–63 (2016).

Sancho, D., Blanco, C. A., Caballero, I. & Pascual, A. Free iron in pale, dark and alcohol-free commercial lager beers. J. Sci. Food Agric. 91 , 1142–1147 (2011).

Rodrigues, H. & Parr, W. V. Contribution of cross-cultural studies to understanding wine appreciation: A review. Food Res. Int. 115 , 251–258 (2019).

Korneva, E. & Blockeel, H. Towards better evaluation of multi-target regression models. in ECML PKDD 2020 Workshops (eds. Koprinska, I. et al.) 353–362 (Springer International Publishing, Cham, 2020). https://doi.org/10.1007/978-3-030-65965-3_23 .

Ares, G. Mathematical and Statistical Methods in Food Science and Technology. (Wiley, 2013).

Grinsztajn, L., Oyallon, E. & Varoquaux, G. Why do tree-based models still outperform deep learning on tabular data? Preprint at http://arxiv.org/abs/2207.08815 (2022).

Gries, S. T. Statistics for Linguistics with R: A Practical Introduction. in Statistics for Linguistics with R (De Gruyter Mouton, 2021). https://doi.org/10.1515/9783110718256 .

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2 , 56–67 (2020).

Ickes, C. M. & Cadwallader, K. R. Effects of ethanol on flavor perception in alcoholic beverages. Chemosens. Percept. 10 , 119–134 (2017).

Kato, M. et al. Influence of high molecular weight polypeptides on the mouthfeel of commercial beer. J. Inst. Brew. 127 , 27–40 (2021).

Wauters, R. et al. Novel Saccharomyces cerevisiae variants slow down the accumulation of staling aldehydes and improve beer shelf-life. Food Chem. 398 , 1–11 (2023).

Li, H., Jia, S. & Zhang, W. Rapid determination of low-level sulfur compounds in beer by headspace gas chromatography with a pulsed flame photometric detector. J. Am. Soc. Brew. Chem. 66 , 188–191 (2008).

Dercksen, A., Laurens, J., Torline, P., Axcell, B. C. & Rohwer, E. Quantitative analysis of volatile sulfur compounds in beer using a membrane extraction interface. J. Am. Soc. Brew. Chem. 54 , 228–233 (1996).

Molnar, C. Interpretable Machine Learning: A Guide for Making Black-Box Models Interpretable. (2020).

Zhao, Q. & Hastie, T. Causal interpretations of black-box models. J. Bus. Econ. Stat. Publ. Am. Stat. Assoc. 39 , 272–281 (2019).

Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning. (Springer, 2019).

Labrado, D. et al. Identification by NMR of key compounds present in beer distillates and residual phases after dealcoholization by vacuum distillation. J. Sci. Food Agric. 100 , 3971–3978 (2020).

Lusk, L. T., Kay, S. B., Porubcan, A. & Ryder, D. S. Key olfactory cues for beer oxidation. J. Am. Soc. Brew. Chem. 70 , 257–261 (2012).

Gonzalez Viejo, C., Torrico, D. D., Dunshea, F. R. & Fuentes, S. Development of artificial neural network models to assess beer acceptability based on sensory properties using a robotic pourer: A comparative model approach to achieve an artificial intelligence system. Beverages 5 , 33 (2019).

Gonzalez Viejo, C., Fuentes, S., Torrico, D. D., Godbole, A. & Dunshea, F. R. Chemical characterization of aromas in beer and their effect on consumers liking. Food Chem. 293 , 479–485 (2019).

Gilbert, J. L. et al. Identifying breeding priorities for blueberry flavor using biochemical, sensory, and genotype by environment analyses. PLOS ONE 10 , 1–21 (2015).

Goulet, C. et al. Role of an esterase in flavor volatile variation within the tomato clade. Proc. Natl. Acad. Sci. 109 , 19009–19014 (2012).

Borisov, V. et al. Deep Neural Networks and Tabular Data: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 1–21 https://doi.org/10.1109/TNNLS.2022.3229161 (2022).

Statista. Statista Consumer Market Outlook: Beer - Worldwide.

Seitz, H. K. & Stickel, F. Molecular mechanisms of alcohol-mediated carcinogenesis. Nat. Rev. Cancer 7 , 599–612 (2007).

Voordeckers, K. et al. Ethanol exposure increases mutation rate through error-prone polymerases. Nat. Commun. 11 , 3664 (2020).

Goelen, T. et al. Bacterial phylogeny predicts volatile organic compound composition and olfactory response of an aphid parasitoid. Oikos 129 , 1415–1428 (2020).

Reher, T. et al. Evaluation of hop (Humulus lupulus) as a repellent for the management of Drosophila suzukii. Crop Prot. 124 , 104839 (2019).

Stein, S. E. An integrated method for spectrum extraction and compound identification from gas chromatography/mass spectrometry data. J. Am. Soc. Mass Spectrom. 10 , 770–781 (1999).

American Society of Brewing Chemists. Sensory Analysis Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A., 1992).

McAuley, J., Leskovec, J. & Jurafsky, D. Learning Attitudes and Attributes from Multi-Aspect Reviews. Preprint at https://doi.org/10.48550/arXiv.1210.3926 (2012).

Meilgaard, M. C., Civille, G. V. & Carr, B. T. Sensory Evaluation Techniques. (CRC Press, Boca Raton). https://doi.org/10.1201/b16452 (2014).

Schreurs, M. et al. Data from: Predicting and improving complex beer flavor through machine learning. Zenodo https://doi.org/10.5281/zenodo.10653704 (2024).

Acknowledgements

We thank all lab members for their discussions and thank all tasting panel members for their contributions. Special thanks go out to Dr. Karin Voordeckers for her tremendous help in proofreading and improving the manuscript. M.S. was supported by a Baillet-Latour fellowship, L.C. acknowledges financial support from KU Leuven (C16/17/006), F.A.T. was supported by a PhD fellowship from FWO (1S08821N). Research in the lab of K.J.V. is supported by KU Leuven, FWO, VIB, VLAIO and the Brewing Science Serves Health Fund. Research in the lab of T.W. is supported by FWO (G.0A51.15) and KU Leuven (C16/17/006).

Author information

These authors contributed equally: Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni.

Authors and Affiliations

VIB—KU Leuven Center for Microbiology, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Florian A. Theßeling & Kevin J. Verstrepen

CMPG Laboratory of Genetics and Genomics, KU Leuven, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Leuven Institute for Beer Research (LIBR), Gaston Geenslaan 1, B-3001, Leuven, Belgium

Laboratory of Socioecology and Social Evolution, KU Leuven, Naamsestraat 59, B-3000, Leuven, Belgium

Lloyd Cool, Christophe Vanderaa & Tom Wenseleers

VIB Bioinformatics Core, VIB, Rijvisschestraat 120, B-9052, Ghent, Belgium

Łukasz Kreft & Alexander Botzki

AB InBev SA/NV, Brouwerijplein 1, B-3000, Leuven, Belgium

Philippe Malcorps & Luk Daenen

Contributions

S.P., M.S. and K.J.V. conceived the experiments. S.P., M.S. and K.J.V. designed the experiments. S.P., M.S., M.R., B.H. and F.A.T. performed the experiments. S.P., M.S., L.C., C.V., L.K., A.B., P.M., L.D., T.W. and K.J.V. contributed analysis ideas. S.P., M.S., L.C., C.V., T.W. and K.J.V. analyzed the data. All authors contributed to writing the manuscript.

Corresponding author

Correspondence to Kevin J. Verstrepen .

Ethics declarations

Competing interests.

K.J.V. is affiliated with bar.on. The other authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks Florian Bauer, Andrew John Macintosh and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

  • Supplementary Information
  • Peer Review File
  • Description of Additional Supplementary Files
  • Supplementary Data 1
  • Supplementary Data 2
  • Supplementary Data 3
  • Supplementary Data 4
  • Supplementary Data 5
  • Supplementary Data 6
  • Supplementary Data 7
  • Reporting Summary
  • Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Schreurs, M., Piampongsant, S., Roncoroni, M. et al. Predicting and improving complex beer flavor through machine learning. Nat Commun 15 , 2368 (2024). https://doi.org/10.1038/s41467-024-46346-0

Download citation

Received : 30 October 2023

Accepted : 21 February 2024

Published : 26 March 2024

DOI : https://doi.org/10.1038/s41467-024-46346-0
