Education Technology: An Evidence-Based Review

In recent years, there has been widespread excitement around the potential for technology to transform learning. As investments in education technology continue to grow, students, parents, and teachers face a seemingly endless array of education technologies from which to choose—from digital personalized learning platforms to educational games to online courses. Amidst the excitement, it is important to step back and understand how technology can help—or in some cases hinder—how students learn. This review paper synthesizes and discusses experimental evidence on the effectiveness of technology-based approaches in education and outlines areas for future inquiry. In particular, we examine RCTs across the following categories of education technology: (1) access to technology, (2) computer-assisted learning, (3) technology-enabled behavioral interventions in education, and (4) online learning. While this review focuses on literature from developed countries, it also draws upon extensive research from developing countries. We hope this literature review will advance the knowledge base of how technology can be used to support education, outline key areas for new experimental research, and help drive improvements to the policies, programs, and structures that contribute to successful teaching and learning.

We are extremely grateful to Caitlin Anzelone, Rekha Balu, Peter Bergman, Brad Bernatek, Ben Castleman, Luke Crowley, Angela Duckworth, Jonathan Guryan, Alex Haslam, Andrew Ho, Ben Jones, Matthew Kraft, Kory Kroft, David Laibson, Susanna Loeb, Andrew Magliozzi, Ignacio Martinez, Susan Mayer, Steve Mintz, Piotr Mitros, Lindsay Page, Amanda Pallais, John Pane, Justin Reich, Jonah Rockoff, Sylvi Rzepka, Kirby Smith, and Oscar Sweeten-Lopez for providing helpful and detailed comments as we put together this review. We also thank Rachel Glennerster for detailed support throughout the project, Jessica Mardo and Sophie Shank for edits, and to the Spencer Foundation for financial support. Any errors or omissions are our own. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.


  • Review article
  • Open access
  • Published: 22 January 2020

Mapping research in student engagement and educational technology in higher education: a systematic evidence map

  • Melissa Bond   ORCID: orcid.org/0000-0002-8267-031X 1 ,
  • Katja Buntins 2 ,
  • Svenja Bedenlier 1 ,
  • Olaf Zawacki-Richter 1 &
  • Michael Kerres 2  

International Journal of Educational Technology in Higher Education, volume 17, Article number: 2 (2020)


Digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience. It has also been linked to an increase in behavioural, affective and cognitive student engagement, the facilitation of which is a central concern of educators. In order to delineate the complex nexus of technology and student engagement, this article systematically maps research from 243 studies published between 2007 and 2016. Research within the corpus was predominantly undertaken within the United States and the United Kingdom, with only limited research undertaken in the Global South, and largely focused on the fields of Arts & Humanities, Education, and Natural Sciences, Mathematics & Statistics. Studies most often used quantitative methods, followed by mixed methods, with qualitative methods employed only rarely. Few studies provided a definition of student engagement, and fewer than half were guided by a theoretical framework. The courses investigated used blended learning and text-based tools (e.g. discussion forums) most often, with undergraduate students as the primary target group. Stemming from the use of educational technology, behavioural engagement was by far the most often identified dimension, followed by affective and cognitive engagement. This mapping article provides the grounds for further exploration into discipline-specific use of technology to foster student engagement.

Introduction

Over the past decade, the conceptualisation and measurement of ‘student engagement’ has received increasing attention from researchers, practitioners, and policy makers alike. Seminal works such as Astin’s (1999) theory of involvement, Fredricks, Blumenfeld, and Paris’s (2004) conceptualisation of the three dimensions of student engagement (behavioural, emotional, cognitive), and sociocultural theories of engagement such as Kahu (2013) and Kahu and Nelson (2018), have done much to shape and refine our understanding of this complex phenomenon. However, criticism about the strength and depth of student engagement theorising remains (e.g. Boekaerts, 2016; Kahn, 2014; Zepke, 2018), the quality of which has had a direct impact on the rigour of subsequent research (Lawson & Lawson, 2013; Trowler, 2010), prompting calls for further synthesis (Azevedo, 2015; Eccles, 2016).

In parallel to this increased attention on student engagement, digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience (Barak, 2018; Henderson, Selwyn, & Aston, 2017; Selwyn, 2016). International recognition of the importance of ICT skills and digital literacy has been growing, alongside mounting recognition of its importance for active citizenship (Choi, Glassman, & Cristol, 2017; OECD, 2015a; Redecker, 2017), and the development of interdisciplinary and collaborative skills (Barak & Levenberg, 2016; Oliver & de St Jorre, Trina, 2018). Using technology has the potential to make teaching and learning processes more intensive (Kerres, 2013), improve student self-regulation and self-efficacy (Alioon & Delialioğlu, 2017; Bouta, Retalis, & Paraskeva, 2012), increase participation and involvement in courses as well as the wider university community (Junco, 2012; Salaber, 2014), and predict increased student engagement (Chen, Lambert, & Guidry, 2010; Rashid & Asghar, 2016). There is, however, no guarantee of active student engagement as a result of using technology (Kirkwood, 2009), with Tamim, Bernard, Borokhovski, Abrami, and Schmid’s (2011) second-order meta-analysis finding only a small to moderate impact on student achievement across 40 years. Rather, careful planning, sound pedagogy and appropriate tools are vital (Englund, Olofsson, & Price, 2017; Koehler & Mishra, 2005; Popenici, 2013), as “technology can amplify great teaching, but great technology cannot replace poor teaching” (OECD, 2015b, p. 4).

Due to its complexity, educational technology research has struggled to find a common definition and terminology with which to talk about student engagement, which has resulted in inconsistency across the field. For example, whilst 77% of articles reviewed by Henrie, Halverson, and Graham (2015) operationalised engagement from a behavioural perspective, most of the articles did not clearly define engagement, which is no longer considered acceptable in student engagement research (Appleton, Christenson, & Furlong, 2008; Christenson, Reschly, & Wylie, 2012). Relatedly, educational technology research has often lacked theoretical guidance (Al-Sakkaf, Omar, & Ahmad, 2019; Hew, Lan, Tang, Jia, & Lo, 2019; Lundin, Bergviken Rensfeldt, Hillman, Lantz-Andersson, & Peterson, 2018). A review of 44 random articles published in 2014 in the journals Educational Technology Research & Development and Computers & Education, for example, revealed that more than half had no guiding conceptual or theoretical framework (Antonenko, 2015), and only 13 out of 62 studies in a systematic review of flipped learning in engineering education reported theoretical grounding (Karabulut-Ilgu, Jaramillo Cherrez, & Jahren, 2018). Therefore, calls have been made for a greater understanding of the role that educational technology plays in affecting student engagement, in order to strengthen teaching practice and lead to improved outcomes for students (Castañeda & Selwyn, 2018; Krause & Coates, 2008; Nelson Laird & Kuh, 2005).

A reflection upon prior research that has been undertaken in the field is a necessary first step to engage in meaningful discussion on how to foster student engagement in the digital age. In support of this aim, this article provides a synthesis of student engagement theory and research, and systematically maps empirical higher education research between 2007 and 2016 on student engagement and educational technology. Synthesising the vast body of literature on student engagement (for previous literature and systematic reviews, see Additional file 1), this article develops “a tentative theory” in the hopes of “plot[ting] the conceptual landscape…[and chart] possible routes to explore it” (Antonenko, 2015, pp. 57–67) for researchers, practitioners, learning designers, administrators and policy makers. It then discusses student engagement against the background of educational technology research, exploring prior literature and systematic reviews that have been undertaken. The systematic review search method is then outlined, followed by the presentation and discussion of findings.

Literature review

What is student engagement?

Student engagement has been linked to improved achievement, persistence and retention (Finn, 2006; Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008), with disengagement having a profound effect on student learning outcomes and cognitive development (Ma, Han, Yang, & Cheng, 2015), and being a predictor of student dropout in both secondary school and higher education (Finn & Zimmer, 2012). Student engagement is a multifaceted and complex construct (Appleton et al., 2008; Ben-Eliyahu, Moore, Dorph, & Schunn, 2018), which some have called a ‘meta-construct’ (e.g. Fredricks et al., 2004; Kahu, 2013), and likened to blind men describing an elephant (Baron & Corbin, 2012; Eccles, 2016). There is ongoing disagreement about whether there are three components (e.g. Eccles, 2016)—affective/emotional, cognitive and behavioural—or whether there are four, with the recent suggested addition of agentic engagement (Reeve, 2012; Reeve & Tseng, 2011) and social engagement (Fredricks, Filsecker, & Lawson, 2016). There has also been confusion as to whether the terms ‘engagement’ and ‘motivation’ can and should be used interchangeably (Reschly & Christenson, 2012), especially when used by policy makers and institutions (Eccles & Wang, 2012). However, the prevalent understanding across the literature is that motivation is an antecedent to engagement: it is the intent and unobservable force that energises behaviour (Lim, 2004; Reeve, 2012; Reschly & Christenson, 2012), whereas student engagement is energy and effort in action; an observable manifestation (Appleton et al., 2008; Eccles & Wang, 2012; Kuh, 2009; Skinner & Pitzer, 2012), evidenced through a range of indicators.

Whilst it is widely accepted that no one definition exists that will satisfy all stakeholders (Solomonides, 2013), and no one project can be expected to examine every sub-construct of student engagement (Kahu, 2013), it is important for each research project to begin with a clear definition of its own understanding (Boekaerts, 2016). Therefore, in this project, student engagement is defined as follows:

Student engagement is the energy and effort that students employ within their learning community, observable via any number of behavioural, cognitive or affective indicators across a continuum. It is shaped by a range of structural and internal influences, including the complex interplay of relationships, learning activities and the learning environment. The more students are engaged and empowered within their learning community, the more likely they are to channel that energy back into their learning, leading to a range of short- and long-term outcomes that can likewise further fuel engagement.

Dimensions and indicators of student engagement

There are three widely accepted dimensions of student engagement: affective, cognitive and behavioural. Within each dimension there are several indicators of engagement (see Additional file 2), as well as of disengagement (see Additional file 2), which is now seen as a separate and distinct construct from engagement. It should be stated, however, that whilst these have been drawn from a range of literature, this is not a finite list, and it is recognised that students might experience these indicators on a continuum at varying times (Coates, 2007; Payne, 2017), depending on their valence (positive or negative) and activation (high or low) (Pekrun & Linnenbrink-Garcia, 2012). There has also been disagreement in terms of which dimension the indicators align with. For example, Järvelä, Järvenoja, Malmberg, Isohätälä, and Sobocinski (2016) argue that ‘interaction’ extends beyond behavioural engagement, covering both cognitive and/or emotional dimensions, as it involves collaboration between students, and Lawson and Lawson (2013) believe that ‘effort’ and ‘persistence’ are cognitive rather than behavioural constructs, as they “represent cognitive dispositions toward activity rather than an activity unto itself” (p. 465), which is represented in the table through the indicator ‘stay on task/focus’ (see Additional file 2). Further consideration of these disagreements represents an area for future research, however, as they are beyond the scope of this paper.

Student engagement within educational technology research

The potential of educational technology to improve student engagement has long been recognised (Norris & Coutas, 2014); however, it is not merely a case of technology plus students equals engagement. Without careful planning and sound pedagogy, technology can promote disengagement and impede rather than help learning (Howard, Ma, & Yang, 2016; Popenici, 2013). Whilst still a young area, most of the research undertaken to gain insight into this has focused on undergraduate students (e.g. Henrie et al., 2015; Webb, Clough, O’Reilly, Wilmott, & Witham, 2017), with Chen et al. (2010) finding a positive relationship between the use of technology and student engagement, particularly earlier in university study. Research has also been predominantly STEM and medicine focused (e.g. Li, van der Spek, Feijs, Wang, & Hu, 2017; Nikou & Economides, 2018), with at least five literature or systematic reviews published in the last 5 years focused on medicine, and nursing in particular (see Additional file 3). This indicates that further synthesis is needed of research in other disciplines, such as Arts & Humanities and Education, as well as further investigation into whether research continues to focus on undergraduate students.

The five most researched technologies in Henrie et al.’s (2015) review were online discussion boards, general websites, learning management systems (LMS), general campus software and videos, whereas Schindler, Burkholder, Morad, and Marsh’s (2017) literature review concentrated on social networking sites (Facebook and Twitter), digital games, wikis, web-conferencing software and blogs. Schindler et al. found that most of these technologies had a positive impact on multiple indicators of student engagement across the three dimensions of engagement, with digital games, web-conferencing software and Facebook the most effective. However, it must be noted that they considered only seven indicators of student engagement, and future work could extend this by considering further indicators. Other reviews that have found at least a small positive impact on student engagement include those focused on audience response systems (Hunsu, Adesope, & Bayly, 2016; Kay & LeSage, 2009), mobile learning (Kaliisa & Picard, 2017), and social media (Cheston, Flickinger, & Chisolm, 2013). Specific indicators of engagement that increased as a result of technology include interest and enjoyment (Li et al., 2017), improved confidence (Smith & Lambert, 2014) and attitudes (Nikou & Economides, 2018), as well as enhanced relationships with peers and teachers (e.g. Alrasheedi, Capretz, & Raza, 2015; Atmacasoy & Aksu, 2018).

Literature and systematic reviews focused on student engagement and technology do not always include information on where studies have been conducted. Out of 27 identified reviews (see Additional file 3), only 14 report the countries included, and two of these were explicitly focused on a specific region or country, namely Africa and Turkey. Most of the research has been conducted in the USA, followed by the UK, Taiwan, Australia and China. Table 1 depicts the three countries from which most studies originated in the respective reviews, and highlights a clear lack of research conducted within mainland Europe, South America and Africa. Whilst this could be due to the choice of databases in which the literature was searched for, it nevertheless highlights a substantial gap in the literature, and to that end, it will be interesting to see whether this review substantiates or contradicts these trends.

Research into student engagement and educational technology has predominantly used a quantitative methodology (see Additional file 3), with 11 literature and systematic reviews reporting that surveys, particularly self-report Likert-scale instruments, are the most used source of measurement (e.g. Henrie et al., 2015). Reviews that have included research using a range of methodologies have found only a limited number of studies employing qualitative methods (e.g. Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012; Kay & LeSage, 2009; Lundin et al., 2018). This has led to calls for further qualitative research exploring student engagement and technology, as well as for more rigorous research designs (e.g. Li et al., 2017; Nikou & Economides, 2018), including stronger sampling strategies and data collection, in experimental studies in particular (Cheston et al., 2013; Connolly et al., 2012). However, not all reviews included information on methodologies used. Crook (2019), in his recent editorial in the British Journal of Educational Technology, stated that research methodology is a “neglected topic” (p. 487) within educational technology research, and stressed its importance for conducting studies that delve deeper into phenomena (e.g. longitudinal studies).

Therefore, this article presents an initial “evidence map” (Miake-Lye, Hempel, Shanman, & Shekelle, 2016, p. 19) of systematically identified literature on student engagement and educational technology within higher education, undertaken through a systematic review, in order to address the issues raised by prior research and to identify research gaps. These issues include the disparity between fields of study and study levels researched, the geographical distribution of studies, the methodologies used, and the theoretical fuzziness surrounding student engagement. This article, however, is intended to provide an initial overview of the systematic review method employed, as well as an overview of the overall corpus. Further synthesis of possible correlations between student engagement and disengagement indicators and the co-occurrence of technology tools will be undertaken within field-of-study-specific articles (e.g. Bedenlier, 2020b; Bedenlier, 2020a), allowing more meaningful guidance on applying the findings in practice.

The following research questions guide this enquiry:

How do the studies in the sample ground student engagement and align with theory?

Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used? Which indicators of student disengagement?

What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Overview of the study

With the intent to systematically map empirical research on student engagement and educational technology in higher education, we conducted a systematic review. A systematic review is an explicitly and systematically conducted literature review that answers a specific question through applying a replicable search strategy, with studies then included or excluded based on explicit criteria (Gough, Oliver, & Thomas, 2012). Studies included for review are then coded and synthesised into findings that shed light on gaps, contradictions or inconsistencies in the literature, as well as providing guidance on applying findings in practice. This contribution maps the research corpus of 243 studies that were identified through a systematic search and ensuing random parameter-based sampling.

Search strategy and selection procedure

The initial inclusion criteria for the systematic review were peer-reviewed articles in the English language, empirically reporting on students and student engagement in higher education, and making use of educational technology. The search was limited to records between 1995 and 2016, chosen due to the implementation of the first Virtual Learning Environments and Learning Management Systems within higher education (see Bond, 2018). Articles were limited to those published in peer-reviewed journals, due to the rigorous process under which they are published and their trustworthiness in academia (Nicholas et al., 2015), although concerns within the scientific community about the peer-review process are acknowledged (e.g. Smith, 2006).

Discussion arose on how to approach the “hard-to-detect” (O’Mara-Eves et al., 2014, p. 51) concept of student engagement with regard to sensitivity versus precision (Brunton, Stansfield, & Thomas, 2012), particularly in light of engagement being Henrie et al.’s (2015) most important search term. The decision was made that the concept ‘student engagement’ would be identified from titles and abstracts at a later stage, during the screening process. In this way, articles that are indeed concerned with student engagement, but which use different terms to describe the concept, would still be included. Given the nature of student engagement as a meta-construct (e.g. Appleton et al., 2008; Christenson et al., 2012; Kahu, 2013), limiting the search to only articles including the term engagement might miss important research on other elements of student engagement. Hence, we opted for recall over precision. According to Gough et al. (2012, p. 13), “electronic searching is imprecise and captures many studies that employ the same terms without sharing the same focus”, but a more precise search would lead to disregarding studies that analyse the construct yet use different terms to describe it.

With this in mind, the search strategy to identify relevant studies was developed iteratively with support from the University Research Librarian. As outlined in O’Mara-Eves et al. (2014) as a standard approach, we used reviewer knowledge—in this case strongly supported by certified expertise—and previous literature (e.g. Henrie et al., 2015; Kahu, 2013) to elicit concepts with potential importance under the topics of student engagement, higher education and educational technology. The final search string (see Fig. 1) encompasses clusters of different educational technologies that were searched for separately, in order to avoid an overly long search string. It was decided not to include any brand names (e.g. Facebook, Twitter, Moodle), because it was reasoned that in scientific publications the broader term (e.g. social media) would be used. The final search string was slightly adapted, e.g. the format required for truncations or wildcards, according to the settings of each database used.

Figure 1. Final search terms used in the systematic review

Four databases (ERIC, Web of Science, Scopus and PsycINFO) were searched in July 2017, and three researchers and a student assistant screened the abstracts and titles of the retrieved references between August and November 2017, using EPPI Reviewer 4.0. An initial 77,508 references were retrieved and, with the elimination of duplicate records, 53,768 references remained (see Fig. 2). A first cursory screening of records revealed that older research was more concerned with technologies that are now considered outdated (e.g. overhead projectors, floppy disks). Therefore, we opted to adjust the period to include research published between 2007 and 2016, a phase of research and practice entitled ‘online learning in the digital age’ (Bond, 2018). Whilst we initially opted for recall over precision, the decision was then made to search for specific facets of the student engagement construct (e.g. deep learning, interest and persistence) within EPPI-Reviewer, in order to further refine the corpus. These adaptations led to a remaining 18,068 records.

Figure 2. Systematic review PRISMA flow chart (slightly modified after Brunton et al., 2012, p. 86; Moher, Liberati, Tetzlaff, & Altman, 2009, p. 8)

Four researchers screened the first 150 titles and abstracts, in order to iteratively establish a joint understanding of the inclusion criteria. The remaining references were distributed equally amongst the screening team, which resulted in the inclusion of 4152 potentially relevant articles. Given the large number of articles for full-text screening, whilst facing constrained time as a condition of project-based and funded work, it was decided that a sample of articles would be drawn from this corpus for further analysis. With the intention to draw a sample that estimates the population parameters within a predetermined error range, we used methods of sample size estimation from the social sciences (Kupper & Hafner, 1989), implemented in the R package MBESS (Kelley, Lai, Lai, & Suggests, 2018). Accepting a 5% error range, a proportion of 0.5 and an alpha of 5%, 349 articles were sampled, with this sample then stratified by publishing year, as student engagement has become much more prevalent (Zepke, 2018) and educational technology has become more differentiated within the last decade (Bond, 2018). Two researchers screened the first 100 articles on full text, reaching an agreement of 88% on inclusion/exclusion. The researchers then discussed the discrepancies and came to an agreement on the remaining 12%. It was decided that further comparison screening was needed to increase the level of reliability. After screening the sample on full text, 232 articles remained for data extraction, which contained 243 studies.
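The sampling arithmetic described above can be sketched as follows. This is a minimal illustration of the standard normal-approximation formula for estimating a proportion, with an optional finite-population correction; the authors used the Kupper and Hafner (1989) method via the R package MBESS, so their exact figure (349) differs slightly from this simpler textbook formula.

```python
import math

def sample_size_proportion(e, p=0.5, z=1.96, population=None):
    """Sample size needed to estimate a proportion within +/- e
    at roughly 95% confidence (z = 1.96)."""
    n0 = z ** 2 * p * (1 - p) / e ** 2      # infinite-population formula
    if population is not None:              # finite-population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 5% error range, proportion 0.5, drawn from 4152 potentially relevant articles:
sample_size_proportion(0.05)                   # 385 (infinite population)
sample_size_proportion(0.05, population=4152)  # 352, close to the 349 reported
```

The proportion p = 0.5 is the conservative choice: it maximises p(1 − p) and therefore the required sample size, whatever the true proportion turns out to be.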

Data extraction process

In order to extract the article data, an extensive coding system was developed, including codes to extract information on the set-up and execution of the study (e.g. methodology, study sample) as well as information on the learning scenario, the mode of delivery and the educational technology used. Learning scenarios included broader pedagogies, such as social collaborative learning and self-determined learning, but also specific pedagogies such as flipped learning, given the increasing number of studies and interest in these approaches (e.g. Lundin et al., 2018). Specific examples of student engagement and/or disengagement were coded under cognitive, affective or behavioural (dis)engagement. The facets of student (dis)engagement were identified based on the literature review undertaken (see Additional file 2), and applied in this detailed manner to capture not only the overarching dimensions of the concept, but also their diverse sub-meanings. New indicators also emerged during the coding process, which had not initially been identified from the literature review, including ‘confidence’ and ‘assuming responsibility’. The 243 studies were coded with this extensive code set and any disagreements that occurred between the coders were reconciled.

As over 50 individual educational technology applications and tools were identified in the 243 studies, in line with results found in other large-scale systematic reviews (e.g. Lai & Bower, 2019), concerns were raised over how the research team could meaningfully analyse and report the results. The decision was therefore made to employ Bower’s (2016) typology of learning technologies (see Additional file 4), in order to channel the tools into groups that share the same characteristics or “structure of information” (Bower, 2016, p. 773). Whilst it is acknowledged that some of the technology could be classified into more than one type within the typology—e.g. wikis can be used for individual composition, for collaborative tasks, or for knowledge organisation and sharing—“the type of learning that results from the use of the tool is dependent on the task and the way people engage with it rather than the technology itself”; therefore, “the typology is presented as descriptions of what each type of tool enables and example use cases rather than prescriptions of any particular pedagogical value system” (Bower, 2016, p. 774). For further elaboration on each category, please see Bower (2015).
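In code, such a typology-based grouping amounts to a lookup table from raw tool names to typology categories. The mapping below is a sketch with invented category assignments for illustration only; it does not reproduce Bower's (2016) actual typology labels.

```python
from collections import Counter

# Hypothetical tool-to-type lookup; the category labels are illustrative,
# not Bower's (2016) exact typology.
TOOL_TYPE = {
    "discussion forum": "text-based tools",
    "wiki": "text-based tools",
    "blog": "text-based tools",
    "facebook": "social networking tools",
    "twitter": "social networking tools",
    "clicker": "assessment tools",
}

def type_frequencies(tool_mentions):
    """Collapse raw tool mentions into typology categories and count them."""
    return Counter(TOOL_TYPE.get(t.strip().lower(), "unclassified")
                   for t in tool_mentions)

freqs = type_frequencies(["Wiki", "Blog", "Twitter", "Padlet"])
# counts: text-based tools: 2, social networking tools: 1, unclassified: 1
```

One practical wrinkle such a lookup leaves open is reconciling spelling variants (e.g. "LMS" versus "learning management system") before the mapping is applied.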

Study characteristics

Geographical characteristics

The systematic mapping reveals that the 243 studies were set in 33 different countries, whilst seven studies investigated settings in an international context, and three studies did not indicate their country setting. In 2% of the studies, the country was allocated based on the authors’ country of origin, where both authors came from the same country. The top five countries account for 158 studies (see Fig. 3), with 35.4% (n = 86) of studies conducted in the United States (US), 10.7% (n = 26) in the United Kingdom (UK), 7.8% (n = 19) in Australia, 7.4% (n = 18) in Taiwan, and 3.7% (n = 9) in China. Across the corpus, studies from countries employing English as the official language, or one of the official languages, total 59.7% of the entire sample, followed by East Asian countries, which in total account for 18.8% of the sample. With the exception of the UK, European countries are largely absent from the sample: only 7.3% of the articles originate from this region, with countries such as France, Belgium, Italy and Portugal having no studies, and countries such as Germany and the Netherlands having one each. With eight articles, Spain is the most prolific European country outside of the UK. The geographical distribution of study settings also clearly shows an almost complete absence of studies undertaken within African contexts, with five studies from South Africa and one from Tunisia. Studies from South-East Asia, the Middle East, and South America are likewise low in number in this review. Whilst the global picture evokes an imbalance, this might be partially due to our search and sampling strategy, which focused on English-language journals indexed in four primarily Western-focused databases.

Figure 3. Percentage deviation from the average relative frequencies of the different data collection formats per country (≥ 3 articles). Note: NS = not stated; AUS = Australia; CAN = Canada; CHN = China; HKG = Hong Kong; inter = international; IRI = Iran; JAP = Japan; MYS = Malaysia; SGP = Singapore; ZAF = South Africa; KOR = South Korea; ESP = Spain; SWE = Sweden; TWN = Taiwan; TUR = Turkey; GBR = United Kingdom; USA = United States of America

Methodological characteristics

Within this literature corpus, 103 studies (42%) employed quantitative methods, 84 (35%) mixed methods, and 56 (23%) qualitative methods. Relating these numbers back to the contributing countries, different preferences for and frequencies of methods become apparent (see Fig. 3). As a general tendency, mixed methods and qualitative research occur more often in Western countries, whereas quantitative research is the preferred approach in East Asian countries. For example, studies originating from Australia employ mixed methods research 28% more often than the average, whereas Singapore is far below average in mixed methods research, at 34.5% less than the other countries in the sample. Taiwan, on the other hand, conducts mixed methods studies 23.5% below average and qualitative research 6.4% less often than average, but quantitative research 29.8% above average.
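The country comparisons above, as plotted in Fig. 3, rest on a simple calculation: each country's share of a given method, minus the sample-wide share, expressed in percentage points. The following minimal sketch illustrates that logic; the country/method counts are hypothetical, not the review's actual data.

```python
# Percentage-point deviation of each country's method mix from the
# sample-wide average. All counts here are hypothetical, for illustration.
from collections import Counter

# (country, method) pairs for a toy corpus
studies = [
    ("AUS", "mixed"), ("AUS", "mixed"), ("AUS", "qualitative"),
    ("TWN", "quantitative"), ("TWN", "quantitative"), ("TWN", "mixed"),
    ("USA", "quantitative"), ("USA", "mixed"), ("USA", "qualitative"),
]
METHODS = ("quantitative", "mixed", "qualitative")

def method_shares(pairs):
    """Relative frequency of each method within the given (country, method) pairs."""
    counts = Counter(method for _, method in pairs)
    total = sum(counts.values())
    return {m: counts[m] / total for m in METHODS}

overall = method_shares(studies)

def deviation(country):
    """Country share minus overall share, in percentage points (one decimal)."""
    pairs = [(c, m) for c, m in studies if c == country]
    shares = method_shares(pairs)
    return {m: round(100 * (shares[m] - overall[m]), 1) for m in METHODS}

print(deviation("AUS"))  # the toy Australia leans towards mixed methods
```

By construction, the deviations for any one country sum to (approximately) zero across the three methods, which is why Fig. 3 reports relative preferences rather than absolute volumes.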

Amongst the qualitative studies, qualitative content analysis (n = 30) was the most frequently used analysis approach, followed by thematic analysis (n = 21) and grounded theory (n = 12). In many cases, however, the exact analysis approach was not reported (n = 37), could not be allocated to a specific classification (n = 22), or no method of analysis was identifiable at all (n = 11). Within studies using quantitative methods, mean comparison was used in 100 studies, frequency data was collected and analysed in 83 studies, and regression models were used in 40 studies. Furthermore, looking at the correlations between the different analysis approaches, only one significant correlation can be identified, between mean comparison and frequency data (−.246). Beyond that, correlations are small; for example, only 14% of the studies employed both mean comparisons and regression models.

Study population characteristics

Research in the corpus focused on universities as the prime institution type (n = 191, 79%), followed by 24 (10%) non-specified institution types and colleges (n = 21, 8.2%) (see Fig. 4). Five studies (2%) included institutions classified as 'other', and two studies (0.8%) included both college and university students. The most frequently studied student population was undergraduate students (60%, n = 146), as opposed to 33 studies (14%) focused on postgraduate students (see Fig. 6). A combination of undergraduate and postgraduate students was the subject of interest in 23 studies (9%), with 41 studies (17%) not specifying the level of study of research participants.

Fig. 4. Relative frequencies of study fields by country (countries with ≥ 3 articles). Note. Country abbreviations are as per Fig. 3. A&H = Arts & Humanities; BA&L = Business, Administration and Law; EDU = Education; EM&C = Engineering, Manufacturing & Construction; H&W = Health & Welfare; ICT = Information & Communication Technologies; ID = interdisciplinary; NS,M&S = Natural Science, Mathematics & Statistics; NS = Not specified; SoS = Social Sciences, Journalism & Information

Based on the UNESCO (2015) ISCED classification, eight broad study fields are covered in the sample, with Arts & Humanities (42 studies), Education (42 studies), and Natural Sciences, Mathematics & Statistics (37 studies) being the top three, followed by Health & Welfare (30 studies), Social Sciences, Journalism & Information (22 studies), Business, Administration & Law (19 studies), Information & Communication Technologies (13 studies), and Engineering, Manufacturing & Construction (11 studies); another 26 studies were interdisciplinary in character. One study did not specify a field of study.

An expected value was calculated for each country, indicating how studies should be distributed across disciplines if they followed the overall sample proportions. The actual deviation from this value then showed that several Asian countries are home to more Arts & Humanities articles than expected: Japan with 3.3 articles more, China with 5.4, and Taiwan with 5.9. Internationally located research likewise shows 2.3 more interdisciplinary studies than expected, whereas studies in the Social Sciences occur more often than expected in the UK (5.7 more articles) and Australia (3.3 more articles) but less often than expected across all other countries. Interestingly, the USA has 9.9 fewer studies in Arts & Humanities than expected but 5.6 more than expected in Natural Sciences.

Question 1: How do the studies in the sample ground student engagement and align with theory?

Defining student engagement

It is striking that almost all of the studies (n = 225, 93%) in this corpus lack a definition of student engagement, with only 18 (7%) articles attempting to define the concept. However, this is not too surprising, as the search strategy was set up on the assumption that researchers investigating student engagement (dimensions and indicators) would not necessarily label it as such. When developing their definitions, authors in these 18 studies referenced 22 different sources, with the work of Kuh and colleagues (e.g., Hu & Kuh, 2002; Kuh, 2001; Kuh et al., 2006) and Astin (1984) the only authors referred to more than once. The most popular definition of student engagement within these studies was active participation and involvement in learning and university life (e.g., Bolden & Nahachewsky, 2015; Fukuzawa & Boyd, 2016), which was also found by Joksimović et al. (2018) in their review of MOOC research. Interaction, especially between peers and with faculty, was the next most prevalent definition (e.g., Andrew, Ewens, & Maslin-Prothero, 2015; Bigatel & Williams, 2015). Time and effort was given as a definition in four studies (Gleason, 2012; Hatzipanagos & Code, 2016; Price, Richardson, & Jelfs, 2007; Sun & Rueda, 2012), with expending physical and psychological energy (Ivala & Gachago, 2012) another. This variance in definitions and sources reflects the ongoing complexity of the construct (Zepke, 2018) and reinforces the need for a clearer understanding across the field (Schindler et al., 2017).

Theoretical underpinnings

Reflecting findings from other systematic and literature reviews on the topic (Abdool, Nirula, Bonato, Rajji, & Silver, 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), 59% (n = 143) of studies did not employ a theoretical model in their research. Of the 41% (n = 100) that did, 18 studies drew on social constructivism, followed by the Community of Inquiry model (n = 8), Sociocultural Learning Theory (n = 5), and Community of Practice models (n = 4). These findings also reflect the state of the field in general (Al-Sakkaf et al., 2019; Bond, 2019b; Hennessy, Girvan, Mavrikis, Price, & Winters, 2018).

Another interesting finding is that whilst 144 studies (59%) provided research questions, 99 studies (41%) did not. Although it is recognised that not all studies have research questions (Bryman, 2007), or only develop them throughout the research process, as with grounded theory (Glaser & Strauss, 1967), a surprising number of quantitative studies (36%, n = 37) did not have research questions. This reflects the broader lack of theoretical guidance, as 30 of these 37 studies also did not draw on a theoretical or conceptual framework.

Question 2: Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used, and which indicators of student disengagement?

Student engagement indicators

Within the corpus, the behavioural engagement dimension was documented in some form in 209 studies (86%), whereas the affective dimension was reported in 163 studies (67%) and the cognitive dimension in only 136 studies (56%). However, the ten most often identified student engagement indicators across the studies overall (see Table 2) were evenly distributed over all three dimensions (see Table 3). The indicators participation/interaction/involvement, achievement, and positive interactions with peers and teachers each appear in at least 100 studies, almost double the count of the next most frequent engagement indicator.

Across the 243 studies in the corpus, 117 (48%) showed all three dimensions of affective, cognitive and behavioural student engagement (e.g., Szabo & Schwartz, 2011), including six studies that used established student engagement questionnaires, such as the NSSE (e.g., Delialioglu, 2012), or self-developed questionnaires addressing these three dimensions. Another 54 studies (22%) displayed at least two engagement dimensions (e.g., Hatzipanagos & Code, 2016), including six questionnaire studies, whilst 71 studies (29%) exhibited only one engagement dimension (e.g., Vural, 2013).

Student disengagement indicators

Indicators of student disengagement (see Table 4) were identified considerably less often across the corpus. This could be explained by the studies' purpose of primarily addressing or measuring positive engagement, but it could also reflect a form of self-selection or publication bias, with studies reporting negative results published less frequently. The three most often indicated disengagement indicators were frustration (n = 33, 14%) (e.g., Ikpeze, 2007), opposition/rejection (n = 20, 8%) (e.g., Smidt, Bunk, McGrory, Li, & Gatenby, 2014), and disappointment (e.g., Granberg, 2010) as well as other affective disengagement (n = 18, 7% each).

Technology tool typology and engagement/disengagement indicators

Across the 243 studies, a plethora of over 50 individual educational technology tools was employed. The five most frequently researched tools were LMS (n = 89), discussion forums (n = 80), videos (n = 44), recorded lectures (n = 25), and chat (n = 24). Following a slightly modified version of Bower's (2016) educational tools typology, 17 broad categories of tools were identified (see Additional file 4 for the classification, and section 3.2 for further information). The frequency with which tools from the respective groups were employed in studies varied considerably (see Additional file 4), with the top five categories being text-based tools (n = 138), followed by knowledge organisation & sharing tools (n = 104), multimodal production tools (n = 89), assessment tools (n = 65) and website creation tools (n = 29).

Figure 5 shows what percentage of each engagement dimension (e.g., affective engagement or cognitive disengagement) was fostered through each specific technology type. Given the results in section 4.2.1 on student engagement, it was somewhat unsurprising that text-based tools, knowledge organisation & sharing tools, and multimodal production tools showed the highest proportions of affective, behavioural and cognitive engagement. For example, affective engagement was identified in 163 studies, with 63% of these studies using text-based tools (e.g., Bulu & Yildirim, 2008), and cognitive engagement was identified in 136 studies, with 47% of those using knowledge organisation & sharing tools (e.g., Shonfeld & Ronen, 2015). However, further analysis of studies employing discussion forums (a text-based tool) revealed that, whilst the top affective and behavioural engagement indicators were found in almost two-thirds of studies (see Additional file 5), there was a substantial gap between these and the next most prevalent engagement indicator, with the exact same pattern (and indicators) emerging for wikis. This represents an area for future research.

Fig. 5. Engagement and disengagement by tool typology. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

Interestingly, studies using website creation tools reported more disengagement than engagement indicators across all three domains (see Fig. 5), with studies using assessment tools and social networking tools also reporting increased instances of disengagement across two domains (affective and cognitive, and behavioural and cognitive, respectively). Twenty-three of the studies (79%) using website creation tools used blogs, with students showing, for example, disinterest in the topics chosen (e.g., Sullivan & Longnecker, 2014), anxiety over their lack of blogging knowledge and skills (e.g., Mansouri & Piki, 2016), and, in some cases, continued avoidance of blogs despite introductory training (e.g., Keiller & Inglis-Jassiem, 2015). In studies where assessment tools were used, students found timed assessment stressful, particularly when trying to complete complex mathematical solutions (e.g., Gupta, 2009), as well as quizzes given at the end of lectures, with some students preferring time to absorb the content first (e.g., DePaolo & Wilkinson, 2014). Disengagement in studies where social networking tools were used indicated that some students found it difficult to express themselves in short posts (e.g., Cook & Bissonnette, 2016), that conversations lacked authenticity (e.g., Arnold & Paulus, 2010), and that some did not want to mix personal and academic spaces (e.g., Ivala & Gachago, 2012).

Question 3: What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Learning scenarios

Social-collaborative learning (SCL) was the scenario most often employed, in 58.4% of the sample (n = 142), followed by self-directed learning (SDL) in 43.2% of studies (n = 105) and game-based learning (GBL) in 5.8% (n = 14) (see Fig. 6). Studies coded as SCL included those exploring social learning (Bandura, 1971) and social constructivist approaches (Vygotsky, 1978). Personal learning environments (PLE) were found in 2.9% of studies, 1.3% used other scenarios (n = 3), and another 13.2% did not specify their learning scenario (n = 32). It is noteworthy that in 45% of possible cases employing SDL scenarios, SCL was also used. Other learning scenarios were likewise used mostly in combination with SCL and SDL. Given the rising number of higher education studies exploring flipped learning (Lundin et al., 2018), studies exploring this approach were also specifically coded (3%, n = 7).

Fig. 6. Co-occurrence of learning scenarios across the sample (n = 243). Note. SDL = self-directed learning; SCL = social collaborative learning; GBL = game-based learning; PLE = personal learning environments; other = other learning scenario

Modes of delivery

In 84% of studies (n = 204), a single mode of delivery was used, with blended learning the most researched (109 studies), followed by distance education (72 studies) and face-to-face instruction (55 studies). Of the remaining 39 studies, 12 did not indicate their mode of delivery, whilst the other 27 combined or compared modes of delivery, e.g. comparing face-to-face courses to blended learning, as in the study on using iPads in undergraduate nursing education by Davies (2014).

Educational technology tools investigated

Most studies in this corpus (55%) used technology asynchronously, with 12% researching synchronous tools and 18% using both asynchronous and synchronous tools. Given this heavy reliance on asynchronous technology, the overall results are not surprising. However, within face-to-face contexts, the share of synchronous tools (31%) is almost as high as that of asynchronous tools (41%), whilst within distance education it is surprisingly low (7%).

Tool categories were used in combination, with text-based tools most often used alongside other technology types (see Fig. 7): for example, in 60% of all possible multimodal production tool cases, 69% of all possible synchronous collaboration tool cases, 72% of all possible knowledge organisation & sharing tool cases, a striking 89% of all possible learning software cases, and 100% of all possible MOOC cases. By contrast, text-based tools were never used in combination with games or data analysis tools. Gaming tools, however, were also used in 67% of possible assessment tool cases. Assessment tools constitute somewhat of a special case where website creation tools are concerned, with only 7% of possible cases having employed them.

Fig. 7. Co-occurrence of tools across the sample (n = 243). Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

In order to gain further understanding of how educational technology was used, we examined how often a combination of two variables should occur in the sample and how often it actually occurred, with deviations described as either 'more than' or 'less than' the expected value. This provides further insight into potential gaps in the literature, which can inform future research. For example, an analysis of educational technology tool usage amongst study populations (see Fig. 8) reveals that 5.0 more studies than expected looked at knowledge organisation & sharing tools for graduate students, but 5.0 fewer studies than expected investigated assessment tools for this group. By contrast, 5 more studies than expected researched assessment tools for unspecified study levels, and 4.3 fewer studies than expected employed knowledge organisation & sharing tools for undergraduate students.
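The expected values used throughout this analysis follow standard contingency-table logic: for a given pair of categories, expected count = row total × column total ÷ grand total, with the reported figures being observed minus expected. A minimal sketch of this calculation, using hypothetical counts rather than the review's actual data:

```python
# Observed vs. expected frequency for a combination of two coded variables
# (here: tool category x study level). Expected count follows contingency-
# table logic: row_total * col_total / grand_total. Counts are hypothetical.

def expected_count(observed, row, col):
    """observed maps (row_label, col_label) -> number of studies."""
    grand = sum(observed.values())
    row_total = sum(v for (r, _), v in observed.items() if r == row)
    col_total = sum(v for (_, c), v in observed.items() if c == col)
    return row_total * col_total / grand

observed = {
    ("KO&S", "undergraduate"): 20, ("KO&S", "graduate"): 15,
    ("AT",   "undergraduate"): 25, ("AT",   "graduate"):  5,
}

exp = expected_count(observed, "KO&S", "graduate")
dev = observed[("KO&S", "graduate")] - exp  # positive = more studies than expected
print(round(exp, 1), round(dev, 1))
```

A positive deviation corresponds to a combination being researched 'more than expected' in the text above, a negative deviation to 'less than expected'.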

Fig. 8. Relative frequency of educational technology tools used according to study level. Note. Abbreviations are explained in Fig. 7

Educational technology tools were also used differently from the expected pattern within various fields of study (see Fig. 9), most obviously for the top five tools, but also for virtual worlds, found in 5.8 more Health & Welfare studies than expected, and learning software, used in 6.4 more Arts & Humanities studies than expected. In all other disciplines, learning software was used less often than assumed. Text-based tools were used more often than expected in fields of study that are already text-intensive, including Arts & Humanities, Education, Business, Administration & Law, and the Social Sciences, but less often than expected in fields such as Engineering, Health & Welfare, and Natural Sciences, Mathematics & Statistics. Multimodal production tools were used more often than expected only in Health & Welfare, ICT and the Natural Sciences, and less often than assumed across all other disciplines. Assessment tools deviated most clearly, with 11.9 more studies in Natural Sciences, Mathematics & Statistics than assumed, but 5.2 fewer studies in both Education and Arts & Humanities.

Fig. 9. Relative frequency of educational technology tools used according to field of study. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

With regard to mode of delivery and the educational technology tools used, it is interesting to see that of the five top tools, all except assessment tools were used in face-to-face instruction less often than expected (see Fig. 10), ranging from 1.6 fewer studies for website creation tools to 14.5 fewer for knowledge organisation & sharing tools. Assessment tools, however, were used in 3.3 more studies than expected in face-to-face instruction, but (moderately) less often than assumed in blended learning and distance education formats. Text-based tools, multimodal production tools and knowledge organisation & sharing tools were employed more often than expected in blended and distance learning, most obviously with 13.1 more studies on text-based tools and 8.2 more on knowledge organisation & sharing tools in distance education. Contrary to what one might expect, social networking tools were used in 4.2 fewer studies than expected for this mode of delivery.

Fig. 10. Relative frequency of educational technology tools used according to mode of delivery. Note. Tool abbreviations as per Fig. 7. BL = Blended learning; DE = Distance education; F2F = Face-to-face; NS = Not stated

The findings of this study confirm those of previous research, with the most prolific countries being the US, UK, Australia, Taiwan and China. This is rather representative of the field, with an analysis of instructional design and technology research from 2007 to 2017 listing the most productive countries as the US, Taiwan, UK, Australia and Turkey (Bodily, Leary, & West, 2019 ). Likewise, an analysis of 40 years of research in Computers & Education (CAE) found that the US, UK and Taiwan accounted for 49.9% of all publications (Bond, 2018 ). By contrast, a lack of African research was apparent in this review, which is also evident in educational technology research in top tier peer-reviewed journals, with only 4% of articles published in the British Journal of Educational Technology ( BJET ) in the past decade (Bond, 2019b ) and 2% of articles in the Australasian Journal of Educational Technology (AJET) (Bond, 2018 ) hailing from Africa. Similar results were also found in previous literature and systematic reviews (see Table 1 ), which again raises questions of literature search and inclusion strategies, which will be further discussed in the limitations section.

Whilst other reviews of educational technology and student engagement have found studies to be largely STEM focused (Boyle et al., 2016; Li et al., 2017; Lundin et al., 2018; Nikou & Economides, 2018), this corpus features a more balanced scope of research, with the fields of Arts & Humanities (42 studies, 17.3%) and Education (42 studies, 17.3%) constituting roughly one third of all studies in the corpus, and Natural Sciences, Mathematics & Statistics nevertheless ranking third with 38 studies (15.6%). Beyond these three fields, further research is needed within underrepresented fields of study in order to gain more comprehensive insights into the usage of educational technology tools (Kay & LeSage, 2009; Nikou & Economides, 2018).

Results of the systematic map further confirm the focus that prior educational technology research has placed on undergraduate students as the target group and participants in technology-enhanced learning settings (e.g., Cheston et al., 2013; Henrie et al., 2015). With an overwhelming 146 studies researching undergraduate students, compared to 33 studies on graduate students and 23 investigating both study levels, this also indicates that further investigation into the graduate student experience is needed. Furthermore, the fact that 41 studies do not report the study level of their participants is interesting albeit problematic, as implications cannot easily be drawn for one's own specific teaching context if the target group under investigation is not clearly specified. More precise reporting of participant details, as well as of the study context (country, institution, and study level, to name a few), is needed to transfer and apply study results to practice, making it possible to take into account why some interventions succeed and others do not.

In line with other studies (e.g., Henrie et al., 2015), this review has also demonstrated that student engagement remains an under-theorised concept that is often considered only fragmentarily in research. Whilst studies in this review have often focused on isolated aspects of student engagement, their results are nevertheless interesting and valuable. However, it is important to relate these individual facets to the larger framework of student engagement by considering how they are connected and linked to each other. This is especially helpful for integrating research findings into practice, given that student engagement and disengagement are rarely one-dimensional; it is not enough to focus on one aspect of engagement alone without also looking at the aspects adjacent to it (Pekrun & Linnenbrink-Garcia, 2012). It is also vital, therefore, that researchers develop and refine an understanding of student engagement and make this explicit in their research (Appleton et al., 2008; Christenson et al., 2012).

Reflective of current conversations in the field of educational technology (Bond, 2019b; Castañeda & Selwyn, 2018; Hew et al., 2019), as well as other reviews (Abdool et al., 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), a substantial number of studies in this corpus did not have any theoretical underpinnings. Kaliisa and Picard (2017) argue that, without theory, research can result in disorganised accounts and issues with interpreting data, with research effectively "sit[ting] in a void if it's not theoretically connected" (Kara, 2017, p. 56). Therefore, framing research in educational technology with a stronger theoretical basis can assist with locating the "field's disciplinary alignment" (Crook, 2019, p. 486) and further drive conversations forward.

The application of methods in this corpus was interesting in two ways. First, quantitative studies are prevalent across the 243 articles in the sample; the number of studies employing qualitative research methods was comparatively low (56 studies, as opposed to 84 mixed methods and 103 quantitative studies). This is also reflected in the educational technology field at large: a review of articles published in BJET and Educational Technology Research & Development (ETR&D) from 2002 to 2014 revealed that 40% of articles used quantitative methods, 26% qualitative and 13% mixed (Baydas, Kucuk, Yilmaz, Aydemir, & Goktas, 2015), and likewise a review of educational technology research from Turkey between 1990 and 2011 found that 53% of articles used quantitative methods, 22% qualitative and 10% mixed methods (Kucuk, Aydemir, Yildirim, Arpacik, & Goktas, 2013). Quantitative studies primarily show whether an intervention worked when applied to, for example, a group of students in a certain setting, as in the study on using mobile apps on student performance in engineering education by Jou, Lin, and Tsai (2016); however, not all student engagement indicators can actually be measured in this way. The lower numbers of affective and cognitive engagement found in the studies in the corpus reflect a wider call to the field to increase research on these two domains (Henrie et al., 2015; Joksimović et al., 2018; O'Flaherty & Phillips, 2015; Schindler et al., 2017). Whilst these are arguably more difficult to measure than behavioural engagement, the use of more rigorous and accurate surveys could be one possibility, as surveys can "capture unobservable aspects" (Henrie et al., 2015, p. 45) such as student feelings and the cognitive strategies students employ (Finn & Zimmer, 2012). However, surveys are often lengthy and onerous, or subject to the limitations of self-selection.

Whilst low numbers of qualitative studies researching student engagement and educational technology were previously identified in other student engagement and technology reviews (Connolly et al., 2012; Kay & LeSage, 2009; Lundin et al., 2018), it is studies like that by Lopera Medina (2014) in this sample that reveal how people perceive the educational experience and the actual 'how' of the process. Therefore, more qualitative and ethnographic measures should also be employed, such as student observations with thick descriptions, which can help shed light on the complexity of teaching and learning environments (Fredricks et al., 2004; Heflin, Shewmaker, & Nguyen, 2017). Conducting observations can be costly, however, in both time and money, so this is suggested in combination with computerised learning analytics data, which can provide measurable, objective and timely insight into how certain manifestations of engagement change over time (Henrie et al., 2015; Ma et al., 2015).

Whereas other results of this review have confirmed previous findings in the field, the technology tools used in the studies, considered in relation to student engagement, deviate from earlier reviews. Whilst Henrie et al. (2015) found that the most frequently researched tools were discussion forums, general websites, LMS, general campus software and videos, the studies here focused predominantly on LMS, discussion forums, videos, recorded lectures and chat. Furthermore, whilst Schindler et al. (2017) found that digital games, web-conferencing software and Facebook were the most effective tools for enhancing student engagement, this review found that it was rather text-based tools, knowledge organisation & sharing tools, and multimodal production tools.

Limitations

During the execution of this systematic review, we tried to adhere to the method as rigorously as possible. However, several challenges were encountered, some of which are addressed and discussed in another publication (Bedenlier, 2020b), resulting in limitations to this study. Four large, general educational research databases were searched, which are international in scope. However, by applying the criterion of articles published in English, research published on this topic in other languages was not included in this review. The same applies to research documented in, for example, grey literature, book chapters or monographs, or articles from journals not indexed in the four databases searched. Another limitation is that only research published within the period 2007–2016 was investigated. Whilst we are cognisant of this restriction, we also think that the technological advances and the implications to be drawn from this time-frame relate more meaningfully to the current situation than would have been the case for technologies used in the 1990s (see Bond, 2019b). The sampling strategy also most likely accounts for the low number of studies from certain regions, e.g. South America and Africa.

Studies included in this review represent various academic fields, and they also vary in the rigour with which they were conducted. Harden and Gough (2012) stress that the appraisal of quality and relevance of studies "ensure[s] that only the most appropriate, trustworthy and relevant studies are used to develop the conclusions of the review" (p. 154). We therefore included peer review as a formal inclusion criterion from the beginning, reasoning that published studies met a baseline of quality applicable to research in their specific field; otherwise they would not have been accepted for publication by the respective community. Finally, whilst the studies were diligently read and coded, and disagreements discussed and reconciled, the possibility of having overlooked or misinterpreted information in individual articles cannot be fully excluded.

Finally, the results presented here provide an initial window into the overall body of research identified during the search, and further research is being undertaken to provide deeper insight into discipline-specific use of technology and resulting student engagement, using subsets of this sample (Bedenlier et al., 2020a; Bond, M., Bedenlier, S., Buntins, K., Kerres, M., & Zawacki-Richter, O.: Facilitating student engagement through educational technology: A systematic review in the field of education, forthcoming).

Recommendations for future work and implications for practice

Whilst the evidence map presented in this article has confirmed previous research on the nexus of educational technology and student engagement, it has also elucidated a number of areas that further research is invited to address. Although these findings are similar to those of previous reviews, understanding student engagement fully and comprehensively as a multi-faceted construct requires more than focusing on indicators of engagement that can easily be measured; it requires the more complex endeavour of uncovering and investigating those indicators that reside below the surface. This also includes the careful alignment of theory and methodological design, in order both to adequately analyse the phenomenon under investigation and to contribute to the soundly executed body of research within the field of educational technology. Further research is invited in particular into how educational technology affects cognitive and affective engagement, whilst considering how this fits within the broader sociocultural framework of engagement (Bond & Bedenlier, 2019a). Further research is also invited into how educational technology affects student engagement within fields of study beyond Arts & Humanities, Education, and Natural Sciences, Mathematics & Statistics, as well as within graduate-level courses. The use of more qualitative research methods is particularly encouraged.

The findings of this review suggest that research gaps exist for particular combinations of tools, study levels and modes of delivery. With respect to study level, the use of assessment tools with graduate students, as well as of knowledge organisation & sharing tools with undergraduate students, has been researched far less than expected. The use of text-based tools in Engineering, Health & Welfare and Natural Sciences, Mathematics & Statistics, as well as the use of multimodal production tools outside of these disciplines, are also areas for future research, as is the use of assessment tools in the fields of Education and Arts & Humanities in particular.

With 109 studies in this systematic review using a blended learning design, the argument that online distance education and traditional face-to-face education are becoming increasingly integrated with one another is confirmed. Whilst this indicates that many educators have made the move from face-to-face teaching to technology-enhanced learning, it also makes a case for further professional development, so that educators can apply these tools effectively within their own teaching contexts; this review indicates that further research is needed in particular into the use of social networking tools in online/distance education. The question also needs to be asked not only of why the number of published studies is low within certain countries and regions, but also of the nature of why that is the case. This entails questioning the conditions under which research is being conducted, potentially criticising the publication policies of major, Western-based journals, and ultimately reflecting on one’s own search strategy and research assumptions as a Western educator-researcher.

Based on the findings of this review, educators within higher education institutions are encouraged to use text-based tools, knowledge organisation & sharing tools, and multimodal production tools in particular and, whilst any technology can lead to disengagement if not employed effectively, to be mindful that website creation tools (blogs and ePortfolios), social networking tools and assessment tools were found to be more disengaging than engaging in this review. Educators are therefore encouraged to ensure that students receive sufficient and ongoing training for any new technology used, including those that might appear straightforward, e.g. blogs, and to recognise that students may require extra writing support. Discussion and blog topics should be interesting, allow student agency, and be authentic to students, including through the use of social media. Social networking tools that augment student professional learning networks are particularly useful. Educators should also be aware, however, that some students do not want to mix their academic and personal lives, and so the decision to use certain social platforms could be made together with students.

Availability of data and materials

All data will be made publicly available, as part of the funding requirements, via https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn.

The detailed search strategy, including the modified search strings according to the individual databases, can be retrieved from https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn.

The full code set can be retrieved from the review protocol at https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn.

References

Abdool, P. S., Nirula, L., Bonato, S., Rajji, T. K., & Silver, I. L. (2017). Simulation in undergraduate psychiatry: Exploring the depth of learner engagement. Academic Psychiatry : the Journal of the American Association of Directors of Psychiatric Residency Training and the Association for Academic Psychiatry , 41 (2), 251–261. https://doi.org/10.1007/s40596-016-0633-9 .

Alioon, Y., & Delialioğlu, Ö. (2017). The effect of authentic m-learning activities on student engagement and motivation. British Journal of Educational Technology , 32 , 121. https://doi.org/10.1111/bjet.12559 .

Alrasheedi, M., Capretz, L. F., & Raza, A. (2015). A systematic review of the critical factors for success of mobile learning in higher education (university students’ perspective). Journal of Educational Computing Research , 52 (2), 257–276. https://doi.org/10.1177/0735633115571928 .

Al-Sakkaf, A., Omar, M., & Ahmad, M. (2019). A systematic literature review of student engagement in software visualization: A theoretical perspective. Computer Science Education , 29 (2–3), 283–309. https://doi.org/10.1080/08993408.2018.1564611 .

Andrew, L., Ewens, B., & Maslin-Prothero, S. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing , 32 (4), 22–31.

Antonenko, P. D. (2015). The instrumental value of conceptual frameworks in educational technology research. Educational Technology Research and Development , 63 (1), 53–71. https://doi.org/10.1007/s11423-014-9363-4 .

Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools , 45 (5), 369–386. https://doi.org/10.1002/pits.20303 .

Arnold, N., & Paulus, T. (2010). Using a social networking site for experiential learning: Appropriating, lurking, modeling and community building. Internet and Higher Education , 13 (4), 188–196. https://doi.org/10.1016/j.iheduc.2010.04.002 .

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Development , 25 (4), 297–308.

Astin, A. W. (1999). Student involvement: A developmental theory for higher education. Journal of College Student Development , 40 (5), 518–529. https://www.researchgate.net/publication/220017441 (Original work published July 1984).

Atmacasoy, A., & Aksu, M. (2018). Blended learning at pre-service teacher education in Turkey: A systematic review. Education and Information Technologies , 23 (6), 2399–2422. https://doi.org/10.1007/s10639-018-9723-5 .

Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist , 50 (1), 84–94. https://doi.org/10.1080/00461520.2015.1004069 .

Bandura, A. (1971). Social learning theory . New York: General Learning Press.

Barak, M. (2018). Are digital natives open to change? Examining flexible thinking and resistance to change. Computers & Education , 121 , 115–123. https://doi.org/10.1016/j.compedu.2018.01.016 .

Barak, M., & Levenberg, A. (2016). Flexible thinking in learning: An individual differences measure for learning in technology-enhanced environments. Computers & Education , 99 , 39–52. https://doi.org/10.1016/j.compedu.2016.04.003 .

Baron, P., & Corbin, L. (2012). Student engagement: Rhetoric and reality. Higher Education Research and Development , 31 (6), 759–772. https://doi.org/10.1080/07294360.2012.655711 .

Baydas, O., Kucuk, S., Yilmaz, R. M., Aydemir, M., & Goktas, Y. (2015). Educational technology research trends from 2002 to 2014. Scientometrics , 105 (1), 709–725. https://doi.org/10.1007/s11192-015-1693-4 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020a). Facilitating student engagement through educational technology in higher education: A systematic review in the field of arts & humanities. Australasian Journal of Educational Technology , 36 (4), 27–47. https://doi.org/10.14742/ajet.5477 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020b). Learning by Doing? Reflections on Conducting a Systematic Review in the Field of Educational Technology. In O. Zawacki-Richter, M. Kerres, S. Bedenlier, M. Bond, & K. Buntins (Eds.), Systematic Reviews in Educational Research (Vol. 45 , pp. 111–127). Wiesbaden: Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-27602-7_7 .

Ben-Eliyahu, A., Moore, D., Dorph, R., & Schunn, C. D. (2018). Investigating the multidimensionality of engagement: Affective, behavioral, and cognitive engagement across science activities and contexts. Contemporary Educational Psychology , 53 , 87–105. https://doi.org/10.1016/j.cedpsych.2018.01.002 .

Betihavas, V., Bridgman, H., Kornhaber, R., & Cross, M. (2016). The evidence for ‘flipping out’: A systematic review of the flipped classroom in nursing education. Nurse Education Today , 38 , 15–21. https://doi.org/10.1016/j.nedt.2015.12.010 .

Bigatel, P., & Williams, V. (2015). Measuring student engagement in an online program. Online Journal of Distance Learning Administration , 18 (2), 9.

Bodily, R., Leary, H., & West, R. E. (2019). Research trends in instructional design and technology journals. British Journal of Educational Technology , 50 (1), 64–79. https://doi.org/10.1111/bjet.12712 .

Boekaerts, M. (2016). Engagement as an inherent aspect of the learning process. Learning and Instruction , 43 , 76–83. https://doi.org/10.1016/j.learninstruc.2016.02.001 .

Bolden, B., & Nahachewsky, J. (2015). Podcast creation as transformative music engagement. Music Education Research , 17 (1), 17–33. https://doi.org/10.1080/14613808.2014.969219 .

Bond, M. (2018). Helping doctoral students crack the publication code: An evaluation and content analysis of the Australasian Journal of Educational Technology. Australasian Journal of Educational Technology , 34 (5), 168–183. https://doi.org/10.14742/ajet.4363 .

Bond, M., & Bedenlier, S. (2019a). Facilitating Student Engagement Through Educational Technology: Towards a Conceptual Framework. Journal of Interactive Media in Education , 2019 (1), 1-14. https://doi.org/10.5334/jime.528 .

Bond, M., Zawacki-Richter, O., & Nichols, M. (2019b). Revisiting five decades of educational technology research: A content and authorship analysis of the British Journal of Educational Technology. British Journal of Educational Technology , 50 (1), 12–63. https://doi.org/10.1111/bjet.12730 .

Bouta, H., Retalis, S., & Paraskeva, F. (2012). Utilising a collaborative macro-script to enhance student engagement: A mixed method study in a 3D virtual environment. Computers & Education , 58 (1), 501–517. https://doi.org/10.1016/j.compedu.2011.08.031 .

Bower, M. (2015). A typology of web 2.0 learning technologies . EDUCAUSE Digital Library. Retrieved 20 June 2019, from http://www.educause.edu/library/resources/typology-web-20-learning-technologies .

Bower, M. (2016). Deriving a typology of web 2.0 learning technologies. British Journal of Educational Technology , 47 (4), 763–777. https://doi.org/10.1111/bjet.12344 .

Boyle, E. A., Connolly, T. M., Hainey, T., & Boyle, J. M. (2012). Engagement in digital entertainment games: A systematic review. Computers in Human Behavior , 28 (3), 771–780. https://doi.org/10.1016/j.chb.2011.11.020 .

Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., … Pereira, J. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education , 94 , 178–192. https://doi.org/10.1016/j.compedu.2015.11.003 .

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education , 27 , 1–13. https://doi.org/10.1016/j.iheduc.2015.04.007 .

Brunton, G., Stansfield, C., & Thomas, J. (2012). Finding relevant studies. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 107–134). Los Angeles: Sage.

Bryman, A. (2007). The research question in social research: What is its role? International Journal of Social Research Methodology , 10 (1), 5–20. https://doi.org/10.1080/13645570600655282 .

Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Educational Technology & Society , 11 (1), 132–147.

Bundick, M., Quaglia, R., Corso, M., & Haywood, D. (2014). Promoting student engagement in the classroom. Teachers College Record , 116 (4). Retrieved from http://www.tcrecord.org/content.asp?contentid=17402 .

Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education , 15 (1), 211. https://doi.org/10.1186/s41239-018-0109-y .

Chen, P.-S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of web-based learning technology on college student engagement. Computers & Education , 54 (4), 1222–1232. https://doi.org/10.1016/j.compedu.2009.11.008 .

Cheston, C. C., Flickinger, T. E., & Chisolm, M. S. (2013). Social media use in medical education: A systematic review. Academic Medicine : Journal of the Association of American Medical Colleges , 88 (6), 893–901. https://doi.org/10.1097/ACM.0b013e31828ffc23 .

Choi, M., Glassman, M., & Cristol, D. (2017). What it means to be a citizen in the internet age: Development of a reliable and valid digital citizenship scale. Computers & Education , 107 , 100–112. https://doi.org/10.1016/j.compedu.2017.01.002 .

Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.) (2012). Handbook of research on student engagement . Boston: Springer US.

Coates, H. (2007). A model of online and general campus-based student engagement. Assessment & Evaluation in Higher Education , 32 (2), 121–141. https://doi.org/10.1080/02602930600801878 .

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education , 59 (2), 661–686. https://doi.org/10.1016/j.compedu.2012.03.004 .

Cook, M. P., & Bissonnette, J. D. (2016). Developing preservice teachers’ positionalities in 140 characters or less: Examining microblogging as dialogic space. Contemporary Issues in Technology and Teacher Education (CITE Journal) , 16 (2), 82–109.

Crompton, H., Burke, D., Gregory, K. H., & Gräbe, C. (2016). The use of mobile learning in science: A systematic review. Journal of Science Education and Technology , 25 (2), 149–160. https://doi.org/10.1007/s10956-015-9597-x .

Crook, C. (2019). The “British” voice of educational technology research: 50th birthday reflection. British Journal of Educational Technology , 50 (2), 485–489. https://doi.org/10.1111/bjet.12757 .

Davies, M. (2014). Using the apple iPad to facilitate student-led group work and seminar presentation. Nurse Education in Practice , 14 (4), 363–367. https://doi.org/10.1016/j.nepr.2014.01.006 .

Delialioglu, O. (2012). Student engagement in blended learning environments with lecture-based and problem-based instructional approaches. Educational Technology & Society , 15 (3), 310–322.

DePaolo, C. A., & Wilkinson, K. (2014). Recurrent online quizzes: Ubiquitous tools for promoting student presence, participation and performance. Interdisciplinary Journal of E-Learning and Learning Objects , 10 , 75–91. Retrieved from http://www.ijello.org/Volume10/IJELLOv10p075-091DePaolo0900.pdf .

Doherty, K., & Doherty, G. (2018). Engagement in HCI. ACM Computing Surveys , 51 (5), 1–39. https://doi.org/10.1145/3234149 .

Eccles, J. (2016). Engagement: Where to next? Learning and Instruction , 43 , 71–75. https://doi.org/10.1016/j.learninstruc.2016.02.003 .

Eccles, J., & Wang, M.-T. (2012). Part I commentary: So what is student engagement anyway? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 133–145). Boston: Springer US. Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_6 .

Englund, C., Olofsson, A. D., & Price, L. (2017). Teaching with technology in higher education: Understanding conceptual change and development in practice. Higher Education Research and Development , 36 (1), 73–87. https://doi.org/10.1080/07294360.2016.1171300 .

Fabian, K., Topping, K. J., & Barron, I. G. (2016). Mobile technology and mathematics: Effects on students’ attitudes, engagement, and achievement. Journal of Computers in Education , 3 (1), 77–104. https://doi.org/10.1007/s40692-015-0048-8 .

Filsecker, M., & Kerres, M. (2014). Engagement as a volitional construct. Simulation & Gaming , 45 (4–5), 450–470. https://doi.org/10.1177/1046878114553569 .

Finn, J. (2006). The adult lives of at-risk students: The roles of attainment and engagement in high school (NCES 2006-328) . Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from https://nces.ed.gov/pubs2006/2006328.pdf .

Finn, J., & Zimmer, K. (2012). Student engagement: What is it? Why does it matter? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 97–131). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_5 .

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research , 74 (1), 59–109. https://doi.org/10.3102/00346543074001059 .

Fredricks, J. A., Filsecker, M., & Lawson, M. A. (2016). Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learning and Instruction , 43 , 1–4. https://doi.org/10.1016/j.learninstruc.2016.02.002 .

Fredricks, J. A., Wang, M.-T., Schall Linn, J., Hofkens, T. L., Sung, H., Parr, A., & Allerton, J. (2016). Using qualitative methods to develop a survey measure of math and science engagement. Learning and Instruction , 43 , 5–15. https://doi.org/10.1016/j.learninstruc.2016.01.009 .

Fukuzawa, S., & Boyd, C. (2016). Student engagement in a large classroom: Using technology to generate a hybridized problem-based learning experience in a large first year undergraduate class. Canadian Journal for the Scholarship of Teaching and Learning , 7 (1). https://doi.org/10.5206/cjsotl-rcacea.2016.1.7 .

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago: Aldine.

Gleason, J. (2012). Using technology-assisted instruction and assessment to reduce the effect of class size on student outcomes in undergraduate mathematics courses. College Teaching , 60 (3), 87–94.

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews . Los Angeles: Sage.

Granberg, C. (2010). Social software for reflective dialogue: Questions about reflection and dialogue in student Teachers’ blogs. Technology, Pedagogy and Education , 19 (3), 345–360. https://doi.org/10.1080/1475939X.2010.513766 .

Greenwood, L., & Kelly, C. (2019). A systematic literature review to explore how staff in schools describe how a sense of belonging is created for their pupils. Emotional and Behavioural Difficulties , 24 (1), 3–19. https://doi.org/10.1080/13632752.2018.1511113 .

Gupta, M. L. (2009). Using emerging technologies to promote student engagement and learning in agricultural mathematics. International Journal of Learning , 16 (10), 497–508. https://doi.org/10.18848/1447-9494/CGP/v16i10/46658 .

Harden, A., & Gough, D. (2012). Quality and relevance appraisal. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 153–178). London: Sage.

Hatzipanagos, S., & Code, J. (2016). Open badges in online learning environments: Peer feedback and formative assessment as an engagement intervention for promoting agency. Journal of Educational Multimedia and Hypermedia , 25 (2), 127–142.

Heflin, H., Shewmaker, J., & Nguyen, J. (2017). Impact of mobile technology on student attitudes, engagement, and learning. Computers & Education , 107 , 91–99. https://doi.org/10.1016/j.compedu.2017.01.006 .

Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education , 42 (8), 1567–1579. https://doi.org/10.1080/03075079.2015.1007946 .

Hennessy, S., Girvan, C., Mavrikis, M., Price, S., & Winters, N. (2018). Editorial. British Journal of Educational Technology , 49 (1), 3–5. https://doi.org/10.1111/bjet.12598 .

Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education , 90 , 36–53. https://doi.org/10.1016/j.compedu.2015.09.005 .

Hew, K. F., & Cheung, W. S. (2013). Use of web 2.0 technologies in K-12 and higher education: The search for evidence-based practice. Educational Research Review , 9 , 47–64. https://doi.org/10.1016/j.edurev.2012.08.001 .

Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the “theory” within the field of educational technology research? British Journal of Educational Technology , 50 (3), 956–971. https://doi.org/10.1111/bjet.12770 .

Howard, S. K., Ma, J., & Yang, J. (2016). Student rules: Exploring patterns of students’ computer-efficacy and engagement with digital technologies in learning. Computers & Education , 101 , 29–42. https://doi.org/10.1016/j.compedu.2016.05.008 .

Hu, S., & Kuh, G. D. (2002). Being (dis)engaged in educationally purposeful activities: The influences of student and institutional characteristics. Research in Higher Education , 43 (5), 555–575. https://doi.org/10.1023/A:1020114231387 .

Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers & Education , 94 , 102–119. https://doi.org/10.1016/j.compedu.2015.11.013 .

Ikpeze, C. (2007). Small group collaboration in peer-led electronic discourse: An analysis of group dynamics and interactions involving Preservice and Inservice teachers. Journal of Technology and Teacher Education , 15 (3), 383–407.

Ivala, E., & Gachago, D. (2012). Social media for enhancing student engagement: The use of Facebook and blogs at a university of technology. South African Journal of Higher Education , 26 (1), 152–167.

Järvelä, S., Järvenoja, H., Malmberg, J., Isohätälä, J., & Sobocinski, M. (2016). How do types of interaction and phases of self-regulated learning set a stage for collaborative engagement? Learning and Instruction , 43 , 39–51. https://doi.org/10.1016/j.learninstruc.2016.01.005 .

Joksimović, S., Poquet, O., Kovanović, V., Dowell, N., Mills, C., Gašević, D., … Brooks, C. (2018). How do we model learning at scale? A systematic review of research on MOOCs. Review of Educational Research , 88 (1), 43–86. https://doi.org/10.3102/0034654317740335 .

Jou, M., Lin, Y.-T., & Tsai, H.-C. (2016). Mobile APP for motivation to learning: An engineering case. Interactive Learning Environments , 24 (8), 2048–2057. https://doi.org/10.1080/10494820.2015.1075136 .

Junco, R. (2012). The relationship between frequency of Facebook use, participation in Facebook activities, and student engagement. Computers & Education , 58 (1), 162–171. https://doi.org/10.1016/j.compedu.2011.08.004 .

Kahn, P. (2014). Theorising student engagement in higher education. British Educational Research Journal , 40 (6), 1005–1018. https://doi.org/10.1002/berj.3121 .

Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education , 38 (5), 758–773. https://doi.org/10.1080/03075079.2011.598505 .

Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research and Development , 37 (1), 58–71. https://doi.org/10.1080/07294360.2017.1344197 .

Kaliisa, R., & Picard, M. (2017). A systematic review on mobile learning in higher education: The African perspective. The Turkish Online Journal of Educational Technology , 16 (1). Retrieved from https://files.eric.ed.gov/fulltext/EJ1124918.pdf .

Kara, H. (2017). Research and evaluation for busy students and practitioners: A time-saving guide , (2nd ed., ). Bristol: Policy Press.

Karabulut-Ilgu, A., Jaramillo Cherrez, N., & Jahren, C. T. (2018). A systematic review of research on the flipped learning method in engineering education: Flipped learning in engineering education. British Journal of Educational Technology , 49 (3), 398–411. https://doi.org/10.1111/bjet.12548 .

Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education , 53 (3), 819–827. https://doi.org/10.1016/j.compedu.2009.05.001 .

Keiller, L., & Inglis-Jassiem, G. (2015). A lesson in listening: Is the student voice heard in the rush to incorporate technology into health professions education? African Journal of Health Professions Education , 7 (1), 47–50. https://doi.org/10.7196/ajhpe.371 .

Kelley, K., & Lai, K. (2018). Package ‘MBESS’. Retrieved from https://cran.r-project.org/web/packages/MBESS/MBESS.pdf

Kerres, M. (2013). Mediendidaktik. Konzeption und Entwicklung mediengestützter Lernangebote . München: Oldenbourg.

Kirkwood, A. (2009). E-learning: You don’t always get what you hope for. Technology, Pedagogy and Education , 18 (2), 107–121. https://doi.org/10.1080/14759390902992576 .

Koehler, M., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research , 32 (2), 131–152.

Krause, K.-L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education , 33 (5), 493–505. https://doi.org/10.1080/02602930701698892 .

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education , 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016 .

Kuh, G. D. (2001). The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties . Bloomington: Indiana University Center for Postsecondary Research. Retrieved from http://nsse.indiana.edu/2004_annual_report/pdf/2004_conceptual_framework.pdf .

Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College Student Development , 50 (6), 683–706. https://doi.org/10.1353/csd.0.0099 .

Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education , 79 (5), 540–563. Retrieved from http://www.jstor.org.ezproxy.umuc.edu/stable/25144692 .

Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2006). What matters to student success: A review of the literature . Washington, DC: National Postsecondary Education Cooperative.

Kupper, L. L., & Hafner, K. B. (1989). How appropriate are popular sample size formulas? The American Statistician , 43 (2), 101–105.

Lai, J. W. M., & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education , 133 , 27–42. https://doi.org/10.1016/j.compedu.2019.01.010 .

Lawson, M. A., & Lawson, H. A. (2013). New conceptual frameworks for student engagement research, policy, and practice. Review of Educational Research , 83 (3), 432–479. https://doi.org/10.3102/0034654313480891 .

Leach, L., & Zepke, N. (2011). Engaging students in learning: A review of a conceptual organiser. Higher Education Research and Development , 30 (2), 193–204. https://doi.org/10.1080/07294360.2010.509761 .

Li, J., van der Spek, E. D., Feijs, L., Wang, F., & Hu, J. (2017). Augmented reality games for learning: A literature review. In N. Streitz, & P. Markopoulos (Eds.), Lecture Notes in Computer Science. Distributed, Ambient and Pervasive Interactions , (vol. 10291, pp. 612–626). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-58697-7_46 .

Lim, C. (2004). Engaging learners in online learning environments. TechTrends , 48 (4), 16–23. Retrieved from https://link.springer.com/content/pdf/10.1007%2FBF02763440.pdf .

Lopera Medina, S. (2014). Motivation conditions in a foreign language reading comprehension course offering both a web-based modality and a face-to-face modality (Las condiciones de motivación en un curso de comprensión de lectura en lengua extranjera (LE) ofrecido tanto en la modalidad presencial como en la modalidad a distancia en la web). PROFILE: Issues in Teachers’ Professional Development , 16 (1), 89–104. Retrieved from https://search.proquest.com/docview/1697487398?accountid=12968 .

Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: A systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1), 1. https://doi.org/10.1186/s41239-018-0101-6 .

Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for engagement in an online learning environment based on learning analytics approach: The role of the instructor. The Internet and Higher Education , 24 , 26–34. https://doi.org/10.1016/j.iheduc.2014.09.005 .

Mahatmya, D., Lohman, B. J., Matjasko, J. L., & Farb, A. F. (2012). Engagement across developmental periods. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 45–63). Boston: Springer US. Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_3 .

Mansouri, A. S., & Piki, A. (2016). An exploration into the impact of blogs on students’ learning: Case studies in postgraduate business education. Innovations in Education and Teaching International , 53 (3), 260–273. https://doi.org/10.1080/14703297.2014.997777 .

Martin, A. J. (2012). Motivation and engagement: Conceptual, operational, and empirical clarity. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 303–311). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_14 .

McCutcheon, K., Lohan, M., Traynor, M., & Martin, D. (2015). A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. Journal of Advanced Nursing , 71 (2), 255–270. https://doi.org/10.1111/jan.12509 .

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Systematic Reviews , 5 , 28. https://doi.org/10.1186/s13643-016-0204-x .

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ (Clinical Research Ed.) , 339 , b2535. https://doi.org/10.1136/bmj.b2535 .

Nelson Laird, T. F., & Kuh, G. D. (2005). Student experiences with information technology and their relationship to other aspects of student engagement. Research in Higher Education , 46 (2), 211–233. https://doi.org/10.1007/s11162-004-1600-y .

Nguyen, L., Barton, S. M., & Nguyen, L. T. (2015). iPads in higher education-hype and hope. British Journal of Educational Technology , 46 (1), 190–203. https://doi.org/10.1111/bjet.12137 .

Nicholas, D., Watkinson, A., Jamali, H. R., Herman, E., Tenopir, C., Volentine, R., … Levine, K. (2015). Peer review: Still king in the digital age. Learned Publishing , 28 (1), 15–21. https://doi.org/10.1087/20150104 .

Nikou, S. A., & Economides, A. A. (2018). Mobile-based assessment: A literature review of publications in major referred journals from 2009 to 2018. Computers & Education , 125 , 101–119. https://doi.org/10.1016/j.compedu.2018.06.006 .

Norris, L., & Coutas, P. (2014). Cinderella’s coach or just another pumpkin? Information communication technologies and the continuing marginalisation of languages in Australian schools. Australian Review of Applied Linguistics , 37 (1), 43–61 Retrieved from http://www.jbe-platform.com/content/journals/10.1075/aral.37.1.03nor .

OECD (2015a). Schooling redesigned. Educational Research and Innovation . OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/schooling-redesigned_9789264245914-en .

OECD (2015b). Students, computers and learning . PISA: OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/students-computers-and-learning_9789264239555-en .

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002 .

O’Gorman, E., Salmon, N., & Murphy, C.-A. (2016). Schools as sanctuaries: A systematic review of contextual factors which contribute to student retention in alternative education. International Journal of Inclusive Education , 20 (5), 536–551. https://doi.org/10.1080/13603116.2015.1095251 .

Oliver, B., & de St Jorre, Trina, J. (2018). Graduate attributes for 2020 and beyond: recommendations for Australian higher education providers. Higher Education Research and Development , 1–16. https://doi.org/10.1080/07294360.2018.1446415 .

O’Mara-Eves, A., Brunton, G., McDaid, D., Kavanagh, J., Oliver, S., & Thomas, J. (2014). Techniques for identifying cross-disciplinary and ‘hard-to-detect’ evidence for systematic review. Research Synthesis Methods , 5 (1), 50–59. https://doi.org/10.1002/jrsm.1094 .

Payne, L. (2017). Student engagement: Three models for its investigation. Journal of Further and Higher Education , 3 (2), 1–17. https://doi.org/10.1080/0309877X.2017.1391186 .

Pekrun, R., & Linnenbrink-Garcia, L. (2012). Academic emotions and student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 259–282). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_12 .

Popenici, S. (2013). Towards a new vision for university governance, pedagogies and student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 23–42). Bingley: Emerald.

Price, L., Richardson, J. T., & Jelfs, A. (2007). Face-to-face versus online tutoring support in distance education. Studies in Higher Education , 32 (1), 1–20.

Quin, D. (2017). Longitudinal and contextual associations between teacher–student relationships and student engagement. Review of Educational Research , 87 (2), 345–387. https://doi.org/10.3102/0034654316669434 .

Rashid, T., & Asghar, H. M. (2016). Technology use, self-directed learning, student engagement and academic performance: Examining the interrelations. Computers in Human Behavior , 63 , 604–612. https://doi.org/10.1016/j.chb.2016.05.084 .

Redecker, C. (2017). European framework for the digital competence of educators . Luxembourg: Office of the European Union.

Redmond, P., Heffernan, A., Abawi, L., Brown, A., & Henderson, R. (2018). An online engagement framework for higher education. Online Learning , 22 (1). https://doi.org/10.24059/olj.v22i1.1175 .

Reeve, J. (2012). A self-determination theory perspective on student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 149–172). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_7 .

Reeve, J., & Tseng, C.-M. (2011). Agency as a fourth aspect of students’ engagement during learning activities. Contemporary Educational Psychology , 36 (4), 257–267. https://doi.org/10.1016/j.cedpsych.2011.05.002 .

Reschly, A. L., & Christenson, S. L. (2012). Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 3–19). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_1 .

Salaber, J. (2014). Facilitating student engagement and collaboration in a large postgraduate course using wiki-based activities. The International Journal of Management Education , 12 (2), 115–126. https://doi.org/10.1016/j.ijme.2014.03.006 .

Schindler, L. A., Burkholder, G. J., Morad, O. A., & Marsh, C. (2017). Computer-based technology and student engagement: A critical review of the literature. International Journal of Educational Technology in Higher Education , 14 (1), 253. https://doi.org/10.1186/s41239-017-0063-0 .

Selwyn, N. (2016). Digital downsides: Exploring university students’ negative engagements with digital technology. Teaching in Higher Education , 21 (8), 1006–1021. https://doi.org/10.1080/13562517.2016.1213229 .

Shonfeld, M., & Ronen, I. (2015). Online learning for students from diverse backgrounds: Learning disability students, excellent students and average students. IAFOR Journal of Education , 3 (2), 13–29.

Skinner, E., & Pitzer, J. R. (2012). Developmental dynamics of student engagement, coping, and everyday resilience. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 21–44). Boston: Springer US.

Smidt, E., Bunk, J., McGrory, B., Li, R., & Gatenby, T. (2014). Student attitudes about distance education: Focusing on context and effective practices. IAFOR Journal of Education , 2 (1), 40–64.

Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine , 99 , 178–182.

Smith, T., & Lambert, R. (2014). A systematic review investigating the use of twitter and Facebook in university-based healthcare education. Health Education , 114 (5), 347–366. https://doi.org/10.1108/HE-07-2013-0030 .

Solomonides, I. (2013). A relational and multidimensional model of student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 43–58). Bingley: Emerald.

Sosa Neira, E. A., Salinas, J., & de Benito, B. (2017). Emerging technologies (ETs) in education: A systematic review of the literature published between 2006 and 2016. International Journal of Emerging Technologies in Learning (IJET) , 12 (05), 128. https://doi.org/10.3991/ijet.v12i05.6939 .

Sullivan, M., & Longnecker, N. (2014). Class blogs as a teaching tool to promote writing and student interaction. Australasian Journal of Educational Technology , 30 (4), 390–401. https://doi.org/10.14742/ajet.322 .

Sun, J. C.-Y., & Rueda, R. (2012). Situational interest, computer self-efficacy and self-regulation: Their impact on student engagement in distance education. British Journal of Educational Technology , 43 (2), 191–204. https://doi.org/10.1111/j.1467-8535.2010.01157.x .

Szabo, Z., & Schwartz, J. (2011). Learning methods for teacher education: The use of online discussions to improve critical thinking. Technology, Pedagogy and Education , 20 (1), 79–94. https://doi.org/10.1080/1475939x.2010.534866 .

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research , 81 (1), 4–28. https://doi.org/10.3102/0034654310393361 .

Trowler, V. (2010). Student engagement literature review . York: The Higher Education Academy Retrieved from website: https://www.heacademy.ac.uk/system/files/studentengagementliteraturereview_1.pdf .

Van Rooij, E., Brouwer, J., Fokkens-Bruinsma, M., Jansen, E., Donche, V., & Noyens, D. (2017). A systematic review of factors related to first-year students’ success in Dutch and Flemish higher education. Pedagogische Studien , 94 (5), 360–405 Retrieved from https://repository.uantwerpen.be/docman/irua/cebc4c/149722.pdf .

Vural, O. F. (2013). The impact of a question-embedded video-based learning tool on E-learning. Educational Sciences: Theory and Practice , 13 (2), 1315–1323.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes . Cambridge: Harvard University Press.

Webb, L., Clough, J., O’Reilly, D., Wilmott, D., & Witham, G. (2017). The utility and impact of information communication technology (ICT) for pre-registration nurse education: A narrative synthesis systematic review. Nurse Education Today , 48 , 160–171. https://doi.org/10.1016/j.nedt.2016.10.007 .

Wekullo, C. S. (2019). International undergraduate student engagement: Implications for higher education administrators. Journal of International Students , 9 (1), 320–337. https://doi.org/10.32674/jis.v9i1.257 .

Wimpenny, K., & Savin-Baden, M. (2013). Alienation, agency and authenticity: A synthesis of the literature on student engagement. Teaching in Higher Education , 18 (3), 311–326. https://doi.org/10.1080/13562517.2012.725223 .

Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist , 52 (1), 17–37. https://doi.org/10.1080/00461520.2016.1207538 .

Zepke, N. (2014). Student engagement research in higher education: Questioning an academic orthodoxy. Teaching in Higher Education , 19 (6), 697–708. https://doi.org/10.1080/13562517.2014.901956 .

Zepke, N. (2018). Student engagement in neo-liberal times: What is missing? Higher Education Research and Development , 37 (2), 433–446. https://doi.org/10.1080/07294360.2017.1370440 .

Zepke, N., & Leach, L. (2010). Improving student engagement: Ten proposals for action. Active Learning in Higher Education , 11 (3), 167–177. https://doi.org/10.1177/1469787410379680 .

Zhang, A., & Aasheim, C. (2011). Academic success factors: An IT student perspective. Journal of Information Technology Education: Research , 10 , 309–331. https://doi.org/10.28945/1518 .


Acknowledgements

The authors thank the two student assistants who helped during the article retrieval and screening stage.

This research resulted from the ActiveLearn project, funded by the Bundesministerium für Bildung und Forschung (BMBF-German Ministry of Education and Research) [grant number 16DHL1007].

Author information

Authors and affiliations

Faculty of Education and Social Sciences (COER), Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany

Melissa Bond, Svenja Bedenlier & Olaf Zawacki-Richter

Learning Lab, Universität Duisburg-Essen, Essen, Germany

Katja Buntins & Michael Kerres


Contributions

All authors contributed to the design and conceptualisation of the systematic review. MB, KB and SB conducted the systematic review search and data extraction. MB undertook the literature review on student engagement and educational technology, co-wrote the method, results, discussion and conclusion section. KB designed and executed the sampling strategy and produced all of the graphs and tables, as well as assisted with the formulation of the article. SB co-wrote the method, results, discussion and conclusion sections, and proof read the introduction and literature review sections. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Melissa Bond .

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Literature reviews (LR) and systematic reviews (SR) on student engagement

Additional file 2.

Indicators of engagement and disengagement

Additional file 3.

Literature reviews (LR) and systematic reviews (SR) on student engagement and technology in higher education (HE)

Additional file 4.

Educational technology tool typology based on Bower ( 2016 ) and Educational technology tools used

Additional file 5.

Text-based tool examples by engagement domain

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Bond, M., Buntins, K., Bedenlier, S. et al. Mapping research in student engagement and educational technology in higher education: a systematic evidence map. Int J Educ Technol High Educ 17, 2 (2020). https://doi.org/10.1186/s41239-019-0176-8


Received: 01 May 2019

Accepted: 17 December 2019

Published: 22 January 2020

DOI: https://doi.org/10.1186/s41239-019-0176-8


  • Educational technology
  • Higher education
  • Systematic review
  • Evidence map
  • Student engagement

  • Perspective
  • Open access
  • Published: 11 August 2020

Schooling and Covid-19: lessons from recent research on EdTech

  • Robert Fairlie 1 &
  • Prashant Loyalka 2  

npj Science of Learning volume 5, Article number: 13 (2020)


The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes. Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling. In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We found that EdTech was a perfect substitute for traditional learning. In Russia, we further explored how much EdTech can substitute for traditional learning. We found that EdTech substitutes only to a limited extent. The findings from these large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren.


The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes, and that limited evidence is mixed.1,2 For example, previous studies examining the performance of students in online courses generally find that they do not perform as well as students in traditional courses. On the other hand, recent large-scale evaluations of supplemental computer-assisted learning programs show large positive effects on test scores. One concern, however, is that EdTech is often evaluated as a supplemental after-school program rather than as a direct substitute for traditional learning. Supplemental programs inherently have an advantage in that they provide more time to learn the material.

Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling.3,4 In both, we focused on whether and how EdTech can substitute for in-person instruction (being careful to control for time on task). In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We followed students ages 9–13 for several months over the academic year. When we examined the impacts of each supplemental program, we found that EdTech and workbook exercise sessions of equal time and content outside of school hours had the same effect on standardized math test scores and on grades in math classes. As such, EdTech appeared to be a perfect substitute for traditional learning.
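The logic of this equal-dosage comparison can be illustrated with a minimal simulation. The sketch below is entirely hypothetical (the sample size, effect size, and data are invented, not the study's): two arms receive identical content for identical time and differ only in medium, so under perfect substitution their mean gains should be statistically indistinguishable.

```python
import random
import statistics

random.seed(0)

# Hypothetical data, not the study's: both arms get the same content for
# the same amount of time, so under perfect substitution they share one
# true mean gain (here an invented 0.20 standard deviations).
N = 500
TRUE_GAIN = 0.20

edtech_gains = [random.gauss(TRUE_GAIN, 1.0) for _ in range(N)]    # software sessions
workbook_gains = [random.gauss(TRUE_GAIN, 1.0) for _ in range(N)]  # paper sessions

# Difference in mean gains and its standard error under independent sampling.
diff = statistics.mean(edtech_gains) - statistics.mean(workbook_gains)
se = (statistics.variance(edtech_gains) / N
      + statistics.variance(workbook_gains) / N) ** 0.5

# With equal true effects, the difference should sit within ~2 SE of zero.
print(f"difference in mean gains: {diff:+.3f} (SE {se:.3f})")
```

A real analysis would use the study's actual regression specification, covariates, and clustered standard errors; this only sketches the idea of comparing arms that hold time on task fixed.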

In Russia, we built on these findings by further exploring how much EdTech can substitute for traditional learning. We examined whether providing students ages 9–11 with no EdTech, a base level of EdTech (~45 min per week), or double that level of EdTech can improve standardized test scores and grades. We found that EdTech can substitute for traditional learning only to a limited extent. There is a diminishing marginal rate of substitution for traditional learning from doubling the amount of EdTech use (that is, when we doubled the amount of EdTech used, we did not find that test-score gains doubled). We also found that the additional time on EdTech decreased schoolchildren’s motivation and engagement with the subject material.
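The diminishing-returns point can be made concrete with a small arithmetic sketch. The dose-response numbers below are invented for illustration, not the study's estimates: the second dose of EdTech time buys a smaller gain than the first.

```python
# Invented dose-response numbers (standard-deviation units of test-score
# gain), illustrating concavity: doubling EdTech time from 45 to 90
# minutes per week adds less than the first 45 minutes did.
effect_by_minutes = {0: 0.00, 45: 0.12, 90: 0.18}

gain_first_dose = effect_by_minutes[45] - effect_by_minutes[0]    # +0.12 SD
gain_second_dose = effect_by_minutes[90] - effect_by_minutes[45]  # ~+0.06 SD

print(f"first 45 min/week: +{gain_first_dose:.2f} SD")
print(f"next 45 min/week:  +{gain_second_dose:.2f} SD")
```

Under these assumed numbers the marginal return halves with the second dose, which is the pattern the Russian experiment points to; the study additionally found the extra dose reduced motivation and engagement, which pure test-score arithmetic does not capture.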

The findings from the large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren. There are two general takeaways: First, to a certain extent, EdTech can successfully substitute for traditional learning. Second, there are limits on how much EdTech may be beneficial. Admittedly, we need to be careful about extrapolating from the smaller amount of technology substitution in our experiments to the full-scale substitution in the face of the coronavirus pandemic. However, these studies may offer important lessons. For example, a balanced approach to learning in which schoolchildren intermingle work on electronic devices and work with traditional materials might be optimal. Schools could mail workbooks to students or recommend that students print out exercises to break up the amount of continuous time schoolchildren spend on devices. This might keep students engaged throughout the day and avoid problems associated with removing the structure of classroom schedules. Schools and families can devise creative remote learning solutions that include a combination of EdTech and more traditional forms of learning. Activities such as reading books, running at-home experiments, and art projects can also be used to break up extensive use of technology in remote instruction.

Bulman, G. & Fairlie, R. W. in Handbook of the Economics of Education (eds Hanushek, E., Machin, S. & Woessmann, L.) 239–280 (North-Holland, 2016).

Escueta, M., Quan, V., Nickow, A. J. & Oreopoulos, P. Education Technology: An Evidence-Based Review (National Bureau of Economic Research Working Paper No. 23744, 2017).

Bettinger, E. et al. Does EdTech Substitute for Traditional Learning? Experimental Estimates of the Educational Production Function (National Bureau of Economic Research Working Paper, 2020).

Ma, Y., Fairlie, R. W., Loyalka, P. & Rozelle, S. Isolating the “Tech” from EdTech: Experimental Evidence on Computer Assisted Learning in China (National Bureau of Economic Research Working Paper, 2020).


Acknowledgements

We would like to thank the numerous people that helped us with this research.

Author information

Authors and affiliations

Department of Economics, University of California, Santa Cruz, USA

Robert Fairlie

Graduate School of Education/Freeman Spogli Institute for International Studies, Stanford University, Stanford, USA

Prashant Loyalka


Contributions

R.F.: contributed to analysis and writing. P.L.: contributed to analysis and writing. Both authors are accountable for the accuracy and integrity of all of the work. The authors are co-first authors, having provided equal contributions to the work.

Corresponding author

Correspondence to Robert Fairlie .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Fairlie, R., Loyalka, P. Schooling and Covid-19: lessons from recent research on EdTech. npj Sci. Learn. 5, 13 (2020). https://doi.org/10.1038/s41539-020-00072-6


Received: 04 May 2020

Accepted: 08 July 2020

Published: 11 August 2020

DOI: https://doi.org/10.1038/s41539-020-00072-6



15 EdTech research papers that we share all the time

We hope you saw our recent blog post responding to questions we often get about interesting large-scale EdTech initiatives. Another question we are often asked is: “What EdTech research should I know about?” 

As Sara’s blog post explains, one of the Hub’s core spheres of work is research, so we ourselves are very interested in the answer to this question. Katy’s latest blog post explains how the Hub’s research programme is addressing this question through a literature review to create a foundation for further research.  While the literature review is in progress, we thought we would share an initial list of EdTech papers that we often reach for. At the Hub we are fortunate enough to have authors of several papers on this list as members of our team. 

All papers on this list are linked to a record in the EdTech Hub’s growing document library – where you will find the citation and source to the full text. This library is currently an alpha version. This means it’s the first version of the service and we’re testing how it works for you. If you have any feedback or find any issues with our evidence library, please get in touch.

Tablet use in schools: a critical review of the evidence for learning outcomes

This critical review by our own Björn Haßler, Sara Hennessy, and Louis Major has been cited over 200 times since it was published in 2016. It examines evidence from 23 studies on tablet use at the primary and secondary school levels. It discusses the fragmented nature of the knowledge base and limited rigorous evidence on tablet use in education.

Haßler, B., Major, L., & Hennessy, S. (2016) Tablet use in schools: a critical review of the evidence for learning outcomes . Journal of Computer Assisted Learning, 32(2), 139-156.

The impact and reach of MOOCs: a developing countries’ perspective

This article challenges the narrative that Massive Open Online Courses (MOOCs) are a solution to low and middle-income countries’ (LMIC) lack of access to education, examining the features of MOOCs from their perspectives. It argues that a complicated set of conditions, including access, language, and computer literacy, among others, challenge the viability of MOOCs as a solution for populations in LMIC. 

Liyanagunawardena, T., Williams, S., & Adams, A. (2013) The impact and reach of MOOCs: a developing countries’ perspective. eLearning Papers , 33(33).

Technology and education – Why it’s crucial to be critical

A thought-provoking read, Selwyn’s book chapter argues that technology and education should continuously be viewed through a critical lens. It points to how the use of technology in education is entwined with issues of inequality, domination, and exploitation, and offers suggestions for how to grapple with these issues. 

Selwyn, N. (2015) Technology and education – Why it’s crucial to be critical. In S. Bulfin, N. F. Johnson & L. Rowan (Eds.), Critical Perspectives on Technology and Education (pp. 245-255). Basingstoke and St. Martins, New York: Palgrave Macmillan.

Moving beyond the predictable failure of Ed-Tech initiatives

This article argues that a narrow vision of digital technology, which ignores the complexity of education, is becoming an obstacle to improvement and transformation of education. Specifically, the authors critically reflect on common approaches to introducing digital technology in education under the guise of promoting equality and digital inclusion.

Sancho-Gil, J.M., Rivera-Vargas, P. & Miño-Puigcercós, R. (2019) Moving beyond the predictable failure of Ed-Tech initiatives. Learning, Media and Technology , early view. DOI: 10.1080/17439884.2019.1666873

Synergies Between the Principles for Digital Development and Four Case Studies

The REAL Centre’s report, which includes contributions from the Hub’s own ranks, is one of the few we’ve seen that provides an in-depth exploration of how the Principles for Digital Development apply to the education sector. It uses four case studies on the work of the Aga Khan Foundation, Camfed, the Punjab Education and Technology Board, and the Varkey Foundation. 

REAL Centre (2018). Synergies Between the Principles for Digital Development and Four Case Studies. Cambridge, UK: Research for Equitable Access and Learning (REAL) Centre, Faculty of Education, University of Cambridge .

Education technology map: guidance document

This report by the Hub’s Jigsaw colleagues accompanies a comprehensive map of 401 resources with evidence on the use of EdTech in low-resource environments. The mapping draws on sources such as journal indices, online research, evaluation repositories, resource centres, and experts, and codes each resource against criteria including the geographical location of the study, the outcomes studied, and the type of EdTech introduced. While not inclusive of the latest EdTech research and evidence (from 2016 to the present), this mapping represents a strong starting point for understanding what we know about EdTech as well as the characteristics of the existing evidence.

Muyoya, C., Brugha, M., Hollow, D. (2016). Education technology map: guidance document. Jigsaw, United Kingdom.

Scaling Access & Impact: Realizing the Power of EdTech

Commissioned by Omidyar Network and written by RTI, this executive summary (with the full report expected soon) is a useful examination of the factors needed to enable, scale, and sustain equitable EdTech on a national basis. Four country reports on Chile, China, Indonesia, and the United States examine at-scale access and use of EdTech across a broad spectrum of students. It also provides a framework for an ecosystem that will allow EdTech to be equitable and able to be scaled.  

Scaling Access & Impact: Realizing the Power of EdTech (Executive Summary). Omidyar Network.

Perspectives on Technology, Resources and Learning – Productive Classroom Practices, Effective Teacher Professional Development

If you are interested in how technology can be used in the classroom and to support teacher professional development, this report by the Hub’s Björn Haßler and members of the Faculty of Education at the University of Cambridge emphasizes the key point that technology should be seen as complementary to, rather than as a replacement for, teachers. As the authors put it, “the teacher and teacher education are central for the successful integration of digital technology into the classroom.” The report is also accompanied by a toolkit (linked below) with questions that can be used to interrogate EdTech interventions.

Haßler, B., Major, L., Warwick, P., Watson, S., Hennessy, S., & Nichol, B. (2016). Perspectives on Technology, Resources and Learning – Productive Classroom Practices, Effective Teacher Professional Development . Faculty of Education, University of Cambridge. DOI:10.5281/zenodo.2626440

Haßler, B., Major, L., Warwick, P., Watson, S., Hennessy, S., & Nichol, B. (2016). A short guide on the use of technology in learning: Perspectives and Toolkit for Discussion. Faculty of Education, University of Cambridge. DOI:10.5281/zenodo.2626660

Teacher Factors Influencing Classroom Use of ICT in Sub-Saharan Africa

In this paper, the Hub’s Sara Hennessy and co-authors synthesise literature on teachers’ use of ICT, with a focus on using ICT to improve the quality of teaching and learning. They find evidence supporting the integration of ICT into subject learning, rather than treating it as a discrete subject, and for giving teachers relevant preparation during pre- and in-service training to use ICT in classrooms. Although this evidence has been available for a decade, the implications of the paper’s findings are still not often reflected in practice.

Hennessy, S., Harrison, D., & Wamakote, L. (2010). Teacher Factors Influencing Classroom Use of ICT in Sub-Saharan Africa. Itupale Online Journal of African Studies, 2, 39- 54.

Information and Communications Technologies in Secondary Education in Sub-Saharan Africa: Policies, Practices, Trends, and Recommendations

This landscape review by Burns and co-authors offers a useful descriptive starting point for understanding technology use in sub-Saharan Africa in secondary education, including the policy environment, key actors, promising practices, challenges, trends, and opportunities. The report includes four case studies on South Africa, Mauritius, Botswana, and Cape Verde. 

Burns, M., Santally, M. I., Halkhoree, R., Sungkur, K. R., Juggurnath, B., & Rajabalee, Y. B. (2019). Information and Communications Technologies in Secondary Education in Sub-Saharan Africa: Policies, Practices, Trends, and Recommendations. Mastercard Foundation.

The influence of infrastructure, training, content and communication on the success of NEPAD’S pilot e-Schools in Kenya

This study examines the impact of training teachers to use ICT on the success of NEPAD’s e-Schools. The e-Schools’ objectives were to impart ICT skills to students, enhance teachers’ capacities through the use of ICT in teaching, improve school management, and increase access to education. Unlike other studies on the subject, Nyagowa, Ocholla, and Mutula crucially recognise that while teachers received technical ICT training, they did not receive training on pedagogies for integrating ICT in teaching and learning.

Nyagowa, H. O., Ocholla, D. N., & Mutula, S. M. (2014). The influence of infrastructure, training, content and communication on the success of NEPAD’s pilot e-Schools in Kenya. Information Development, 30(3), 235-246.

Education in Conflict and Crisis: How Can Technology Make a Difference?

This landscape review identifies ICT projects supporting education in conflict and crisis settings. It finds that most of the projects operate in post-conflict settings and focus on the long-term development of such places. The report homes in on the major thematic areas of professional development and student learning. It also presents directions for further research, including considerations of conflict sensitivity and inclusion in the use of ICT.

Dahya, N. (2016). Education in Conflict and Crisis: How Can Technology Make a Difference? A Landscape Review. GIZ.

Does technology improve reading outcomes? Comparing the effectiveness and cost-effectiveness of ICT interventions for early-grade reading in Kenya

This randomized controlled trial contributes to the limited evidence base on the effects of different types of ICT investment on learning outcomes. All groups received the ‘base’ initiative, which comprised training for teachers and headteachers in literacy and numeracy, books for every student, teacher guides closely matched to the content of the students’ books, and a modest ICT component (tablets provided only to government-funded instructional supervisors). The RCT then compared outcomes across three groups: (1) the base program plus e-readers for students, (2) the base program plus tablets for teachers, and (3) a control group that received only the base program. The paper finds that the classroom-level ICT investments did not improve literacy outcomes significantly more than the base program alone, and that cost considerations are crucial in selecting ICT investments in education.

Piper, B., Zuilkowski, S., Kwayumba, D., & Strigel, C. (2016). Does technology improve reading outcomes? Comparing the effectiveness and cost-effectiveness of ICT interventions for early-grade reading in Kenya. International Journal of Educational Development (49), 204-214.

[FORTHCOMING] Technology in education in low-income countries: Problem analysis and focus of the EdTech Hub’s work

Informed by the research cited in this list (and much more), the Hub will soon publish a problem analysis. It will define our focus and the scope of our work. To give a taste of what is to come, the problem analysis will explain why we will prioritise teachers and marginalised groups and use a systems lens. It will also explore emergent challenges in EdTech research, design, and implementation.

EdTech Hub. (2020). Technology in education in low-income countries: Problem analysis and focus of the Hub’s work (EdTech Hub Working Paper No. 5). London, UK. https://doi.org/10.5281/zenodo.3377829

It is important to note that we have included a mix of research types at varying levels of rigour, from landscape reviews and evidence maps to critical reviews and case studies. Our list is not comprehensive and has some obvious limitations (the papers are all in English, for one). If you are interested in exploring more papers and evidence, don’t forget to check out the EdTech Hub’s growing document library, where you will find not just links to the full papers in this list but over 200 resources, with more being added each day.

What interesting EdTech research have you recently read, and what did you take away from it? Let us know in the comments section or on Twitter at @GlobalEdTechHub and use #EdTechHub



Impacts of digital technologies on education and factors influencing schools' digital capacity and transformation: A literature review

Stella Timotheou, Ourania Miliou, Yiannis Dimitriadis, Sara Villagrá Sobrino, Nikoleta Giannoutsou, Romina Cachia, Alejandra Martínez Monés, Andri Ioannou

1 CYENS Center of Excellence & Cyprus University of Technology (Cyprus Interaction Lab), Nicosia-Limassol, Cyprus

2 Universidad de Valladolid (UVA), Valladolid, Spain

3 JRC - Joint Research Centre of the European Commission, Seville, Spain

Associated data

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Digital technologies have brought changes to the nature and scope of education and led education systems worldwide to adopt strategies and policies for ICT integration. The latter brought about issues regarding the quality of teaching and learning with ICTs, especially concerning the understanding, adaptation, and design of the education systems in accordance with current technological trends. These issues were emphasized during the recent COVID-19 pandemic that accelerated the use of digital technologies in education, generating questions regarding digitalization in schools. Specifically, many schools demonstrated a lack of experience and low digital capacity, which resulted in widening gaps, inequalities, and learning losses. Such results have engendered the need for schools to learn and build upon the experience to enhance their digital capacity and preparedness, increase their digitalization levels, and achieve a successful digital transformation. Given that the integration of digital technologies is a complex and continuous process that impacts different actors within the school ecosystem, there is a need to show how these impacts are interconnected and identify the factors that can encourage an effective and efficient change in the school environments. For this purpose, we conducted a non-systematic literature review. The results of the literature review were organized thematically based on the evidence presented about the impact of digital technology on education and the factors that affect the schools’ digital capacity and digital transformation. The findings suggest that ICT integration in schools impacts more than just students’ performance; it affects several other school-related aspects and stakeholders, too. Furthermore, various factors affect the impact of digital technologies on education. These factors are interconnected and play a vital role in the digital transformation process. The study results shed light on how ICTs can positively contribute to the digital transformation of schools and which factors should be considered for schools to achieve effective and efficient change.

Introduction

Digital technologies have brought changes to the nature and scope of education. Versatile and disruptive technological innovations, such as smart devices, the Internet of Things (IoT), artificial intelligence (AI), augmented reality (AR) and virtual reality (VR), blockchain, and software applications have opened up new opportunities for advancing teaching and learning (Gaol & Prasolova-Førland, 2021; OECD, 2021). Hence, in recent years, education systems worldwide have increased their investment in the integration of information and communication technology (ICT) (Fernández-Gutiérrez et al., 2020; Lawrence & Tar, 2018) and prioritized their educational agendas to adapt strategies or policies around ICT integration (European Commission, 2019). The latter brought about issues regarding the quality of teaching and learning with ICTs (Bates, 2015), especially concerning the understanding, adaptation, and design of education systems in accordance with current technological trends (Balyer & Öz, 2018). Studies have shown that despite the investment made in the integration of technology in schools, the results have not been promising, and the intended outcomes have not yet been achieved (Delgado et al., 2015; Lawrence & Tar, 2018). These issues were exacerbated during the COVID-19 pandemic, which forced teaching across education levels to move online (Daniel, 2020). Online teaching accelerated the use of digital technologies, generating questions regarding the process, the nature, the extent, and the effectiveness of digitalization in schools (Cachia et al., 2021; König et al., 2020). Specifically, many schools demonstrated a lack of experience and low digital capacity, which resulted in widening gaps, inequalities, and learning losses (Blaskó et al., 2021; Di Pietro et al., 2020).
Such results have engendered the need for schools to learn and build upon the experience in order to enhance their digital capacity (European Commission, 2020) and increase their digitalization levels (Costa et al., 2021). Digitalization offers possibilities for fundamental improvement in schools (OECD, 2021; Rott & Marouane, 2018) and touches many aspects of a school’s development (Delcker & Ifenthaler, 2021). However, it is a complex process that requires large-scale transformative changes beyond the technical aspects of technology and infrastructure (Pettersson, 2021). Namely, digitalization refers to “a series of deep and coordinated culture, workforce, and technology shifts and operating models” (Brooks & McCormack, 2020, p. 3) that brings cultural, organizational, and operational change through the integration of digital technologies (JISC, 2020). A successful digital transformation requires that schools increase their digital capacity levels, establishing the necessary “culture, policies, infrastructure as well as digital competence of students and staff to support the effective integration of technology in teaching and learning practices” (Costa et al., 2021, p. 163).

Given that the integration of digital technologies is a complex and continuous process that impacts different actors within the school ecosystem (Eng, 2005), there is a need to show how the different elements of the impact are interconnected and to identify the factors that can encourage an effective and efficient change in the school environment. To address the issues outlined above, we formulated the following research questions:

a) What is the impact of digital technologies on education?

b) Which factors might affect a school’s digital capacity and transformation?

In the present investigation, we conducted a non-systematic literature review of publications pertaining to the impact of digital technologies on education and the factors that affect a school’s digital capacity and transformation. The results of the literature review were organized thematically based on the evidence presented about the impact of digital technology on education and the factors which affect the schools’ digital capacity and digital transformation.

Methodology

The non-systematic literature review presented herein covers the main theories and research published over the past 17 years on the topic. It is based on meta-analyses and review papers found in scholarly, peer-reviewed content databases and other key studies and reports related to the concepts studied (e.g., digitalization, digital capacity) from professional and international bodies (e.g., the OECD). We searched the Scopus database, which indexes various online journals in the education sector with an international scope, to collect peer-reviewed academic papers. Furthermore, we used an all-inclusive Google Scholar search to include relevant key terms or to include studies found in the reference list of the peer-reviewed papers, and other key studies and reports related to the concepts studied by professional and international bodies. Lastly, we gathered sources from the Publications Office of the European Union (https://op.europa.eu/en/home); namely, documents that refer to policies related to digital transformation in education.

Regarding search terms, we first searched resources on the impact of digital technologies on education by performing the following search queries: “impact” OR “effects” AND “digital technologies” AND “education”, “impact” OR “effects” AND “ICT” AND “education”. We further refined our results by adding the terms “meta-analysis” and “review” or by adjusting the search options based on the features of each database to avoid collecting individual studies that would provide limited contributions to a particular domain. We relied on meta-analyses and review studies as these consider the findings of multiple studies to offer a more comprehensive view of the research in a given area (Schuele & Justice, 2006). Specifically, meta-analysis studies provided quantitative evidence based on statistically verifiable results regarding the impact of educational interventions that integrate digital technologies in school classrooms (Higgins et al., 2012; Tolani-Brown et al., 2011).

However, quantitative data does not offer explanations for the challenges or difficulties experienced during ICT integration in learning and teaching (Tolani-Brown et al., 2011). To fill this gap, we analyzed literature reviews and gathered in-depth qualitative evidence of the benefits and implications of technology integration in schools. In the analysis presented herein, we also included policy documents and reports from professional and international bodies and governmental reports, which offered useful explanations of the key concepts of this study and provided recent evidence on digital capacity and transformation in education along with policy recommendations. The inclusion and exclusion criteria that were considered in this study are presented in Table 1.

Inclusion and exclusion criteria for the selection of resources on the impact of digital technologies on education

To ensure a reliable extraction of information from each study and assist the research synthesis, we selected the study characteristics of interest (impact) and constructed coding forms. First, an overview of the synthesis was provided by the principal investigator, who described the processes of coding, data entry, and data management. The coders followed the same set of instructions but worked independently. To ensure a common understanding of the process between coders, a sample of ten studies was tested. The results were compared, and the discrepancies were identified and resolved. Additionally, to ensure an efficient coding process, all coders participated in group meetings to discuss additions, deletions, and modifications (Stock, 1994). Due to the methodological diversity of the studied documents, we began to synthesize the literature review findings based on similar study designs. Specifically, most of the meta-analysis studies were grouped in one category due to the quantitative nature of the measured impact. These studies tended to refer to student achievement (Hattie et al., 2014). Then, we organized the themes of the qualitative studies in several impact categories. Lastly, we synthesized both review and meta-analysis data across the categories. In order to establish a collective understanding of the concept of impact, we referred to a previous impact study by Balanskat (2009), which investigated the impact of technology in primary schools. In this context, the impact had a more specific ICT-related meaning and was described as “a significant influence or effect of ICT on the measured or perceived quality of (parts of) education” (Balanskat, 2009, p. 9). In the study presented herein, the main impacts are in relation to learning and learners, teaching and teachers, as well as other key stakeholders who are directly or indirectly connected to the school unit.

The study’s results identified multiple dimensions of the impact of digital technologies on students’ knowledge, skills, and attitudes; on equality, inclusion, and social integration; on teachers’ professional and teaching practices; and on other school-related aspects and stakeholders. The data analysis indicated various factors that might affect the schools’ digital capacity and transformation, such as digital competencies, the teachers’ personal characteristics and professional development, as well as the school’s leadership and management, administration, infrastructure, etc. The impacts and factors found in the literature review are presented below.

Impacts of digital technologies on students’ knowledge, skills, attitudes, and emotions

The impact of ICT use on students’ knowledge, skills, and attitudes has been investigated early in the literature. Eng (2005) found a small positive effect of ICT use on students’ learning. Specifically, the author reported that access to computer-assisted instruction (CAI) programs in simulation or tutorial modes, used to supplement rather than substitute instruction, could enhance student learning. The author reported studies showing that teachers acknowledged the benefits of ICT on pupils with special educational needs; however, the impact of ICT on students’ attainment was unclear. Balanskat et al. (2006) found a statistically significant positive association between ICT use and higher student achievement in primary and secondary education. The authors also reported improvements in the performance of low-achieving pupils. The use of ICT resulted in further positive gains for students, namely increased attention, engagement, motivation, communication and process skills, teamwork, and gains related to their behaviour towards learning. Evidence from qualitative studies showed that teachers, students, and parents recognized the positive impact of ICT on students’ learning regardless of their competence level (strong/weak students). Punie et al. (2006) documented studies that showed positive results of ICT-based learning for supporting low-achieving pupils and young people with complex lives outside the education system. Liao et al. (2007) reported moderate positive effects of computer application instruction (CAI, computer simulations, and web-based learning) over traditional instruction on primary school students’ achievement. Similarly, Tamim et al. (2011) reported small to moderate positive effects between the use of computer technology (CAI, ICT, simulations, computer-based instruction, digital and hypermedia) and student achievement in formal face-to-face classrooms compared to classrooms that did not use technology.
Jewitt et al. (2011) found that the use of learning platforms (LPs) (virtual learning environments, management information systems, communication technologies, and information- and resource-sharing technologies) in schools allowed primary and secondary students to access a wider variety of quality learning resources, engage in independent and personalized learning, and conduct self- and peer-review; LPs also provided opportunities for teacher assessment and feedback. Similar findings were reported by Fu (2013), who documented a list of benefits and opportunities of ICT use. According to the author, the use of ICTs helps students access digital information and course content effectively and efficiently, supports student-centered and self-directed learning, as well as the development of a creative learning environment where more opportunities for critical thinking skills are offered, and promotes collaborative learning in a distance-learning environment. Higgins et al. (2012) found consistent but small positive associations between the use of technology and learning outcomes of school-age learners (5–18-year-olds) in studies linking the provision and use of technology with attainment. Additionally, Chauhan (2017) reported a medium positive effect of technology on the learning effectiveness of primary school students compared to students who followed traditional learning instruction.

The rise of mobile technologies and hardware devices instigated investigations into their impact on teaching and learning. Sung et al. (2016) reported a moderate effect on students’ performance from the use of mobile devices in the classroom compared to the use of desktop computers or the non-use of mobile devices. Schmid et al. (2014) reported medium–low to low positive effects of technology integration (e.g., CAI, ICTs) in the classroom on students’ achievement and attitude compared to not using technology or using technology to varying degrees. Tamim et al. (2015) found a low statistically significant effect of the use of tablets and other smart devices in educational contexts on students’ achievement outcomes. The authors suggested that tablets offered additional advantages to students; namely, they reported improvements in students’ notetaking, organizational and communication skills, and creativity. Zheng et al. (2016) reported a small positive effect of one-to-one laptop programs on students’ academic achievement across subject areas. Additional reported benefits included student-centered, individualized, and project-based learning, as well as enhanced learner engagement and enthusiasm. Additionally, the authors found that students using one-to-one laptop programs tended to use technology more frequently than in non-laptop classrooms, and as a result, they developed a range of skills (e.g., information skills, media skills, technology skills, organizational skills). Haßler et al. (2016) found that most interventions that included the use of tablets across the curriculum reported positive learning outcomes. However, from 23 studies, five reported no differences, and two reported a negative effect on students’ learning outcomes. Similar results were indicated by Kalati and Kim (2022), who investigated the effect of touchscreen technologies on young students’ learning. Specifically, from 53 studies, 34 advocated positive effects of touchscreen devices on children’s learning, 17 obtained mixed findings, and two studies reported negative effects.

More recently, approaches that refer to the impact of gamification with the use of digital technologies on teaching and learning were also explored. A review by Pan et al. (2022) that examined the role of learning games in fostering mathematics education in K-12 settings reported that gameplay improved students’ performance. Integration of digital games in teaching was also found to be a promising pedagogical practice in STEM education that could lead to increased learning gains (Martinez et al., 2022; Wang et al., 2022). However, although Talan et al. (2020) reported a medium effect of the use of educational games (both digital and non-digital) on academic achievement, the effect of non-digital games was higher.

Over the last two years, the effects of more advanced technologies on teaching and learning were also investigated. Garzón and Acevedo (2019) found that AR applications had a medium effect on students’ learning outcomes compared to traditional lectures. Similarly, Garzón et al. (2020) showed that AR had a medium impact on students’ learning gains. VR applications integrated into various subjects were also found to have a moderate effect on students’ learning compared to control conditions (traditional classes, e.g., lectures, textbooks, and multimedia use, e.g., images, videos, animation, CAI) (Chen et al., 2022b). Villena-Taranilla et al. (2022) noted the moderate effect of VR technologies on students’ learning when these were applied in STEM disciplines. In the same meta-analysis, Villena-Taranilla et al. (2022) highlighted the role of immersive VR, since its effect on students’ learning was greater (at a high level) across educational levels (K-6) compared to semi-immersive and non-immersive integrations. In another meta-analysis study, the effect size of immersive VR was small and significantly differentiated across educational levels (Coban et al., 2022). The impact of AI on education was investigated by Su and Yang (2022) and Su et al. (2022), who showed that this technology significantly improved students’ understanding of AI computer science and machine learning concepts.

It is worth noting that the vast majority of studies referred to learning gains in specific subjects. Specifically, several studies examined the impact of digital technologies on students’ literacy skills and reported positive effects on language learning (Balanskat et al., 2006; Grgurović et al., 2013; Friedel et al., 2013; Zheng et al., 2016; Chen et al., 2022b; Savva et al., 2022). Also, several studies documented positive effects on specific language learning areas, namely foreign language learning (Kao, 2014), writing (Higgins et al., 2012; Wen & Walters, 2022; Zheng et al., 2016), as well as reading and comprehension (Cheung & Slavin, 2011; Liao et al., 2007; Schwabe et al., 2022). ICTs were also found to have a positive impact on students’ performance in STEM (science, technology, engineering, and mathematics) disciplines (Arztmann et al., 2022; Bado, 2022; Villena-Taranilla et al., 2022; Wang et al., 2022). Specifically, a number of studies reported positive impacts on students’ achievement in mathematics (Balanskat et al., 2006; Hillmayr et al., 2020; Li & Ma, 2010; Pan et al., 2022; Ran et al., 2022; Verschaffel et al., 2019; Zheng et al., 2016). Furthermore, studies documented positive effects of ICTs on science learning (Balanskat et al., 2006; Liao et al., 2007; Zheng et al., 2016; Hillmayr et al., 2020; Kalemkuş & Kalemkuş, 2022; Lei et al., 2022a). Çelik (2022) also noted that computer simulations can help students understand learning concepts related to science. Furthermore, some studies documented that the use of ICTs had a positive impact on students’ achievement in other subjects, such as geography, history, music, and arts (Chauhan, 2017; Condie & Munro, 2007), and design and technology (Balanskat et al., 2006).

More specific positive learning gains were reported in a number of skills, e.g., problem-solving skills and pattern exploration skills (Higgins et al., 2012), metacognitive learning outcomes (Verschaffel et al., 2019), literacy skills, computational thinking skills, emotion control skills, and collaborative inquiry skills (Lu et al., 2022; Su & Yang, 2022; Su et al., 2022). Additionally, several investigations have reported benefits from the use of ICT on students’ creativity (Fielding & Murcia, 2022; Liu et al., 2022; Quah & Ng, 2022). Lastly, digital technologies were also found to be beneficial for enhancing students’ lifelong learning skills (Haleem et al., 2022).

Apart from gaining knowledge and skills, studies also reported improvement in motivation and interest in mathematics (Higgins et al., 2019; Fadda et al., 2022) and increased positive achievement emotions towards several subjects during interventions using educational games (Lei et al., 2022a). Chen et al. (2022a) also reported a small but positive effect of digital health approaches in bullying and cyberbullying interventions with K-12 students, demonstrating that technology-based approaches can help reduce bullying and related consequences by providing emotional support, empowerment, and change of attitude. In their meta-review study, Su et al. (2022) also documented that AI technologies effectively strengthened students’ attitudes towards learning. In another meta-analysis, Arztmann et al. (2022) reported positive effects of digital games on motivation and behaviour towards STEM subjects.

Impacts of digital technologies on equality, inclusion and social integration

Although most of the reviewed studies focused on the impact of ICTs on students’ knowledge, skills, and attitudes, reports were also made on other aspects in the school context, such as equality, inclusion, and social integration. Condie and Munro (2007) documented research interventions investigating how ICT can support pupils with additional or special educational needs. While those interventions were relatively small scale and mostly based on qualitative data, their findings indicated that the use of ICTs enabled the development of communication, participation, and self-esteem. A recent meta-analysis (Baragash et al., 2022) with 119 participants with different disabilities reported a significant overall effect size of AR on their functional skills acquisition. Koh’s meta-analysis (2022) also revealed that students with intellectual and developmental disabilities improved their competence and performance when they used digital games in the lessons.

Istenic Starcic and Bagon (2014) found that the role of ICT in inclusion and the design of pedagogical and technological interventions was not sufficiently explored in educational interventions with people with special needs; however, some benefits of ICT use were found in students’ social integration. The issue of gender and technology use was mentioned in a small number of studies. Zheng et al. (2016) reported a statistically significant positive interaction between one-to-one laptop programs and gender. Specifically, the results showed that girls and boys alike benefitted from the laptop program, but the effect on girls’ achievement was smaller than that on boys’. Along the same lines, Arztmann et al. (2022) reported no difference in the impact of game-based learning between boys and girls, arguing that boys and girls equally benefited from game-based interventions in STEM domains. However, results from a systematic review by Cussó-Calabuig et al. (2018) found limited and low-quality evidence on the effects of intensive use of computers on gender differences in computer anxiety, self-efficacy, and self-confidence. Based on their view, intensive use of computers can reduce gender differences in some areas and not in others, depending on contextual and implementation factors.

Impacts of digital technologies on teachers’ professional and teaching practices

Various research studies have explored the impact of ICT on teachers’ instructional practices and student assessment. Friedel et al. (2013) found that the use of mobile devices by students enabled teachers to successfully deliver content (e.g., mobile serious games), provide scaffolding, and facilitate synchronous collaborative learning. The integration of digital games in teaching and learning activities also gave teachers the opportunity to study and apply various pedagogical practices (Bado, 2022). Specifically, Bado (2022) found that teachers who implemented instructional activities in three stages (pre-game, game, and post-game) maximized students’ learning outcomes and engagement. For instance, during the pre-game stage, teachers focused on lectures and gameplay training; during the game stage, they provided scaffolding on content, addressed technical issues, and managed classroom activities; and during the post-game stage, they organized debriefing activities to ensure that the gameplay had indeed enhanced students’ learning outcomes.

Furthermore, ICT can increase efficiency in lesson planning and preparation by offering possibilities for a more collaborative approach among teachers. The sharing of curriculum plans and the analysis of students’ data led to clearer target setting and improvements in reporting to parents (Balanskat et al., 2006).

Additionally, the use and application of digital technologies in teaching and learning were found to enhance teachers’ digital competence. Balanskat et al. (2006) documented studies that revealed that the use of digital technologies in education had a positive effect on teachers’ basic ICT skills. The greatest impact was found on teachers with enough experience in integrating ICTs in their teaching and/or who had recently participated in development courses for the pedagogical use of technologies in teaching. Punie et al. (2006) reported that the provision of fully equipped multimedia portable computers and the development of online teacher communities had positive impacts on teachers’ confidence and competence in the use of ICTs.

Moreover, online assessment via ICTs benefits instruction. In particular, online assessments support the digitalization of students’ work and related logistics, allow teachers to gather immediate feedback and readjust to new objectives, and support the improvement of the technical quality of tests by providing more accurate results. Additionally, the capabilities of ICTs (e.g., interactive media, simulations) create new potential methods of testing specific skills, such as problem-solving and problem-processing skills, meta-cognitive skills, creativity and communication skills, and the ability to work productively in groups (Punie et al., 2006).

Impacts of digital technologies on other school-related aspects and stakeholders

There is evidence that the effective use of ICTs and the data transmission offered by broadband connections help improve administration (Balanskat et al., 2006). Specifically, ICTs have been found to provide better management systems to schools that have data-gathering procedures in place. Condie and Munro (2007) reported impacts from the use of ICTs in schools in the following areas: attendance monitoring, assessment records, reporting to parents, financial management, creation of repositories for learning resources, and sharing of information amongst staff. Such data can be used strategically for self-evaluation and monitoring purposes, which in turn can result in school improvements. Additionally, they reported that online access to other people with similar roles helped to reduce headteachers’ isolation by offering them opportunities to share insights into the use of ICT in learning and teaching and how it could be used to support school improvement. Furthermore, ICTs provided more efficient and successful examination management procedures, namely less time-consuming reporting processes compared to paper-based examinations and smooth communications between schools and examination authorities through electronic data exchange (Punie et al., 2006).

Zheng et al. (2016) reported that the use of ICTs improved home-school relationships. Additionally, Escueta et al. (2017) reported several ICT programs that had improved the flow of information from the school to parents. In particular, they documented that the use of ICTs (learning management systems, emails, dedicated websites, mobile phones) allowed for personalized and customized information exchange between schools and parents, such as attendance records, upcoming class assignments, school events, and students’ grades, which generated positive results on students’ learning outcomes and attainment. Such information exchange between schools and families prompted parents to encourage their children to put more effort into their schoolwork.

The above findings suggest that the impact of ICT integration in schools goes beyond students’ performance in school subjects. Specifically, it affects a number of school-related aspects, such as equality and social integration, professional and teaching practices, and diverse stakeholders. In Table 2, we summarize the different impacts of digital technologies on school stakeholders based on the literature review, while in Table 3 we organize the tools/platforms and practices/policies addressed in the meta-analyses, literature reviews, EU reports, and international bodies included in the manuscript.

The impact of digital technologies on schools’ stakeholders based on the literature review

Tools/platforms and practices/policies addressed in the meta-analyses, literature reviews, EU reports, and international bodies included in the manuscript

Additionally, based on the results of the literature review, there are many types of digital technologies with different affordances (see, for example, studies on VR vs. immersive VR), which evolve over time (e.g., from CAI in 2005 to augmented and virtual reality in 2020). Furthermore, these technologies are linked to different pedagogies and policy initiatives, which are critical factors in the study of impact. Table 3 summarizes the different tools and practices that have been used to examine the impact of digital technologies on education since 2005 based on the review results.

Factors that affect the integration of digital technologies

Although the analysis of the literature review demonstrated different impacts of the use of digital technology on education, several authors highlighted the importance of various factors, besides the technology itself, that shape this impact. For example, Liao et al. (2007) suggested that future studies should carefully investigate which factors contribute to positive outcomes by clarifying the exact relationship between computer applications and learning. Additionally, Haßler et al. (2016) suggested that the neutral findings regarding the impact of tablets on students’ learning outcomes in some of the studies included in their review should encourage educators, school leaders, and school officials to further investigate the potential of such devices in teaching and learning. Several other researchers suggested that a number of variables attributable to the school context, teaching practices and professional development, the curriculum, and learners’ characteristics play a significant role in the impact of ICTs on students’ learning (Underwood, 2009; Tamim et al., 2011; Higgins et al., 2012; Archer et al., 2014; Sung et al., 2016; Haßler et al., 2016; Chauhan, 2017; Lee et al., 2020; Tang et al., 2022).

Digital competencies

One of the most common challenges reported in studies that utilized digital tools in the classroom was students’ lack of skills in using them. Fu (2013) found that students’ lack of technical skills is a barrier to the effective use of ICT in the classroom. Tamim et al. (2015) reported that students faced challenges when using tablets and smart mobile devices, associated with technical issues, the expertise needed for their use, and the distracting nature of the devices, and highlighted the need for teachers’ professional development. Higgins et al. (2012) reported that skills training in the use of digital technologies is essential for learners to fully exploit the benefits of instruction.

Delgado et al. (2015), meanwhile, reported studies that showed a strong positive association between teachers’ computer skills and students’ use of computers. Teachers’ lack of ICT skills and familiarity with technologies can become a constraint to the effective use of technology in the classroom (Balanskat et al., 2006; Delgado et al., 2015).

It is worth noting that the way teachers are introduced to ICTs affects the impact of digital technologies on education. Previous studies have shown that teachers may avoid using digital technologies due to limited digital skills (Balanskat, 2006), or they prefer applying “safe” technologies, namely technologies that their own teachers used and with which they are familiar (Condie & Munro, 2007). In this regard, the provision of digital skills training and exposure to new digital tools might encourage teachers to apply various technologies in their lessons (Condie & Munro, 2007). Apart from digital competence, technical support in the school setting has also been shown to affect teachers’ use of technology in their classrooms (Delgado et al., 2015). Ferrari et al. (2011) found that while teachers’ use of ICT is high, 75% stated that they needed more institutional support and a shift in the mindset of educational actors to achieve more innovative teaching practices. The provision of support can reduce the time, effort, and cognitive constraints that could otherwise limit teachers’ integration of ICT into school lessons (Escueta et al., 2017).

Teachers’ personal characteristics, training approaches, and professional development

Teachers’ personal characteristics and professional development affect the impact of digital technologies on education. Specifically, Cheok and Wong (2015) found that teachers’ personal characteristics (e.g., anxiety, self-efficacy) are associated with their satisfaction and engagement with technology. Bingimlas (2009) reported that lack of confidence, resistance to change, and negative attitudes towards using new technologies in teaching are significant determinants of teachers’ levels of engagement in ICT. The same author reported that the provision of technical support, motivational support (e.g., awards, sufficient time for planning), and training on how technologies can benefit teaching and learning can eliminate the above barriers to ICT integration. Archer et al. (2014) found that comfort levels in using technology are an important predictor of technology integration and argued that it is essential to provide teachers with appropriate training and ongoing support until they are comfortable with using ICTs in the classroom. Hillmayr et al. (2020) documented that training teachers in ICT had an important effect on students’ learning.

According to Balanskat et al. (2006), the impact of ICTs on students’ learning is highly dependent on the teachers’ capacity to efficiently exploit their application for pedagogical purposes. Results obtained from the Teaching and Learning International Survey (TALIS) (OECD, 2021) revealed that although schools are open to innovative practices and have the capacity to adopt them, only 39% of teachers in the European Union reported that they are well or very well prepared to use digital technologies for teaching. Li and Ma (2010) and Hardman (2019) showed that the positive effect of technology on students’ achievement depends on the pedagogical practices used by teachers. Schmid et al. (2014) reported that learning was best supported when students were engaged in active, meaningful activities with the use of technological tools that provided cognitive support. Tamim et al. (2015) compared two different pedagogical uses of tablets and found a significant moderate effect when the devices were used in a student-centered context and approach rather than within teacher-led environments. Similarly, Garzón and Acevedo (2019) and Garzón et al. (2020) reported that the positive results from the integration of AR applications could be attributed to the existence of different variables which could influence AR interventions (e.g., pedagogical approach, learning environment, and duration of the intervention). Additionally, Garzón et al. (2020) suggested that the pedagogical resources that teachers used to complement their lectures and the pedagogical approaches they applied were crucial to the effective integration of AR on students’ learning gains. Garzón and Acevedo (2019) also emphasized that the success of a technology-enhanced intervention is based both on the technology per se and its characteristics and on the pedagogical strategies teachers choose to implement. For instance, their results indicated that the collaborative learning approach had the highest impact on students’ learning gains among other approaches (e.g., inquiry-based learning, situated learning, or project-based learning). Ran et al. (2022) also found that the use of technology to design collaborative and communicative environments showed the largest moderator effects among the other approaches.

Hattie (2008) reported that the effective use of computers is associated with training teachers in using computers as a teaching and learning tool. Zheng et al. (2016) noted that in addition to the strategies teachers adopt in teaching, ongoing professional development is also vital in ensuring the success of technology implementation programs. Sung et al. (2016) found that research on the use of mobile devices to support learning tends to report that the insufficient preparation of teachers is a major obstacle in implementing effective mobile learning programs in schools. Friedel et al. (2013) found that providing training and support to teachers increased the positive impact of the interventions on students’ learning gains. Trucano (2005) argued that positive impacts occur when digital technologies are used to enhance teachers’ existing pedagogical philosophies. Higgins et al. (2012) found that the types of technologies used and how they are used could also affect students’ learning. The authors suggested that training and professional development of teachers that focuses on the effective pedagogical use of technology to support teaching and learning is an important component of successful instructional approaches (Higgins et al., 2012). Archer et al. (2014) found that studies that reported ICT interventions during which teachers received training and support had moderate positive effects on students’ learning outcomes, which were significantly higher than in studies where little or no detail about training and support was mentioned. Fu (2013) reported that the lack of teachers’ knowledge and skills regarding the technical and instructional aspects of ICT use in the classroom, of in-service training, pedagogy support, and technical and financial support, as well as the lack of teachers’ motivation and encouragement to integrate ICT into their teaching, were significant barriers to the integration of ICT in education.

School leadership and management

Management and leadership are important cornerstones in the digital transformation process (Pihir et al., 2018). Zheng et al. (2016) documented leadership among the factors positively affecting the successful implementation of technology integration in schools. Strong leadership, strategic planning, and systematic integration of digital technologies are prerequisites for the digital transformation of education systems (Ređep, 2021). Management and leadership play a significant role in formulating policies that are translated into practice and ensure that developments in ICT become embedded into the life of the school and in the experiences of staff and pupils (Condie & Munro, 2007). Policy support and leadership must include the provision of an overall vision for the use of digital technologies in education, guidance for students and parents, logistical support, as well as teacher training (Conrads et al., 2017). Unless there is a commitment throughout the school, with accountability for progress at key points, it is unlikely for ICT integration to be sustained or become part of the culture (Condie & Munro, 2007). To achieve this, principals need to adopt and promote a whole-institution strategy and build a strong mutual support system that enables the school’s technological maturity (European Commission, 2019). In this context, school culture plays an essential role in shaping the mindsets and beliefs of school actors towards successful technology integration. Condie and Munro (2007) emphasized the importance of the principal’s enthusiasm and work as a source of inspiration for the school staff and the students to cultivate a culture of innovation and establish sustainable digital change. Specifically, school leaders need to create conditions in which the school staff is empowered to experiment and take risks with technology (Elkordy & Lovinelli, 2020).

In order for leaders to achieve the above, it is important to develop capacities for learning and leading, advocating professional learning, and creating support systems and structures (European Commission, 2019). Digital technology integration in education systems can be challenging, and leadership needs guidance to achieve it. Such guidance can be introduced through the adoption of new methods and techniques in strategic planning for the integration of digital technologies (Ređep, 2021). Even though the role of leaders is vital, the relevant training offered to them has so far been inadequate. Specifically, only a third of the education systems in Europe have put in place national strategies that explicitly refer to the training of school principals (European Commission, 2019, p. 16).

Connectivity, infrastructure, and government and other support

The effective integration of digital technologies across levels of education presupposes the development of infrastructure, the provision of digital content, and the selection of proper resources (Voogt et al., 2013). In particular, a high-quality broadband connection in the school increases the quality and quantity of educational activities. There is evidence that ICT increases and formalizes cooperative planning between teachers and cooperation with managers, which in turn has a positive impact on teaching practices (Balanskat et al., 2006). Additionally, ICT resources, including software and hardware, increase the likelihood of teachers integrating technology into the curriculum to enhance their teaching practices (Delgado et al., 2015). For example, Zheng et al. (2016) found that the use of one-to-one laptop programs resulted in positive changes in teaching and learning, which would not have been accomplished without the infrastructure and technical support provided to teachers. Delgado et al. (2015) reported that limited access to technology (insufficient computers, peripherals, and software) and lack of technical support are important barriers to ICT integration. Access to infrastructure refers not only to the availability of technology in a school but also to the provision of a proper amount and the right types of technology in locations where teachers and students can use them. Effective technical support is a central element of the whole-school strategy for ICT (Underwood, 2009). Bingimlas (2009) reported that lack of technical support in the classroom and of whole-school resources (e.g., failing to connect to the Internet, printers not printing, malfunctioning computers, and working on old computers) are significant barriers that discourage the use of ICT by teachers. Moreover, poor-quality and inadequate hardware maintenance and unsuitable educational software may discourage teachers from using ICTs (Balanskat et al., 2006; Bingimlas, 2009).

Government support can also impact the integration of ICTs in teaching. Specifically, Balanskat et al. (2006) reported that government interventions and training programs increased teachers’ enthusiasm and positive attitudes towards ICT and led to the routine use of embedded ICT.

Lastly, another important factor affecting digital transformation is the development and quality assurance of digital learning resources. Such resources may be supplementary textbooks and related materials or resources that focus on specific subjects or parts of the curriculum. Policies on the provision of digital learning resources are essential for schools and can be achieved through various actions. For example, some countries are financing web portals that become repositories, enabling teachers to share resources or create their own. Additionally, they may offer e-learning opportunities or other services linked to digital education. In other cases, specific agencies or projects have been set up to develop digital resources (Eurydice, 2019).

Administration and digital data management

The digital transformation of schools involves organizational improvements at the level of internal workflows, communication between the different stakeholders, and potential for collaboration. Vuorikari et al. (2020) presented evidence that digital technologies supported the automation of administrative practices in schools and reduced the administration’s workload. There is evidence that digital data affect the production of knowledge about schools and have the power to transform how schooling takes place. Specifically, Sellar (2015) reported that data infrastructure in education is developing due to the demand for “information about student outcomes, teacher quality, school performance, and adult skills, associated with policy efforts to increase human capital and productivity practices” (p. 771). In this regard, practices such as datafication, which refers to the “translation of information about all kinds of things and processes into quantified formats”, have become essential for decision-making based on accountability reports about a school’s quality. These data can be turned into deep insights about education or training incorporating ICTs. For example, measuring students’ online engagement with the learning material and drawing meaningful conclusions can allow teachers to improve their educational interventions (Vuorikari et al., 2020).

Students’ socioeconomic background and family support

Research shows that the active engagement of parents in the school and their support for the school’s work can make a difference to their children’s attitudes towards learning and, as a result, their achievement (Hattie, 2008). In recent years, digital technologies have been used for more effective communication between school and family (Escueta et al., 2017). The European Commission (2020) presented data from a Eurostat survey regarding the use of computers by students during the pandemic. The data showed that younger pupils needed additional support and guidance from parents and that the challenges were greater for families in which parents had lower levels of education and little to no digital skills.

In this regard, the socio-economic background of the learners and their socio-cultural environment also affect educational achievements (Punie et al., 2006). Trucano (2005) documented that the use of computers at home positively influenced students’ confidence and resulted in more frequent use at school, compared to students who had no home access. In this sense, the socio-economic background affects access to computers at home (OECD, 2015), which in turn influences the experience of ICT, an important factor for school achievement (Punie et al., 2006; Underwood, 2009). Furthermore, parents from different socio-economic backgrounds may have different abilities and availability to support their children in their learning process (Di Pietro et al., 2020).

Schools’ socioeconomic context and emergency situations

The socio-economic context of the school is closely related to a school’s digital transformation. For example, schools in disadvantaged, rural, or deprived areas are likely to lack the digital capacity and infrastructure required to adapt to the use of digital technologies during emergency periods, such as the COVID-19 pandemic (Di Pietro et al., 2020). Data collected from school principals confirmed that in several countries there is a rural/urban divide in connectivity (OECD, 2015).

Emergency periods also affect the digitalization of schools. The COVID-19 pandemic led to the closure of schools and forced them to seek appropriate and connective ways to keep working on the curriculum (Di Pietro et al., 2020). The sudden large-scale shift to distance and online teaching and learning also presented challenges around quality and equity in education, such as the risk of increased learning, digital, and social inequalities, as well as teachers’ difficulties in coping with this demanding situation (European Commission, 2020).

Looking at the findings of the above studies, we can conclude that the impact of digital technologies on education is influenced by various actors and touches many aspects of the school ecosystem. Figure 1 summarizes the factors affecting digital technologies’ impact on school stakeholders based on the findings from the literature review.


Factors that affect the impact of ICTs on education

The findings revealed that the use of digital technologies in education affects a variety of actors within a school’s ecosystem. First, we observed that as technologies evolve, so does the interest of the research community in applying them to school settings. Figure 2 summarizes the trends identified in current research around the impact of digital technologies on schools’ digital capacity and transformation as found in the present study. Research interest has shifted over time: as early as 2005, computers, simulations, and interactive boards were the most commonly applied tools in school interventions (e.g., Eng, 2005; Liao et al., 2007; Moran et al., 2008; Tamim et al., 2011); interest then moved to learning platforms (Jewitt et al., 2011), to mobile devices and digital games (e.g., Tamim et al., 2015; Sung et al., 2016; Talan et al., 2020), and to e-books (e.g., Savva et al., 2022), and more recently to advanced technologies such as AR and VR applications (e.g., Garzón & Acevedo, 2019; Garzón et al., 2020; Kalemkuş & Kalemkuş, 2022) and robotics and AI (e.g., Su & Yang, 2022; Su et al., 2022). As this evolution shows, digital technologies are a concept in flux with different affordances and characteristics. Additionally, from an instructional perspective, there has been a growing interest in different modes and models of content delivery, such as online, blended, and hybrid modes (e.g., Cheok & Wong, 2015; Kazu & Yalçin, 2022; Ulum, 2022). This indicates that the value of technologies to support teaching and learning as well as other school-related practices is increasingly recognized by the research and school community.
The impact results from the literature review indicate that the effects of ICT integration on students’ learning outcomes range from small (Coban et al., 2022; Eng, 2005; Higgins et al., 2012; Schmid et al., 2014; Tamim et al., 2015; Zheng et al., 2016) to moderate (Garzón & Acevedo, 2019; Garzón et al., 2020; Liao et al., 2007; Sung et al., 2016; Talan et al., 2020; Wen & Walters, 2022). That said, a number of recent studies have reported large effect sizes (e.g., Kazu & Yalçin, 2022).
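To make the "small to moderate" language concrete: the effect sizes reported in such meta-analyses are typically standardized mean differences (e.g., Cohen's d), pooled across studies by inverse-variance weighting. The sketch below is an illustration only; the helper functions and the study values are hypothetical and are not drawn from any of the reviewed meta-analyses.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and a control group."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def fixed_effect_pooled(effects, variances):
    """Inverse-variance-weighted mean effect, as in a fixed-effect meta-analysis."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical studies: (effect size d, sampling variance)
studies = [(0.15, 0.02), (0.30, 0.01), (0.45, 0.04)]
pooled = fixed_effect_pooled([d for d, _ in studies], [v for _, v in studies])
# By Cohen's common rule of thumb, d around 0.2 is small, 0.5 moderate, 0.8 large.
```

Under this convention, "small to moderate" effects correspond roughly to pooled values between 0.2 and 0.5; note that most recent meta-analyses prefer random-effects models, which additionally account for between-study variance.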


Current work and trends in the study of the impact of digital technologies on schools’ digital capacity

Based on these findings, several authors have suggested that the impact of technology on education depends on several variables and not on the technology per se (Tamim et al., 2011; Higgins et al., 2012; Archer et al., 2014; Sung et al., 2016; Haßler et al., 2016; Chauhan, 2017; Lee et al., 2020; Lei et al., 2022a). While the impact of ICTs on student achievement has been thoroughly investigated by researchers, other aspects of school life that are also affected by ICTs, such as equality, inclusion, and social integration, have received less attention. Further analysis of the literature review revealed a greater investment in ICT interventions to support learning and teaching in the core subjects of literacy and the STEM disciplines, especially mathematics and science. These were the most common subjects studied in the reviewed papers, often drawing on national testing results, while studies that investigated other subject areas, such as social studies, were limited (Chauhan, 2017; Condie & Munro, 2007). As such, research still lacks impact studies that focus on the effects of ICTs on a broader range of curriculum subjects.

The qualitative research provided additional information about the impact of digital technologies on education, documenting positive effects and giving more detail about implications, recommendations, and future research directions. Specifically, the findings regarding the role of ICTs in supporting learning highlight the importance of teachers’ instructional practice and the learning context in the use of technologies and, consequently, their impact on instruction (Çelik, 2022; Schmid et al., 2014; Tamim et al., 2015). The review also provided useful insights regarding the various factors that affect the impact of digital technologies on education. These factors are interconnected and play a vital role in the transformation process. Specifically, they include a) digital competencies; b) teachers’ personal characteristics and professional development; c) school leadership and management; d) connectivity, infrastructure, and government support; e) administration and data management practices; f) students’ socio-economic background and family support; and g) the socio-economic context of the school and emergency situations. It is worth noting that we observed factors that affect the integration of ICTs in education but may also be affected by it. For example, the frequent use of ICTs and the use of laptops by students for instructional purposes positively affect the development of digital competencies (Zheng et al., 2016), and at the same time digital competencies affect the use of ICTs (Fu, 2013; Higgins et al., 2012). As a result, the impact of digital technologies should be explored more as an enabler of desirable and new practices and not merely as a catalyst that improves the output of the education process, namely student attainment.

Conclusions

Digital technologies offer immense potential for fundamental improvement in schools. However, investments in ICT infrastructure and professional development to improve school education have yet to yield fruitful results. Digital transformation is a complex process that requires large-scale transformative changes that presuppose digital capacity and preparedness. To achieve such changes, all actors within the school’s ecosystem need to share a common vision regarding the integration of ICTs in education and work towards achieving this goal. Our literature review, which synthesized quantitative and qualitative data from a list of meta-analyses and review studies, provided useful insights into the impact of ICTs on different school stakeholders and showed that the impact of digital technologies touches upon many different aspects of school life, which are often overlooked when the focus is on student achievement as the final output of education. Furthermore, digital technologies are a concept in flux: technologies not only differ from one another, calling for different uses in educational practice, but they also change over time. Additionally, we opened a forum for discussion regarding the factors that affect a school’s digital capacity and transformation. We hope that our study will inform policy, practice, and research and result in a paradigm shift towards more holistic approaches in impact and assessment studies.

Study limitations and future directions

We presented a review of the impact of digital technologies on education and of the factors influencing schools’ digital capacity and transformation. The results were based on a non-systematic literature review grounded in documentation retrieved from specific databases. Future studies should investigate more databases to corroborate and extend our results. Moreover, search queries could be enhanced with key terms that could provide additional insights about the integration of ICTs in education, such as “policies and strategies for ICT integration in education”. The study also drew its evidence about the effects of ICT integration in schools from meta-analyses and literature reviews, and that evidence was mostly based on the studies’ general conclusions. It is worth mentioning that we located individual studies which reported different results, such as negative or neutral effects. Thus, further insights are needed about the impact of ICTs on education and the factors influencing that impact. Furthermore, the studies included in meta-analyses and reviews differ in nature, as they are based on different research methodologies and data-gathering processes. For instance, in a meta-analysis, the impact across the studies investigated is measured in a particular way, depending on policy or research targets (e.g., results from national examinations, pre-/post-tests). In literature reviews, by contrast, qualitative studies offer additional insights and detail based on self-reports and researchers’ opinions about the many different aspects and stakeholders that could affect, and be affected by, ICT integration. As a result, it was challenging to draw causal relationships among so many interrelated variables.

Despite the challenges mentioned above, this study set out to examine school units as ecosystems consisting of several actors, bringing together variables from different research epistemologies to provide an understanding of the integration of ICTs. However, other tools, methodologies, and models for evaluating the impact of digital technologies on education could yield more detailed data and more accurate results. For instance, self-reflection tools such as SELFIE, developed on the DigCompOrg framework (Kampylis et al., 2015; Bocconi & Lightfoot, 2021), can help capture a school’s digital capacity and better assess the impact of ICTs on education. Furthermore, developing a theory of change could be a good approach for documenting the impact of digital technologies on education. Theories of change are models used for the evaluation of interventions and their impact; they are developed to describe how interventions will work and produce the desired outcomes (Mayne, 2015). Theory of change as a methodological approach has also been used by researchers to develop evaluation models in the field of education (e.g., Aromatario et al., 2019; Chapman & Sammons, 2013; De Silva et al., 2014).
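To make the theory-of-change idea concrete, the sketch below represents such a model as a simple causal chain from an intervention to its intended impact, with each link carrying the assumptions that must hold for it to work. This is a deliberately minimal, hypothetical illustration of the general approach described by Mayne (2015), not an implementation from any of the cited studies; the tablet-programme example and all names in it are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One link in a theory-of-change chain: an expected result plus the
    assumptions that must hold for the previous step to produce it."""
    description: str
    assumptions: list = field(default_factory=list)

@dataclass
class TheoryOfChange:
    """A linear chain from intervention to intended impact (real theories
    of change are often branched; a straight chain keeps the sketch small)."""
    intervention: str
    chain: list = field(default_factory=list)  # ordered Steps

    def untested_assumptions(self):
        """Collect every assumption across the chain, so an evaluation can
        plan evidence collection for each link, not only the final outcome."""
        return [a for step in self.chain for a in step.assumptions]

# Hypothetical example: a school tablet programme
toc = TheoryOfChange(
    intervention="Provide tablets and teacher training",
    chain=[
        Step("Teachers integrate tablets into lessons",
             ["Teachers have adequate digital competencies",
              "School leadership supports ICT use"]),
        Step("Students engage in richer learning activities",
             ["Reliable connectivity and infrastructure"]),
        Step("Student attainment and digital skills improve", []),
    ],
)

print(len(toc.untested_assumptions()))  # prints 3
```

Listing the assumptions explicitly is the point of the exercise: an impact evaluation can then gather evidence on each link (e.g., teacher competencies, infrastructure) rather than attributing the final attainment outcome to the intervention alone.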

We also propose that future studies aim at similar investigations by applying more holistic approaches to impact assessment that can provide in-depth data about the impact of digital technologies on education. For instance, future studies could address different research questions about the technologies used during the interventions or the way implementation takes place (e.g., What methodologies are used for documenting impact? How are experimental studies implemented? How can teachers be taken into account and trained in the technology and its functions? What are the elements of an appropriate and successful implementation? How is the whole intervention designed? On which learning theories is the technology implementation based?).

Future research could also assess the impact of digital technologies on various other subjects, since there is a scarcity of research related to particular subjects such as geography, history, arts, music, and design and technology. More research should also be done on the impact of ICTs on skills, emotions, and attitudes, and on equality, inclusion, social interaction, and special needs education. There is also a need for more research on the impact of ICTs on administration, management, digitalization, and home-school relationships. Moreover, although new forms of teaching and learning with ICTs (e.g., blended, hybrid, and online learning) have prompted several investigations in mainstream classrooms, only a few studies have measured their impact on students’ learning, and our review did not document any study on the impact of flipped classrooms in K-12 education. Regarding teaching and learning approaches, it is worth noting that the studies referring to STEM or STEAM did not investigate the impact of STEM/STEAM as an interdisciplinary approach to learning; they investigated only the impact of ICTs on learning in each domain as a separate subject (science, technology, engineering, arts, mathematics). Hence, we propose that future research also investigate the impact of the STEM/STEAM approach on education. The impact of emerging technologies such as AR, VR, robotics, and AI on education has also been investigated recently, but more work needs to be done.

Finally, we propose that future studies focus on how specific factors (e.g., infrastructure and government support, school leadership and management, students’ and teachers’ digital competencies, and the approaches teachers use in teaching and learning, such as blended, online, and hybrid learning, flipped classrooms, the STEM/STEAM approach, project-based learning, and inquiry-based learning) shape the impact of digital technologies on education. We hope that future studies will give detailed insights into the concept of schools’ digital transformation through further investigation of the impacts, and of the factors influencing digital capacity and transformation, building on the results and recommendations of the present study.

Acknowledgements

This project has received funding under Grant Agreement No Ref Ares (2021) 339036 7483039, as well as funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No 739578 and from the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy. The UVa co-authors would also like to acknowledge funding from the European Regional Development Fund and the National Research Agency of the Spanish Ministry of Science and Innovation, under project grant PID2020-112584RB-C32.


Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Archer K, Savage R, Sanghera-Sidhu S, Wood E, Gottardo A, Chen V. Examining the effectiveness of technology use in classrooms: A tertiary meta-analysis. Computers & Education. 2014;78:140–149. doi: 10.1016/j.compedu.2014.06.001.
  • Aromatario O, Van Hoye A, Vuillemin A, Foucaut AM, Pommier J, Cambon L. Using theory of change to develop an intervention theory for designing and evaluating behavior change SDApps for healthy eating and physical exercise: The OCAPREV theory. BMC Public Health. 2019;19(1):1–12. doi: 10.1186/s12889-019-7828-4.
  • Arztmann, M., Hornstra, L., Jeuring, J., & Kester, L. (2022). Effects of games in STEM education: A meta-analysis on the moderating role of student background characteristics. Studies in Science Education, 1–37. 10.1080/03057267.2022.2057732
  • Bado N. Game-based learning pedagogy: A review of the literature. Interactive Learning Environments. 2022;30(5):936–948. doi: 10.1080/10494820.2019.1683587.
  • Balanskat, A. (2009). Study of the impact of technology in primary schools – Synthesis Report. Empirica and European Schoolnet. Retrieved 30 June 2022 from: https://erte.dge.mec.pt/sites/default/files/Recursos/Estudos/synthesis_report_steps_en.pdf
  • Balanskat, A. (2006). The ICT Impact Report: A review of studies of ICT impact on schools in Europe. European Schoolnet. Retrieved 30 June 2022 from: https://en.unesco.org/icted/content/ict-impact-report-review-studies-ict-impact-schools-europe
  • Balanskat, A., Blamire, R., & Kefala, S. (2006). The ICT impact report. European Schoolnet. Retrieved from: http://colccti.colfinder.org/sites/default/files/ict_impact_report_0.pdf
  • Balyer, A., & Öz, Ö. (2018). Academicians’ views on digital transformation in education. International Online Journal of Education and Teaching (IOJET), 5(4), 809–830. Retrieved 30 June 2022 from http://iojet.org/index.php/IOJET/article/view/441/295
  • Baragash RS, Al-Samarraie H, Moody L, Zaqout F. Augmented reality and functional skills acquisition among individuals with special needs: A meta-analysis of group design studies. Journal of Special Education Technology. 2022;37(1):74–81. doi: 10.1177/0162643420910413.
  • Bates, A. W. (2015). Teaching in a digital age: Guidelines for designing teaching and learning. Open Educational Resources Collection, 6. Retrieved 30 June 2022 from: https://irl.umsl.edu/oer/6
  • Bingimlas KA. Barriers to the successful integration of ICT in teaching and learning environments: A review of the literature. Eurasia Journal of Mathematics, Science and Technology Education. 2009;5(3):235–245. doi: 10.12973/ejmste/75275.
  • Blaskó Z, Costa PD, Schnepf SV. Learning losses and educational inequalities in Europe: Mapping the potential consequences of the COVID-19 crisis. Journal of European Social Policy. 2022;32(4):361–375. doi: 10.1177/09589287221091687.
  • Bocconi S, Lightfoot M. Scaling up and integrating the SELFIE tool for schools' digital capacity in education and training systems: Methodology and lessons learnt. European Training Foundation; 2021. doi: 10.2816/907029, JRC123936.
  • Brooks, D. C., & McCormack, M. (2020). Driving Digital Transformation in Higher Education. Retrieved 30 June 2022 from: https://library.educause.edu/-/media/files/library/2020/6/dx2020.pdf?la=en&hash=28FB8C377B59AFB1855C225BBA8E3CFBB0A271DA
  • Cachia, R., Chaudron, S., Di Gioia, R., Velicu, A., & Vuorikari, R. (2021). Emergency remote schooling during COVID-19, a closer look at European families. Retrieved 30 June 2022 from https://publications.jrc.ec.europa.eu/repository/handle/JRC125787
  • Çelik B. The effects of computer simulations on students’ science process skills: Literature review. Canadian Journal of Educational and Social Studies. 2022;2(1):16–28. doi: 10.53103/cjess.v2i1.17.
  • Chapman, C., & Sammons, P. (2013). School Self-Evaluation for School Improvement: What Works and Why?. CfBT Education Trust, 60 Queens Road, Reading, RG1 4BS, England.
  • Chauhan S. A meta-analysis of the impact of technology on learning effectiveness of elementary students. Computers & Education. 2017;105:14–30. doi: 10.1016/j.compedu.2016.11.005.
  • Chen, Q., Chan, K. L., Guo, S., Chen, M., Lo, C. K. M., & Ip, P. (2022a). Effectiveness of digital health interventions in reducing bullying and cyberbullying: A meta-analysis. Trauma, Violence, & Abuse, 15248380221082090. 10.1177/15248380221082090
  • Chen B, Wang Y, Wang L. The effects of virtual reality-assisted language learning: A meta-analysis. Sustainability. 2022;14(6):3147. doi: 10.3390/su14063147.
  • Cheok ML, Wong SL. Predictors of e-learning satisfaction in teaching and learning for school teachers: A literature review. International Journal of Instruction. 2015;8(1):75–90. doi: 10.12973/iji.2015.816a.
  • Cheung, A. C., & Slavin, R. E. (2011). The Effectiveness of Education Technology for Enhancing Reading Achievement: A Meta-Analysis. Center for Research and Reform in Education.
  • Coban, M., Bolat, Y. I., & Goksu, I. (2022). The potential of immersive virtual reality to enhance learning: A meta-analysis. Educational Research Review, 100452. 10.1016/j.edurev.2022.100452
  • Condie, R., & Munro, R. K. (2007). The impact of ICT in schools – a landscape review. Retrieved 30 June 2022 from: https://oei.org.ar/ibertic/evaluacion/sites/default/files/biblioteca/33_impact_ict_in_schools.pdf
  • Conrads, J., Rasmussen, M., Winters, N., Geniet, A., & Langer, L. (2017). Digital Education Policies in Europe and Beyond: Key Design Principles for More Effective Policies. Redecker, C., P. Kampylis, M. Bacigalupo, Y. Punie (Eds.), EUR 29000 EN, Publications Office of the European Union, Luxembourg. 10.2760/462941
  • Costa P, Castaño-Muñoz J, Kampylis P. Capturing schools’ digital capacity: Psychometric analyses of the SELFIE self-reflection tool. Computers & Education. 2021;162:104080. doi: 10.1016/j.compedu.2020.104080.
  • Cussó-Calabuig R, Farran XC, Bosch-Capblanch X. Effects of intensive use of computers in secondary school on gender differences in attitudes towards ICT: A systematic review. Education and Information Technologies. 2018;23(5):2111–2139. doi: 10.1007/s10639-018-9706-6.
  • Daniel SJ. Education and the COVID-19 pandemic. Prospects. 2020;49(1):91–96. doi: 10.1007/s11125-020-09464-3.
  • Delcker J, Ifenthaler D. Teachers’ perspective on school development at German vocational schools during the Covid-19 pandemic. Technology, Pedagogy and Education. 2021;30(1):125–139. doi: 10.1080/1475939X.2020.1857826.
  • Delgado, A., Wardlow, L., O’Malley, K., & McKnight, K. (2015). Educational technology: A review of the integration, resources, and effectiveness of technology in K-12 classrooms. Journal of Information Technology Education Research, 14, 397. Retrieved 30 June 2022 from http://www.jite.org/documents/Vol14/JITEv14ResearchP397-416Delgado1829.pdf
  • De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, Patel V. Theory of change: A theory-driven approach to enhance the Medical Research Council's framework for complex interventions. Trials. 2014;15(1):1–13. doi: 10.1186/1745-6215-15-267.
  • Di Pietro G, Biagi F, Costa P, Karpiński Z, Mazza J. The likely impact of COVID-19 on education: Reflections based on the existing literature and recent international datasets. Publications Office of the European Union; 2020.
  • Elkordy A, Lovinelli J. Competencies, Culture, and Change: A Model for Digital Transformation in K12 Educational Contexts. In: Ifenthaler D, Hofhues S, Egloffstein M, Helbig C, editors. Digital Transformation of Learning Organizations. Springer; 2020. pp. 203–219.
  • Eng TS. The impact of ICT on learning: A review of research. International Education Journal. 2005;6(5):635–650.
  • European Commission. (2020). Digital Education Action Plan 2021–2027. Resetting education and training for the digital age. Retrieved 30 June 2022 from https://ec.europa.eu/education/sites/default/files/document-library-docs/deap-communication-sept2020_en.pdf
  • European Commission. (2019). 2nd survey of schools: ICT in education. Objective 1: Benchmark progress in ICT in schools. Retrieved 30 June 2022 from: https://data.europa.eu/euodp/data/storage/f/2019-03-19T084831/FinalreportObjective1-BenchmarkprogressinICTinschools.pdf
  • Eurydice. (2019). Digital Education at School in Europe. Luxembourg: Publications Office of the European Union. Retrieved 30 June 2022 from: https://eacea.ec.europa.eu/national-policies/eurydice/content/digital-education-school-europe_en
  • Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence-based review. Retrieved 30 June 2022 from https://ssrn.com/abstract=3031695
  • Fadda D, Pellegrini M, Vivanet G, Zandonella Callegher C. Effects of digital games on student motivation in mathematics: A meta-analysis in K-12. Journal of Computer Assisted Learning. 2022;38(1):304–325. doi: 10.1111/jcal.12618.
  • Fernández-Gutiérrez M, Gimenez G, Calero J. Is the use of ICT in education leading to higher student outcomes? Analysis from the Spanish Autonomous Communities. Computers & Education. 2020;157:103969. doi: 10.1016/j.compedu.2020.103969.
  • Ferrari, A., Cachia, R., & Punie, Y. (2011). Educational change through technology: A challenge for obligatory schooling in Europe. Lecture Notes in Computer Science, 6964, 97–110. Retrieved 30 June 2022 from https://link.springer.com/content/pdf/10.1007/978-3-642-23985-4.pdf
  • Fielding, K., & Murcia, K. (2022). Research linking digital technologies to young children’s creativity: An interpretive framework and systematic review. Issues in Educational Research, 32(1), 105–125. Retrieved 30 June 2022 from http://www.iier.org.au/iier32/fielding-abs.html
  • Friedel, H., Bos, B., Lee, K., & Smith, S. (2013). The impact of mobile handheld digital devices on student learning: A literature review with meta-analysis. In Society for Information Technology & Teacher Education International Conference (pp. 3708–3717). Association for the Advancement of Computing in Education (AACE).
  • Fu JS. ICT in education: A critical literature review and its implications. International Journal of Education and Development Using Information and Communication Technology (IJEDICT). 2013;9(1):112–125.
  • Gaol FL, Prasolova-Førland E. Special section editorial: The frontiers of augmented and mixed reality in all levels of education. Education and Information Technologies. 2022;27(1):611–623. doi: 10.1007/s10639-021-10746-2.
  • Garzón J, Acevedo J. Meta-analysis of the impact of Augmented Reality on students’ learning gains. Educational Research Review. 2019;27:244–260. doi: 10.1016/j.edurev.2019.04.001.
  • Garzón, J., Baldiris, S., Gutiérrez, J., & Pavón, J. (2020). How do pedagogical approaches affect the impact of augmented reality on education? A meta-analysis and research synthesis. Educational Research Review, 100334. 10.1016/j.edurev.2020.100334
  • Grgurović M, Chapelle CA, Shelley MC. A meta-analysis of effectiveness studies on computer technology-supported language learning. ReCALL. 2013;25(2):165–198. doi: 10.1017/S0958344013000013.
  • Haßler B, Major L, Hennessy S. Tablet use in schools: A critical review of the evidence for learning outcomes. Journal of Computer Assisted Learning. 2016;32(2):139–156. doi: 10.1111/jcal.12123.
  • Haleem A, Javaid M, Qadri MA, Suman R. Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers. 2022;3:275–285. doi: 10.1016/j.susoc.2022.05.004.
  • Hardman J. Towards a pedagogical model of teaching with ICTs for mathematics attainment in primary school: A review of studies 2008–2018. Heliyon. 2019;5(5):e01726. doi: 10.1016/j.heliyon.2019.e01726.
  • Hattie J, Rogers HJ, Swaminathan H. The role of meta-analysis in educational research. In: Reid AD, Hart P, Peters MA, editors. A companion to research in education. Springer; 2014. pp. 197–207.
  • Hattie J. Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge; 2008. doi: 10.4324/9780203887332.
  • Higgins S, Xiao Z, Katsipataki M. The impact of digital technology on learning: A summary for the Education Endowment Foundation. Education Endowment Foundation and Durham University; 2012.
  • Higgins, K., Huscroft-D’Angelo, J., & Crawford, L. (2019). Effects of technology in mathematics on achievement, motivation, and attitude: A meta-analysis. Journal of Educational Computing Research, 57(2), 283–319.
  • Hillmayr D, Ziernwald L, Reinhold F, Hofer SI, Reiss KM. The potential of digital tools to enhance mathematics and science learning in secondary schools: A context-specific meta-analysis. Computers & Education. 2020;153:103897. doi: 10.1016/j.compedu.2020.103897.
  • Istenic Starcic A, Bagon S. ICT-supported learning for inclusion of people with special needs: Review of seven educational technology journals, 1970–2011. British Journal of Educational Technology. 2014;45(2):202–230. doi: 10.1111/bjet.12086.
  • Jewitt C, Clark W, Hadjithoma-Garstka C. The use of learning platforms to organise learning in English primary and secondary schools. Learning, Media and Technology. 2011;36(4):335–348. doi: 10.1080/17439884.2011.621955.
  • JISC. (2020). What is digital transformation? Retrieved 30 June 2022 from: https://www.jisc.ac.uk/guides/digital-strategy-framework-for-university-leaders/what-is-digital-transformation
  • Kalati, A. T., & Kim, M. S. (2022). What is the effect of touchscreen technology on young children’s learning?: A systematic review. Education and Information Technologies, 1–19. 10.1007/s10639-021-10816-5
  • Kalemkuş, J., & Kalemkuş, F. (2022). Effect of the use of augmented reality applications on academic achievement of student in science education: Meta-analysis review. Interactive Learning Environments, 1–18. 10.1080/10494820.2022.2027458
  • Kao C-W. The effects of digital game-based learning task in English as a foreign language contexts: A meta-analysis. Education Journal. 2014;42(2):113–141.
  • Kampylis P, Punie Y, Devine J. Promoting effective digital-age learning – a European framework for digitally competent educational organisations. JRC Technical Reports; 2015. doi: 10.2791/54070.
  • Kazu IY, Yalçin CK. Investigation of the effectiveness of hybrid learning on academic achievement: A meta-analysis study. International Journal of Progressive Education. 2022;18(1):249–265. doi: 10.29329/ijpe.2022.426.14.
  • Koh C. A qualitative meta-analysis on the use of serious games to support learners with intellectual and developmental disabilities: What we know, what we need to know and what we can do. International Journal of Disability, Development and Education. 2022;69(3):919–950. doi: 10.1080/1034912X.2020.1746245.
  • König J, Jäger-Biela DJ, Glutsch N. Adapting to online teaching during COVID-19 school closure: Teacher education and teacher competence effects among early career teachers in Germany. European Journal of Teacher Education. 2020;43(4):608–622. doi: 10.1080/02619768.2020.1809650.
  • Lawrence JE, Tar UA. Factors that influence teachers’ adoption and integration of ICT in teaching/learning process. Educational Media International. 2018;55(1):79–105. doi: 10.1080/09523987.2018.1439712.
  • Lee, S., Kuo, L. J., Xu, Z., & Hu, X. (2020). The effects of technology-integrated classroom instruction on K-12 English language learners’ literacy development: A meta-analysis. Computer Assisted Language Learning, 1–32. 10.1080/09588221.2020.1774612
  • Lei, H., Chiu, M. M., Wang, D., Wang, C., & Xie, T. (2022a). Effects of game-based learning on students’ achievement in science: A meta-analysis. Journal of Educational Computing Research. 10.1177/07356331211064543
  • Lei H, Wang C, Chiu MM, Chen S. Do educational games affect students' achievement emotions? Evidence from a meta-analysis. Journal of Computer Assisted Learning. 2022;38(4):946–959. doi: 10.1111/jcal.12664.
  • Liao YKC, Chang HW, Chen YW. Effects of computer application on elementary school student's achievement: A meta-analysis of students in Taiwan. Computers in the Schools. 2007;24(3–4):43–64. doi: 10.1300/J025v24n03_04.
  • Li Q, Ma X. A meta-analysis of the effects of computer technology on school students’ mathematics learning. Educational Psychology Review. 2010;22(3):215–243. doi: 10.1007/s10648-010-9125-8.
  • Liu, M., Pang, W., Guo, J., & Zhang, Y. (2022). A meta-analysis of the effect of multimedia technology on creative performance. Education and Information Technologies, 1–28. 10.1007/s10639-022-10981-1
  • Lu Z, Chiu MM, Cui Y, Mao W, Lei H. Effects of game-based learning on students’ computational thinking: A meta-analysis. Journal of Educational Computing Research. 2022. doi: 10.1177/07356331221100740.
  • Martinez L, Gimenes M, Lambert E. Entertainment video games for academic learning: A systematic review. Journal of Educational Computing Research. 2022. doi: 10.1177/07356331211053848.
  • Mayne J. Useful theory of change models. Canadian Journal of Program Evaluation. 2015;30(2):119–142. doi: 10.3138/cjpe.230.
  • Moran J, Ferdig RE, Pearson PD, Wardrop J, Blomeyer RL, Jr. Technology and reading performance in the middle-school grades: A meta-analysis with recommendations for policy and practice. Journal of Literacy Research. 2008;40(1):6–58. doi: 10.1080/10862960802070483.
  • OECD. (2015). Students, Computers and Learning: Making the Connection. PISA, OECD Publishing, Paris. Retrieved from: 10.1787/9789264239555-en
  • OECD. (2021). OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots. Retrieved from: https://www.oecd-ilibrary.org/education/oecd-digital-education-outlook-2021_589b283f-en
  • Pan Y, Ke F, Xu X. A systematic review of the role of learning games in fostering mathematics education in K-12 settings. Educational Research Review. 2022;36:100448. doi: 10.1016/j.edurev.2022.100448.
  • Pettersson F. Understanding digitalization and educational change in school by means of activity theory and the levels of learning concept. Education and Information Technologies. 2021;26(1):187–204. doi: 10.1007/s10639-020-10239-8.
  • Pihir, I., Tomičić-Pupek, K., & Furjan, M. T. (2018). Digital transformation insights and trends. In Central European Conference on Information and Intelligent Systems (pp. 141–149). Faculty of Organization and Informatics Varazdin. Retrieved 30 June 2022 from https://www.proquest.com/conference-papers-proceedings/digital-transformation-insights-trends/docview/2125639934/se-2
  • Punie, Y., Zinnbauer, D., & Cabrera, M. (2006). A review of the impact of ICT on learning. Working Paper prepared for DG EAC. Retrieved 30 June 2022 from: http://www.eurosfaire.prd.fr/7pc/doc/1224678677_jrc47246n.pdf
  • Quah CY, Ng KH. A systematic literature review on digital storytelling authoring tool in education: January 2010 to January 2020. International Journal of Human-Computer Interaction. 2022;38(9):851–867. doi: 10.1080/10447318.2021.1972608.
  • Ran H, Kim NJ, Secada WG. A meta-analysis on the effects of technology's functions and roles on students' mathematics achievement in K-12 classrooms. Journal of Computer Assisted Learning. 2022;38(1):258–284. doi: 10.1111/jcal.12611.
  • Ređep, N. B. (2021). Comparative overview of the digital preparedness of education systems in selected CEE countries. Center for Policy Studies, CEU Democracy Institute.
  • Rott, B., & Marouane, C. (2018). Digitalization in schools – organization, collaboration and communication. In Digital Marketplaces Unleashed (pp. 113–124). Springer, Berlin, Heidelberg.
  • Savva M, Higgins S, Beckmann N. Meta-analysis examining the effects of electronic storybooks on language and literacy outcomes for children in grades Pre-K to grade 2. Journal of Computer Assisted Learning. 2022;38(2):526–564. doi: 10.1111/jcal.12623.
  • Schmid RF, Bernard RM, Borokhovski E, Tamim RM, Abrami PC, Surkes MA, Wade CA, Woods J. The effects of technology use in postsecondary education: A meta-analysis of classroom applications. Computers & Education. 2014;72:271–291. doi: 10.1016/j.compedu.2013.11.002.
  • Schuele CM, Justice LM. The importance of effect sizes in the interpretation of research: Primer on research: Part 3. The ASHA Leader. 2006;11(10):14–27. doi: 10.1044/leader.FTR4.11102006.14.
  • Schwabe, A., Lind, F., Kosch, L., & Boomgaarden, H. G. (2022). No negative effects of reading on screen on comprehension of narrative texts compared to print: A meta-analysis. Media Psychology, 1–18. 10.1080/15213269.2022.2070216
  • Sellar S. Data infrastructure: A review of expanding accountability systems and large-scale assessments in education. Discourse: Studies in the Cultural Politics of Education. 2015;36(5):765–777. doi: 10.1080/01596306.2014.931117.
  • Stock WA. Systematic coding for research synthesis. In: Cooper H, Hedges LV, editors. The handbook of research synthesis. Russell Sage; 1994. pp. 125–138.
  • Su, J., Zhong, Y., & Ng, D. T. K. (2022). A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Computers and Education: Artificial Intelligence, 100065. 10.1016/j.caeai.2022.100065
  • Su J, Yang W. Artificial intelligence in early childhood education: A scoping review. Computers and Education: Artificial Intelligence. 2022;3:100049. doi: 10.1016/j.caeai.2022.100049.
  • Sung YT, Chang KE, Liu TC. The effects of integrating mobile devices with teaching and learning on students' learning performance: A meta-analysis and research synthesis. Computers & Education. 2016;94:252–275. doi: 10.1016/j.compedu.2015.11.008.
  • Talan T, Doğan Y, Batdı V. Efficiency of digital and non-digital educational games: A comparative meta-analysis and a meta-thematic analysis. Journal of Research on Technology in Education. 2020;52(4):474–514. doi: 10.1080/15391523.2020.1743798.
  • Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28. 10.3102/0034654310393361
  • Tamim, R. M., Borokhovski, E., Pickup, D., Bernard, R. M., & El Saadi, L. (2015). Tablets for teaching and learning: A systematic review and meta-analysis. Commonwealth of Learning. Retrieved from: http://oasis.col.org/bitstream/handle/11599/1012/2015_Tamim-et-al_Tablets-for-Teaching-and-Learning.pdf
  • Tang C, Mao S, Xing Z, Naumann S. Improving student creativity through digital technology products: A literature review. Thinking Skills and Creativity. 2022;44:101032. doi: 10.1016/j.tsc.2022.101032.
  • Tolani-Brown, N., McCormac, M., & Zimmermann, R. (2011). An analysis of the research and impact of ICT in education in developing country contexts. In ICTs and sustainable solutions for the digital divide: Theory and perspectives (pp. 218–242). IGI Global.
  • Trucano, M. (2005). Knowledge Maps: ICTs in Education. Washington, DC: infoDev / World Bank. Retrieved 30 June 2022 from https://files.eric.ed.gov/fulltext/ED496513.pdf
  • Ulum H. The effects of online education on academic success: A meta-analysis study. Education and Information Technologies. 2022;27(1):429–450. doi: 10.1007/s10639-021-10740-8.
  • Underwood, J. D. (2009). The impact of digital technology: A review of the evidence of the impact of digital technologies on formal education. Retrieved 30 June 2022 from: http://dera.ioe.ac.uk/id/eprint/10491
  • Verschaffel, L., Depaepe, F., & Mevarech, Z. (2019). Learning Mathematics in metacognitively oriented ICT-based learning environments: A systematic review of the literature. Education Research International, 2019. 10.1155/2019/3402035
  • Villena-Taranilla R, Tirado-Olivares S, Cózar-Gutiérrez R, González-Calero JA. Effects of virtual reality on learning outcomes in K-6 education: A meta-analysis. Educational Research Review. 2022;35:100434. doi: 10.1016/j.edurev.2022.100434.
  • Voogt J, Knezek G, Cox M, Knezek D, ten Brummelhuis A. Under which conditions does ICT have a positive effect on teaching and learning? A call to action. Journal of Computer Assisted Learning. 2013;29(1):4–14. doi: 10.1111/j.1365-2729.2011.00453.x.
  • Vuorikari, R., Punie, Y., & Cabrera, M. (2020). Emerging technologies and the teaching profession: Ethical and pedagogical considerations based on near-future scenarios (No. JRC120183). Joint Research Centre. Retrieved 30 June 2022 from: https://publications.jrc.ec.europa.eu/repository/handle/JRC120183
  • Wang LH, Chen B, Hwang GJ, Guan JQ, Wang YQ. Effects of digital game-based STEM education on students’ learning achievement: A meta-analysis. International Journal of STEM Education. 2022;9(1):1–13. doi: 10.1186/s40594-022-00344-0.
  • Wen X, Walters SM. The impact of technology on students’ writing performances in elementary classrooms: A meta-analysis. Computers and Education Open. 2022;3:100082. doi: 10.1016/j.caeo.2022.100082.
  • Zheng B, Warschauer M, Lin CH, Chang C. Learning in one-to-one laptop environments: A meta-analysis and research synthesis. Review of Educational Research. 2016; 86 (4):1052–1084. doi: 10.3102/0034654316628645. [ CrossRef ] [ Google Scholar ]

Trends and Topics in Educational Technology, 2022 Edition

  • Column: Guest Editorial
  • Published: 23 February 2022
  • Volume 66, pages 134–140 (2022)

  • Royce Kimmons
  • Joshua M. Rosenberg


This editorial continues our annual effort to identify and catalog trends and popular topics in the field of educational technology. Continuing our approach from previous years (Kimmons, 2020 ; Kimmons et al., 2021 ), we use public internet data mining methods (Kimmons & Veletsianos, 2018 ) to extract and analyze data from three large data sources: the Scopus research article database, the Twitter #edtech affinity group, and school and school district Facebook pages. Such data sources can provide valuable insights into what is happening and what is of interest in the field as educators, researchers, and students grapple with crises and the rapidly evolving uses of educational technologies (e.g., Kimmons et al., 2020 ; Trust et al., 2020 ; Veletsianos & Kimmons, 2020 ). Through this analysis, we provide a brief snapshot of what the educational technology field looked like in 2021 via each of these lenses and attempt to triangulate an overall state of our field and vision for what may be coming next.

What Were Trending Topics in Educational Technology Journals in 2021?

Educational technology research topics for 2021 were very similar to previous years, with a few exceptions. In total, we collected titles for 2368 articles via Scopus published in top educational technology journals as identified by Google Scholar. We then analyzed keyword and bigram (two words found together) frequencies in titles to determine the most commonly referenced terms. To assist in making sense of results, we also manually grouped together keywords and bigrams into four information types: contexts, methods, modalities, and topics. Contexts included terms referring to the research setting, such as “COVID-19” or “higher education.” Methods included terms referring to research methods involved in the article, such as “systematic review” or “meta-analysis.” Modalities included terms referring to the technical modality through which the study was occurring, such as “virtual reality” or “online learning.” Last, Topics included terms referring to the intervention, objective, or theoretical goal of the study, such as “computational thinking,” “learning environment,” or “language learning.” The most common bigrams and keywords for each type may be found in Table  1 ; a few items of interest follow.

Bigrams generally provide more specificity for interpreting meaning than do keywords, simply because keywords might have greater variety in usage (e.g., “school” might be used in the context of “primary school,” “secondary school,” “school teacher,” and so forth). So, when interpreting Table 1 , the bigram column is generally more useful for identifying trending topics, though the keyword column may at times be helpful as a clarifying supplement.
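In outline, the keyword and bigram counting described above can be sketched in a few lines of Python. This is a minimal illustration with made-up titles and a toy stopword list, not the scripts used for the editorial's analysis:

```python
import re
from collections import Counter

# A toy stopword list for illustration; a real analysis would use a fuller one.
STOPWORDS = frozenset({"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"})

def title_term_frequencies(titles):
    """Count keyword and bigram (two-word sequence) frequencies across titles."""
    keywords, bigrams = Counter(), Counter()
    for title in titles:
        tokens = [t for t in re.findall(r"[a-z0-9\-]+", title.lower())
                  if t not in STOPWORDS]
        keywords.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))  # adjacent token pairs
    return keywords, bigrams

# Hypothetical titles, standing in for the 2368 Scopus records.
titles = [
    "Computational thinking in higher education: A systematic review",
    "Virtual reality and computational thinking in online learning",
]
kw, bg = title_term_frequencies(titles)
print(bg.most_common(1))  # [(('computational', 'thinking'), 2)]
```

Note one simplification: removing stopwords before pairing tokens creates bigrams that span a removed word, which a more careful pipeline would avoid.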

“Computational thinking” and “learning environments” were the two most-researched topical bigrams in 2021, and “virtual reality” and “online learning” were the most-researched modality bigrams. Most-referenced methods included “systematic review” and “meta-analysis,” which is noteworthy because such methods are used to conduct secondary analyses on existing studies, and their dominance may suggest an interest in the field to identify what works and to synthesize findings across various contexts within a sea of articles that is ever-increasing in size.

Due to the ongoing pandemic, “COVID-19” was a contextual term regularly mentioned in article titles (5.4%). “Pandemic” (3.4%), “emergency” (1.2%), and “shift to” (e.g., digital, online, blended; 0.9%) were also commonly referenced. This suggests that as the world continues to grapple with this multifaceted crisis, educational technology researchers are heavily engaged in addressing the educational concerns associated with it (and remote teaching in particular).

Grade-level references in titles further suggested that educational technology research is being conducted at all levels but is most prominent at the higher education or post-secondary level, decreasing in frequency as grade levels go down: high school or secondary terms were more prominent than elementary or primary terms, and “higher education” (3.5%) was referenced twice as frequently as “K-12” (1.7%). This is noteworthy because it suggests that research findings in educational technology currently focus mainly on older (and even adult) students, and that if results are applied to understanding learners generally, the needs of adolescents and younger children may be relatively underrepresented.

What Were Trending #Edtech Topics and Tools on Twitter in 2021?

Twitter is a valuable source of information about trends in a field because it allows researchers and practitioners to share relevant resources, studies, and musings and to categorize posts via descriptive hashtags. The #edtech hashtag continued to be very popular during 2021, and we collected all original tweets (ignoring retweets) that included the #edtech hashtag for the year. This yielded 433,078 original tweets posted by 40,767 users, averaging 36,090 tweets per month (SD = 2974).

Because users can include multiple hashtags on a tweet, we aggregated the frequencies of additional (co-occurring) hashtags to determine the intended audiences (e.g., #teachers, #k12) and content topics (e.g., #elearning, #ai) of tweets. Some of the most popular additional hashtags of each type are presented in Table  2 . To better understand results, we also calculated the representation of each additional hashtag in the overall dataset (e.g., 2% of all #edtech tweets also included the #teachers hashtag) and the diversity of authorship (i.e., the number of users divided by the number of tweets). This diversity score was helpful for understanding how some hashtags were used by relatively few accounts for purposes such as product promotion. For example, the #byjus hashtag, which refers to an educational technology company founded in India, was tweeted 19,546 times. Still, the diversity score was only 3%, revealing that though this was a very popular hashtag in terms of tweet counts, it was being included by relatively few accounts at very high frequencies, such as via focused marketing campaigns.
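As a rough sketch, the co-occurrence counts and diversity scores described above might be computed as follows. The tweet data here are hypothetical, and this is not the editorial's actual pipeline:

```python
from collections import defaultdict

def hashtag_stats(tweets, focus="#edtech"):
    """For tweets containing `focus`, aggregate co-occurring hashtags and
    compute each tag's share of focus tweets and its diversity score
    (distinct authors divided by tweets; low values suggest few accounts
    posting at high frequency, e.g., marketing campaigns)."""
    counts = defaultdict(int)
    authors = defaultdict(set)
    focus_total = 0
    for user, hashtags in tweets:  # hashtags: set of lowercase tags
        if focus not in hashtags:
            continue
        focus_total += 1
        for tag in hashtags - {focus}:
            counts[tag] += 1
            authors[tag].add(user)
    return {
        tag: {
            "tweets": n,
            "share": n / focus_total,            # fraction of focus tweets
            "diversity": len(authors[tag]) / n,  # authors per tweet
        }
        for tag, n in counts.items()
    }

# Hypothetical data: one account tweeting twice vs. two distinct accounts.
tweets = [
    ("promo_bot", {"#edtech", "#byjus"}),
    ("promo_bot", {"#edtech", "#byjus"}),
    ("teacher_a", {"#edtech", "#teachers"}),
    ("teacher_b", {"#edtech", "#teachers"}),
]
stats = hashtag_stats(tweets)
print(stats["#byjus"]["diversity"])     # 0.5: one account behind two tweets
print(stats["#teachers"]["diversity"])  # 1.0: each tweet from a distinct account
```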

Notably, several community or affinity space hashtags (Carpenter & Krutka, 2014; Rosenberg et al., 2016) were among the most common included with #edtech, such as #edchat, #edutwitter, and #teachertwitter. In particular, 13.9% of #edtech tweets were also tagged as #edchat, and 25.7% of #edchat tweets were also tagged as #edtech, revealing relatively high synchronicity between these two spaces. Furthermore, regarding institutional level, #k12 (n = 1712) and #highered (n = 1770) exhibited similar user counts, as did #school (n = 1284) and #highereducation (n = 1161); interestingly, however, the #k12 and #school hashtags exhibited nearly twice as many tweets as their #highered and #highereducation counterparts. This suggests that although the communities tweeting about topics for each group may be of similar size, the K-12 community was much more active than the higher education community.

Regarding topics, #elearning, #onlinelearning, #remotelearning, #distancelearning, #virtuallearning, and #blendedlearning were represented at a relatively high rate (in 16.1% of tweets), perhaps reflecting ongoing interest associated with #covid19. Other prominent topical hashtags included emerging technologies, such as #ai (n = 2112), #vr (n = 917), #ar (n = 679), and #blockchain (n = 545), as well as subject areas (e.g., #stem) and general descriptors (e.g., #innovation).

Furthermore, one of the primary reasons for tweeting is to share resources or media items. An analysis of these #edtech tweets revealed that 94.4% included either a link to an external site or an embedded media resource, such as an image or video. Regarding external links, prominent domains included (a) news sites, such as edsurge.com , edtechmagazine.com , or edutopia.org , (b) other social media, such as linkedin.com , instagram.com , or facebook.com , (c) multimedia resources, such as youtube.com , anchor.fm, or podcasts.apple.com , and (d) productivity and management tools, such as docs.google.com , forms.gle, or eventbrite.com (cf., Table  3 ).
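A minimal sketch of this link-domain analysis, assuming the URLs have already been extracted from tweets. The category map is an illustrative subset invented for this example, not the editorial's full grouping:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical, abbreviated domain-to-category map for illustration only.
CATEGORIES = {
    "edsurge.com": "news",
    "edutopia.org": "news",
    "linkedin.com": "social media",
    "youtube.com": "multimedia",
    "docs.google.com": "productivity",
}

def domain_categories(urls):
    """Tally link categories by extracting each URL's host name."""
    tally = Counter()
    for url in urls:
        host = urlparse(url).netloc.lower()
        host = host[4:] if host.startswith("www.") else host  # drop "www."
        tally[CATEGORIES.get(host, "other")] += 1
    return tally

urls = [
    "https://www.edsurge.com/news/some-article",
    "https://docs.google.com/forms/d/example",
    "https://youtube.com/watch?v=example",
]
print(domain_categories(urls))  # one link each for news, productivity, multimedia
```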

Twitter communications in 2021 regarding #edtech included chatter about a variety of topics and resources. Shadows of #COVID-19 might be detected in the prevalence of this hashtag with others, like #remotelearning and #onlinelearning, but in many ways it seems that conversations continued to focus on issues of #education and #learning, as well as emerging topics like #ai, #vr, and #cybersecurity, suggesting some level of imperviousness to the pandemic.

What Were Trending Topics among Schools and School Districts on Facebook in 2021?

To examine trending educational technology topics on Facebook, we studied the posts of 14,481 schools and school districts on their public pages. First, one aspect of this analysis concerned the number of posts shared. In our last report, we documented that schools and districts posted more in March, April, and May 2020, the earliest and perhaps most tumultuous months of the COVID-19 pandemic, than in any other month, suggesting the importance of communication during this crisis period, as others have documented with Twitter data (Michela et al., 2022). Notably, even with 2021's data included, those months remained the most active; apart from them, the numbers of posts by schools and districts in 2021 were roughly comparable to the numbers in 2019 and 2020 (see Fig. 1).

Figure 1. The Number of Posts on Facebook by Schools and School Districts

To understand which technologies were shared on these Facebook pages, we examined the domain names of all the hyperlinks that were posted. Despite the myriad social and other changes experienced by schools from 2019 to 2021, the link domains shared on Facebook exhibited remarkable consistency: YouTube, Google Docs, Google, and Google Drive (i.e., Google or tools created by Google) were the four most frequently shared in each of these years (Table 4). Note that the n represents the number of schools or districts sharing one or more links to a domain (of the 14,481 total school and school district pages); thus, the 8278 for YouTube indicates that 57.2% of schools and districts posted one or more links to YouTube over the 2021 year. These were followed by Zoom, which was also widely shared in 2020 (though not in 2019), and then Google Sites (which was also shared frequently in 2020). The CDC's and the 2020 Census's websites dropped from the list of the ten most frequently shared domains in 2021, despite having been widely shared in 2020. Otherwise, the results are largely comparable across 2019, 2020, and 2021, indicating that schools and districts continued to rely on a core set of productivity tools despite the many disruptions and changes over this period.

We also examined the contents of schools' and school districts' posts. To do so, as in last year's report, we considered the technologies identified by Weller (2020) in his history of the past 25 years of educational technology. Specifically, we searched the messages posted by schools and districts for terms corresponding to the technologies Weller identified as representative of particular years. While the domains shared by schools and districts demonstrated remarkable consistency, the contents of their messages varied substantially, especially across the changes from 2019 to 2020 and from 2020 to 2021. To illustrate, consider mentions of “e-learning,” which Weller identified as the focal point of 1999. In 2019, schools and districts posted 834 messages mentioning “e-learning,” but in 2020 the number increased roughly ten-fold to 8326. Though mentions of “e-learning” might have been expected to hold somewhat steady during 2021, we instead saw a marked downturn to 1899 (a drop of about 77%). This trend—a sizable increase in how often certain technologies were mentioned in 2020 relative to 2019 that was not sustained in 2021—was also found for mentions of “learning management systems,” “video,” and “Second Life and virtual worlds,” among others. Indeed, the only noteworthy increase in mentions of these technologies from 2020 to 2021 was for “artificial intelligence.”
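The year-over-year term counting described above reduces to a simple tally. Below is a sketch with hypothetical post data, plus the percentage change implied by the reported “e-learning” counts:

```python
def term_mentions_by_year(posts, term):
    """Count posts mentioning `term` (case-insensitive), grouped by year.
    posts: iterable of (year, message) pairs."""
    counts = {}
    needle = term.lower()
    for year, message in posts:
        if needle in message.lower():
            counts[year] = counts.get(year, 0) + 1
    return counts

def pct_change(old, new):
    """Percentage change from `old` to `new`."""
    return (new - old) / old * 100

# Hypothetical messages, not real post data.
posts = [
    (2019, "E-learning day schedule attached"),
    (2020, "Reminder: e-learning begins Monday"),
    (2020, "Our e-learning portal is live"),
    (2021, "Spirit week photos!"),
]
print(term_mentions_by_year(posts, "e-learning"))  # {2019: 1, 2020: 2}

# Applying pct_change to the reported "e-learning" counts for 2020 and 2021:
print(round(pct_change(8326, 1899), 1))  # -77.2
```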

Summary and Discussion

By triangulating the 2021 snapshots of each of these three data sources—Scopus, Twitter, and Facebook—we can begin to see the state of the educational technology field as it presses into the future. Results on specific terms or topics may be useful for individual researchers and practitioners to see the representation of their areas of interest. Still, some common takeaways that emerge from all three sources include the following.

First, we found an emphasis on “e-learning”—particularly in Twitter and Facebook posts—as well as “blended learning” (Twitter) and “online learning” (journal articles). Notably, COVID-19 (and related terms) was also frequently mentioned. These findings align with how mentions of “e-learning” spiked during 2020, when the effects of the COVID-19 pandemic on education were especially disruptive, but their ongoing presence also suggests that interest in these topics will likely extend beyond the context of the pandemic.

Second, we note a keen interest in emergent technologies like artificial intelligence and virtual reality, particularly on the part of researchers (as evidenced by how frequently these terms were mentioned in journal articles published in 2021). At the same time, we note that this interest has not yet crystallized into the sustained adoption and use of these emergent technologies—a point bolstered by the relatively limited mention of these technologies in the Facebook posts of schools and school districts. Thus, we think we as a field must wait and see whether interest in these technologies is lasting or transient.

Last, we found an ever-increasing reliance on several corporate entities for productivity and sharing. This was especially the case for Google and tools created by Google: YouTube, Google Docs, and Google Drive, in particular. Indeed, these tools are such an established part of our work (and educational) context that we might hardly think of them as tools. Furthermore, tools created by Google and several other corporations—including social media platforms themselves—were also prevalent in the content of the tweets we analyzed. While we do not believe it is a bad decision on the part of individuals or educational institutions to use these and other tools, there are also some potential downsides to their use that we think invite critical questions (Burchfield et al., 2021; Krutka et al., 2021).

As a result of these common takeaways, we will now conclude with three questions for educational technology researchers and practitioners to consider.

Pandemic Bump Vs. Ubiquity

First, many have wondered whether changes in educational technology catalyzed by the pandemic will yield sustained, ubiquitous changes to the field or whether these adjustments represent only a short-term bump of interest, as may be the case with the emergency remote teaching tools and strategies used in the early days of the pandemic (Hodges et al., 2020). One takeaway from our Facebook analysis was that while some productivity technologies appeared, on the basis of our domain analysis, to have remained consistently used (e.g., Google Docs), mentions of many specific technologies in the messages posted by schools and districts appeared to be more transitory, as in the cases of “e-learning” and “learning management systems.” This suggests at least two possible interpretations. One is that these technologies were used in transient response to an unprecedented period of emergency remote instruction: though tools associated with remote teaching and learning continue to be used, their use was primarily a temporary, emergency measure. Another is that these tools were mentioned less because they have become more ubiquitous but less visible tools used by teachers and learners. Learning management systems may still, of course, be widely used, but schools and districts may share less about their role on public social media because these systems are already familiar to students and their parents. While we cannot say why there was a dramatic increase and then decrease in mentions of many educational technologies from 2019 through 2021, our analysis indicates that many tools were, at least, communicated about much less in the past year than in the preceding year, when the pandemic began in the U.S.

Technocentrism Vs. Focusing on Learners and Improving Educational Systems

Second, though emerging technologies are obviously an essential component of our field, one of the perennial challenges we must grapple with is our relationship to these technologies. Are we technocentric, as Papert (1987, 1990) warned, or do we focus on learning and improvement? In our results, we notice that technologies such as artificial intelligence, virtual reality, and augmented reality were very frequently referenced in comparison to most other modalities or topics of research. As processing and graphical rendering capabilities continue to become more compact and inexpensive via headsets, smartphones, and haptic devices, we would expect these technologies to continue to receive ongoing attention. Though there are certainly valuable learning improvement opportunities associated with such technologies (Glaser & Schmidt, 2021), we might also justifiably wonder whether the volume of attention that these technologies currently receive in the literature is commensurate with their actual (or even hypothetical) large-scale learning benefits, or whether the current fascination with such technologies represents a repeat of other historical emphases that may not have panned out in the form of systemic educational improvement, such as in the case of MUVEs (cf. Nelson & Ketelhut, 2007).

Limited Broader Impacts on Larger Social Issues

Finally, to reiterate our critiques from previous years (Kimmons, 2020; Kimmons et al., 2021), we continued to see a dearth of references to important social issues in scholarly article titles, including social matters on which educational technology should be expected to have a strong voice. For instance, terms relating to universal design (n = 0), accessibility (n = 4), privacy (n = 8), ethics (n = 12), security (n = 8), equity (n = 6), justice (n = 1), and (digital and participatory) divides (n = 1) were all very uncommon. Though “ethics” was the most common of these terms, it was represented in only 1-in-200 article titles; and though current “practices with student data represent cause for concern, as student behaviors are increasingly tracked, analyzed, and studied to draw conclusions about learning, attitudes, and future behaviors” (Kimmons, 2021, para. 2; cf. Rosenberg et al., 2021) and proctoring software becomes increasingly ubiquitous (Kimmons & Veletsianos, 2021), “privacy” was mentioned in only 1-in-333 article titles and “proctor*” in only 1-in-600. In our current pandemic context, we have often heard educational technologists lament that decision-makers and those in power may not seek our guidance in addressing pandemic-related issues that would clearly benefit from our expertise. And yet, the absence of other socially relevant topics from our research suggests that we may be challenged to leverage our work toward matters of larger social or educational importance ourselves. A focus on social matters and the social context around educational technology use, then, remains an opportunity for research and development by the educational technology community in the years ahead. This seems especially salient as our data suggest that the field is heavily influenced by big technology corporations like Google and Facebook, which have historically been critiqued for violating ethical expectations of privacy and failing to support social good. As educational technology researchers and practitioners, we have the position and expertise necessary to shape the future of ethical technology use in education. Hopefully, we can step up to this challenge.

Carpenter, J. P., & Krutka, D. G. (2014). How and why educators use Twitter: A survey of the field. Journal of Research on Technology in Education, 46(4), 414–434.

Glaser, N., & Schmidt, M. (2021). Systematic literature review of virtual reality intervention design patterns for individuals with autism spectrum disorders. International Journal of Human-Computer Interaction, 1–36.

Hodges, C. B., Moore, S., Lockee, B. B., Trust, T., & Bond, M. A. (2020). The difference between emergency remote teaching and online learning. EDUCAUSE Review. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning

Kimmons, R. (2020). Current trends (and missing links) in educational technology research and practice. TechTrends, 64(6). https://doi.org/10.1007/s11528-020-00549-6

Kimmons, R. (2021). Safeguarding student privacy in an age of analytics. Educational Technology Research & Development, 69, 343–345. https://doi.org/10.1007/s11423-021-09950-1

Kimmons, R., & Veletsianos, G. (2018). Public internet data mining methods in instructional design, educational technology, and online learning research. TechTrends, 62(5), 492–500. https://doi.org/10.1007/s11528-018-0307-4

Kimmons, R., & Veletsianos, G. (2021). Proctoring software in higher ed: Prevalence and patterns. EDUCAUSE Review. https://er.educause.edu/articles/2021/2/proctoring-software-in-higher-ed-prevalence-and-patterns

Kimmons, R., Veletsianos, G., & VanLeeuwen, C. (2020). What (some) faculty are saying about the shift to remote teaching and learning. EDUCAUSE Review. https://er.educause.edu/blogs/2020/5/what-some-faculty-are-saying-about-the-shift-to-remote-teaching-and-learning

Kimmons, R., Rosenberg, J., & Allman, B. (2021). Trends in educational technology: What Facebook, Twitter, and Scopus can tell us about current research and practice. TechTrends, 65, 125–136. https://doi.org/10.1007/s11528-021-00589-6

Krutka, D. G., Smits, R. M., & Willhelm, T. A. (2021). Don’t be evil: Should we use Google in schools? TechTrends, 65(4), 421–431.

Michela, E., Rosenberg, J. M., Kimmons, R., Sultana, O., Burchfield, M. A., & Thomas, T. (2022). “We are trying to communicate the best we can”: Understanding districts’ communication on Twitter during the COVID-19 pandemic. AERA Open. https://osf.io/qpu8v/

Nelson, B. C., & Ketelhut, D. J. (2007). Scientific inquiry in educational multi-user virtual environments. Educational Psychology Review, 19(3), 265–283.

Papert, S. (1987). Computer criticism vs. technocentric thinking. Educational Researcher, 16(1), 22–30.

Papert, S. (1990). A critique of technocentrism in thinking about the school of the future. MIT Epistemology and Learning Memo No. 2. Cambridge, MA: Massachusetts Institute of Technology Media Laboratory.

Rosenberg, J. M., Greenhalgh, S. P., Koehler, M. J., Hamilton, E. R., & Akcaoglu, M. (2016). An investigation of state educational Twitter hashtags (SETHs) as affinity spaces. E-Learning and Digital Media, 13(1–2), 24–44.

Rosenberg, J. M., Burchfield, M., Borchers, C., Gibbons, B., Anderson, D., & Fischer, C. (2021). Social media and students’ privacy: What schools and districts should know. Phi Delta Kappan, 103(2), 49–53.

Trust, T., Carpenter, J., Krutka, D. G., & Kimmons, R. (2020). #RemoteTeaching & #RemoteLearning: Educator tweeting during the COVID-19 pandemic. Journal of Technology and Teacher Education, 28(2), 151–159.

Veletsianos, G., & Kimmons, R. (2020). What (some) students are saying about the switch to remote teaching and learning. EDUCAUSE Review. https://er.educause.edu/blogs/2020/4/what-some-students-are-saying-about-the-switch-to-remote-teaching-and-learning

Weller, M. (2020). 25 years of ed tech. Athabasca University Press.

Author information

Royce Kimmons, Brigham Young University, Provo, UT, USA

Joshua M. Rosenberg, University of Tennessee, Knoxville, TN, USA

Correspondence to Royce Kimmons.


About this article

Kimmons, R., & Rosenberg, J. M. (2022). Trends and Topics in Educational Technology, 2022 Edition. TechTrends, 66, 134–140. https://doi.org/10.1007/s11528-022-00713-0

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Find a journal
  • Publish with us
  • Track your research

Advertisement

Supported by

In One Key A.I. Metric, China Pulls Ahead of the U.S.: Talent

China has produced a huge number of top A.I. engineers in recent years. New research shows that, by some measures, it has already eclipsed the United States.


By Paul Mozur and Cade Metz

Paul Mozur reported from Taipei, Taiwan, and Cade Metz from San Francisco.

When it comes to the artificial intelligence that powers chatbots like ChatGPT, China lags behind the United States. But when it comes to producing the scientists behind a new generation of humanoid technologies, China is pulling ahead.

New research shows that China has by some metrics eclipsed the United States as the biggest producer of A.I. talent, with the country generating almost half the world’s top A.I. researchers. By contrast, about 18 percent come from U.S. undergraduate institutions, according to the study from MacroPolo, a think tank run by the Paulson Institute, which promotes constructive ties between the United States and China.

The findings show a jump for China, which produced about one-third of the world’s top talent three years earlier. The United States, by contrast, remained mostly the same. The research is based on the backgrounds of researchers whose papers were published at the 2022 Conference on Neural Information Processing Systems. NeurIPS, as it is known, focuses on advances in neural networks, which have anchored recent developments in generative A.I.

The talent imbalance has been building for the better part of a decade. During much of the 2010s, the United States benefited as large numbers of China’s top minds moved to American universities to complete doctoral degrees. A majority of them stayed in the United States. But the research shows that trend has also begun to turn, with growing numbers of Chinese researchers staying in China.

What happens in the next few years could be critical as China and the United States jockey for primacy in A.I. — a technology that can potentially increase productivity, strengthen industries and drive innovation — turning the researchers into one of the most geopolitically important groups in the world.

Generative A.I. has captured the tech industry in Silicon Valley and in China, causing a frenzy in funding and investment. The boom has been led by U.S. tech giants such as Google and start-ups like OpenAI. That could attract China’s researchers, though rising tensions between Beijing and Washington could also deter some, experts said.

(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems.)

China has nurtured so much A.I. talent partly because it invested heavily in A.I. education. Since 2018, the country has added more than 2,000 undergraduate A.I. programs, with more than 300 at its most elite universities, said Damien Ma, the managing director of MacroPolo, though he noted the programs were not heavily focused on the technology that had driven breakthroughs by chatbots like ChatGPT.

“A lot of the programs are about A.I. applications in industry and manufacturing, not so much the generative A.I. stuff that’s come to dominate the American A.I. industry at the moment,” he said.

While the United States has pioneered breakthroughs in A.I., most recently with the uncanny humanlike abilities of chatbots, a significant portion of that work was done by researchers educated in China.

Researchers originally from China now make up 38 percent of the top A.I. researchers working in the United States, with Americans making up 37 percent, according to the research. Three years earlier, those from China made up 27 percent of top talent working in the United States, compared with 31 percent from the United States.

“The data shows just how critical Chinese-born researchers are to the United States for A.I. competitiveness,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies Chinese A.I.

He added that the data seemed to show the United States was still attractive. “We’re the world leader in A.I. because we continue to attract and retain talent from all over the world, but especially China,” he said.

Pieter Abbeel, a professor at the University of California, Berkeley, and a founder of Covariant, an A.I. and robotics start-up, said working alongside large numbers of Chinese researchers was taken for granted inside the leading American companies and universities.

“It’s just a natural state of affairs,” he said.

In the past, U.S. defense officials were not too concerned about A.I. talent flows from China, partly because many of the biggest A.I. projects did not deal with classified data and partly because they reasoned that it was better to have the best minds available. The fact that so much of the leading research in A.I. is published openly also tempered their worries.

Despite bans introduced by the Trump administration that prohibit entry to the United States for students from some military-linked universities in China and a relative slowdown in the flow of Chinese students into the country during Covid, the research showed large numbers of the most promising A.I. minds continued coming to the United States to study.

But this month, a Chinese citizen who was an engineer at Google was charged with trying to transfer A.I. technology, including critical microchip architecture, to a Beijing-based company that paid him in secret, according to a federal indictment.

The substantial numbers of Chinese A.I. researchers working in the United States now present a conundrum for policymakers, who want to counter Chinese espionage while not discouraging the continued flow of top Chinese computer engineers into the United States, according to experts focused on American competitiveness.

“Chinese scholars are almost leading the way in the A.I. field,” said Subbarao Kambhampati, a professor and researcher of A.I. at Arizona State University. If policymakers try to bar Chinese nationals from research in the United States, he said, they are “shooting themselves in the foot.”

The track record of U.S. policymakers is mixed. A policy by the Trump administration aimed at curbing Chinese industrial espionage and intellectual property theft has since been criticized for errantly prosecuting a number of professors. Such programs, Chinese immigrants said, have encouraged some to stay in China.

For now, the research showed, most Chinese who complete doctorates in the United States stay in the country, helping to make it the global center of the A.I. world. Even so, the U.S. lead has begun to slip, to hosting about 42 percent of the world’s top talent, down from about 59 percent three years ago, according to the research.

Paul Mozur is the global technology correspondent for The Times, based in Taipei. Previously he wrote about technology and politics in Asia from Hong Kong, Shanghai and Seoul.

Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.


Office of Teaching, Learning, and Technology

AI-Assisted Literature Reviews

ChatGPT has a reputation for generating hallucinations, or false information. So can an artificial intelligence (AI) platform be trusted to assist in a literature review? Yes, if the tool you are using is the right one for the job. ChatGPT and Copilot are not designed to provide accurate citations; instead, use them to brainstorm research questions. Stay alert for misinformation, hallucinations, and bias in generative AI responses, and be aware of historical biases in the literature, which can also influence the output you encounter.

Be sure to keep track of which tools you use, your purpose for using them, and the output from your interactions. Be prepared to disclose the AI tools, databases, and criteria used to select and analyze sources. Remember that you are ultimately responsible for anything you create; generative AI is only your assistant.

Try these five AI platforms to assist you in your literature reviews and academic research: 

  • Copilot. Many people are exploring ways that AI can improve research. Even with a general-purpose generative AI platform like Copilot, you can brainstorm or discover new perspectives on research topics. An example prompt for this purpose can be found in David Maslach's article, "Generative AI Can Supercharge Your Academic Research": "I am thinking about [insert topic], but this is not a very novel idea. Can you help me find innovative papers and research from the last 10 years that has discussed [insert topic]?"
  • Elicit. This AI research assistant helps with evidence synthesis and text extraction. Users can enter a research question, and the AI identifies top papers in the field, even without perfect keyword matching. Elicit includes only academic papers, as it is designed specifically for finding and analyzing them, and it pulls from over 126 million papers through Semantic Scholar. Elicit organizes papers into an easy-to-use table and provides features for brainstorming research questions.
  • Consensus. This AI-powered search engine pulls answers from research papers. Consensus is not meant for questions about basic facts, such as "How many people live in Europe?" or "When is the next leap year?", as there is unlikely to be research dedicated to such subjects. It is more effective with research questions on topics that researchers have likely studied. Yes/no questions will generate a "Consensus" from papers on the topic. Papers in Consensus also come from Semantic Scholar, and search results can be filtered by the study's sample size, population studied, study type, and more. This makes Consensus a useful tool for finding literature related to your search topic.
  • Research Rabbit. An AI research assistant designed to help researchers discover and organize academic papers efficiently. It offers features such as interactive visualizations, collaborative exploration, and personalized recommendations. Users can create collections of papers, visualize networks of papers and co-authorships, and explore research questions. Unlike Elicit and Consensus, Research Rabbit does not start with a question but with a paper you already know: you need a starting article to go down a "rabbit hole" and see connections between papers.
  • Litmaps. A tool similar to Research Rabbit, Litmaps shows the relationships between the articles in your collection as connecting lines that trace the citations for you. A user starts with a citation, or "seed," and then, through a simple interface, investigates connections between papers.
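Because Elicit and Consensus both draw their papers from Semantic Scholar, the same corpus can also be searched directly through Semantic Scholar's free public API. The sketch below is an illustration, not part of any of the tools above; it assumes the `graph/v1/paper/search` endpoint and uses only the Python standard library.

```python
import json
import urllib.parse
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query, limit=10, fields=("title", "year", "abstract")):
    """Compose a Semantic Scholar paper-search URL for a plain-text query."""
    params = urllib.parse.urlencode(
        {"query": query, "limit": limit, "fields": ",".join(fields)}
    )
    return f"{API}?{params}"

def search_papers(query, limit=10):
    """Fetch matching papers as a list of dicts (requires network access)."""
    with urllib.request.urlopen(build_search_url(query, limit)) as resp:
        return json.load(resp).get("data", [])

# Example (performs a live network call):
# for p in search_papers("older adults digital health engagement", limit=5):
#     print(p.get("year"), "-", p.get("title"))
```

As with the hosted tools, results are only a starting point: verify each paper yourself before citing it.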

For further reading, see "How to Write AI-Powered Literature Reviews: Balancing Speed, Depth, and Breadth in Academic Research," which includes a helpful table comparing tools that specialize in literature searching. Also check out the February 2024 webinar "Unlock the Power of AI for Academic Research," hosted by Tracy Mendolia-Moore and Brett Christie, for more on this topic.


Published on 28.3.2024 in Vol 26 (2024)

Augmenting K-Means Clustering With Qualitative Data to Discover the Engagement Patterns of Older Adults With Multimorbidity When Using Digital Health Technologies: Proof-of-Concept Trial

Authors of this article:


Original Paper

  • Yiyang Sheng 1, MSc;
  • Raymond Bond 2, PhD;
  • Rajesh Jaiswal 3, PhD;
  • John Dinsmore 4, PhD;
  • Julie Doyle 1, PhD

1 NetwellCASALA, Dundalk Institute of Technology, Dundalk, Ireland

2 School of Computing, Ulster University, Jordanstown, United Kingdom

3 School of Enterprise Computing and Digital Transformation, Technological University Dublin, Dublin, Ireland

4 Trinity Centre for Practice and Healthcare Innovation, School of Nursing and Midwifery, Trinity College Dublin, Dublin, Ireland

Corresponding Author:

Yiyang Sheng, MSc

NetwellCASALA

Dundalk Institute of Technology

Dublin Road, PJ Carrolls Building, Dundalk Institute of Technology

Dundalk, Co. Louth, A91 K584

Ireland

Phone: 353 894308214

Email: [email protected]

Background: Multiple chronic conditions (multimorbidity) are becoming more prevalent among aging populations. Digital health technologies have the potential to assist in the self-management of multimorbidity, improving the awareness and monitoring of health and well-being, supporting a better understanding of the disease, and encouraging behavior change.

Objective: The aim of this study was to analyze how 60 older adults (mean age 74, SD 6.4; range 65-92 years) with multimorbidity engaged with digital symptom and well-being monitoring when using a digital health platform over a period of approximately 12 months.

Methods: Principal component analysis and clustering analysis were used to group participants based on their levels of engagement, and the data analysis focused on characteristics (eg, age, sex, and chronic health conditions), engagement outcomes, and symptom outcomes of the different clusters that were discovered.

Results: Three clusters were identified: the typical user group, the least engaged user group, and the highly engaged user group. Our findings show that age, sex, and the types of chronic health conditions do not influence engagement. The 3 primary factors influencing engagement were whether the same device was used to submit different health and well-being parameters, the number of manual operations required to take a reading, and the daily routine of the participants. The findings also indicate that higher levels of engagement may improve the participants’ outcomes (eg, reduce symptom exacerbation and increase physical activity).

Conclusions: The findings indicate potential factors that influence older adult engagement with digital health technologies for home-based multimorbidity self-management. The least engaged user groups showed decreased health and well-being outcomes related to multimorbidity self-management. Addressing the factors highlighted in this study in the design and implementation of home-based digital health technologies may improve symptom management and physical activity outcomes for older adults self-managing multimorbidity.

Introduction

According to the United Nations, the number of people aged ≥65 years is growing faster than all other age groups [ 1 ]. The worldwide population of people aged ≥65 years will increase from approximately 550 million in 2000 to 973 million in 2030 [ 2 ]. Furthermore, by 2050, approximately 16% of the world’s population will be aged >65 years, whereas 426 million people will be aged >80 years [ 1 ]. Living longer is a great benefit to today’s society. However, this comes with several challenges. Aging can be associated with many health problems, including multimorbidity (ie, the presence of ≥2 chronic conditions) [ 3 ]. The prevalence rate of multimorbidity among older adults is estimated to be between 55% and 98%, and the factors associated with multimorbidity are older age, female sex, and low socioeconomic status [ 4 ]. In the United States, almost 75% of older adults have multimorbidity [ 5 ], and it was estimated that 50 million people in the European Union were living with multimorbidity in 2015 [ 6 ]. Likewise, the prevalence rate of multimorbidity is 69.3% among older adults in China [ 5 ].

Home-based self-management for chronic health conditions involves actions and behaviors that protect and promote good health care practices comprising the management of physical, emotional, and social care [ 7 ]. Engaging in self-management can help older adults understand and manage their health conditions, prevent illness, and promote wellness [ 7 , 8 ]. However, self-management for older adults with multimorbidity is a long-term, complex, and challenging mission [ 9 , 10 ]. There are numerous self-care tasks to engage in, which can be very complicated, especially for people with multiple chronic health conditions. Furthermore, the severity of the disease can negatively impact a person’s ability to engage in self-management [ 10 ].

Digital home-based health technologies have the potential to support better engagement with self-management interventions, such as the monitoring of symptom and well-being parameters as well as medication adherence [ 10 , 11 ]. Such technologies can help older adults understand their disease or diseases, respond to changes, and communicate with health care providers [ 12 - 14 ]. Furthermore, digital health technologies can be tailored to individual motivations and personal needs [ 13 ], which can improve sustained use [ 15 ] and result in people feeling supported [ 16 ]. Digital self-management can also create better opportunities for adoption and adherence in the long term compared with paper booklet self-management [ 16 ]. Moreover, digital health technologies, such as small wearable monitoring devices, can increase the frequency of symptom monitoring for patients with minimal stress compared with symptom monitoring with manual notifications [ 17 ].

A large body of research implements data mining and machine learning algorithms using data acquired from home-based health care data sets. Data mining techniques, such as data visualization, clustering, classification, and prediction, to name a few, can help researchers understand users, behaviors, and health care phenomena by identifying novel, interesting patterns. These techniques can also be used to build predictive models [ 18 - 21 ]. In addition, data mining techniques can help in designing health care management systems and tracking the state of a person’s chronic disease, resulting in appropriate interventions and a reduction in hospital admissions [ 18 , 22 ]. Vast amounts of data can be generated when users interact with digital health technologies, which provides an opportunity to understand chronic illnesses as well as elucidate how users engage with digital health technologies in the real world. Armstrong et al [ 23 ] used the k-means algorithm to identify previously unknown patterns of clinical characteristics in home care rehabilitation services. The authors used k-means cluster analysis to analyze data from 150,253 clients and discovered new insights into the clients’ characteristics and their needs, which led to more appropriate rehabilitation services for home care clients. Madigan and Curet [ 22 ] used classification and regression trees to investigate a home-based health care data set that comprised 580 patients who had 3 specific conditions: chronic obstructive pulmonary disease (COPD), heart failure (HF), and hip replacement. They found that data mining methods identified the dependencies and interactions that influence the results, thereby improving the accuracy of risk adjustment methods and establishing practical benchmarks [ 22 ]. 
Other research [ 24 ] has developed a flow diagram of a proposed platform by using machine learning methods to analyze multiple health care data sets, including medical images as well as diagnostic and voice records. The authors believe that the system could help people in resource-limited areas, which have lower ratios of physicians and hospitals, to diagnose diseases such as breast cancer, heart disease (HD), diabetes, and liver disease at a lower cost and in less time than local hospitals. In the study, the accuracy of disease detection was >95% [ 24 ].

There are many different approaches to clustering analysis of health care data sets, such as k-means, density-based spatial clustering of applications with noise, agglomerative hierarchical clustering, self-organizing maps, partitioning around medoids algorithm, hybrid hierarchical clustering, and so on [ 25 - 28 ]. K-means clustering is 1 of the most commonly used clustering or unsupervised machine learning algorithms [ 19 , 29 ], and it is relatively easy to implement and relatively fast [ 30 - 32 ]. In addition, k-means has been used in research studies related to chronic health conditions such as diabetes [ 33 ], COPD [ 34 , 35 ], and HF [ 36 ]; for example, a cloud-based framework with k-means clustering technique has been used for the diagnosis of diabetes and was found to be more efficient and suitable for handling extensive data sets in cloud computing platforms than hierarchical clustering [ 32 ]. Violán et al [ 37 ] analyzed data from 408,994 patients aged 45 to 64 years with multimorbidity using k-means clustering to ascertain multimorbidity patterns. The authors stratified the k-means clustering analysis by sex, and 6 multimorbidity patterns were found for each sex. They also suggest that clusters identified by multimorbidity patterns obtained using nonhierarchical clustering analysis (eg, k-means and k-medoids) are more consistent with clinical practice [ 37 ].

The majority of data mining studies on chronic health conditions focus on the diseases themselves and their symptoms; there is less exploration of the patterns of engagement of persons with multimorbidity with digital health technologies. However, data mining and machine learning are excellent ways to understand users’ engagement patterns with digital health technologies. A study by McCauley et al [ 38 ] applied clustering analysis to user interaction event log data from a reminiscence mobile app designed for people living with dementia. In addition to performing quantitative user interaction log analysis, the authors also gathered data on the qualitative experience of users. The study showed the benefits of using data mining to analyze user log data alongside complementary qualitative data analysis [ 38 ]. This is a research challenge where both quantitative and qualitative methods can be combined to fully understand users; for example, quantitative analysis of user event data can reveal use patterns, preferred times of day to use the app, feature use, and so on, but qualitative data (eg, user interviews) are necessary to understand why these use patterns exist.

The aim of this study was to analyze how older adults with multimorbidity engage with digital symptom and health monitoring over a period of approximately 12 months using a digital health platform. In this study, user log data of engagement with digital health technology and user interview qualitative data were examined to explore the patterns of engagement. K-means clustering was used to analyze the user log data. The study had four research questions: (1) How do clusters differ in terms of participant characteristics such as age, sex, and health conditions? (2) How do clusters differ in terms of patterns of engagement, such as the number of days a week participants take readings (eg, weight and blood pressure [BP])? (3) How do engagement rates with the different devices correlate with each other (determined by analyzing the weekly submissions of every parameter and the interviews of participants)? and (4) How do engagement rates affect participants’ health condition symptoms, such as BP, blood glucose (BG) level, weight, peripheral oxygen saturation (SpO 2 ) level, and physical activity (PA)?

The study was a proof-of-concept trial with an action research design and mixed methods approach. Action research is a period of investigation that “describes, interprets, and explains social situations while executing a change intervention aimed at improvement and involvement” [ 39 ]. An action research approach supports the generation of solutions to practical problems while using methods to understand the contexts of care as well as the needs and experiences of participants.

Recruitment and Sample

Although 120 participants consented to take part across Ireland and Belgium, this paper reports on data from 60 Irish older adults with multiple chronic health conditions (≥2 of the following: COPD, HF, HD, and diabetes). Participants were recruited through purposive sampling and from multiple sources, including through health care organizations (general practitioner clinics and specialist clinics), relevant older adult networks, chronic disease support groups, social media, and local newspaper advertising. Recruitment strategies included the use of study flyers and advertisements as well as giving talks and platform demonstrations.

Sources of Data

The data set was collected during the Integrated Technology Systems for Proactive Patient Centred Care (ProACT) project proof-of-concept trial. As the trial was a proof-of-concept of a novel digital health platform, the main goal was to understand how the platform worked or did not work, rather than whether it worked. Thus, to determine sample size, a pragmatic approach was taken in line with two important factors: (1) Is the sample size large enough to provide a reliable analysis of the ecosystem? and (2) Is the sample size small enough to be financially feasible? The literature suggests that overall sample size in proof-of-concept digital health trials is low. A review of 1030 studies on technical interventions for management of chronic disease that focused on HF (436 studies), stroke (422 studies), and COPD (172 studies) suggested that robust sample sizes were 17 for COPD, 19 for HF, and 21 for stroke [ 40 ]. Full details on the study protocol can be found in the study by Dinsmore et al [ 41 ].

Participants used a suite of sensor devices (ie, BP monitors, weight scales, glucometers, pulse oximeters, and activity watches) and a tablet app to monitor their health conditions and well-being. All participants received a smartwatch to measure PA levels and sleep, a BP monitor to measure BP and pulse rate, and a weight scale. A BG meter was provided to participants with diabetes, and a pulse oximeter was provided to those with COPD to measure SpO 2 levels. In addition, all participants received an iPad with a custom-designed app, the ProACT CareApp, that allowed users to view their data, provide self-report (SR) data on symptoms that could not be easily captured through a sensor (eg, breathlessness and edema) and well-being (eg, mood and satisfaction with social life), receive targeted education based on their current health status, set PA goals, and share their data with others. The ProACT platform was designed and developed following an extensive user-centered design process. This involved interviews, focus groups, co-design sessions (hands-on design activities with participants), and usability testing before the platform’s deployment in the trial. A total of 58 people with multimorbidity and 106 care network participants, including informal carers, formal carers, and health care professionals, took part in this process. Findings from the user-centered design process have been published elsewhere [ 42 , 43 ]. More detailed information about the full ProACT platform and the CareApp used by participants can be found in the study by Doyle et al [ 44 ].

The study took place between April 1, 2018, and June 30, 2019. Participants in the trial typically participated for 12 months, although some stayed on for 14 months and others for 9 months (in the case of those who entered the trial later). One of the trial objectives was to understand real-world engagement. Therefore, participants were asked to take readings with the devices and provide SR data in the ProACT CareApp whenever they wished (not necessarily daily). As part of the trial, participants were assisted by technical help desk staff who responded to questions about the technology, and home visits were conducted as needed to resolve issues. In addition, a clinical triage service monitored the participants’ readings and contacted them in instances of abnormal parameter values (eg, high BP and low SpO 2 levels) [ 45 ]. Participants also received a monthly check-in telephone call from 1 of the triage nurses.

Table 1 outlines the types of health and well-being metrics that were collected, as well as the collection method and the number of participants who collected each type of data. The health and well-being metrics were chosen based on the interviews and focus groups held with health care professionals during the design of the ProACT platform to identify the most important symptom and well-being parameters to monitor across the health conditions of interest [ 42 ]. Off-the-shelf digital devices manufactured by 2 providers, Withings and iHealth, were used during the trial. Data from these providers were extracted into a custom platform called Context-Aware Broker and Inference Engine–Subject Information Management System (CABIE-SIMS), which includes a data aggregator for storing health and well-being data. All devices require the user to interact with them in some way. However, some devices needed more interaction than others (eg, taking a BG reading involved several steps, but PA and sleep only required participants to open the activity watch app to sync the relevant data). The activity watch was supposed to synchronize automatically without user interaction. However, inconsistencies with syncing meant that users were advised to open the Withings app to sync their data. The CABIE-SIMS platform would display the readings in near real time, apart from PA data, which were collected at regular intervals throughout the day, whereas sleep data were gathered every morning. In addition, semistructured interviews were conducted with all participants at 4 time points throughout the trial to understand their experience of using the ProACT platform.
Although a full qualitative thematic analysis was outside the scope of this study and was reported on elsewhere [ 44 ], interview transcripts for participants of interest to the analysis presented in this paper were reviewed as part of this study to provide an enhanced understanding of the results.

Table 1 footnotes: SpO2: peripheral oxygen saturation; HF: heart failure; ProACT: Integrated Technology Systems for Proactive Patient Centred Care; CABIE-SIMS: Context-Aware Broker and Inference Engine–Subject Information Management System; COPD: chronic obstructive pulmonary disease.

Data Analysis Methods

The original data set in the CABIE-SIMS platform was stored in JSON format. As a first step, a JSON-to-CSV converter was used to make the data set more accessible for analysis. The data cleaning phase focused mainly on duplicate and missing data. Duplication could occur when, for example, a user uploaded an SpO2 reading 3 times in 2 minutes as a result of mispressing the button; in such cases, only 1 record was added to the cleaned data file. All missing values were recorded as “N/A” (not available) in the data set file.
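The duplicate-handling rule described above (keep only the first of several near-simultaneous submissions of the same parameter) can be sketched as follows. This is an illustrative pandas version with hypothetical column names (`participant_id`, `parameter`, `timestamp`), not the authors' actual cleaning code.

```python
import pandas as pd

def drop_rapid_duplicates(df, window="2min"):
    """Keep only the first reading when the same participant submits the
    same parameter again within `window` (eg, a mispressed button)."""
    df = df.sort_values(["participant_id", "parameter", "timestamp"])
    gap = df.groupby(["participant_id", "parameter"])["timestamp"].diff()
    # The first reading in each group (gap is NaT) is always kept.
    return df[gap.isna() | (gap > pd.Timedelta(window))]

readings = pd.DataFrame({
    "participant_id": [1, 1, 1, 2],
    "parameter": ["spo2", "spo2", "spo2", "spo2"],
    "timestamp": pd.to_datetime([
        "2018-04-01 09:00:00", "2018-04-01 09:01:00",  # duplicate press
        "2018-04-01 12:00:00", "2018-04-01 09:00:30",
    ]),
    "value": [96, 96, 95, 97],
})
cleaned = drop_rapid_duplicates(readings)
# The 09:01 reading is dropped; the other three survive.
```

Note the sketch drops a reading that follows the immediately preceding one within the window, which matches the mispressed-button scenario; a stricter variant would compare against the last *kept* reading.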

The cleaned data set was preprocessed using Microsoft Excel, the R programming language (R Foundation for Statistical Computing), and RStudio (Posit Software, PBC). The preprocessed data set included participants’ details (ID, sex, age, and chronic health conditions) and the number of days of weekly submissions of every parameter (BP, pulse rate, SpO 2 level, BG level, weight, PA, SR data, and sleep). All analyses (including correlation analysis, principal component analysis [PCA], k-means clustering, 2-tailed t test, and 1-way ANOVA) were implemented in the R programming language and RStudio.

After performing Shapiro-Wilk normality tests on the data submitted each week, we found that the data were not normally distributed. Therefore, Spearman correlation was used to check the correlation among the parameters. Correlation analysis and PCA were used to determine which portions of the data would be included in the k-means clustering. Correlation analysis determined which characteristics or parameters should be selected, and PCA determined the number of dimensions that should be selected as features for clustering. In the clustering process, the weekly submission of each parameter was considered as an independent variable for the discovery of participant clusters, and the outcome of the clustering was a categorical taxonomy that was used to label the 3 discovered clusters. Similarly, the Shapiro-Wilk test was conducted to check the normality of the variables in each group. It was found that most of the variables in each group were normally distributed, and only the weight data submission records of cluster 3, the PA data submission records of cluster 2, the SR data submission records of cluster 3, and the sleep data submission records of cluster 1 were not normally distributed. Therefore, the 2-tailed t test and 1-way ANOVA were used to compare different groups of variables. The 2-tailed t test was used to compare 2 groups of variables, whereas 1-way ANOVA was used to compare ≥2 groups of variables. P values >.05 indicated that there were no statistically significant differences among the groups of variables [ 46 ].
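The analysis pipeline above (normality check, Spearman correlation, PCA to choose dimensionality, then k-means) can be sketched in code. The authors worked in R and RStudio; the following is an illustrative Python translation on toy stand-in data, with an assumed 80% explained-variance cutoff for PCA, not the authors' actual analysis.

```python
import numpy as np
from scipy.stats import shapiro, spearmanr
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy stand-in: weekly submission days (0-7) for five parameters
# (BP, weight, PA, self-report, sleep) across 56 participants.
weekly = rng.integers(0, 8, size=(56, 5)).astype(float)

# 1. Shapiro-Wilk normality check per parameter (p < .05 -> not normal).
p_values = [shapiro(weekly[:, j]).pvalue for j in range(weekly.shape[1])]

# 2. Spearman correlation matrix among the parameters.
rho, _ = spearmanr(weekly)

# 3. PCA to decide how many dimensions feed the clustering
#    (here: enough components to explain 80% of the variance).
pca = PCA().fit(weekly)
n_components = int(np.searchsorted(
    np.cumsum(pca.explained_variance_ratio_), 0.8) + 1)

# 4. k-means on the retained components, with k = 3 as in the paper.
features = PCA(n_components=n_components).fit_transform(weekly)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```

In the real analysis, the resulting cluster labels would then be compared across groups with t tests and 1-way ANOVA, as described above.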

As for the qualitative data from the interviews, we performed keyword searches after a review of the entire interview; for example, when the data analysis was related to BP and weight monitoring, a search with the keywords “blood pressure,” “weight,” or “scale” was performed to identify relevant information. In addition, when the aim was to understand the impact of digital health care technology, we focused on specific questions in the second interview, such as “Has it had any impact on the management of your health?”

Ethical Considerations

Ethics approval was received from 3 ethics committees: the Health Service Executive North East Area Research Ethics Committee, the School of Health and Science Research Ethics Committee at Dundalk Institute of Technology, and the Faculty of Health Sciences Research Ethics Committee at Trinity College Dublin. All procedures were in line with the European Union’s General Data Protection Regulation for research projects, with the platform and trial methods and procedures undergoing data protection impact assessments. Written informed consent was obtained on an individual basis from participants in accordance with legal and ethics guidelines after a careful explanation of the study and the provision of patient information and informed consent forms in plain language. All participants were informed of their right to withdraw from the study at any time without having to provide a reason. Participants were not compensated for their time. Data stored within the CABIE-SIMS platform were identifiable because they were shared (with the participant’s consent) with the clinical triage teams and health care professionals. This was clearly outlined in the participant information leaflet and consent form. However, the data set that was extracted for the purpose of the analysis presented in this paper was pseudonymized.

Participants

A total of 60 older adults were enrolled in the study. The average age of participants was 74 (SD 6.4; range 65-92) years; 60% (36/60) were male individuals, and 40% (24/60) were female individuals. The most common combination of health conditions was diabetes and HD (30/60, 50%), which was followed by COPD and HD (16/60, 27%); HF and HD (7/60, 12%); diabetes and COPD (3/60, 5%); diabetes and HF (1/60, 2%); COPD and HF (1/60, 2%); HF, HD, and COPD (1/60, 2%); and COPD, HD, and diabetes (1/60, 2%). Of the 60 participants, 11 (18%) had HF, 55 (92%) had HD, 22 (37%) had COPD, and 31 (52%) had diabetes. Over the course of the trial, of the 60 participants, 8 (13%) withdrew, and 3 (5%) died. However, this study included data from all enrolled participants, provided that a participant had submitted at least 1 data point. Hence, of the 60 participants, we included 56 (93%) in our analysis, whereas 4 (7%) were excluded because no data were recorded.

Correlation of Submission Parameters

To help determine which distinct use characteristics or parameters (such as the weekly frequency of BP data submissions) should be selected as features for clustering, the correlations among the parameters were calculated. Figure 1 shows the correlation matrix for all parameter weekly submissions (days). In this study, a moderate correlation (correlation coefficient between 0.3 and 0.7 or between −0.7 and −0.3) [ 47 , 48 ] was chosen as the standard for selecting parameters. First, every participant received a BP monitor to measure BP, and pulse rate was collected as part of the BP measurement; the correlation coefficient between BP and pulse rate was 0.93, a strong correlation. Therefore, BP was selected for clustering rather than pulse rate. As for the other parameters, the correlations between BP and weight (0.51), PA (0.55), SR data (0.41), and sleep (0.55) were moderate, whereas the correlations between BP and SpO 2 level (0.05) and BG (0.24) were weak. In addition, the correlations between SpO 2 level and weight (−0.25), PA (0.16), SR data (0.29), and sleep (−0.24) were weak. Therefore, SpO 2 level was not selected for clustering. Likewise, the correlations between BG and weight (0.19), PA (0.2), SR data (−0.06), and sleep (0.25) were weak. Therefore, BG was not selected for clustering. Thus, BP, weight, PA, SR data, and sleep were selected for clustering.
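The selection step above amounts to computing a Spearman correlation matrix and keeping the parameters whose correlation with BP falls in the moderate band. The sketch below uses synthetic placeholder data and invented column names; only the 0.3-0.7 threshold comes from the text.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_weeks = 52
bp = rng.poisson(3, n_weeks).astype(float)
df = pd.DataFrame({
    "bp": bp,
    "weight": bp * 0.5 + rng.normal(0, 1.5, n_weeks),  # constructed to correlate with BP
    "spo2": rng.poisson(3, n_weeks).astype(float),     # constructed to be ~independent
})

# Spearman correlation matrix, analogous to Figure 1
corr = df.corr(method="spearman")

# Keep parameters moderately correlated with BP (0.3 <= |rho| <= 0.7)
with_bp = corr["bp"].drop("bp")
selected = with_bp[(with_bp.abs() >= 0.3) & (with_bp.abs() <= 0.7)].index.tolist()
```

With real data, `selected` would reproduce the paper's choice of BP, weight, PA, SR data, and sleep as clustering features.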


PCA and Clustering

The fundamental question for k-means clustering is this: how many clusters (k) should be discovered? To determine the optimum number of clusters, we further investigated the data through visualization offered by PCA. As can be seen from Figure 2 , the first 2 principal components (PCs) explain 73.6% of the variation, which is an acceptably large percentage. However, after a check of individual contributions, we found that there were 3 participants—P038, P016, and P015—who contributed substantially to PC1 and PC2. After a check of the original data set, we found that P038 submitted symptom parameters only on 1 day, and P016 submitted symptom parameters only on 2 days. Conversely, P015 submitted parameters almost every day during the trial. Therefore, P038 and P016 were omitted from clustering.

After removing the outliers (P038 and P016), we found that the first 2 PCs explain 70.5% of the variation ( Figure 3 ), which is an acceptably large percentage.
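The PCA step above can be sketched as follows. The 5-feature participant matrix is synthetic; in the study, the features were the weekly submissions of BP, weight, PA, SR data, and sleep, and the flagged rows correspond to participants like P038 and P016 who contributed disproportionately to the first 2 components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(54, 5))   # 54 participants x 5 engagement features (synthetic)
X[0] = 8.0                     # one extreme participant, e.g., a near-daily submitter

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
explained = pca.explained_variance_ratio_.sum()  # share of variation in PC1 + PC2

# Contribution of each participant to the 2D projection (squared score distance);
# unusually large values flag outliers worth inspecting before clustering.
scores = pca.transform(Xs)
contribution = (scores ** 2).sum(axis=1)
outliers = np.argsort(contribution)[-3:]         # the 3 largest contributors
```

After dropping flagged rows, PCA would be refit to confirm the first 2 components still explain an acceptably large share of the variation, as in Figure 3.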

The clusters were projected into 2 dimensions as shown in Figure 4 . Each subpart in Figure 4 shows a different number of clusters (k). When k=2, the data are obviously separated into 2 big clusters. Similarly, when k=3, the clusters are still separated very well into 3 clusters. When k=4, the clusters are well separated, but compared with the subpart with 3 clusters, 2 clusters are similar, whereas cluster 1, which only has 3 participants, is a relatively small cluster. When k=5, there is some overlap between cluster 1 and cluster 2. Likewise, Figure 5 shows the optimal number of clusters using the elbow method. In view of this, we determined that 3 clusters of participants separate the data set best. The 3 clusters can be labeled as the least engaged user group (cluster 1), the highly engaged user group (cluster 2), and the typical user group (cluster 3).
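The k-selection procedure above can be sketched with the elbow method: fit k-means for a range of k and look for the value beyond which inertia (within-cluster sum of squares) stops dropping sharply. The 2D synthetic data below merely stand in for the participants' engagement features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Three synthetic groups standing in for least engaged, highly engaged, and typical users
X = np.vstack([
    rng.normal((0, 0), 0.5, (18, 2)),
    rng.normal((4, 4), 0.5, (18, 2)),
    rng.normal((0, 4), 0.5, (18, 2)),
])

# Inertia for each candidate k; plotting inertia vs k reveals the "elbow"
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(2, 7)}

# Final clustering with the chosen k (3 in the study)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

For data like these, the inertia curve flattens after k=3, matching the visual separation argument made from Figure 4 and Figure 5.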

In the remainder of this section, we report on the examination of the clusters with respect to participant characteristics and the weekly submissions (days) of different parameters in a visual manner to reveal potential correlations and insights. Finally, we report on the examination of the correlations among all parameters by PCA.


Participant Characteristics

As seen in Figure 6 , the distribution of age within the 3 clusters is similar, with the P value of the 1-way ANOVA being .93, because all participants in this trial were older adults. However, the median age in the cluster 3 box plot is slightly higher than the median ages in the box plots of the other 2 clusters, and the average age of cluster 2 participants (74.1 years) is lower than that of cluster 1 (74.6 years) and cluster 3 (74.8 years; Table 2 ) participants. As Table 2 shows, 6 (26%) of the 23 female participants are in cluster 1 compared with 7 (23%) of the 31 male participants. However, the male participants in cluster 2 (10/31, 32%) and cluster 3 (14/31, 45%) represent higher proportions of total male participants compared with female participants in cluster 2 (7/23, 30%) and cluster 3 (10/23, 43%). Figure 7 shows the proportion of the 4 chronic health conditions within the 3 clusters. Cluster 1 has the largest proportion of participants with COPD and the smallest proportion of participants with diabetes. Moreover, cluster 3 has the smallest proportion of participants with HF (3/24, 13%; Table 2 ).


a COPD: chronic obstructive pulmonary disease.


Participant Engagement Outcomes

Cluster 2 has the longest average enrollment time at 352 days, compared with 335 days for cluster 3 and 330 days for cluster 1. As seen in Figure 8, the overall distribution of the BP data weekly submissions differs among the clusters, with the P value of the 1-way ANOVA being 8.4 × 10⁻⁹. The frequency of BP data weekly submissions (days) in cluster 2 exceeds that of cluster 1 and cluster 3; that is, participants in cluster 2 submitted BP data more frequently than those in the other 2 clusters. The median and maximum of cluster 3 are higher than those of cluster 1, but the minimum of cluster 3 is lower than that of cluster 1. Likewise, as seen in Table 3, the mean and SD of cluster 1 (mean 2.5, SD 1.4) are smaller than those of cluster 3 (mean 2.9, SD 2.9).

As Figure 9 shows, the overall distribution of the weekly submissions of weight data is different, with the P value of the 1-way ANOVA being 1.4 × 10⁻¹³, because the participants in cluster 2 submitted weight parameters more frequently than those in cluster 1 and cluster 3. In addition, similar to the BP data submissions, the median of cluster 3 is higher than that of cluster 1. As seen in Figure 9, there are 3 outliers in cluster 2. The top outlier is P015, who submitted a weight reading almost every day. During the trial, this participant mentioned many times in the interviews that his goal was to lose weight and that he used the scale to check his progress:

I’ve set out to reduce my weight. The doctor has been saying to me you know there’s where you are and you should be over here. So, I’ve been using the weighing thing just to clock, to track reduction of weight. [P015]

The other 2 outliers are P051 and P053, both of whom mentioned taking their weight measurements as part of their daily routine:

Once I get up in the morning the first thing is I weigh myself. That is, the day starts off with the weight, right. [P053]

Although their frequency of weekly weight data submissions is lower than that of all other participants in cluster 2, it is still higher than that of most of the participants in the other 2 clusters.

In Table 3, it can be observed that the average frequency of weekly submissions of PA and sleep data for every cluster is higher than the frequencies of other variables, and the SDs are relatively low. This is likely because participants only needed to open the Withings app once a day to ensure the syncing of data. However, the overall distributions of PA and sleep data submissions are different in Figure 10 and Figure 11, with the P values of the 1-way ANOVA being 1.1 × 10⁻⁹ and 3.7 × 10⁻¹⁰, respectively. Moreover, as Figure 10 and Figure 11 show, there are still some outliers who have a low frequency of submissions, and the box plot of cluster 1 is lower than the box plots of cluster 2 and cluster 3 in both figures. The reasons for the low frequency of submissions can mostly be explained by (1) technical issues, including internet connection issues, devices not syncing, and devices needing to be paired again; (2) participants forgetting to put the watch back on after taking it off; and (3) participants stopping using the devices (eg, some participants do not like wearing the watch while sleeping or when they go on holiday):

I was without my watch there for the last month or 3 or 4 weeks [owing to technical issues], and I missed it very badly because everything I look at the watch to tell the time, I was looking at my steps. [P042]
I don’t wear it, I told them I wouldn’t wear the watch at night, I don’t like it. [P030]

Unlike in the case of other variables, the submission of SR data through the ProACT CareApp required participants to reflect on each question and their status before selecting the appropriate answer. Participants had different questions to answer based on their health conditions; for example, participants with HF and COPD were asked to answer symptom-related questions, whereas those with diabetes were not. All participants were presented with general well-being and mood questions. Therefore, for some participants, self-reporting could possibly take more time than using the health monitoring devices. As shown in Table 3 , the frequency of average weekly submissions of SR data within the 3 clusters is relatively small and the SDs are large, which means that the frequency of SR data submissions is lower than that of other variables. Furthermore, there were approximately 5 questions asked daily about general well-being, and some participants would skip the questions if they thought the question was unnecessary or not relevant:

Researcher: And do you answer your daily questions?
P027: Yeah, once a week.
Researcher: Once a week, okay.
P027: But they're the same.

As Figure 12 shows, the distribution of SR data submissions is different, with the P value of the 1-way ANOVA being .001. In Figure 12, the median of cluster 2 is higher than the medians of the other 2 clusters; however, unlike for other parameters, cluster 2 also includes some participants with very low SR data submission rates (close to 0). SR data is the only parameter for which cluster 1 has a higher median than cluster 3.


a Lowest submission rate across the clusters.

b Highest submission rate across the clusters.


The Correlation Among the Weekly Submissions of Different Parameters

As seen in Figure 13 , the arrows of BP and weight point to the same side of the plot, which shows a strong correlation. Likewise, PA and sleep also have a strong correlation. As noted previously, the strong correlation between PA and sleep is because the same device collected these 2 measurements, and participants only needed to sync the data once a day. By contrast, BP and weight were collected by 2 different devices but are strongly correlated. During interviews, many participants mentioned that their daily routine with the ProACT platform involved taking both BP and weight readings:

Usually in the morning when I get out of the bed, first, I go into the bathroom, wash my hands and come back, then weigh myself, do my blood pressure, do my bloods. [P008]
I now have a routine that I let the system read my watch first thing, then I do my blood pressure thing and then I do the weight. [P015]
As I said, it’s keeping me in line with my, when I dip my finger, my weight, my blood pressure. [P040]
I use it in the morning and at night for putting in the details of blood pressure in the morning and then the blood glucose at night. Yes, there’s nothing else, is there? Oh, every morning the [weight] scales. [P058]

By contrast, as shown in Figure 13 , SR data have a weak correlation with other parameters, for reasons noted earlier.


Parameter Variation Over Time

Analysis was conducted to determine any differences among the clusters in terms of symptom and well-being parameter changes over the course of the trial. Table 4 provides a description of each cluster in this regard. As Figure 14 shows, the box plot of cluster 2 is comparatively short in every time period of the trial, and the medians of cluster 2 and cluster 3 are more stable than the median of cluster 1. In addition, the median of cluster 1 is increasing over time, whereas the medians of cluster 2 and cluster 3 are decreasing and within the normal systolic BP of older adults [ 49 ] ( Figure 14 ). As can be seen in Table 5, cluster 2 has a P value of .51 for systolic BP and a P value of .52 for diastolic BP, which are higher than the P values of cluster 1 ( P =.19 and P =.16, respectively) and cluster 3 ( P =.27 and P =.35, respectively). Therefore, participants in cluster 2, as highly engaged users, have more stable BP values than those in the other 2 clusters. By contrast, participants in cluster 1, as the least engaged users, have the most unstable BP values.

As seen in Figure 15 , the median of cluster 2 is relatively higher than the medians of the other 2 clusters. The median of cluster 3 is increasing over time. In the second and third time periods of the trial, the box plot of cluster 1 is comparatively short. Normal SpO 2 levels are between 95% and 100%, but older adults may have SpO 2 levels closer to 95% [ 50 ]. In addition, for patients with COPD, SpO 2 levels range between 88% and 92% [ 51 ]. In this case, there is not much difference in terms of SpO 2 levels, and most of the SpO 2 levels are between 90% and 95% in this study. However, the SpO 2 levels of cluster 1 and cluster 2 were maintained at a relatively high level during the trial. As for cluster 3, the SpO 2 levels were comparatively low but relatively the same as those in the other 2 clusters in the later period of the trial. Therefore, the SpO 2 levels of cluster 3 ( P =.25) are relatively unstable compared with those of cluster 1 ( P =.66) and cluster 2 ( P =.59). As such, there is little correlation between SpO 2 levels and engagement with digital health monitoring.

In relation to BG, Figure 16 shows that the box plot of cluster 2 is relatively lower than the box plots of the other 2 clusters in the second and third time periods. Moreover, the medians of cluster 2 and cluster 3 are lower than those of cluster 1 in the second and third time periods. The BG levels in cluster 2 and cluster 3 decreased in the later periods of the trial compared with the beginning, whereas those in cluster 1 increased. Cluster 3 ( P =.25), as the typical user group, had a more significant change than cluster 1 ( P =.50) and cluster 2 ( P =.41). Overall, participants with a higher engagement rate had better BG control.

In relation to weight, Figure 17 shows that the box plot of cluster 2 is lower than the box plots of the other 2 clusters and comparatively short. As Table 5 shows, the P value of cluster 2 weight data is .72, which is higher than the P values of cluster 1 (.47) and cluster 3 (.61). Therefore, participants in cluster 2 had a relatively stable weight during the trial. In addition, as seen in Figure 17, the median weight of cluster 1 participants is decreasing, whereas that of cluster 3 participants is increasing. Many factors can influence body weight, such as PA, diet, and environmental factors [ 52 ]. In this case, engagement with digital health and well-being monitoring may help with weight control, but the impact was not significant.

As Table 5 shows, the P value of cluster 2 PA (.049) is lower than .05, which means that there are significant differences among the 3 time slots in cluster 2. However, the median of cluster 2 PA, as seen in Figure 18, is still higher than the medians of the other 2 clusters. In cluster 2, approximately 50% of daily step counts exceed 2500 steps. Overall, participants with a higher engagement rate also had a higher level of PA.

a BP: blood pressure.

b BG: blood glucose.

c SR: self-report.

d PA: physical activity.


b SpO 2 : peripheral oxygen saturation.

c BG: blood glucose.


Principal Findings

Digital health technologies hold great promise to help older adults with multimorbidity to improve health management and health outcomes. However, such benefits can only be realized if users engage with the technology. The aim of this study was to explore the engagement patterns of older adults with multimorbidity with digital self-management by using data mining to analyze users’ weekly submission data. Three clusters were identified: cluster 1 (the least engaged user group), cluster 2 (the highly engaged user group), and cluster 3 (the typical user group). The subsequent analysis focused on how the clusters differ in terms of participant characteristics, patterns of engagement, and stabilization of health condition symptoms and well-being parameters over time, as well as how engagement rates with the different devices correlate with each other.

The key findings from the study are as follows:

  • There is no significant difference in participants’ characteristics among the clusters in general. The highly engaged group had the lowest average age ( Table 4 ), and there was no significant difference with regard to sex and health conditions among these clusters. The least engaged user group had fewer male participants and participants with diabetes.
  • There are 3 main factors influencing the correlations among the submission rates of different parameters. The first concerns whether the same device was used to submit the parameters, the second concerns the number of manual operations required to submit the parameter, and the third concerns the daily routine of the participants.
  • Increased engagement with devices may improve the participants’ health and well-being outcomes (eg, symptoms and PA levels). However, the difference between the highly engaged user group and the typical user group was relatively minimal compared with the difference between the highly engaged user group and the least engaged user group.

Each of these findings is discussed in further detail in the following subsections.

Although the findings presented in this paper focus on engagement based on the ProACT trial participants’ use data, the interviews that were carried out as part of the trial identified additional potential factors of engagement. As reported in the study by Doyle et al [ 44 ], participants spoke about how they used the data to support their self-management (eg, taking action based on their data) and experienced various benefits, including increased knowledge of their health conditions and well-being, symptom optimization, reductions in weight, increased PA, and increased confidence to participate in certain activities as a result of health improvements. The peace of mind and encouragement provided by the clinical triage service as well as the technical support available were also identified during the interviews as potential factors positively impacting engagement [ 44 ]. In addition, the platform was found to be usable, and it imposed minimal burden on participants ( Table 1 ). These findings supplement the quantitative findings presented in this paper.

Age, Sex, Health Condition Types, and Engagement

In this study, the difference in engagement with health care technologies between the sexes was not significant. Of the 23 female participants, 6 (26%) were part of the least engaged user group compared with 7 (23%) of the 31 male participants. Moreover, there were lower proportions of female participants in the highly engaged user group (7/23, 30%) and typical user group (10/23, 43%) compared with male participants (10/31, 32% and 14/31, 45%, respectively). Other research has found that engagement with mobile health technology for BP monitoring was independent of sex [ 53 ]. However, some studies have shown that female participants are more likely to engage with digital mental health care interventions [ 54 , 55 ]. Therefore, sex alone cannot be considered a criterion when comparing engagement with health care technologies, and it was not found to have a significant impact on engagement in this study. Regarding age, many studies have shown that younger people are more likely to use health care technologies than older adults [ 56 , 57 ]. Although all participants in our study are older adults, the highly engaged user group is the youngest group. However, there was no significant difference in age among the clusters, with some of the oldest users being part of cluster 3, the typical user cluster. Similarly, the health conditions of a participant did not significantly impact their level of engagement. Other research [ 53 ] found that participants who were highly engaged with health monitoring had higher rates of hypertension, chronic kidney disease, and hypercholesterolemia than those with lower engagement levels. Our findings indicate that the highly engaged user group had a higher proportion of participants with diabetes, and the least engaged user group had a higher proportion of participants with COPD. Further research is needed to understand why there might be differences in engagement depending on health conditions.
In our study, participants with COPD also self-reported on certain symptoms, such as breathlessness, chest tightness, and sputum amount and color. Although engagement with specific questions was not explored, participants in cluster 1, the least engaged user group, self-reported more frequently than those in cluster 3, the typical user group. Our findings also indicate that participants monitoring BG level and BP experienced better symptom stabilization over time than those monitoring SpO 2 level. It has been noted that the expected benefits of technology (eg, increased safety and usefulness) and need for technology (eg, subjective health status and perception of need) are 2 important factors that can influence the acceptance and use of technology by older adults [ 58 ]. It is also well understood that engaging in monitoring BG level can help people with diabetes to better self-manage and make decisions about diet, exercise, and medication [ 59 ].

Factors Influencing Engagement

Many research studies use P values to show the level of similarity or difference among clusters [ 60 - 63 ]. For most of the engagement outcomes in this study, all clusters significantly differed, with 1-way ANOVA P <.001, with the exception being SR data ( P =.001). In addition, the 2-tailed t test P values showed that cluster 2 was significantly different from cluster 1 and cluster 3 in BP and weight data submission rates, whereas cluster 1 was significantly different from cluster 2 and cluster 3 in PA and sleep data submission rates. As for SR data submission rates, all 3 two-tailed t tests had P values >.05, meaning that there were no significant differences between any 2 of these clusters. Therefore, all 5 parameters used for clustering were separated into 3 groups based on the correlations of submission rates: 1 for BP and weight, 1 for PA and sleep, and 1 for SR data. PA and sleep data submission rates have a strong correlation because participants used the same device to record daily PA and sleeping conditions. SR data submission rates have a weak correlation with other parameters' submission rates. Our previous research found that user retention in terms of submitting SR data was poorer than user retention in terms of using digital health devices, possibly because more manual operations are involved in the submission of SR data than of other parameters or because the same questions were asked regularly, as noted by P027 in the Participant Engagement Outcomes subsection [ 64 ].

Other research that analyzed engagement with a diabetes support app found that user engagement was lower when more manual data entry was required [ 65 ]. In contrast to the other 2 groups of parameters, BP and weight data are collected using different devices. Whereas measuring BP requires using a BP monitor and manually synchronizing the data, measuring weight simply requires standing on the weight scale, and the data are automatically synchronized. Therefore, the manual operations involved in submitting BP and weight data are slightly different. However, the results showed a strong correlation between BP and weight because many participants preferred to measure both BP and weight together and incorporate taking these measurements into their daily routines. Research has indicated that if the use of a health care device becomes a regular routine, then participants will use it without consciously thinking about it [ 66 ]. Likewise, Yuan et al [ 67 ] note that integrating health apps into people’s daily activities and forming regular habits can increase people’s willingness to continue using the apps. However, participants using health care technology for long periods of time might become less receptive to exploring the system compared with using it based on the established methods to which they are accustomed [ 68 ]. In this study, many participants bundled their BP measurement with their weight measurement during their morning routine. Therefore, the engagement rates of interacting with these 2 devices were enhanced by each other. Future work could explore how to integrate additional measurements, such as monitoring SpO 2 level as well as self-reporting into this routine (eg, through prompting the user to submit these parameters while they are engaging with monitoring other parameters, such as BP and weight).

Relationship Between Engagement and Health and Well-Being Outcomes

Our third finding indicates that higher levels of engagement with digital health monitoring may result in better outcomes, such as symptom stabilization and increased PA levels. Milani et al [ 69 ] found that digital health care interventions can help people achieve BP control and improve hypertension control compared with usual care. In their study, users in the digital intervention group took an average of 4.2 readings a week. Compared with our study, this rate is lower than that of cluster 2 (5.7), the highly engaged user group, but higher than cluster 1 (2.5) and cluster 3 (2.9) rates. In our study, participants with a higher engagement rate experienced more stable BP, and for the majority of these participants (34/41, 83%), levels were maintained within the recommended thresholds of 140/90 mm Hg [ 70 ]. Many studies have shown that as engagement in digital diabetes interventions increases, patients will experience greater reductions in BG level compared with those with lower engagement [ 71 , 72 ]. However, in our study, BG levels in both the highly engaged user group (cluster 2) and the least engaged user group (cluster 1) increased in the later stages of the trial. Only the BG levels of the typical user group (cluster 3) decreased over time, which could be because the cluster 3 participants performed more PA in the later stages of the trial than during other time periods, as Figure 18 shows. Cluster 2, the highly engaged user group, maintained a relatively high level of PA during the trial period, although it continued to decline throughout the trial. Other research shows that more PA can also lead to better weight control and management [ 73 , 74 ], which could be 1 of the reasons why cluster 2 participants maintained their weight.

Limitations

There are some limitations to the research presented in this paper. First, although the sample size (n=60) was relatively large for a digital health study, the sample sizes for some parameters were small because not all participants monitored all parameters. Second, the participants were clustered based on weekly submissions of parameters only; if more features were included in clustering, such as submission intervals, participants could be grouped differently. It should also be noted that correlation does not imply causation when relating engagement rates to outcomes.

Conclusions

This study presents findings from the clustering of a data set generated from a longitudinal study of older adults using a digital health technology platform (ProACT) to self-manage multiple chronic health conditions. The highly engaged user group cluster (17/54, 31% of users) had the lowest average age and the highest frequency of submissions for every parameter. Engagement with digital health care technologies may also influence health and well-being outcomes (eg, symptoms and PA levels): the least engaged user group in our study had relatively poorer outcomes, although the difference between the outcomes of the highly engaged user group and those of the typical user group is relatively small. There are 3 possible reasons for the correlations between the submission rates of parameters and devices. First, if 2 parameters are collected by the same device, they usually have a strong correlation, and users will engage with both equally. Second, parameters that involve fewer steps and less manual data entry will have a weak correlation with parameters that require more manual operations and data entry. Finally, participants' daily routines also influence the correlations among devices; for example, in this study, many participants had developed a daily routine to weigh themselves after measuring their BP, which led to a strong correlation between BP and weight data submission rates. Future work should explore how to integrate the monitoring of additional parameters into a user's routine and whether additional characteristics, such as the severity of disease or technical proficiency, impact engagement.

Acknowledgments

This work was part funded by the Integrated Technology Systems for Proactive Patient Centred Care (ProACT) project and has received funding from the European Union (EU)–funded Horizon 2020 research and innovation program (689996). This work was part funded by the EU’s INTERREG VA program, managed by the Special EU Programs Body through the Eastern Corridor Medical Engineering Centre (ECME) project. This work was part funded by the Scaling European Citizen Driven Transferable and Transformative Digital Health (SEURO) project and has received funding from the EU-funded Horizon 2020 research and innovation program (945449). This work was part funded by the COVID-19 Relief for Researchers Scheme set up by Ireland’s Higher Education Authority. The authors would like to sincerely thank all the participants of this research for their valuable time.

Conflicts of Interest

None declared.


Edited by T Leung, T de Azevedo Cardoso; submitted 05.02.23; peer-reviewed by B Chaudhry, M Peeples, A DeVito Dabbs; comments to author 12.09.23; revised version received 25.10.23; accepted 29.01.24; published 28.03.24.

©Yiyang Sheng, Raymond Bond, Rajesh Jaiswal, John Dinsmore, Julie Doyle. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.03.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
