
A course on getting started with the Twitter API v2 for academic research

twitterdev/getting-started-with-the-twitter-api-v2-for-academic-research


Welcome to this '101 course' on getting started with academic research using the Twitter API. The objective of this course is to help academic researchers learn how to get Twitter data using Twitter API v2.

By the end of this course, you will learn:

  • What the Twitter API is
  • How to apply for the Academic Research product track and what’s available in it
  • How to identify the endpoints to use for your use-case
  • How to get data from the Twitter API v2 using Python and R
  • How to write and build search queries

Who is this course for?

This is an introductory course (101), meant for anyone who is interested in getting started with the Twitter API v2 for research, including:

  • Academic Researchers
  • Independent Researchers

Note: While undergraduate students and independent researchers do not qualify for the academic research product track (which provides the ability to search for Tweets older than 7 days), they can still follow this course and use the standard product track and the code samples associated with it.

For most of this course, there are no prerequisites and anyone can follow along. The exception is Module 6, the labs, which requires basic coding knowledge in Python or R. If you want to first learn or review the syntax of these two languages, check out the appendix section, which provides links to introductory material on Python and R, along with instructions on how to install them.

Who is this course not for?

This is designed like a 100-level course. If you have already gained access to the Academic Research product track, or you already know how to get data from the Twitter API v2 using Python or R, this course may feel too “introductory” for you.

How is this course structured?

This course consists of 8 modules. Use it as a complete start-to-finish lesson for getting started, or, if you already know some of the basics, jump straight to one of the later, more advanced modules.

  • Module 1: Learn what the Twitter API v2 is, and see examples of research done with it
  • Module 2: Learn how to apply for a Twitter developer account and how to choose the right product track for your project
  • Module 3: Learn what resources to request through the Twitter API, based on the data you need
  • Module 4: Learn how to get your keys and bearer token from the developer dashboard to start using the Twitter API
  • Module 5: Learn how to write search queries to get Tweets from the Twitter API
  • Module 6: Labs in Python and R to learn how to write code and use libraries and packages to get Twitter data
  • Module 7: Learn how to store Twitter data once you receive it, as well as data compliance and best practices
  • Module 8: See a summary of what we learned in this course and find links to important resources for future reference

There is also an Appendix that contains additional information and a glossary of terms used throughout this course. Keep it handy (maybe even open in a new tab) and reference it whenever you come across a new term.

Assumptions

Whenever we refer to getting ‘Tweets’ using the Twitter API, we refer to only those Tweets that are publicly available. The Twitter API does not provide Tweet information for Tweets that have been deleted, and does not provide Tweets from users who have made their Tweets private.

We will only be using the new Twitter API v2 and not the old API (v1.1). To learn more about the Twitter API v2, check out this technical overview of the Twitter API v2.
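To preview what the labs in Module 6 build toward, here is a minimal Python sketch of calling the v2 recent search endpoint with the `requests` library. The helper names below are illustrative, not part of the API; a bearer token (covered in Module 4) is assumed to be available:

```python
import os
import requests

# v2 recent search endpoint (returns Tweets from the last 7 days)
RECENT_SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_search_request(query, max_results=10):
    """Assemble the URL and parameters for one recent-search call."""
    params = {
        "query": query,
        "max_results": max_results,         # 10-100 Tweets per request
        "tweet.fields": "created_at,lang",  # extra Tweet fields to return
    }
    return RECENT_SEARCH_URL, params

def search_recent_tweets(query, bearer_token):
    """Fetch one page of recent Tweets matching `query`."""
    url, params = build_search_request(query)
    headers = {"Authorization": f"Bearer {bearer_token}"}
    resp = requests.get(url, headers=headers, params=params)
    resp.raise_for_status()
    return resp.json()

# Usage (requires a valid bearer token in the BEARER_TOKEN env variable):
# tweets = search_recent_tweets("from:TwitterDev -is:retweet",
#                               os.environ["BEARER_TOKEN"])
```

The labs cover the same flow through dedicated libraries (such as twarc2 in Python), which also handle pagination and rate limits for you.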

Let us start with Module 1, which provides an introduction to the Twitter API and examples of research done with it.

Twitter just closed the book on academic research

Twitter was once an indispensable resource for academic research. That’s changed under Elon Musk.

By Justine Calma, a senior science reporter covering climate change, clean energy, and environmental justice with more than a decade of experience. She is also the host of Hell or High Water: When Disaster Hits Home, a podcast from Vox Media and Audible Originals.


Twitter was once a mainstay of academic research — a way to take the pulse of the internet. But as new owner Elon Musk has attempted to monetize the service, researchers are struggling to replace a once-crucial tool. Unless Twitter makes another about-face soon, it could close the chapter on an entire era of research. 

“Research using social media data, it was mostly Twitter-ology,” says Gordon Pennycook, an associate professor of behavioral science at the University of Regina. “It was the primary source that people were using.”

“It was mostly Twitter-ology.”

Until Musk’s takeover, Twitter’s API — which allows third-party developers to gather data — was considered one of the best on the internet. It enabled studies into everything from how people respond to weather disasters to how to stop misinformation from spreading online. The problems they addressed are only getting worse, making this kind of research just as important as ever. But Twitter decided to end free access to its API in February and launched paid tiers in March. The company said it was “looking at new ways to continue serving” academia but nevertheless started unceremoniously cutting off access to third-party users who didn’t pay. While the cutoff caused problems for many different kinds of users , including public transit agencies and emergency responders, academics are among the groups hit the hardest.

Researchers who’ve relied on Twitter for years tell The Verge they’ve had to stop using it. It’s just too expensive to pay for access to its API, which has reportedly skyrocketed to $42,000 a month or more for an enterprise account. Scientists have lost a key vantage point into human behavior as a result. And while they’re scrambling to find new sources, there’s no clear alternative yet.

Twitter gave researchers a way to observe people’s real reactions instead of having to ask study participants how they think they might react in certain scenarios. That’s been crucial for Pennycook’s research into strategies to prevent misinformation from fomenting online, for instance, by showing people content that asks them to think about accuracy before sharing a link.

Without being able to see what an individual actually tweets, researchers like Pennycook might be limited to asking someone in a survey what kind of content they would share on social media. “It’s basically hypothetical,” says Pennycook. “For tech companies who would actually be able to implement one of these interventions, they would not be impressed by that ... We had to do experiments somewhere to show that it actually can work in the wild.”

In April, a group of academics, journalists, and other researchers called the Coalition for Independent Technology Research sent a letter to Twitter asking it to help them maintain access. The coalition surveyed researchers and found that Twitter’s new restrictions jeopardized more than 250 different projects. It would also signal the end of at least 76 “long-term efforts,” the letter says, including code packages and tools. With enforcement of Twitter’s new policies somewhat haphazard (some users were kicked off the platform before others), the coalition set up a mutual aid effort. Scientists scrambled to harvest as much data as they could before losing their own access keys, and others offered to help them collect that data or donated their own access to Twitter’s API to researchers who lost it.


Twitter’s most affordable API tier, at $100 a month, would only allow third parties to collect 10,000 tweets per month. That’s just 0.3 percent of what they previously had free access to in a single day, according to the letter. And even its “outrageously expensive” enterprise tier, the coalition argued, wasn’t enough to conduct some ambitious studies or maintain important tools.

One such tool is Botometer, a system that rates how likely it is that a Twitter account is a bot. While Musk has expressed skepticism of things like disinformation research, he’s actually used Botometer publicly — to estimate how many bots were on the platform during his attempt to get out of the deal he made to buy Twitter. Now, his move to charge for API access could bring on Botometer’s demise.

A notice on Botometer’s website says that the tool will probably stop working soon. “We are actively seeking solutions to keep this website alive and free for our users, which will involve training a new machine-learning model and working with Twitter’s new paid API plans,” it says. “Please note that even if it is feasible to build a new version of the Botometer website, it will have limited functionalities and quotas compared to the current version due to Twitter’s restricted API.”

The impending shutdown is a personal blow to Botometer co-creator Kai-Cheng Yang, a researcher studying misinformation and bots on social media who recently earned his PhD in informatics at Indiana University Bloomington. “My whole PhD, my whole career, is pretty much based on Twitter data right now. It’s likely that it’s no longer available for the future,” Yang tells The Verge . When asked how he might have to approach his work differently now, he says, “I’ve been asking myself that question constantly.”

“The platform went from one of the most transparent and accessible on the planet to truly bottom of the barrel.”

Other researchers are similarly nonplussed. “The platform went from one of the most transparent and accessible on the planet to truly bottom of the barrel,” says letter signatory Rebekah Tromble, director of the Institute for Data, Democracy, and Politics (IDDP) at George Washington University. Some of Tromble’s previous work, studying political conversations on Twitter, was actually funded by the company before it changed its API policies.

“Twitter’s API has been absolutely vital to the research that I’ve been doing for years now,” Tromble tells The Verge . And like Yang, she has to pivot in response to the platform’s new pricing schemes. “I’m simply not studying Twitter at the moment,” she says.

But there aren’t many other options for gathering bulk data from social media. While scraping data from a website without the use of an API is one option, it’s more tedious work and can be fraught with other risks. Twitter and other platforms have tried to curtail scraping, in part because it can be hard to discern whether it’s being done in the public interest or for malicious purposes like phishing.

Meanwhile, other social media giants have been even more restrictive than Twitter with API access, making it difficult to pivot to a different platform. And the restrictions seem to be getting tougher — last month, Reddit similarly announced that it would start to limit third-party access to its API.

“I just wonder if this is the beginning of companies now becoming less and less willing to have the API for data sharing,” says Hause Lin, a post-doctoral research fellow at MIT and the University of Regina developing ways to stop the spread of hate speech and misinformation online. “It seems like totally the landscape is changing, so we don’t know where it’s heading right now,” Lin tells The Verge .

There are signs that things could take an even sharper turn for the worse. Last week, inews reported that Twitter had told some researchers they would need to delete data they had already collected through its decahose, which provides a random sample of 10 percent of all the content on the platform, unless they pay for an enterprise account that can run upwards of $42,000 a month. The move amounts to “the big data equivalent of book burning,” one unnamed academic who received the notice reportedly told inews.

The Verge was unable to verify that news with Twitter, which now routinely responds to inquiries from reporters with a poop emoji. None of the researchers The Verge spoke to had received such a notice, and it seems to so far be limited to users who previously paid to use the decahose (just one use of Twitter’s API that previously would have been free or low-cost for academics).

Both Tromble and Yang have used decahose for their work in the past. “Never before did Twitter ever come back to researchers and say that now the contract is over, you have to give up all the data,” Tromble says. “It’s a complete travesty. It will devastate a bunch of really important ongoing research projects.”

“We won’t be able to know as much about the world as we did before.”

Other academics similarly tell The Verge that Twitter’s reported push to make researchers “expunge all Twitter data stored and cached in your systems” without an enterprise subscription would be devastating. It could prevent students from completing work they’ve invested years into if they’re forced to delete the data before publishing their findings. Even if they’ve already published their work, access to the raw data is what allows other researchers to test the strength of the study by being able to replicate it.

“That’s really important for transparent science,” Yang says. “This is just a personal preference — I would probably go against Twitter’s policy and still share the data, make it available because I think science is more important in this case.”

Twitter was a great place for digital field experiments in part because it encouraged people from different backgrounds to meet in one place online. That’s different from Facebook or Mastodon, which tend to have more friction between social circles. This centralization sometimes fostered conflict — but to academics, it was valuable.

“If the research is not going to be as good, we won’t be able to know as much about the world as we did before,” Pennycook says. “And so maybe we’ll figure out a way to bridge that gap, but we haven’t figured it out yet.”


DEV Community


Suhem Parack

Posted on Dec 3, 2021

Year in review: Academic Research with the Twitter API v2

In 2021, we launched product improvements and various community initiatives to help the academic research community succeed in their research studies with Twitter data. In this post, I present a recap of all the API launches and programs - from Twitter as well as the academic research community - that have made it easier for researchers to do research with Twitter data.

Improvements for academics & students with the Twitter API v2

Academic research product track

Launched in January 2021 , the academic research product track is one of the biggest updates to the Twitter API v2 for the academic research community. It provides qualified academic researchers free access to the full-archive of public Tweets (previously, academics had to use the paid premium API to get Tweets older than 7 days). In addition to access to the full-archive search functionality, academics get 10 million Tweets per month. This is higher than the other access levels - the essential access provides 500K Tweets per month and the Elevated access provides 2 Million Tweets per month. The academic research track also lets academics further refine their search queries by supporting search operators for geo-filtering.
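The kind of filtered query this enables can be sketched in Python. The helper below is illustrative (it is not part of any Twitter library), and the `place_country:` geo operator it uses is only available on the academic research track:

```python
def build_query(keywords, lang=None, country=None, exclude_retweets=True):
    """Compose a v2 search query string from parts.

    Illustrative helper: combines keywords with OR, then appends
    standard v2 operators. `place_country:` requires the academic
    research product track.
    """
    parts = [f"({' OR '.join(keywords)})"]
    if lang:
        parts.append(f"lang:{lang}")          # restrict by Tweet language
    if country:
        parts.append(f"place_country:{country}")  # geo-filter (academic track)
    if exclude_retweets:
        parts.append("-is:retweet")           # original Tweets only
    return " ".join(parts)

# build_query(["wildfire", "bushfire"], lang="en", country="US")
# -> "(wildfire OR bushfire) lang:en place_country:US -is:retweet"
```

The resulting string is passed as the `query` parameter to the search endpoints.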


Tweet counts endpoints

Also launched in 2021, the Tweet counts endpoints return the number of Tweets matching a search query, broken down by minute, hour, or day. This lets researchers gauge the volume of data matching their query before retrieving any Tweets.

Batch compliance endpoints

In August 2021, we launched the batch compliance endpoints, which let developers and researchers upload large numbers of Tweet or User IDs and get the current status of those Tweets or Users (whether they have been deleted, suspended, etc.). This lets developers and researchers keep their datasets compliant with Twitter's developer policy. In the past, developers had to use the Tweet lookup endpoints to do this, which was quite slow and supported a maximum of 100 IDs per request, so this new solution is very helpful for researchers.
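The batch compliance flow (create a job, upload newline-delimited IDs, poll until complete, then download the results) can be sketched roughly as follows. The function names are illustrative, error handling is omitted, and a valid bearer token is assumed:

```python
import requests

JOBS_URL = "https://api.twitter.com/2/compliance/jobs"

def ids_to_payload(ids):
    """The upload URL expects plain text with one ID per line."""
    return "\n".join(str(i) for i in ids)

def run_compliance_job(ids, bearer_token, job_type="tweets"):
    """Create a compliance job and upload the IDs; returns job metadata.

    Rough sketch of the batch-compliance flow: after uploading, poll
    JOBS_URL + "/" + job["id"] until its status is 'complete', then
    fetch the results from the job's `download_url`.
    """
    headers = {"Authorization": f"Bearer {bearer_token}"}
    # 1. Create the job ('tweets' or 'users')
    job = requests.post(JOBS_URL, headers=headers,
                        json={"type": job_type}).json()["data"]
    # 2. Upload the newline-delimited IDs to the pre-signed upload URL
    requests.put(job["upload_url"], data=ids_to_payload(ids),
                 headers={"Content-Type": "text/plain"})
    return job
```

Compared with looping over the Tweet lookup endpoint 100 IDs at a time, one job can cover an entire dataset.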

Essential access

In November 2021, essential access was added to the Twitter API v2. With essential access, anyone can get access to basic functionality in the Twitter API just by signing up - without applying for a developer use-case review. This gives you instant access to the Twitter API v2 and is thus a good option for teaching use-cases. Professors can simply ask their students to sign up for essential access ahead of class if they want to use Twitter data in their classes.


Livestreams and virtual events for academic research with the Twitter API

We hosted livestreams and virtual events with various guests from the academic community to showcase how researchers can use the Twitter API v2 to get data for their research studies. Below is a list of livestreams and virtual events we did in 2021:

  • February 25th 2021 - Introduction to the academic research product track
  • March 18th 2021 - Deep-dive into the full-archive search with the Twitter API v2
  • April 22nd 2021 - Getting started with the Twitter API v2 in Python using Twarc2
  • May 20th 2021 - Getting started with the Twitter API v2 in R using academictwitteR
  • June 23rd 2021 - Exploratory data analysis in R with the Twitter API v2 with Dr. Maria Rodriguez
  • July 9th 2021 - Building visualizations with the Tweet counts endpoints in the Twitter API v2
  • August 7th 2021 - Postman Student Summit


  • August 26th 2021 - Back to school event - how PhD & Postdocs use the Twitter API in their research studies
  • September 3rd 2021 - Getting started with the Twitter API v2 in Python using OSoMeTweet package
  • September 30th 2021 - Academic Research with the Twitter API v2 with Dr. Ernesto Calvo
  • October 5th 2021 - Academic Research with the Twitter API v2 with CSMaP NYU
  • October 29th 2021 - Academic Research with the Twitter API v2 with Dr. Deen Freelon
  • November 12th 2021 - Twitter API in the classroom

Office hours

In March 2021, we announced monthly office hours for academics who need help with their usage of the Twitter API v2 and the academic research product track. During these office hours, researchers can get 1:1 technical help and get their questions about the Twitter API v2 answered.

Getting started with the Twitter API v2 course

In June, we launched a comprehensive course on getting started with the Twitter API v2 for academic research. This course is available for free and contains content, cheatsheets and code samples in Python and R to help academics learn how to get Twitter data for research studies using the Twitter API. Since its launch, this course has been adopted as a prerequisite module by various professors who teach courses on computational social science, text mining, etc.

Academic advisory board

In August 2021, we announced our inaugural Academic Research Advisory Board aimed at creating regular and frequent dialogue between members of the academic community and our Developer Platform.

Sharing your research with Twitter

In August 2021, we also provided researchers with a way to share their research (done with the Twitter API) with us so that we can better learn how researchers use the Twitter API and how we can better serve their needs.

Sample apps, videos, blogs and code samples

In 2021, we shared various code samples, videos, blogs and sample apps for researchers to learn how to work with the Twitter API v2. Below is a list of these.

Tutorials and guides

  • Getting historical Tweets using the full-archive search endpoint
  • Building high-quality filters for getting Twitter data
  • Building visualizations with Tweet counts from the Twitter API v2 in Python
  • Translating Tweets from the Twitter API v2 using AWS Amazon Translate in Python
  • A comprehensive guide for using the Twitter API v2 with Tweepy in Python
  • Understanding the Tweet Payload in the Twitter API v2
  • Introduction to the academic research product track
  • Introducing the getting started with Twitter API v2 for academic research course

Sample apps

Below are some sample apps, built in 2021 for educational purposes, that showcase different use-cases pertaining to research with the Twitter API v2.

From the community

In 2021, we saw some amazing updates and resources from the academic community that make it easy for researchers to work with the Twitter API for their research studies. Below is a list of some of these:

  • The Documenting the Now project launched the twarc2 library in Python (with a command-line option) that supports the Twitter API v2 and the academic research product track
  • academictwitteR - a new package in R from Chris Barrie and Justin Ho was launched that supports the Twitter API v2 and the academic product track in R.
  • focalevents - open source codebase from Ryan Gallagher with tools to work with Twitter data for research
  • OSoMeTweets - a package in Python to work with the Twitter API
  • A course on cultural analytics in Python from Melanie Walsh that uses the Twitter API v2

This was a great year for academics who work with the Twitter API for research. If you want to share with the community how you work with the Twitter API for academic research use-cases, or if you have any feedback on our community programs, please feel free to reach out to me @suhemparack.


Columbia Journalism Review

Q&A: What happened to academic research on Twitter?

In February of this year, Elon Musk put Twitter’s API behind a paywall. A new survey reveals that over 100 projects were impacted as a result.

Prior to Elon Musk’s purchase of X, once called Twitter, the platform was a playground for academic research. Thanks to X’s free application programming interface, or API, thousands of papers were written based on its data. That all changed in February, when Musk put the API behind a paywall. A new survey reveals that over 100 projects were canceled, halted, or pivoted to other platforms as a result of these changes. Despite hateful and inaccurate posts surging on X, researchers are now left with few options for studying it.

The Coalition for Independent Technology Research, a group of researchers and journalists who work to defend the right to study the impact of technology on society, surveyed researchers affected by the new policies put in place by Musk. Of the 167 researchers surveyed, about half said they are increasingly worried about the potential legal repercussions of studying the platform. This concern comes after X’s July lawsuit against the Center for Countering Digital Hate, a nonprofit group that documented the rise of hate speech towards minorities on X since Musk acquired the platform. Several researchers surveyed called for efforts to regulate social media, emphasizing the importance of regulation that gives them access to social media data while being protected from litigation. 

Tow talked to Josephine Lukito, an assistant professor at the University of Texas and one of the researchers behind the study, about what they found and what the future holds for academic research on social media platforms.

This interview has been edited for length and clarity.

SG: Prior to Elon Musk’s takeover, Twitter’s API allowed researchers to gather millions of tweets per day for free. Now, researchers have to pay $100 per month for just 10,000 tweets. Why was the free API such an important tool in academia, and what did it allow researchers to study? 

JL: For a lot of researchers, both academics and civil society researchers, Twitter was a really important platform to study because many politicians, journalists, and public figures were on Twitter. Researchers who wanted to understand those public figures needed to study Twitter. From that perspective, the API was essential to gather data and information about public conversations of ongoing current events and social topics. We relied on the Twitter API as well as Twitter’s relationship with researchers to provide that access. 

Relative to other social media platforms, Twitter has historically been quite generous with its API access. The Twitter API became useful not only for doing research about important current events and what public figures were saying but also as an educational tool to show researchers how to responsibly use an API. 

Hate speech spiked on X since Musk purchased the site last year. For instance, antisemitic content shot up by 61 percent just two weeks after the purchase. If researchers can’t study X via the API anymore, what are some other ways to keep track of hate speech on the platform? 

It’s tricky because prior to the closure of the Twitter API, but after Elon Musk purchased it, there was really important research coming out about the increase in hate speech and trolling attacks. Not just for anti-Semitic content but for uses of the N-word and harassment of LGBTQ+ activists. I noticed that a lot of that research has been stilted in some way because of the Twitter API closure. 

There are researchers who have been attempting to do it within Twitter’s new paid structure, but there’s no way to make that sort of research generalizable. It’s such a small number of posts and messages that you’re basically looking for a needle in a haystack. 

I think moving forward, people will be trusting that research less and less. There are researchers who have considered alternative or non-traditional approaches to data collection, for example, data crawling or scraping. But because Elon Musk is so litigious, not only to these scraping strategies but also to anyone who speaks critically about Twitter, there is this chilling effect among researchers who are willing to actually call Elon Musk or X out for the increase in this sort of content. 

That seems related to X’s July lawsuit against the Center for Countering Digital Hate? 

Absolutely. Of the 167 responses to our survey about 50 percent, perhaps a little bit more, did mention that they were concerned about legal actions that they might be incurring as a result of continuing to study this platform.

Now that academics don’t have access to the free API, the survey found that researchers plan to pivot to other platforms. Reddit was by far the most popular platform of choice, followed by TikTok and YouTube. What makes these platforms attractive for researchers? 

Similar to Twitter, these are platforms that have tried to engage with researchers in a variety of ways to provide access. Another reason why I think these platforms are popular to study is because they’re popular to use. Reddit, Facebook, Instagram, and TikTok are among the most popular platforms, at least among American adults eighteen and older. So it doesn’t surprise me that people are trying to turn from one popular platform to another popular platform. 

The platforms that a lot of folks listed as transitioning into are platforms that have some form of API access. They don’t provide nearly as much data as Twitter does. But I think these are still really important platforms to study. That being said, some platforms have been really hesitant about providing a lot of data or will place limits on that data. For example, TikTok has very specific limitations if you want to use their researcher API. For instance, you need to run your projects by them. That has made a lot of researchers both eager to study the platform but also hesitant about what that relationship with the platform might look like.

So, will we see more studies on Reddit and TikTok? 

I do think that there will be more research on other platforms. I don’t know if it’s ever going to be at the scale of Twitter data or Twitter research as we’ve seen it previously. We might be entering the stage where the golden era of social media research might be over.

Do you think the future of academic research on X is dead? 

I’ll be upfront and say I’m kind of speculating. I can’t say with certainty what will happen. But I do think there are two potential silver linings that give me some hope about the future of social media research. The first is that more research methods rely on user consent. One growing area of research is data donation research, where you ask a user in a survey to also provide social media data. I think that sort of work is becoming more popular because it has the backing of user consent.

The other thing is that politicians are increasingly recognizing that independent social media research is really important. Particularly in Europe, we are seeing efforts to effectively require social media companies to make some sort of data available to researchers. I don’t know what that looks like tangibly, like how that’ll manifest for researchers and what that process looks like. But I am optimistic that politicians and public figures are starting to recognize the importance of doing this research because, certainly, social media platforms are not going to go away. They do a lot of great and terrible things for society. I think it’s really important for researchers to understand what those great and terrible things are. 

More than 100 studies were canceled or suspended due to the changes to the API. What are some examples of projects that were shut down?

One big example that I was especially frustrated by was this great tool called Botometer. It was developed by a team at Indiana University and was specifically used to detect bots on Twitter. This is a tool that Elon Musk himself quoted when he was talking about the number of bots on Twitter, now X. That tool is effectively defunct now that they cannot access the Twitter API.

Musk’s restrictions on gathering data on Twitter have limited researchers’ ability to study real-time events like the spread of misinformation around the attack on Israel, the airstrikes in Gaza, and the upcoming 2024 election. What do you think are some of the implications for our understanding of these events? 

I think the implication is that our understanding of these events will be much worse. Twitter provided us a window into understanding how public discourse around these events unfolds. Especially the importance of political elites and journalists in shaping the way people thought about these things. I imagine that that will still be important in the 2024 election, and it’ll still be important as current events unfold. We just won’t be able to study them or understand how they’ve changed.

Is there anything that surprised you about the findings from the survey?

It didn’t necessarily surprise me but I think it is really important to recognize that the closure of the Twitter API is something that doesn’t just impact academic researchers, but it absolutely impacts journalists. It impacts civil society researchers and things that we’re trying to do to understand the media ecology and the information environment and the potential harms within it. When we talk about the closure of the API, we tend to focus on individual projects or topics like mis- and disinformation and hate speech. But really, the reality is that everything has been shut down. So even looking at things like expressions of joy and connectivity on Twitter, people can’t study that anymore either. It’s not just certain pockets of research. All research pertaining to Twitter has effectively ended.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.

Center for an Informed Public

Twitter’s API access changes could mark ‘end of an era’ in academic research on the platform

Feb 2, 2023

Twitter headquarters on Market Street in San Francisco.

In a Feb. 1 tweet, Twitter announced that it will soon no longer support free access to the social media platform’s application programming interface (API), which allows developers and researchers, including those at the University of Washington’s Center for an Informed Public and peers at other universities and research centers, to collect and analyze Twitter data. 

While Twitter’s tweet about API access indicated that details about pricing would be forthcoming, the future of continued academic research on the platform is very much in question. 

“This could very well be the end of an era for platform transparency and social media research,” said Center for an Informed Public director Kate Starbird, a UW Human Centered Design & Engineering associate professor.

In a Mastodon thread on Thursday, Starbird noted that, while studying “crisis informatics” as a PhD student, she built her first Twitter collection script in 2010 in the wake of the Haiti earthquake. Since then, much of her work has focused on studying how information about crisis events, including the 2010 Deepwater Horizon oil spill, the 2013 Boston Marathon bombing, the Syrian Civil War, and the 2018 Hawaii false missile alert, travels and spreads via the platform. 

In recent years, CIP researchers, including Starbird, have focused much of their attention on studying how rumors, conspiracy theories, and mis- and disinformation about voting in U.S. elections emerged, traveled, and spread on Twitter, work that has led to peer-reviewed research in academic journals like the Journal of Quantitative Description: Digital Media and Nature Human Behaviour and citations in the final report from the U.S. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol.

As Justin Hendrix wrote in Tech Policy Press last year, the CIP’s research and Twitter data-collection efforts around the 2020 U.S. elections will “serve as a substantial building block for years of future research on phenomena at the intersection of social media, politics, and democracy.”

As researchers await more details about Twitter API pricing, what might come next? 

“It’s long been time (for my team, at least) to move on,” Starbird wrote on Mastodon, noting that this “will profoundly change how researchers (and society) can study and understand online behavior.” 

Depending on pricing for academic access, researchers at the Center for an Informed Public may be able to continue research on Twitter, albeit “to a much smaller extent,” Starbird said, adding that “the under-resourced PhD student won’t have a chance to work in this environment. As a wide-eyed PhD student on a ‘crisis informatics’ team, in the wake of the Haiti Earthquake, I built an infrastructure to support crisis mapping during disaster events. That kind of emergent ‘action research’ won’t be possible.”

Since January 2020, research infrastructure at the UW Center for an Informed Public has collected about 2 billion tweets, including tweets tied to narratives around COVID-19, vaccines, and elections.

PHOTO ABOVE: Twitter’s San Francisco headquarters by Dale Cruse via Flickr / CC BY 2.0

Where to get Twitter data for academic research

It has been my experience that faculty, students, and other researchers have no shortage of compelling research questions that require Twitter data. However, many face an immediate barrier in understanding the options for acquiring that data. The purpose of this blog post is to describe the options for getting Twitter data for academic research in the hopes of lowering at least that initial barrier.

Just as the research to be performed is varied, so are the requirements for Twitter data. These include:

  • Are historical tweets needed? Or current tweets?
  • How many tweets are needed?
  • Is a complete dataset needed (i.e., every tweet that meets criteria) or is an incomplete or sampled dataset acceptable?

In addition, other relevant factors of the research include:

  • Does the researcher have funding to acquire Twitter data?
  • Does the researcher need to share the Twitter dataset as part of publication / reproducible research?
  • What are the technical skills of the researcher?
  • How will the researcher be performing analysis? With her own tools? Or would analytic tools for Twitter data be beneficial?

These factors will determine the most appropriate means of acquiring a Twitter dataset.

There are four primary ways of acquiring Twitter data (and I’m not including “cutting and pasting” from the Twitter website!):

  • Retrieve from the Twitter public API.
  • Find an existing Twitter dataset.
  • Purchase from Twitter.
  • Access or purchase from a Twitter service provider.

Let’s explore each of these.

1. Retrieve from the Twitter public API

API is short for “Application Programming Interface” and in this case is a way for software to access the Twitter platform (as opposed to the Twitter website, which is how humans access Twitter). While supporting a large number of functions for interacting with Twitter, the API functions most relevant for acquiring a Twitter dataset include:

  • Retrieving tweets from a user timeline (i.e., the list of tweets posted by an account)
  • Searching tweets
  • Filtering real-time tweets (i.e., the tweets as they are passing through the Twitter platform upon posting)
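To make those three retrieval modes concrete, here is a minimal sketch using the third-party Tweepy library against Twitter API v2. The bearer token, user id, and query are placeholders, and `timeline_cap` simply encodes the roughly-3,200-tweet timeline limit discussed later in this post; this is an illustration, not a definitive implementation.

```python
# Sketch of the three retrieval modes with Tweepy (pip install tweepy).
# The bearer token, user id, and query below are placeholders.
try:
    import tweepy  # third-party library
except ImportError:  # allow the pure helper below to run without tweepy
    tweepy = None

def timeline_cap(requested, cap=3200):
    # The public API exposes only roughly the most recent 3,200 tweets
    # of a user timeline, so clamp larger requests.
    return min(requested, cap)

def demo(bearer_token):
    # wait_on_rate_limit sleeps through rate limits instead of erroring.
    client = tweepy.Client(bearer_token=bearer_token, wait_on_rate_limit=True)
    # 1) User timeline: tweets posted by a single account.
    client.get_users_tweets(id="2244994945", max_results=100)
    # 2) Search: recent-search covers roughly the last 7 days.
    client.search_recent_tweets(query="from:TwitterDev -is:retweet")
    # 3) Real-time filtering is done with tweepy.StreamingClient,
    #    which matches streaming rules against tweets as they are posted.
```

The same three operations are what most of the tools listed below wrap, whatever their interface.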

While you can write your own software for accessing the Twitter API , a number of tools already exist. They are quite varied in their capabilities and require different levels of technical skills and infrastructure. These include:

  • Software libraries (e.g., Tweepy for Python and rtweet for R)
  • Command line tools (e.g., Twarc )
  • Web applications (e.g., DMI-TCAT and our very own Social Feed Manager )
  • Plugins for popular analytic packages (e.g., NVIVO , NodeXL for Excel, and TAGS for Google Sheets)

Some of these tools are focused on retrieving tweets from the API, while others will also do analysis of the Twitter data. For a more complete list, see the Social Media Research Toolkit from the Social Media Lab at Ted Rogers School of Management, Ryerson University.

Note when selecting a tool that some may only support part of the Twitter API for retrieving tweets, most commonly, search. Further, some tools may be designed to support one-time retrieval from the Twitter API, while others support retrieval on an ongoing basis. (For example, Social Feed Manager allows you to specify a schedule for recurring data collection.)

What all of these tools share in common is that they use Twitter’s public API. The Twitter public API has a number of limitations that you should be aware of:

  • Access to historical tweets is extremely limited. You can retrieve the last 3,200 tweets from a user timeline and search the last 7-9 days of tweets.
  • Access to current tweets is limited. Depending on how broad your filter is, the API may not return all tweets.
  • Twitter may sample or otherwise not provide a complete set of tweets in searches.

2. Find an existing Twitter dataset

One way to overcome the limitations of Twitter’s public API for retrieving historical tweets is to find a dataset that has already been collected and satisfies your research requirements. For example, here at GW Libraries we have proactively built collections on a number of topics including Congress, the federal government, and news organizations.

Twitter’s Developer Policy (which you agree to when you get keys for the Twitter API) places limits on the sharing of datasets. If you are sharing datasets of tweets, you can only publicly share the ids of the tweets, not the tweets themselves. Another party that wants to use the dataset has to retrieve the complete tweet from the Twitter API based on the tweet id (“hydrating”). Any tweets which have been deleted or become protected will not be available.

DocNow’s Hydrator is a useful tool for retrieving tweets from the Twitter API based on tweet id. Note that Twitter places rate limits on hydrating (as it does on most API functions) so this may take some amount of time depending on the size of the dataset.
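Under the v2 tweet-lookup endpoint, hydration proceeds in batches of at most 100 ids per request. The sketch below is illustrative rather than the Hydrator's actual implementation; the Tweepy call and bearer token are placeholders.

```python
# Illustrative hydration sketch: look up full tweets from ids in
# batches of 100 (the v2 tweet-lookup ceiling). Not the Hydrator's
# actual code; the bearer token is a placeholder.
try:
    import tweepy  # third-party: pip install tweepy
except ImportError:
    tweepy = None

def chunk_ids(tweet_ids, size=100):
    # Yield successive batches of at most `size` ids.
    for i in range(0, len(tweet_ids), size):
        yield tweet_ids[i:i + size]

def hydrate(bearer_token, tweet_ids):
    client = tweepy.Client(bearer_token=bearer_token, wait_on_rate_limit=True)
    tweets = []
    for batch in chunk_ids(tweet_ids):
        resp = client.get_tweets(ids=batch)
        # Deleted or protected tweets are simply absent from resp.data.
        tweets.extend(resp.data or [])
    return tweets
```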

A number of individuals and organizations have publicly posted Twitter datasets, e.g., in a dataset repository or on a website. For example, we posted our 280 million tweet dataset from the 2016 U.S. presidential election on Harvard’s Dataverse. Deen Freelon has published the 40 million tweet dataset for the “Beyond the Hashtags: #Ferguson, #Blacklivesmatter, and the Online Struggle for Offline Justice” report on his website. The DocNow Catalog provides a listing of publicly available Twitter datasets.

Twitter’s Developer Policy is generally interpreted as allowing sharing of tweets locally, i.e., within an academic institution. For example, we share the datasets we have collected at GW Libraries with members of the GW research community (but when sharing outside the GW community, we only share the tweet ids). However, only a small number of institutions proactively collect Twitter data – your library is a good place to inquire.

Another option for acquiring an existing Twitter dataset is TweetSets , a web application that I’ve developed. TweetSets allows you to create your own dataset by querying and limiting an existing dataset. For example, you can create a dataset that only contains original tweets with the term “trump” from the Women’s March dataset. If you are local, TweetSets will allow you to download the complete tweet; otherwise, just the tweet ids can be downloaded. Currently, TweetSets includes nearly a half billion tweets.

3. Purchase from Twitter

You can purchase historical Twitter data directly from Twitter, using the Historical PowerTrack enterprise product.

Historical Twitter data was previously available from Gnip, a data service provider purchased by Twitter. Gnip has now been folded into Twitter. The way this used to work is that you provided a set of query terms and other limiters and a Gnip sales rep replied with a cost estimate. With recent changes, the process is less clear.

For filtering tweets, the Historical Powertrack offers a number of enhancements over the public Twitter API. This includes additional filter operators and tweet enhancements (e.g., profile location and unshortened URLs).

When considering purchasing tweets, you should be aware that it is not likely to be a trivial amount of money. The cost depends on both the length of the time period and the number of tweets; often, the cost is driven by the length of the time period, so shorter periods are more affordable. The cost may be feasible for some research projects, especially if the cost can be written into a grant. Further, I am not familiar with the conditions placed on the uses / sharing of the purchased dataset. Nonetheless, this is likely to be as complete a dataset as it is possible to get.

4. Access or purchase from a Twitter service provider

A number of commercial and academic organizations act as Twitter service providers, usually for a fee. These services provide:

  • Access to Twitter data
  • Value-added services for the Twitter data, such as coding, classification, analysis, or data enhancement. If you are not using your own tools for analysis, these value-added services may be extremely useful for your research (or they may be used in combination with your own tools).

Twitter data options available from a service provider generally include one or more of the following types (available at different costs):

  • Data from the public Twitter APIs. This obviously comes with the limitations described previously with the public Twitter APIs, but will be less costly than the other Twitter data options.
  • Data from the enterprise Twitter APIs, which have access to all historical tweets. Like purchasing data directly from Twitter, the cost will depend on factors such as the number of tweets and the length of the time period. DiscoverText offers this type of data acquisition.
  • Datasets built by querying against an existing set of historical tweets. The service provider will have an arrangement with Twitter that will provide them with access to the “firehose” of all tweets to build this collection. Crimson Hexagon offers this type of data acquisition.

Twitter service providers generally provide reliable access to the APIs, with redundancy and backfill. This means that you will not miss tweets because of network problems or other issues that might occur when using a tool to access the APIs yourself. Note, also, that some service providers can provide data from other social media platforms, such as Facebook.

Despite what the sales representative may tell you, most Twitter service providers’ offerings focus on marketing and business intelligence, not academic research. The notable exception is DiscoverText, which is focused primarily on supporting academic researchers. DiscoverText allows you to acquire data from the public Twitter Search API; purchase historical tweets through the Twitter data access tool, Sifter; or upload other types of textual data. Sifter provides free cost estimates and has a lower entry price point ($32.50) than purchasing from Twitter. Within the DiscoverText platform, tweets can be searched, filtered, de-duplicated, coded, and classified (using machine learning), along with a host of other functionality. Key for academics are features for measuring inter-coder reliability and adjudicating annotator disagreements.
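DiscoverText's reliability tooling is proprietary, but the core idea behind inter-coder reliability can be illustrated with Cohen's kappa, which corrects the raw agreement between two coders for the agreement expected by chance. This is a generic sketch, not DiscoverText's implementation.

```python
# Generic Cohen's kappa for two coders' label sequences.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Observed agreement, corrected for chance agreement implied by
    # each coder's label frequencies.
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0  # degenerate case: both coders use one label only
    return (observed - expected) / (1 - expected)
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.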

Crimson Hexagon focuses on marketing, but also supports academic research. Soda Analytics is a new entry in the academic field.

Note that some academic institutions have licenses to Twitter service providers; check with your department or a data services librarian.

There are some limitations of Twitter service providers that you should be aware of. Whether these limitations are significant will depend on your research requirements.

First, when considering a Twitter service provider, it is important to know whether you are able to export your dataset from the service provider’s platform. (All should allow you to export reports or analysis.) For most platforms, export is limited to 50,000 tweets per day. If you need the raw data to perform your own analysis or for data sharing, this may be an important consideration.

Second, while the value-added services offered by a Twitter service provider may be very powerful and not require technical skill to use, they are generally a “black box”. So, for example, if a service provider performs bot detection, you may not know which bot detection algorithm is being used.

As should now be evident, the combination of Twitter’s restrictions on sharing data and the affordances of Twitter’s public API makes acquiring a Twitter dataset for academic research not entirely straightforward. Hopefully this guide has provided enough of a description of the landscape for Twitter data that you can move forward with your research.

Comments on this guide and questions about Twitter data are welcome.

First steps with the Twitter Academic API

Finally, Twitter released the academic track, a new set of API endpoints just for researchers! Applications are already open, and I am one of the lucky ones with approved access.

However, it is relatively new, and therefore it is not easy to find many tutorials or related materials online. I want to change this, starting with a quick-start guide to loading historical tweets (including a package suggestion and configuration). I highly recommend the Python package TwitterAPI. It already supports the new authentication mechanism and all endpoints of the academic track.

  • First, install the package.
  • Before we can query the API, we have to authenticate. If you haven’t created an application (app) yet, please create one first. You will find your application’s API key and secret in the Twitter developer portal (Projects & Apps -> your project -> your application; there is a tab named “Keys and tokens” directly below the application’s name). So that the package uses the right authentication method, set auth_type to ‘oAuth2’. We will also use API version 2, which we pass as an argument.
  • As soon as you expect many results, it is wise to use the TwitterPager, which loads all tweets batch-wise. Please take a look at the query builder documentation for more information on the topic. The argument max_results defines the batch size, and 500 is the API’s maximum value. You can find additional API parameters in the documentation.
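Putting these steps together, a minimal sketch with the TwitterAPI package might look as follows. The consumer key, secret, and query are placeholders, and the full-archive search endpoint shown requires academic access.

```python
# Minimal sketch of the steps above (pip install TwitterAPI).
# The consumer key/secret and the query are placeholders.
try:
    from TwitterAPI import TwitterAPI, TwitterPager  # third-party
except ImportError:
    TwitterAPI = TwitterPager = None

MAX_BATCH = 500  # largest max_results the endpoint accepts per request

def build_params(query, max_results=MAX_BATCH):
    # Clamp max_results to the API's 500-per-request ceiling.
    return {"query": query, "max_results": min(max_results, MAX_BATCH)}

def search_all(consumer_key, consumer_secret, query):
    # oAuth2 (app-only) authentication and API version 2, as described.
    api = TwitterAPI(consumer_key, consumer_secret,
                     auth_type="oAuth2", api_version="2")
    # TwitterPager walks through result pages batch by batch.
    pager = TwitterPager(api, "tweets/search/all", build_params(query))
    for tweet in pager.get_iterator(wait=2):
        print(tweet["id"], tweet["text"])
```

The `wait` argument spaces out successive page requests so the pager stays under the endpoint's rate limits.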

Twitter’s restrictive API may leave researchers out in the cold

academic research api twitter

Earlier this month, Twitter announced that it is going to curtail free access to its API — the programming interface that lets third-party developers interact with Twitter. While the move certainly affects independent developers and startups building tools to make the platform fun and safe, it also creates a problem for students and academics who use Twitter for research purposes on different topics.

Last week, the Elon Musk-led company sent an email to developers saying that the basic tier of access to Twitter’s API — which will cost $100 per month for “low-level usage” — will replace legacy access levels like Essential, Elevated and Academic Research. At the moment, there is hardly any information about what that $100 per month allows developers to do. According to Platformer’s Casey Newton, low-level enterprise API access could cost a whopping $42,000 a month.

Not good news. Just in from @TwitterDev , which confirms the elimination of the academic API, and the replacement with a "low level of API usage" for $100/month. pic.twitter.com/o6IHy5yR7l — 🇺🇦 David Lazer (@davidlazer) February 10, 2023
I worked really hard and for months on a tool to be used for data gathering in research projects, for academia, journalists, and OSINT people, and I would have released it on GitHub. In 9 days my work is now useful for just a small minority willing to pay for the service, and https://t.co/0aORMWN8sr — Alberto Olivieri (@AlbertoOlivie13) February 11, 2023

Affordability for researchers

For many folks in the research community, spending hundreds or thousands of dollars every month might not be viable.

“Earlier this week, a HateLab undergraduate dissertation student had to change his thesis design away from collecting data on Twitter, as he has no funds to pay for it. His experience will be shared by thousands up and down the country, and millions worldwide. It’s truly incredibly disruptive and will significantly impact the research ecosystem dependent on this data, as the Twitter researcher pipeline from undergrad to professional, has been disrupted by this change,” said Professor Matthew Williams of HateLab, which is part of Cardiff University’s social data science lab and studies of online hate speech.

Ironically, HateLab is listed as a success story on Twitter’s developer portal for using research for good. It has published multiple research papers on hate speech on the platform. The lab has been using the social network’s academic research API, but that access might cease to exist under the new scheme.

Twitter has also not considered that $100 a month for basic access might be a lot for researchers based in developing nations.

“The decision to end free access to Twitter API will greatly impact researchers who study hate speech online, especially independent researchers and those in the developing world. Its impact will be profoundly felt in India, where hate speech is proliferating on Twitter at a very alarming rate. Paying 100 USD per month and 1,200 USD per year is a significant financial burden on them,” said Raqib Hameed Naik, founder of Hindutva Watch, an India-based research organization.

The process is not even clear for institutions that might be willing to spend money. Rebekah Tromble, director at the Institute for Data, Democracy & Politics, said that when they tried to fill out the form for enterprise-level access, they were redirected to the academic research program. She also mentioned that the person they used to contact on Twitter no longer works at the company.

We’re being directed to the Enterprise ($$$$$) option. The only problem? Filling out their form results in an auto-reply directing researchers back to the Academic Research Program. And the contact person listed no longer works at Twitter. 2/ — Rebekah Tromble | [email protected] (@RebekahKTromble) February 10, 2023

Impact of research API shutdown

Independent research has been a key factor in making Twitter more useful and less toxic. The company has showcased multiple projects that use Twitter data in areas like healthcare, online hate speech and climate change.

Earlier this year, the company launched the Twitter Moderation Research Consortium (TMRC) , inviting members from academia, civil society, nongovernmental organizations and journalism to study the platform’s governance issues. But ever since Musk took over, the program has stalled and employees who worked on it have left.

The Tesla CEO himself has used data from Botometer , a tool to measure bot followers on Twitter accounts, during the public spats that led to the acquisition of the company . The tool was made by the Observatory on Social Media of Indiana University. But the tool’s future might be jeopardized by the new API announcement .

A lot of research projects take a large number of tweets into account. They send hundreds of queries to the platform to study different topics. Twitter has not released any details regarding what might be offered in the $100 per month tier. But it most likely won’t be enough for most of the projects. For reference, under the academic research track, Twitter API previously offered access to 10 million tweets per month and 50 requests per 15 minutes per app.
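Taking the quoted academic-track figures at face value (50 requests per 15 minutes, the 10-million-tweets-per-month cap, and an assumed 500 tweets per request, the full-archive endpoint's page size), a back-of-the-envelope calculation shows what collection at scale actually cost in time:

```python
# Rough throughput arithmetic under the limits quoted above:
# 50 requests per 15-minute window, an assumed 500 tweets per request,
# and a 10-million-tweets-per-month cap.

def max_tweets_per_hour(requests_per_window=50, window_minutes=15,
                        tweets_per_request=500):
    # Four 15-minute windows per hour, each allowing 50 full requests.
    windows_per_hour = 60 // window_minutes
    return requests_per_window * windows_per_hour * tweets_per_request

def hours_to_collect(n_tweets):
    # Ceiling division: a partial hour still needs a full window.
    return -(-n_tweets // max_tweets_per_hour())
```

At roughly 100,000 tweets per hour, exhausting the monthly cap took on the order of 100 hours of continuous querying, which is why sustained projects had to ration their quota.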

Kaicheng “Kevin” Yang, one of the researchers who worked on Botometer, expressed concerns over Twitter’s move to shutter free API access to academics.

My guess, this "basic access" is not enough for most research projects, the $100 monthly fee is too high for 95% of the researchers. If we want to keep the whole Twitter research body and community alive, we will have to start considering alternatives. #NoResearchWithoutAPI — Kevin Yang (@yang3kc) February 10, 2023

Joshua Tucker, co-director of the NYU Center for Social Media and Politics, recently published a paper on Russian misinformation campaigns on Twitter during the 2016 U.S. presidential election campaign. He said that the campaign studied data from many thousands of tweets, so if the social network makes academics pay for that data, it would be tough to perform research at scale.

“This [move by Twitter] is just at its essence a step in the wrong direction. We are at a moment where important legislative efforts globally are focused on making data access easier for outside researchers to access, and this decision by Twitter is only going to make access to data for outside researchers harder. This in turn means more blind spots about the impact of the platforms on society for policymakers, the press, civil society, and the business community,” he said.

Over the last few days, many researchers have pointed out that free Twitter API access also helps in crisis response to natural disasters like the recent devastating earthquake in Turkey and Syria. Earlier this month, a group of independent researchers wrote an open letter to Twitter requesting the platform to keep the free API access open.

Couldn't come at a worse time. Most analysts and programmers that are building apps and functions for Turkey earthquake aid and relief, and are literally saving lives, are reliant on Twitter API. Any limit/structure/architecture change will make everyone's life difficult. https://t.co/mpwMnWmSPh — Akin Unver (@AkinUnver) February 8, 2023

“Twitter’s new CEO Elon Musk has promised to make the platform more transparent and to reduce the prevalence of spam and manipulative accounts. We commend and support those priorities,” the letter said.

“In fact, the independent research community has developed many of the most cutting-edge techniques used to manage bots. API access has provided a critical resource for that work. Twitter’s new barriers to data access will reduce the very transparency that both the platform and our societies desperately need.”

Without independent research, the company might become ignorant about misinformation and hate speech issues on the platform. The EU has already issued a “yellow card” to the company for missing data in its disinformation report. Multiple companies have signed the Code of Practice, which promises to provide data to researchers amongst other things.

While not complying with the research code doesn’t have any legal ramifications, it could carry more weight from next year when the Digital Services Act (DSA) takes effect. Separately, the bloc’s high commissioner Josep Borrell criticized Twitter for restricting access to its platform by making its API paid. The ramifications of that change are wider than expected — and a quick revenue win could turn into a more important, long-term issue.

Twitter Academic API

Hi, I am here to help my friends ask some questions about applying for an academic API.

Would anyone know how to apply for an academic API after Twitter’s announcement that it will charge fees? What kind of materials are needed to prepare an application in the current situation? And will it cost anything? Thank you!

There’s some advice here: Twitter Developer Access - twarc. But be aware that they’ve not been approving any accounts recently, judging by all the other posts about not hearing back after applying.

Thanks! Do we need to pay fees if we apply for an academic API now?

Currently no, but there was a suggestion of $100 per month, though they never clarified or followed up on that. There is also Enterprise, which is rumoured to cost at least $42,000 per month. But again, nothing was launched or made clear.

Hi all, I had an academic API and have been away for 7 months; now I see it has been removed from my profile. What is the fastest way to regain it? The new application looks scary. Thanks.

You won’t get it. All Academic accounts have been “downgraded” to Elevated and now, we have nothing. Everything is on hold.

Oh, I see. I saw the pricing page. If I want to read ~3,000 tweets per day, how much do I need to pay? Also, is everything being on hold a good thing? Can we be optimistic that they may ease that? I saw the academic application form; although the link does not open, it looks very complicated, like writing a big proposal for them. Has anyone applied for it recently?

For 3,000 tweets per day, i.e. 90,000 tweets per month, you would still need Pro, at USD 5,000 per month… I have no idea about academic access, to be honest with you… but there has been no discussion or announcement about it lately.

Title: DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model

Abstract: Text-to-image (T2I) generative models have attracted significant attention and found extensive applications within and beyond academic research. For example, the Civitai community, a platform for T2I innovation, currently hosts an impressive array of 74,492 distinct models. However, this diversity presents a formidable challenge in selecting the most appropriate model and parameters, a process that typically requires numerous trials. Drawing inspiration from the tool usage research of large language models (LLMs), we introduce DiffAgent, an LLM agent designed to screen the accurate selection in seconds via API calls. DiffAgent leverages a novel two-stage training framework, SFTA, enabling it to accurately align T2I API responses with user input in accordance with human preferences. To train and evaluate DiffAgent's capabilities, we present DABench, a comprehensive dataset encompassing an extensive range of T2I APIs from the community. Our evaluations reveal that DiffAgent not only excels in identifying the appropriate T2I API but also underscores the effectiveness of the SFTA training framework. Codes are available at this https URL .

The problem with making all academic research free

A new funding model for journals could deprive the world of valuable research in the humanities and social sciences.

There has been an earthquake in my corner of academia that will affect who teaches in prestigious universities and what ideas circulate among educated people around the world.

And it all happened because a concept rooted in good intentions — that academic research should be “open access,” free for everyone to read — has started to go too far.

The premise of open-access publishing is simple and attractive. It can cost libraries thousands of dollars a year to subscribe to academic journals, which sometimes means only academics affiliated with wealthy colleges and universities may access that research. But under open-access publishing, nearly anyone with an internet connection can find and read those articles for free. Authors win, because they find more readers. Academics around the world benefit, because they can access the latest scholarship. And the world wins, because scientific and intellectual progress is facilitated by the free exchange of ideas.

By now this model has taken hold in the natural sciences, especially in biology and biomedicine; during the pandemic many publishers removed paywalls from articles about vaccines and treatments. The Biden administration requires federally funded scholarly publications to be made freely available without any delay.

However, there is no such thing as a free academic article. Even with digital distribution, the expenses of running a journal are considerable. These costs include hosting the websites where people submit, peer-review, and edit articles; copyediting; advertising; preserving journal archives; and maintaining continuity as editors come and go.

As a result, unless journals have a source of revenue other than subscription fees, any move toward open access raises the question of who will cover the costs of publication.

One answer is that the money will come from authors themselves or their academic institutions or other backers. This works well enough in the natural sciences, because those researchers are often funded by grants, and some of that money can be set aside to cover a journal’s fees for publishing scientific articles. The Bill and Melinda Gates Foundation demands that all research funded by the foundation, including the underlying data, be published open access.

According to a paper published in Quantitative Science Studies, however, only a small fraction of scholars in the humanities publish their articles on an open-access basis. Unlike biologists and biomedical engineers, humanities scholars such as philosophers and historians do not get grants that can cover the publishing costs.

This means that if open access is to take hold in those fields as well — as many publishers and academics are advocating — the costs will have to be covered by some foundation or other sponsor, by the scholars’ institutions, or even by the scholars themselves. And all these models have serious downsides.

I’m a political philosopher. The earthquake in my field that I mentioned earlier shook one of our most prominent journals: the Journal of Political Philosophy.

Publishing an article in this journal has long made the difference between whether a candidate gets hired, tenured, or promoted at an elite institution of higher education. The high quality has stemmed in large part from the rigorous approach of the founding editor, Robert Goodin.

At the end of 2023, the publisher, Wiley, terminated its contract with Goodin. The reasons were not immediately clear, and over 1,000 academics, including me, signed a petition stating that we would not serve on the editorial board or write or review for the journal until Wiley reinstates Goodin. I recently attended a panel at an American Philosophical Association conference where philosophers voiced their anger and puzzlement about the situation.

One source of the problem appears to be that Wiley now charges the authors of an article or their institutions $3,840 to get published open access in the journal.

The Journal of Political Philosophy is actually hybrid open access, which means it waives the article processing charges for authors who permit their work to appear behind a subscription-only paywall. Nonetheless, Goodin and Anna Stilz, a Princeton professor and Journal of Political Philosophy editorial board member, point out that publishers like Wiley now have a strong incentive to favor open-access articles.

In the old model, in which university libraries subscribed to journals, editors were mainly incentivized to publish first-rate material that would increase subscriptions. In the open-access model, however, now that authors or their universities must cover the costs of processing articles, publishers of humanities journals seem to be incentivized to boost revenue by accepting as many articles as possible. According to Goodin, open access has “been the death knell of quality academic publishing.” The reason that Goodin lost his job, Goodin and Stilz imply, is that Wiley pressured Goodin to accept more articles to increase Wiley’s profits, and he said no. (Wiley representatives say that lines of communication had collapsed with Goodin.)

Early this year, Goodin cofounded a new journal titled simply Political Philosophy. The journal will be published by the Open Library of Humanities, which is subsidized by libraries and institutions around the world. But this version of open-access publishing does not have the financial stability of the old subscription model. Scholars affiliated with the Open Library of Humanities have pointed out that the project has substantial overhead costs, and it relied on a grant from the Andrew W. Mellon Foundation that has already ended. The Open Library of Humanities is an experiment, and I hope that it works, but as of now it publishes only 30 journals, compared with the 1,600 journals that Wiley publishes.

The fact remains that no one has satisfactorily explained how open access could work in the humanities and social sciences.

In his 2023 book “Athena Unbound: Why and How Scholarly Knowledge Should Be Free for All,” UCLA history professor Peter Baldwin attempts an answer. He points to Latin America, where some national governments cover all expenses of academic publishing. But this proposal ignores the fact that the governments of the United States and other nations probably do not want to pay for humanities and social sciences journals.

Baldwin also floats the idea of preprint depositories where academics could share documents on the cloud before they have undergone the (somewhat expensive) process of peer review. But this means that academics would lose the benefits that come from getting double-blind feedback from one’s peers. This idea would reduce the costs of publishing a journal article, but it would turn much academic writing into fancy blogging.

Ultimately, Baldwin’s solution is that authors might “have to participate directly, giving them skin in the game and helping contain costs.” This means academics might ask their employers to pay the article processing charges, ask a journal for the processing fees to be waived, or dig into their own pockets to pay to publish.

And it might mean less gets published overall. The journal Government and Opposition, published by Cambridge University Press, is entirely open access and charges $3,450 for an article to be published. I’d have to apply for a discount or a waiver to publish there. Or I could do what political philosophers in Japan and Bosnia and Herzegovina have told me they do: avoid submitting to open-access journals. Their universities will not cover their article processing charges except maybe in the top journals, and even the reduced fees can run into hundreds of dollars that these professors do not have.

In “Athena Unbound,” Baldwin notes that Harvard subscribes to 10 times as many periodicals as India’s Institute of Science. One can bemoan this fact, but one may also appreciate that Harvard’s largesse spreads enough subscription revenue around to reputable journals to enable academics to avoid paying to publish in them, no matter whether they teach at regional state schools, non-elite private schools, or institutions of higher education in poor countries. For all its flaws, the old model meant that when rich alumni donated to their alma maters, it increased library budgets and thereby made it possible for scholars of poetry and state politics to run and publish in academic journals.

Until we have more evidence that open-access journals in the humanities and social sciences can thrive in the long run, academics need to appreciate the advantages of the subscription model.

This article was updated on March 28 to correct the reference to the paper published in Quantitative Science Studies.

Nicholas Tampio is a professor of political science at Fordham University in New York City.

Globe Ideas

UND designated as Cyber Security Center of Excellence in Research

UND has been designated as a National Center of Academic Excellence in Cyber Research (CAE-R) institution through 2029. This marks the first time the University has received such a designation and comes on the heels of increased cybersecurity research projects on campus.

The CAE-R program is among a number of programs within the National Centers of Academic Excellence in Cybersecurity (NCAE-C). The notice of approval was sent to Prakash Ranganathan, director of the Center for Cyber Security Research, housed within the UND College of Engineering & Mines.

The designation means that UND has joined the national community of Centers of Academic Excellence in Cybersecurity (CAE-C) institutions, which is committed to increasing the number of cybersecurity professionals dedicated to reducing vulnerabilities in the national infrastructure.

UND President Andy Armacost offered his congratulations to those on campus who diligently worked on receiving the CAE-R designation.

“I am exceptionally proud of the hard work that went into this important designation, which recognizes UND’s ability to make a truly national impact in cyber research,” said Armacost. “Congratulations to the School of Electrical Engineering and Computer Science and Dr. Prakash Ranganathan for this signature achievement. UND’s exceptional faculty and academic programs make us the go-to university for anyone studying cybersecurity, artificial intelligence, and data science.”

Both Ranganathan and Brian Tande, dean of the College of Engineering & Mines, offered their thanks to Jamison Jangula, cybersecurity analyst for the College of Engineering & Mines, for his efforts in working to secure the CAE-R designation. Tande expressed his gratitude for receiving the designation and said it will pave the way for future opportunities at UND.

“Achieving this important designation will open up research funding opportunities for our faculty, as well as scholarships and career opportunities for our students,” he said. “We are all grateful to Dr. Ranganathan, Jamison Jangula, and the rest of the team for this significant accomplishment.”

Ranganathan said the CAE-R designation underscores the quality of the UND cybersecurity curriculum as well as its alignment with current industry and governmental standards. He said the designation can boost an institution’s ability to attract grants and funding from both government and private sources interested in advancing cybersecurity research.

In addition, Ranganathan said the increased visibility that comes from the designation has the potential to increase student applications from individuals looking for leading programs in cybersecurity. The designation can facilitate collaborations with other CAE-designated institutions, government agencies and industry partners, providing access to joint projects, shared resources, and expertise.

The designation also carries with it the potential for community engagement and economic development, Ranganathan said. CAE-R institutions frequently engage with their local communities through outreach programs, workshops and seminars to raise awareness about cybersecurity.

The enhanced reputation and capabilities in cybersecurity also can attract companies and investments to the region, contributing to economic development and job creation.

“The CAE-R designation is more than just recognition, it’s a catalyst for comprehensive growth and development in cybersecurity research and education,” Ranganathan said. “It signifies a step toward becoming a hub of excellence in cybersecurity, attracting talent, funding and partnerships that will elevate the institution’s status and impact in the field.”

In June, Ranganathan will attend a welcoming ceremony at an upcoming National Initiative for Cybersecurity Education (NICE) conference. There, he will receive the certificate denoting inclusion in the CAE-R program, meet program staff and engage with members of the cybersecurity community.

The conferring of the CAE-R designation comes shortly after UND received funding to participate with a national laboratory and other institutions and entities to develop software algorithms to protect a portion of the nation’s energy grid.

In furtherance of its cybersecurity efforts, UND has announced calls for papers for the upcoming Institute of Electrical and Electronics Engineers Cyber Awareness and Research Symposium (CARS 2024), taking place Oct. 28-29 on campus. This conference marks the first time the IEEE has approved a national-level cybersecurity symposium in North Dakota.

In receiving the CAE-R designation, Ranganathan offered his thanks and gratitude to Tommy Morris, director of the Center for Cybersecurity Research and Education at the University of Alabama in Huntsville. Morris, a mentor to Ranganathan, offered his generous support in preparing for the application package for the designation.

Written by Adam Kurtz   //  UND Today

COMMENTS

  1. Twitter data for academic research

    Learn the fundamentals of using X data for academic research with tailored get-started guides. Or, take your current use of the API further with tutorials, code samples, and tools. Curated datasets. Free, no-code datasets are intended to make it easier for academics to study topics that are of frequent interest to the research community.

  2. Getting started with the Twitter API v2 for academic research

    twitterdev/getting-started-with-the-twitter-api-v2-for-academic-research (the repository for this course).

  3. Twitter just closed the book on academic research

    Twitter's most affordable API tier, at $100 a month, would only allow third parties to collect 10,000 tweets per month. That's just 0.3 percent of what they previously had free access to in a single ...

  4. An Extensive Guide to collecting tweets from Twitter API v2 for

    At the end of 2020, Twitter introduced a new Twitter API built from the ground up. Twitter API v2 comes with more features and data you can pull and analyze, new endpoints, and a lot of functionalities. With the introduction of that new API, Twitter also introduced a new powerful free product for academics: The Academic Research product track.

  5. Twitter's new API platform now opened to academic researchers

    Twitter today is launching a new product track on its API platform, as part of its ongoing efforts to rebuild the API from the ground up. The track aims to serve the needs of the academic research community ...

  6. Introducing the new Academic Research product track

    Announcements. suhemparack, January 26, 2021, 7:02pm: Today, we launched a product track on our new API tailored to serve the needs of the academic research community doing research with Twitter data. This post provides a technical overview of what's available in this Academic Research product track, and how you can get started using it.

  7. Year in review: Academic Research with the Twitter API v2

    Launched in January 2021, the academic research product track is one of the biggest updates to the Twitter API v2 for the academic research community. It provides qualified academic researchers free access to the full-archive of public Tweets (previously, academics had to use the paid premium API to get Tweets older than 7 days).

  8. Using the Twitter Academic API With R for Social Science Research

    In this How-to Guide, you will learn how to access the Twitter Academic Research Product Track API with R and collect tweet data of different kinds. This API was recently introduced and allows users far more access to Twitter data than ever before. In order to make requests to (get data from) this API, you will be using a dedicated R package ...

  9. Twitter as research data

    New development: Twitter Academic Research API. On August 12, 2020, Twitter released Twitter API V2, including a dedicated Academic Research track. In this version, Twitter rebuilt the foundation of its API services, redesigned the access levels and developer portal, and introduced new product tracks for different use scenarios.

  10. Q&A: What happened to academic research on Twitter?

    Prior to Elon Musk's purchase of X, once called Twitter, the platform was a playground for academic research. Thanks to X's free application programming interface, or API, thousands of papers were written based on its data. That all changed in February when Musk put the API behind a paywall. A new survey reveals that over 100 projects ...

  11. Twitter's API access changes could mark 'end of an era' in academic

    In a Feb. 1 tweet, Twitter announced that it will soon no longer support free access to the social media platform's application programming interface (API), which allows developers and researchers, including those at the University of Washington's Center for an Informed Public and peers at other universities and research centers, to collect and analyze Twitter data.

  12. Where to get Twitter data for academic research

    Access or purchase from a Twitter service provider. Let's explore each of these. 1. Retrieve from the Twitter public API. API is short for "Application Programming Interface" and in this case is a way for software to access the Twitter platform (as opposed to the Twitter website, which is how humans access Twitter).

  13. PDF Reliability of Twitter's Academic API

    Although the Twitter Academic API is relatively new, it has been used in multiple studies. Given the time at which the Academic API was released, it is not surprising that many of these studies cover Covid-19 related topics. Before the Academic API was released, researchers had to use services ...

  14. First steps with the Twitter Academic API :: Blog

    Finally, Twitter released the academic track, a new API endpoint just for researchers! The application is already open, and I am one of the lucky guys with approved access. However, it is relatively new and, therefore, it is not easy to find many tutorials or related materials online. I want to change this and start with a quick-start on loading historic tweets (incl. package suggestion and ...

  15. academictwitteR: an R package to access the Twitter Academic Research

    To access Twitter data, we used the Academic Research developer access and the R package academictwitteR (Barrie & Ho, 2021) to query the API. We used the get_all_tweets function with the ...

  16. Twitter's restrictive API may leave researchers out in the cold

    For reference, under the academic research track, Twitter API previously offered access to 10 million tweets per month and 50 requests per 15 minutes per app.

  17. cjbarrie/academictwitteR: Access the Twitter Academic Research Product

    Package to query the Twitter Academic Research Product Track, providing access to full-archive search and other v2 API endpoints. Functions are written with academic research in mind. They provide flexibility in how the user wishes to store collected data, and encourage regular storage of data to mitigate loss when collecting large volumes of tweets. They also provide workarounds to manage and ...

  18. Twitter Academic API

    Currently no, but there was a suggestion of $100 per month, though they never clarified or followed up on that. There is also Enterprise, which is rumoured to cost at least $42,000 per month. But again, nothing was launched or made clear. science_rock, July 12, 2023, 2:25am: hi all, I had an academic API and have been away for 7 months; now I ...

  24. Twitter API for Academic Research

    Power your academic research with global real-time and historical data. Get more precise, comprehensive, and unbiased data from the public conversation, free of charge. This dedicated access gives you access to all Twitter API v2 endpoints, and the number of Tweets you can collect per month ...
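
Several of the snippets above describe making requests to the Twitter API v2 search endpoints from code. Here is a minimal, hedged Python sketch of how such a request is typically assembled; the endpoint URL and parameter names follow the public Twitter API v2 recent-search documentation, and the bearer token is a placeholder you must supply yourself (via a `BEARER_TOKEN` environment variable in this sketch):

```python
import json
import os
import urllib.parse
import urllib.request

# Recent search endpoint from the public Twitter API v2 documentation.
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"


def build_search_params(query: str, max_results: int = 10) -> dict:
    """Assemble query parameters for a v2 recent-search request.

    `query` uses the v2 search-query syntax (e.g. 'from:TwitterDev -is:retweet');
    `max_results` must be between 10 and 100 for this endpoint.
    """
    return {
        "query": query,
        "max_results": max_results,
        "tweet.fields": "created_at,author_id",
    }


def search_recent(query: str, bearer_token: str) -> dict:
    """Send an authenticated GET request and return the parsed JSON body."""
    url = SEARCH_URL + "?" + urllib.parse.urlencode(build_search_params(query))
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {bearer_token}"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    token = os.environ.get("BEARER_TOKEN")  # export your own token first
    if token:
        print(search_recent("from:TwitterDev -is:retweet", token))
```

Researchers on the Academic Research track would swap the recent-search path for the full-archive path (`/2/tweets/search/all`), and in practice most use a client library, such as academictwitteR in R, rather than raw HTTP, as the snippets above note.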