Putting Data at the Heart of your Organizational Strategy

With the launch of Dimensions Research GPT and Dimensions Research GPT Enterprise, researchers the world over now have access to a solution far more powerful than could have been believed just a few years ago. Simon Linacre takes a look at a new solution that combines the scientific evidence base of Dimensions with the pre-eminent Generative AI from ChatGPT.


For many researchers, the ongoing hype around recent developments in Generative AI (GAI) has left them nonplussed, faced with so many new, unfamiliar solutions to choose from. Added to well-reported questions over hallucinations and responsibly developed AI, the advantages that GAI could offer have been offset by some of these concerns.

In response, Digital Science has developed its first custom GPT solution, which combines powerful data from Dimensions with ChatGPT's advanced AI platform: introducing Dimensions Research GPT and Dimensions Research GPT Enterprise.

Dimensions Research GPT's answers to research queries draw on data from tens of millions of Open Access publications, and access is free to anyone via OpenAI's GPT Store. Dimensions Research GPT Enterprise provides results underpinned by all publications, grants, clinical trials and patents found within Dimensions, and is available to anyone with an organization-wide Dimensions subscription and a ChatGPT Enterprise account. Organizations keen to tailor Dimensions Research GPT Enterprise to better meet the needs of specific use cases are also invited to work with our team of experts to define and implement these.

These innovative new research solutions from Dimensions enable users of ChatGPT to discover more precise answers and generative summaries by grounding the GAI response in scientific data from millions of publications in Dimensions, delivered through ChatGPT's increasingly familiar conversational interface.

These new solutions have been launched to enable researchers – indeed anyone with an interest in scientific research – to find trusted answers to their questions quickly and easily through a combination of ChatGPT’s infrastructure and Dimensions’ well-regarded research specific capabilities. These new innovations accelerate information discovery, and represent the first of many use cases grounded in AI to come from Digital Science in 2024.

How do they work?

Dimensions Research GPT and Dimensions Research GPT Enterprise are based on Dimensions, the world's largest collection of linked research data, and supply answers to queries entered in OpenAI's ChatGPT interface. Users can prompt ChatGPT with natural language questions and see AI-generated responses, with a notification, and references to the source, each time content is based on Dimensions data. These references appear as clickable links that take users directly to the Dimensions platform, where they can see pages with further details on the source records and continue their discovery journey.

Key features of Dimensions Research GPT Enterprise include: 

  • Answers to research queries drawing on publication, clinical trial, patent and grant data
  • Set-up in the client's private environment, available only to the client's end users
  • Notifications, with references and citation details, each time generated content is based on Dimensions data
Sample image of a query being run on Dimensions Research GPT.

What are the benefits to researchers?

The main benefit for users is that they can find scientifically grounded information on research topics of interest with little time and effort, thanks to the combination of ChatGPT's interface and Dimensions' highly regarded research-specific capabilities. This will save researchers significant time while also giving them peace of mind through easy access to source materials. There are also a number of additional benefits for all users in this new innovation:

  • Dimensions AI solutions make ChatGPT research-specific, grounding answers in facts and providing the user with references to the relevant documents
  • They call on millions of publications to provide information specific and relevant to the query, reducing the risk of hallucination in the generative AI answer while providing an easy route to information validation
  • They can help overcome the sheer volume of content available, the time-consuming tasks required in research workflows, and the need for trustworthy AI products.

What’s next with AI and research?

The launch of Dimensions Research GPT and Dimensions Research GPT Enterprise represents Digital Science’s broader commitment to open science and responsible development of AI tools. 

These new products are just the latest developments from Digital Science companies that harness the power of AI. In 2023, Dimensions launched a beta version of its AI Assistant, and ReadCube released a beta of its own AI Assistant the same year. Digital Science finished 2023 by completing its acquisition of AI-based academic language service Writefull. And 2024 is likely to see many more AI developments, with some arriving very soon! Dimensions Research GPT and Dimensions Research GPT Enterprise, alongside all Digital Science's current and future developments with AI, exemplify our commitment to responsible innovation and to bringing powerful research solutions to as large an audience as possible. If you haven't yet tested ChatGPT as part of your research activities, why not give it a go today?

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.


Putting Data at the Heart of your Organizational Strategy

‘Have you done your due diligence?’ These six words induce fear and dread in anyone involved in finance, with the underlying threat that huge peril may be about to engulf you if the necessary homework hasn’t been done. Due diligence in the commercial sphere is a hygiene factor – a basic, if detailed, audit of risk to ensure that all possible outcomes have been assessed so nothing comes out of the woodwork once an investment has been made.

The question, however, is just as important for academic institutions looking to check the data on their research programs: have you done your due diligence on that? If not, then a linked database such as Dimensions can help you.

Strategic Objectives

At a recent panel discussion hosted by Times Higher Education (THE) in partnership with Digital Science on optimizing research strategy, the question of due diligence was framed by looking at the academic research lifecycle and the challenges emanating from the increased amount of data now accessible to universities. More specifically, how universities could extract and utilize verified data from the ever-increasing number of sources they had at their disposal.

Speaking on the panel, Digital Science's Technical Product Solutions Manager Ann Campbell said she believes there are numerous benefits to using new modes of data to overcome problems associated with data overload. "It's important to think holistically, of not only the different systems that are involved here but also the different departments and stakeholders," she said. "It's better to have an overarching data model or a perspective from looking at the research life cycle instead of separate research silos or different silos of data that you find within these systems."

The panel recognized that self-reporting by academics could lead to gaps in the data, while different impact data could also be missed due to a lack of knowledge or understanding on the part of faculty members.

Digital Science seeks to address these problems by adding some power to its Dimensions linked database in the shape of Google BigQuery. By marrying this computing power to the size and scope of Dimensions, academics and research managers are empowered to identify specific data from all stages of the research lifecycle. This allows researchers to seamlessly combine external data with their own internal datasets, giving them the holistic view of research identified by Ann Campbell in the discussion. 

Accessing Dimensions on Google BigQuery.
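To make the workflow above concrete, here is a minimal sketch of composing a BigQuery query against Dimensions data in Python. The table name (`dimensions-ai.data_analytics.publications`), the fields (`year`, `research_orgs`) and the example GRID identifier are assumptions for illustration; check the dataset and schema available under your own Dimensions subscription before running anything.

```python
# Sketch: counting an institution's publications per year from the Dimensions
# dataset on Google BigQuery. Table and field names are assumptions, not a
# confirmed schema.

DIMENSIONS_PUBLICATIONS = "dimensions-ai.data_analytics.publications"  # assumed table


def build_publications_query(grid_id: str, year_from: int, limit: int = 100) -> str:
    """Compose a BigQuery Standard SQL query string.

    `grid_id` is the institution's GRID identifier (a hypothetical example is
    used below); `research_orgs` is assumed to be a repeated string field.
    """
    return f"""
        SELECT year, COUNT(id) AS publications
        FROM `{DIMENSIONS_PUBLICATIONS}`
        WHERE '{grid_id}' IN UNNEST(research_orgs)
          AND year >= {year_from}
        GROUP BY year
        ORDER BY year
        LIMIT {limit}
    """


# Executing it requires the google-cloud-bigquery client and GCP credentials:
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   rows = client.query(build_publications_query("grid.4991.5", 2018)).result()
#   for row in rows:
#       print(row.year, row.publications)
```

Because the query is plain SQL, the same pattern extends to joining Dimensions tables with an institution's own internal BigQuery datasets, which is where the "holistic view" described above comes from.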

Data Savant

The theme of improving higher education institutions' capabilities in data utilization was most vividly described by Ann Campbell in her recent presentation to the Times Higher Education Digital Universities conference in Barcelona. Memorably, she compared universities' use of data to the plot of popular TV drama Game of Thrones. Professors as dragons? Rival departments as warring families? Well, not quite, but what Ann did observe was that there are many competing elements within HEIs – research management, research information, academic culture, the library – and above them sit senior management, who have key questions that can only be answered using data and insights across all of them:

  • Which faculties have a high impact? Should we invest more in them?
  • Which faculties have high potential but are under-resourced?
  • How can we promote our areas of excellence?
  • How can we identify departments with strong links to industry?
  • What real-world research impact can we feed back into our curriculum?
  • Are we mitigating potential reputational risk through openness and transparency? 

Bringing these disparate challenges together requires a narrative, which is another reason the Game of Thrones analogy works so well: for all the moving parts to function, a coherent story is required. This can be how an institution's research culture strategy is working alongside a rise in early-career international collaborations, how an increase in new funding opportunities followed a drive for interdisciplinary collaboration, or how a university's global reputation improved its impact rankings position thanks to increased SDG-related research.

Any good story needs to have the right ingredients, and where Digital Science can really help an institution is to bring together those ingredients from across an organization into viewable and manageable narratives. 

Telling Stories

But the big picture is not the whole story, of course. There are other, smaller narratives swirling through HEIs at any given time that reflect the different specialisms, hot topics or focus areas of the university. Three of the focus areas most commonly found in modern universities are research integrity, industry partnerships and research impact, and these were discussed recently at another collaborative webinar between THE and Digital Science: Utilising data to deliver research integrity, industry partnerships and impact.

This panel discussion was a little more granular, and teased out some specific challenges for institutions when it comes to data utilization. For research integrity, certain publication data can be used as 'trust markers' based around authorship, reproducibility and transparency. Representing Digital Science, Technical Product Solutions Manager Kathryn Weber-Boer went through the trust markers that form the basis of the Dimensions Research Integrity solution for universities.

But why are these trust markers important? The panel noted that beyond universities themselves, both funders and publishers are increasingly interested in research integrity and the provenance of research emanating from universities. As such, products like Dimensions Research Integrity are forming a key part of the data management arsenal that universities need in the modern research funding environment.

In addition, utilization and scrutiny of such data can help move the dial in other important areas, such as changing research culture and integrity. Stakeholders want to trust in the research that’s being done, know it can be reproduced, and also see there is a level of transparency. All of these factors then influence the promotion and implementation of more open research activities.

Another important aspect of research integrity and data utilization is not just knowing where and how data is being shared, but also whether it is being shared as recorded, and where it is actually located. As pointed out in the discussion, Dimensions is a 'dataset of datasets' and allows cross-referencing of these pieces of information to check whether research integrity data points are aligned.

Dimensions Research Integrity trust markers.
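As a rough illustration of the trust-marker idea, the sketch below summarises how often each marker appears across a set of publication records. The marker names and record fields here are hypothetical examples, not Dimensions' actual schema.

```python
# Hypothetical sketch: fraction of publications carrying each 'trust marker'.
# Marker names and record fields are illustrative only.

from typing import Dict, List

TRUST_MARKERS = [
    "data_availability_statement",
    "code_availability_statement",
    "author_contribution_statement",
    "funding_statement",
]


def trust_marker_coverage(records: List[Dict[str, bool]]) -> Dict[str, float]:
    """Return, for each marker, the fraction of records that declare it."""
    total = len(records)
    if total == 0:
        return {m: 0.0 for m in TRUST_MARKERS}
    return {
        m: sum(1 for r in records if r.get(m, False)) / total
        for m in TRUST_MARKERS
    }


# Example: two of three papers state where their data is available.
papers = [
    {"data_availability_statement": True, "funding_statement": True},
    {"data_availability_statement": True},
    {"funding_statement": True},
]
coverage = trust_marker_coverage(papers)
```

A real system would extract these markers from full-text analysis at scale; the point of the sketch is simply that coverage statistics like these are what let an institution benchmark its transparency practices.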

Positive Outlook

Discussions around research integrity and data management can often be gloomy affairs, but there is some cause for optimism now that increasing numbers of products are on the market to help HEIs meet their goals and objectives in these spheres of activity. Effective data utilization will undoubtedly be one of the critical success factors for universities in the future, and not just for managing issues like research integrity or reputation. With the lightning-fast development and adoption of Generative AI in the research space, and increasing interest in issues like research security and international collaboration, data utilization – and who universities partner with to optimize it – has never been higher up the agenda.

You can view the webinars here on utilizing new modes of data and delivering research integrity.

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.


The State of Open Data 2023: unparalleled insights

Digital Science, Figshare and Springer Nature are proud to publish The State of Open Data 2023. Now in its eighth year, the survey is the longest-running longitudinal study into researchers’ attitudes towards open data and data sharing. 

The 2023 survey received over 6,000 responses, and the newly published report takes an in-depth look at them, purposefully adopting a much more analytical approach than in previous years and unveiling unprecedented insights.

Five key takeaways from The State of Open Data 2023

Support is not making its way to those who need it

Over three-quarters of respondents had never received any support with making their data openly available. 

One size does not fit all

Variations in responses from different subject expertise and geographies highlight a need for a more nuanced approach to research data management support globally. 

Challenging stereotypes

Are later career academics really opposed to progress? The results of the 2023 survey indicate that career stage is not a significant factor in open data awareness or support levels. 

Credit is an ongoing issue

For eight years running, our survey has revealed a recurring concern among researchers: the perception that they don’t receive sufficient recognition for openly sharing their data. 

AI awareness hasn’t translated to action

For the first time, this year we asked survey respondents to indicate if they were using ChatGPT or similar AI tools for data collection, processing and metadata creation. 

Diving deeper into the data than ever before 

This year, we dive deeper into the data than ever before and look at the differing opinions of our respondents when we compare their regions, career stages, job titles and subject areas of expertise. 

Figshare founder and CEO Mark Hahnel said of this approach, “It feels like the right time to do this. Whilst a global funder push towards FAIR data has researchers globally moving in the same direction, it is important to recognize the subtleties in researchers’ behaviors based on variables in who they are and where they are.”

This year features extensive analysis of the survey results data and provides an in-depth and unique view of attitudes towards open data. 

This analysis provided some key insights; notably that researchers at all stages of their careers share similar enthusiasm for open data, are motivated by shared incentives and struggle to overcome the same obstacles. 

These results are encouraging and challenge the stereotype that more experienced academics are opposed to progress in the space and that those driving progress are primarily early career researchers. 

We were also able to look into the nuanced differences in responses from different regions and subject areas of expertise, illuminating areas for targeted outreach and support. These demographic variations also led us to recommend that the academic research community seek to understand the 'state of open data' in their specific setting.

Benchmarking attitudes towards the application of AI 

In light of the intense focus on artificial intelligence (AI) and its application this year, for the first time, we decided to ask our survey respondents if they were using any AI tools for data collection, processing or metadata collection. 

The most common answer to all three questions was, "I'm aware of these tools but haven't considered it."

State of Open Data: AI awareness hasn't translated to action

Although the results don’t yet tell a story, we’ve taken an important step in benchmarking how researchers are currently using AI in the data-sharing process. Within our report, we hear from Niki Scaplehorn and Henning Schoenenberger from Springer Nature in their piece ‘AI and open science: the start of a beautiful relationship?’ as they share some thoughts on what the future could hold for research data and open science more generally in the age of AI. 

We look forward to evaluating the longitudinal response trends for this survey question in years to come, as the fast-moving space of AI and its applications across the research lifecycle accelerates further.

Recommendations for the road ahead 

In our report, we have shared some recommendations that take the findings of our more analytical investigation and use them to inform action points for various stakeholders in the community. This is an exciting step for The State of Open Data, as we more explicitly encourage real-world action from the academic community when it comes to data-sharing and open data. 

Understanding the state of open data in our specific settings: Owing to the variations in responses from different geographies and areas of expertise, we’re encouraging the academic community to investigate the ‘state of open data’ in their specific research setting, to inform tailored and targeted support. 

Credit where credit’s due: For eight years running, our respondents have repeatedly reported that they don’t feel researchers get sufficient credit for sharing their data. Our recommendation asks stakeholders to consider innovative approaches that encourage data re-use and ultimately greater recognition. 

Help and guidance for the greater good: The same technical challenges and concerns that pose a barrier to data sharing transcend different software and disciplines. Our recommendation suggests that support should move beyond specific platform help and instead tackle the bigger questions of open data and open science practices. 

Making outreach inclusive: Through our investigation of the 2023 survey results, we saw that the stage of an academic’s career was not a significant factor in determining attitudes towards open data and we saw consensus between early career researchers and more established academics. Those looking to engage research communities should be inclusive and deliberate with their outreach, engaging those who have not yet published their first paper as well as those who first published over 30 years ago. 

What’s next for The State of Open Data?  

The State of Open Data 2023 report is a deliberate change from our usual format, which has featured contributed pieces authored by open data stakeholders around the globe. This year we are beginning with the publication of this first report, which looks at the survey data through a closer lens than ever before. We've compared different subsets of the data in ways we haven't before, in an effort to provide more insights and actionable data for the community.

In early 2024, we’ll be releasing a follow-up report, with a selection of contributed pieces from global stakeholders, reflecting on the survey results in their context. Using the results showcased in this first report as a basis, it’s our hope that this follow-up report will apply different contexts to these initial findings and bring new insights and ideas. 

In the meantime, we’re hosting two webinars to celebrate the launch of our first report and share the key takeaways. In our first session, The State of Open Data 2023: The Headlines, we’ll be sharing a TL;DR summary of the full report; our second session, The State of Open Data 2023: In Conversation, will convene a panel of global experts to discuss the survey results. 

You can sign up for both sessions here: 

The State of Open Data 2023: The Headlines

The State of Open Data 2023: In Conversation

Laura Day

About the Author

Laura Day, Marketing Director | Figshare

Laura is the Marketing Director at Figshare, part of Digital Science. Before joining Digital Science, Laura worked in scholarly publishing, focusing on open access journal marketing and transformative agreements. In her current role, Laura focuses on marketing campaigns and outreach for Figshare. She is passionate about open science and is excited by the potential it has to advance knowledge sharing by enabling academic research communities to reach new and diverse audiences.


New path opens up support for humanities in OA publishing – Digital Science

Can a new Open Access collection help overcome the challenges facing monographs? In the latest in our OA books series to coincide with OA Week, guest author Sarah McKee explains the case for Path to Open.

Path to Open, a new open access pilot for book publications in the humanities and social sciences, has launched its collection this month, with 100 titles covering 36 disciplines from more than 30 university presses. This represents a major and much-needed step forward for Open Access publishing in general, and for the humanities specifically.

The pilot began in January as a collaboration among university presses, libraries, and scholars. It has emerged at a moment when students, administrators, and political leaders in the United States openly doubt the value and relevance of the humanities.1 Their questions stem at least in part from a widespread misunderstanding of the term “humanities”, the disciplines it includes, and the inquiries posed by its scholars.

Such misunderstandings are perhaps not surprising. Scholarly books, often referred to as monographs, have served for decades as the primary mode for sharing research findings in the humanities but are currently distributed in ways that privilege a narrow audience.2

University presses – long-time champions and producers of monographs – have lost crucial institutional support, leaving many in difficult financial circumstances. The resulting high prices for monographs often exclude scholars, students, and others without affiliation at well-funded research libraries, and the problems multiply for those outside the established book distribution networks of North America and Western Europe.

Compared with STEM disciplines, the humanities receive little public funding for research and publication, making the move to open access much more challenging.

A commitment to finding new ways of sharing monographs drives the development of Path to Open. As Charles Watkinson and Melissa Pitts have noted, academic stakeholders “have long seen the value in investing significant resources to sustain science infrastructures that contribute to a common good. It is essential to their mission that they collaborate and invest with that same care in the crucial infrastructure for humanities research embodied by the network of university presses”.

Path to Open seeks to create an infrastructure that allows more publishers – especially small and mid-sized university presses – to experiment with open access distribution while also boosting the circulation of research from a community of diverse humanities scholars. The initiative is distinctive among open access models because, as John Sherer explains, it proposes a “compromise between the legacy model of university press publishing and a fully funded OA model”.

“A commitment to finding new ways of sharing monographs drives the development of Path to Open.”

Sarah McKee

Path to Open operates as a library subscription – administered exclusively by JSTOR – that guarantees payments of at least US$5,000 per title to participating publishers, to help offset potential losses in digital sales. With the launch of the online collection this month, presses also have the option to sell print editions of all books, as well as direct-to-consumer e-books.

A sliding scale for subscription costs provides more equitable access to libraries of varying sizes and budgets, and more than 60 libraries have joined to date, including members of the Big Ten Academic Alliance. The initial 100 titles transition to full open access by 2026, and new titles will be added in each of the following three pilot years to reach an expected total of 1,000 open access books by 2029.

The model aims to reduce financial risk for presses while also acknowledging lingering hesitation about open access publication within the humanities community. As John Sherer finds, many authors fear that “an OA monograph would be viewed less favorably than a traditional print monograph would in the tenure and promotion review process”.

Monographs take years to produce, and they function quite differently from journal articles in the scholarly ecosystem. Many of these books maintain their relevance for years, even decades, past the original publication date. Over the life of the pilot, JSTOR will track various usage metrics for all titles in the collection both before and after the transition to open access.

The partnership with JSTOR provides a unique opportunity to gather data in a controlled environment, with hopes of gaining much-needed insights into the behavior of readers, the effect of open access on print sales, and the timing of peak impact for monographs in various disciplines. Understanding such issues is key to strengthening the vital infrastructure that supports humanities research and to ensuring its place alongside open STEM scholarship.

The American Council of Learned Societies (ACLS) has committed to providing a robust and transparent structure for community engagement with Path to Open. In consultation with the Educopia Institute, ACLS is developing a forum to encourage dialogue among key stakeholders, including publishers, libraries, scholars, and academic administrators. Inviting scholars into these conversations is critical for a shared understanding of how open access affects humanistic disciplines, institutions of higher education, students, and individual academic careers.

Our hope at ACLS is that an inclusive dialogue about Path to Open will generate greater understanding of the stakes for various constituents within the humanities community, and guide decisions for the future of scholarly publishing in sustainable and equitable ways.


1 Nathan Heller, “The End of the English Major,” The New Yorker, February 27, 2023.

2 See also Michael A. Elliott, “The Future of the Monograph in the Digital Era,” The Journal of Electronic Publishing 18, no. 4 (fall 2015).


Will researchers try new Threads?

Today sees the launch of Threads, the new social media platform from Facebook and Instagram parent company Meta. The news has been greeted with much anticipation – and not a little humour – from users and the latest clash between Twitter’s Elon Musk and Threads’ Mark Zuckerberg. But will the new channel pack a punch for academics who might use it in their research? Social media and research communications expert Andy Tattersall provides the tale of the tape.

Meta’s new Threads social media app. Stock image.

How will Threads square up to Twitter in the social media arena? Do academics need another platform to disseminate their research?

When Facebook's parent company Meta announced it was launching its own microblogging rival to Twitter, it felt inevitable, but it also sent a shudder down the spine of many people living in my part of the world. Whilst Threads might seem like a suitable, if cliched, name for the platform, given Twitter's use of threaded updates, it also conjures up dystopian images. Firstly, as those of a certain age will remember, Threads was a British-Australian, BBC-produced TV film that depicted a fictional nuclear war, at a time when this felt like a real possibility. It was set in Sheffield, near where I grew up and currently work. Whilst the newest social media kid on the block is unlikely to result in that kind of devastation, it does appear to be spurred on by an increasingly public spat between two tech giants, Elon Musk and Mark Zuckerberg. And at first glance on launch day, Threads appears remarkably similar to its established rival in terms of functionality, although there is no Direct Message function. In addition, it does not have a desktop version, which to some might seem progressive, but to professionals it implies the whole thing has been rushed.

What lies ahead for Threads?

The latest addition to the researcher's communications toolkit is unlikely to gain large numbers of followers from academia overnight. When Musk took over Twitter last year, many in the academic community saw it as the final straw due to the platform's increasingly toxic environment. Mastodon was one of the winners of the exodus, with an estimated 200,000 new users in those first few days; the number jumped to over two million new subscribers in the following weeks. I was one of them, and like many I reminisced, as Mastodon felt very much like Twitter a decade earlier: fresher, friendlier and more focused. Yet it did not reach critical mass, due to the siloed nature of Mastodon's servers, known as Instances. Despite the Twitter backlash, it was much harder for organisations to make the switch and leave behind carefully constructed audiences. Also, Twitter was widely acknowledged as the number one communications tool for academics, largely due to its ease of use (it is easy to use, harder to use well), but also because the institutions, media, funders and public were all on there. In the initial weeks after Musk's takeover I found myself juggling both platforms, initially using cross-posting tools until Musk intervened to turn off access to the helpful independent platforms that allowed that kind of functionality. Twitter's changes in policy and direction also led me to use LinkedIn a bit more, where I have seen increased activity across my network, whilst also endeavouring to engage more in specialist groups.

Where Threads might be different

Twitter is a tool in isolation: it has no associated social media platforms to lean on for leverage. Threads is different, in that it will rely heavily on its social media siblings Facebook and Instagram to help with the launch. Their combined user base far outstrips that of Twitter; the question will be whether fans of those two platforms adopt it, and how well the three work as a suite of tools. For Threads to be a useful academic tool it needs the public, organisations, publishers and funders on board. Where it is likely to differ from Twitter is in how openly it is controlled by its owners. Twitter is seen by many as Musk's plaything, which he uses to flirt with conspiracy and controversy; Facebook, whilst also collectively guilty of various internet misdemeanours, does not have a large personality publicly shaping the platform on the fly. Having a major tech company behind you is no guarantee that your new platform will take off; one only has to look at Google's various attempts and subsequent failures in its forays into social media. On a personal level, as someone who had given up Instagram, it was annoying that I had to revive my Instagram credentials to sign up for a Threads account. This in itself may be a major barrier to many new users, especially as you are stuck with your Instagram account name by default. This is problematic if you have a personal identity (where you use a fictitious name) and want your academic Threads profile to carry your real name. As an aside, it could mean Instagram ultimately gains millions of new users as a by-product, though whether they engage is another matter. The launch has also been delayed in the EU, which hardly helps connect academics together.

What does this mean for academia?

For those academics communicating their research, it means another platform to consider. This in itself is problematic: with too much choice, the easiest option is to ignore them all or stick with what you know. Communicating one’s research is not only a good thing to do, it is increasingly regarded as an important part of the research lifecycle. It can help increase citations, form collaborations, generate impact and project your work to those who may not be aware of it but would find it beneficial. The demands on academics’ time and attention mean there is little or no room to explore new platforms. Not only is there a plethora of general and specialist social media platforms, but there are also other mediums to consider: blogging, podcasts, videos, animations and discussion forums all provide valuable ways to reach different audiences. Academics do not have the time to critically appraise and learn this growing suite of technologies; it is something I try to do, and it is far from easy. Hence so many researchers and aligned professionals either pay to learn which tools to use properly, or outsource the work altogether to external consultants. 

Facebook is the number one social media platform, but one that the academic community has never truly taken advantage of. To a large extent this is a shame, as it is global, has a decent demographic spread between young and middle-aged adults, and has good functionality, especially in relation to groups and pages. It is used by academics and groups, in particular for reaching communities or via targeted adverts. On an individual level, however, it has struggled to strike a balance between professional and personal identities; Twitter makes it much easier to navigate between multiple accounts and networks. So if academics can look beyond that and see Threads as a whole new platform, it may prove useful. No doubt whatever happens, it will highlight even more tensions between Musk and Zuckerberg; how much of it is real or for show, nobody knows. Nor can anyone predict what Musk will do as a result. Some have long predicted Twitter’s demise, and there is a possibility that one of the contenders could knock the other out, in the ring or on the web.  

About the Author

Andy Tattersall

Andy Tattersall is an Information Specialist at The School of Health and Related Research (ScHARR) and writes, teaches and gives talks about digital academia, technology, scholarly communications, open research, web and information science, apps, altmetrics, and social media. In particular, their applications for research, teaching, learning, knowledge management and collaboration. Andy received a Senate Award from The University of Sheffield for his pioneering work on MOOCs in 2013 and is a Senior Fellow of the Higher Education Academy. He is also Chair for the Chartered Institute of Library and Information Professionals – Multi Media and Information Technology Committee. Andy was listed as one of Jisc’s Top Ten Social Media Superstars for 2017 in Higher Education. He has edited a book on altmetrics for Facet Publishing which is aimed at researchers and librarians.


Discovering ‘galaxies’ of research within universities – Digital Science

University research data looks like something from outer space – let’s zoom in and see what’s there

Research institutions need the right tools to discover their strengths and weaknesses, to plan for the future, and to make a greater impact for the communities of tomorrow. In this post, Digital Science’s VP Research Futures, Simon Porter, uses a digital telescope to view the ‘galaxies’ of research within our best and brightest institutions – and explains why that matters.

When we see new images of our universe through the lens of the James Webb Space Telescope (JWST), we’re left in awe of the unique perspective we’ve witnessed, and something about our own universe – even the perception of our own existence – has altered as a result.

What we see are entirely new galaxies, and worlds of possibility.

That’s also what I see when I look at the research data spanning our many universities and research institutions globally. Each of these institutions represents its own unique universe of research.

For me, Dimensions – the world’s largest linked research database and data infrastructure provider – is like the JWST of research data. It enables us to see data in ways we hadn’t thought possible, and it opens up new worlds of possibility, especially for research strategy and decision-making.

What does a university look like?

We began our What does a University Look Like? project in 2019 and it’s rapidly evolved thanks to developments in 3D visualization technology, the expansion of data availability, and the combination of data sources, such as Dimensions and Google BigQuery.

By modelling data from Dimensions into a 3D visualization tool called Blender, we’ve been able to see right into the detail of university research data and capture it in a way that is analogous to the process of taking raw data from JWST and processing it to make a high-quality snapshot of space from afar.

To do this, we’ve used the 2020 Field of Research (FoR) codes, which were developed for research classification in Australia and New Zealand, and assigned a color to each code (see Figure 2). Each researcher is depicted as a sphere, colored by the 2-digit FoR code they’re most associated with and sized by the number of publications that researcher has produced.

We then apply algorithms developed by CWTS at Leiden University to determine research clusters – co-authorship networks – within a specific university. These clusters are then layered on top of each other by discipline, with Clinical Science clusters at the bottom, then moving up through Health Sciences, then Science and Engineering, and Linguistics at the top. This is the result.
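The pipeline above can be sketched in a few lines of Python. This is a toy illustration with fabricated researcher data, and it substitutes a simple connected-components pass for the CWTS Leiden clustering algorithm (the real analysis also uses Blender for rendering, which is omitted here):

```python
from collections import defaultdict

# Fabricated sample data: researcher -> (2-digit FoR code, publication count)
researchers = {
    "r1": ("32", 45), "r2": ("32", 12), "r3": ("31", 30),
    "r4": ("34", 8),  "r5": ("40", 22), "r6": ("31", 5),
}
# Fabricated co-authorship links between researchers
coauthorships = [("r1", "r2"), ("r1", "r3"), ("r3", "r6"), ("r4", "r5")]

# Each researcher becomes a sphere: color keyed by FoR code, size by output
FOR_COLORS = {"31": "cream", "32": "red", "34": "light blue", "40": "gray"}
spheres = {
    rid: {"color": FOR_COLORS[code], "size": n_pubs}
    for rid, (code, n_pubs) in researchers.items()
}

# Build the co-authorship network
adj = defaultdict(set)
for a, b in coauthorships:
    adj[a].add(b)
    adj[b].add(a)

def clusters(nodes, adj):
    """Connected components of the network (a stand-in for Leiden clustering)."""
    seen, found = set(), []
    for start in nodes:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        found.append(component)
    return found

print(clusters(researchers, adj))  # two clusters in this toy network
```

In the real visualization, each cluster would then be positioned in a layer according to its dominant discipline before rendering.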

Figure 1: A 3D visualization of research collaborations within the University of Melbourne. Source: Dimensions/Digital Science. (See also: Figure 2 – color code.)

In Figure 1, we see a 3D visualization of the University of Melbourne, a leading Group of Eight (Go8) research university in Australia. Within this image are 234 research clusters, comprising connections between more than 18,000 co-authored researchers affiliated to the University of Melbourne from 2017-2022.

Figure 2: Network diagram color key, with colors assigned to each of the two-digit FoR codes. 
Figure 3: A zoomed-in portion of the University of Melbourne network showing overlapping clusters of Clinical Sciences (red), Biological Sciences (cream), Chemical Sciences (light blue), and Engineering (gray). Researchers from different disciplines can be seen collaborating within each cluster. Source: Dimensions/Digital Science.

The high-quality nature of this visualization means we can zoom right into the level of the individual sphere (i.e., researcher), or pull back to see the bigger picture of the research environment they’re connected to or surrounded by. We can see every research field and every individual or team they’re collaborating with at the university.

If the university has a biological sciences cluster, we can see whether there’s a mathematician interacting with that cluster, clinical scientists, engineers, or someone from the humanities or social sciences. It opens up a new level of understanding about the research landscape of an institution and its individuals.

On our Figshare, you can also watch a video that takes you through the various research clusters found at the University of Melbourne. You can also follow the “What does a university look like?” Figshare project here.

At Digital Science, we’ve created six of these visualizations – five universities from Australia and one from New Zealand – to help demonstrate Dimensions’ unparalleled capabilities for analyzing research data. While many institutions have similarities, some have completely different research collaboration structures (see Figure 4).

To see a brief video where I walk through all six of the visualizations, visit the Digital Science YouTube.

Looks great – but why does it matter?

These 3D visualizations aren’t just about producing a pretty picture; they’re an elegant and useful way of representing the richness of research data contained about each institution in Dimensions. This is particularly true for university administrations where the ability to measure and promote internal institutional collaboration is just as important as measuring international collaboration.  

To illustrate this point, consider the differences between the collaboration structures of the Australian National University (see Figure 4) and the University of Melbourne. Beyond the immediate differences of network size and discipline focus (the University of Melbourne is larger, and has a much larger medical and health sciences footprint), the two universities have very different collaboration shapes, with disciplines more distinctly separated in the ANU graph. That two prestigious research institutions can have such different shapes suggests there are different external forces at play influencing the shape of collaboration.

Figure 4: A 3D visualization of research collaborations within the Australian National University (ANU). Source: Dimensions/Digital Science.

Figure 4 represents the Australian National University (ANU), with more than 5,600 co-authored researchers from 2017-2022 and 75 research clusters identified in the data. 

Two factors that might significantly contribute to the different shape of ANU are its funding model and physical campus layout. ANU’s funding model is unique within Australian higher education, having been endowed with the National Institutes Grant, providing secure and reliable funding for long-term pure and applied research. A key focus of the grant is maintaining and enhancing distinctive concentrations of excellence in research and education, particularly in areas of national importance to Australia. This concentration of excellence is perhaps reflected in the relative discipline concentration within the visualisation. ANU also has a relatively spread-out campus, at roughly three times the size of the University of Melbourne’s Parkville campus, making the physical collaboration distance between disciplines larger.

Identifying how factors such as campus size and funding models influence collaboration structures provides key insights for universities, governments and funders. The relative ease of creating these models from Dimensions data opens up the possibility of collaboration benchmarks that can be correlated with other external factors. These insights can in turn help shape interventions that maximise local collaboration, in line with the culture of the institution. As with stargazing, the more you look into the past, the better you can see the future. 

Note: Simon Porter first shared these visualizations at the Digital Science Showcase in Melbourne, Australia (28 February to 2 March 2023).

About Dimensions

Part of Digital Science, Dimensions is the largest linked research database and data infrastructure provider, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. www.dimensions.ai 


TOME sheds light on sustainable open access book publishing – Digital Science

A five-year open access publishing pilot has come to an end, offering key insights into a future of sustainable open access publishing for monographs.

In December of 2022, Emory University in Atlanta hosted the fifth and final stakeholders meeting for TOME (Toward an Open Monograph Ecosystem).

TOME launched in 2017 as a five-year pilot project of the Association of American Universities (AAU), Association of Research Libraries (ARL), and Association of University Presses (AUPresses). The goal of the pilot was to explore a new model for sustainable monograph publishing, one in which participating universities commit to providing baseline grants of $15,000 to support the publication of monographs by their faculty, while participating university presses commit to producing digital open access editions of TOME volumes, openly licensing them under Creative Commons licenses, and depositing the files in selected open repositories.

The December meeting gave stakeholders (publishers, librarians, authors, and representatives from a number of societies and foundations) the opportunity to gather—both virtually and in person—and assess the outcomes of the initiative while also deliberating on next steps. In this post I briefly discuss one discrete piece of the assessment: What did we learn from the pilot about eBook usage and the impact of the OA edition on print sales?

Over the course of the pilot, more than 130 scholarly monographs were published in OA editions with funding from the 20 participating TOME institutions. Given the long lead time associated with monograph publishing, most of the books (over 70%) were released in the final two years of the pilot, which means that any usage data collected by the publishers would be preliminary at best. The initial analysis therefore focused on the first 25 books, which were published between May 2018 and September 2019. Prior to the December meeting, the publishers of these 25 books were asked to collect usage data from each of the platforms hosting the OA editions. In addition, they provided print sales figures, both for the TOME editions and for comparable titles on their lists. The resulting data were compiled into a spreadsheet for analysis. 

Not surprisingly, the main challenge to analysis of these data was the apples-to-apples problem. Some repositories and platforms collect downloads while others track views only. Some base their stats on single chapters; others on the entire book. Meanwhile, publishers do not all place their OA editions on the same platforms. As a result, the spreadsheet ended up looking a bit like a checkerboard with pieces on some squares but not others. For instance, here’s how a small portion of the spreadsheet looked when the data were filled in:
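The checkerboard problem can be made concrete with a small sketch. The platform names and figures below are fabricated, and collapsing downloads and views into a single per-title access count is a simplifying assumption for illustration, not necessarily how the TOME analysis combined its figures:

```python
from collections import Counter

# Fabricated usage records: metric and granularity vary by platform, and
# not every title appears on every platform (the "checkerboard")
records = [
    {"title": "Book A", "platform": "Platform X", "metric": "downloads", "count": 1200},
    {"title": "Book A", "platform": "Platform Y", "metric": "views", "count": 3400},
    {"title": "Book B", "platform": "Platform X", "metric": "downloads", "count": 800},
    # Book B is absent from Platform Y: an empty square on the board
]

# Collapse downloads and views into one combined "accesses" figure per title
accesses = Counter()
for rec in records:
    accesses[rec["title"]] += rec["count"]

total = sum(accesses.values())
average = total / len(accesses)
print(dict(accesses), total, average)
```

Any such aggregation buys comparability at the cost of precision: a chapter view and a whole-book download end up counting the same, which is why per-book averages across initiatives should be read loosely.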

Figure 1: Sample spreadsheet of downloads/views.

“TOME’s usage stats stand out even more when seen alongside the sales figures for the print editions of the same titles.”

Peter Potter

Still, when all the data were collected, one thing was clear: the OA editions have been heavily accessed online. By July 2022, the first 25 TOME books tallied nearly 195,000 downloads and views. The average per book was 7,754.1

These findings are in line with those of other OA book initiatives. In November 2022 MIT Press reported that the 50 books published OA in 2022 through its Direct to Open program were downloaded over 176,000 times.2 This works out to roughly 3,520 per book. Likewise, the University of Michigan Press reported in January 2023 that the 40 Fund to Mission books released OA in 2022 were downloaded over 149,000 times up to the end of December, reaching an average of 3,826 per book.3 While the per book numbers for both D2O and Michigan are lower than that of TOME, the TOME books accumulated their stats over a longer period of time.

TOME’s usage stats stand out even more when seen alongside the sales figures for the print editions of the same titles. As can be seen in this bar chart, the average number of downloads/views per book (7,754) is significantly higher than the average unit sales per book (590). 

Figure 2: TOME usage/sales (first 25 books).

We also considered one of the biggest questions that publishers continue to ask about OA books: How does the OA edition affect sales of the print edition? With this question in mind, publishers provided not just the sales figures for TOME books but sales figures for comparable titles on their list. (Each publisher was left to decide what it deemed a “comparable” book.)  As this chart shows, the print editions of TOME books actually outsold their comps. 

Figure 3: Print sales: TOME vs. Comps (first 25 books).

“The print editions of TOME books actually outsold their comps.”

Peter Potter

These findings should be taken with a grain of salt. As several publishers pointed out, identifying comps for any single title is mostly guesswork. Furthermore, the sample size (25) is too small to warrant drawing any firm conclusions. For instance, most of the 25 TOME titles had print sales between 300 and 500 copies. Only in four cases did sales exceed 1,000 copies, and if these four titles are excluded from the sample the average drops to a number more consistent with the comps. Understandably, therefore, most presses were reluctant just yet to draw any conclusions about OA’s impact on sales.4

Of course, we know that the impact of scholarly books goes well beyond downloads, views, and sales figures. A future post will look at the Altmetric data for TOME books to see what they tell us about alternative measures of impact. Meanwhile, a final report on TOME, including an in-depth examination of attitudes and motivations of the stakeholder groups, is due to be released in the coming weeks.

1 The median was 5,243, with a minimum of 800 and a maximum of 27,470. 
2 https://mitpress.mit.edu/mit-press-direct-to-open-books-downloaded-more-than-176312-in-ten-months/
3 https://ebc.press.umich.edu/stories/2023-02-01-so-how-did-they-do-in-2022/. These figures filter out a digital project with very high usage, which was considered an outlier.
4 A larger study of OA impact on sales, sponsored by the NEH, is forthcoming from AUPresses. https://aupresses.org/news/neh-grant-to-study-open-access-impact/


White House OSTP public access recommendations: Maturing your institutional Open Access strategy – Digital Science

While the global picture of Open Access remains something of a patchwork (see our recent blog post The Changing Landscape of Open Access Compliance), trends are nevertheless moving in broadly the same direction, with the past decade seeing a move globally from 70% of all publishing being closed access to 54% being open access.

The White House OSTP’s new memo (aka the Nelson Memo) will see this trend advance rapidly in the United States, stipulating that federally-funded publications and associated datasets should be made publicly available without embargo.

In this blog post, Symplectic’s Kate Byrne and Figshare’s Andrew Mckenna-Foster start to unpack what the Nelson Memo means, along with some of the impacts, considerations and challenges that research institutions and librarians will need to consider in the coming months.

Demystifying the Nelson Memo’s recommendations

The focus of the memo is upon ensuring free, immediate, and equitable access to federally funded research. 

The first clause of the memo is focused on working with the funders to ensure that they have policies in place to provide embargo-free, public access to research. 

The second clause encourages the development of transparent procedures to ensure scientific and research integrity is maintained in public access policies. This is a complex and interesting space, which goes beyond the remit of what we would perhaps traditionally think of as ‘Open Access’ to incorporate elements such as transparency of data, conflicts of interest, funding, and reproducibility (the latter of which is of particular interest to our sister company Ripeta, who are dedicated to building trust in science by benchmarking reproducibility in research).  

The third clause recommends that federal agencies coordinate with the OSTP in order to ensure equitable delivery of federally-funded research results in data. While the first clause mentions making supporting data available alongside publications, this clause takes a broader stance toward sharing results. 

What does this mean for institutions and faculty?

The Nelson memo introduces a clear set of challenges for research institutions, research managers, and librarians, who now need to consider how to put in place internal workflows and guidance that will enable faculty to easily identify eligible research and make it openly available, how to support multiple pathways to open access, and how to best engage and incentivize researchers and faculty. 

However, the OSTP has made very clear that this is not in fact a mandate, but rather a non-binding set of recommendations. While this relieves some of the immediate pressure and panic around getting systems and processes in place, the memo clearly signals the direction of travel that has been communicated to federal funders. 

Funders will look at the Nelson Memo when reviewing their own policies, and seek alignment when setting their own policy requirements that drive action for faculty members across the US. So while the memo does not in itself mandate compliance for institutions, universities, and research organizations, it will have a direct impact on the activities faculty are being asked to complete – increasing the need for institutions to offer faculty services and support to help them easily comply with their funders’ requirements.

How have funders responded so far? 

We are already seeing clear indications that funders are embracing the recommendations and preparing next steps. Rapidly after the announcement, the NIH published a statement of support for the policy, noting that it has “long championed principles of transparency and accessibility in NIH-funded research and supports this important step by the Biden Administration”, and over the coming months will “work with interagency partners and stakeholders to revise its current Public Access Policy to enable researchers, clinicians, students, and the public to access NIH research results immediately upon publication”. 

Similarly, the USDA tweeted their support for the guidance, noting that “rapid public access to federally-funded research & data can drive data-driven decisions & innovation that are critical in our fast-changing world.”

How big could the impact be?

While it will take some time for funders to begin to publish their updated OA Policies, there have been some early studies which seek to assess how many publications could potentially fall under such policies. 

A recent preprint by Eric Schares of Iowa State University [Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017-2021] used data from Dimensions to identify and analyse publications with federal funding sources. Schares found that: 

  • 1.32 million publications in the US were federally funded between 2017-2021, representing 33% of all US research outputs in the same period. 
  • 32% of federally funded publications were not openly available to the public in 2021 (compared to 38% of worldwide publications during the same period). 

Schares’ study included 237 federal funding agencies – due to the removal of the $100m threshold, many more funders now fall under the Nelson memo than under the previous 2013 Holdren memo. This makes it likely that disciplines that were not previously impacted will now find themselves grappling with public access requirements.

Source: Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017-2021: https://ostp.lib.iastate.edu

In Schares’ visualization, where each dot represents a research institution, we can see that two main groupings emerge. The first is a smaller group made up of the National Laboratories. They publish a smaller number of papers overall, but are heavily federally funded (80-90% of their works). The second group is a much larger cluster, representing universities across the US. Those organisations have 30–60% of their publications federally funded, but build from a much larger base number of publications – meaning they will likely have a lot of faculty members who will now need support.
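The two groupings can be expressed as a simple classification rule. The thresholds and institutions below are illustrative, approximating the ranges quoted above rather than reproducing Schares’ actual methodology:

```python
# Fabricated institutions: (name, total publications, federally funded share)
institutions = [
    ("National Lab X", 2_000, 0.87),
    ("National Lab Y", 1_500, 0.82),
    ("University A", 45_000, 0.42),
    ("University B", 30_000, 0.55),
]

def profile(federal_share):
    """Rough grouping mirroring the two clusters in the scatter plot."""
    if federal_share >= 0.8:
        return "national lab profile"   # fewer papers, 80-90% federal
    if 0.3 <= federal_share <= 0.6:
        return "university profile"     # many papers, 30-60% federal
    return "other"

# Estimated papers per institution newly subject to zero-embargo access
affected = {
    name: (profile(share), round(total * share))
    for name, total, share in institutions
}
print(affected)
```

The point the rule makes visible: even with a lower federal share, the university profile produces far more affected papers in absolute terms, which is where the support burden will land.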

Where do faculty members need support?

According to the 2022 State of Open Data Report, institutions and libraries have a particularly essential role to play in meeting new top-down initiatives, not only by providing sufficient infrastructure but also support, training and guidance for researchers. It is clear from the findings of the report that the work of compliance is wearing on researchers, with 35% of respondents citing lack of time as a reason for not adhering to data management plans and 52% citing finding time to curate data as the area where they need the most help and support. 72% of researchers indicated they would rely on an internal resource (either colleagues, the Library or the Research Office) were they to require help with managing or making their data openly available.

How to start?

Institutions that invest now in building capacity in these areas to support open access and data sharing for researchers will be better prepared for the OSTP’s 2025 deadline, avoiding a last-minute scramble to support their researchers in meeting this guidance.

Beginning to think about enabling open access can be a daunting task, particularly for institutions who don’t yet have internal workflows or appropriate infrastructure set up, so we recommend breaking down your approach into more manageable chunks: 

1. Understand your own Open Access landscape 

  • Find out where your researchers are publishing and what OA pathways they are currently using. You can do this by reviewing your scholarly publishing patterns and the OA status of those works.
  • Explore the data you have for your own repositories – not only your own existing data sets, but also those from other sources such as data aggregators or tools like Dimensions.
  • Begin to overlay publishing data with grants data, to benchmark where you are now and work to identify the kinds of drivers that your researchers are likely to see in the future. 
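As a minimal sketch of step 1, assuming you can export simple per-publication records (the fields, DOIs, and funder names here are fabricated), the benchmark is essentially a filter and a ratio:

```python
# Fabricated records combining publication, OA status and grant data
publications = [
    {"doi": "10.1234/a", "year": 2022, "oa_status": "gold", "funder": "Funder X"},
    {"doi": "10.1234/b", "year": 2022, "oa_status": "closed", "funder": "Funder X"},
    {"doi": "10.1234/c", "year": 2023, "oa_status": "green", "funder": None},
    {"doi": "10.1234/d", "year": 2023, "oa_status": "closed", "funder": None},
]

# Benchmark: what share of funded outputs is already openly available?
funded = [p for p in publications if p["funder"]]
open_funded = [p for p in funded if p["oa_status"] != "closed"]
oa_rate = len(open_funded) / len(funded)
print(f"OA rate for funded outputs: {oa_rate:.0%}")
```

Running the same calculation per funder or per department turns the single ratio into the benchmark grid described above.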

2. Review your system capabilities

  • Is your repository ready for both publications and data?
  • Do you have effective monitoring and reporting capabilities that will help you track engagement and identify areas where your community may need more support? Are your systems researcher-friendly – how quickly and easily can a researcher make their work openly available?

3. Consider how you will support your research ecosystem 

  • Identify how you plan to support and incentivize researchers, considering how you will provide guidance about compliant ways of making work openly available, as well as practical support where relevant.
  • Plan communication points between internal stakeholders (e.g. Research Office, Library, IT) to create a joined-up approach that will provide a shared and seamless experience to your researchers.
  • Review institutional policies and procedures relating to publishing and open access, considering where you are at present and where you’d like to get to.

How can Digital Science help? 

Symplectic Elements was the first commercially available research information management system to be “open access aware”, connecting to institutional digital repositories to enable frictionless open access deposit of publications and accompanying datasets. Since 2009 – beginning with an initial integration with DSpace, and later expanding our repository support to Figshare, EPrints, Hyrax, and custom home-grown systems – we have partnered with and guided many research institutions around the globe as they work to evolve and mature their approach to open access. We have deep experience in building tools and processes that help universities meet mandates set by national governments or funders, report on fulfilment and compliance, and engage researchers in increasing levels of deposit. 

Our sister company Figshare is a leading provider of cloud repository software and has been working for over a decade to make research outputs, of all types, more discoverable and reusable and lower the barriers of access. Meeting and exceeding many of the ‘desirable characteristics’ set out by the OSTP themselves for repositories, Figshare is the repository of choice for over 100 universities and research institutions looking to ensure their researchers are compliant with the rising tide of funder policies.

Below is an example of the type of Open Access dashboard that can be configured and run using the various collated and curated scholarly data held within Symplectic Elements.

In this example, we are using Dimensions as a data source, building on data from Unpaywall about the open access status of works within an institution’s Elements system. Using the data visualizations within this dashboard, you can start to look at open access trends over time, such as the different sorts of open access pathways being used, and how that pattern changes when you look across different publishers or different journals, or for different departments within your organization. By gaining this powerful understanding of where you are today, you can begin to think about how to best prioritise your efforts for tomorrow as you continue to mature your approach to open access. 
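Under the hood, a dashboard view like this is an aggregation by year and OA pathway. A toy version with fabricated records (not the actual Elements data model):

```python
from collections import defaultdict

# Fabricated (year, oa_pathway) pairs for an institution's outputs
works = [
    (2020, "closed"), (2020, "green"),
    (2021, "gold"), (2021, "green"), (2021, "closed"),
    (2022, "gold"), (2022, "gold"),
]

# Count outputs per year and pathway
trend = defaultdict(lambda: defaultdict(int))
for year, pathway in works:
    trend[year][pathway] += 1

# Report the open share per year alongside the pathway breakdown
for year in sorted(trend):
    total = sum(trend[year].values())
    open_share = 1 - trend[year].get("closed", 0) / total
    print(year, f"{open_share:.0%} open", dict(trend[year]))
```

Slicing the same counts by publisher, journal, or department gives the comparative views the dashboard exposes.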

Growing maturity of OA initiatives over time – not a “one and done”.

You might find yourself at Level 1 right now where you have a publications repository along with some metadata, and you’re able to track a number of deposits and do some basic reporting, but there are a number of ways that you can build this up over time to create a truly integrated OA solution. By bringing together publications and data repositories and integrating them within a research management solution, you can enter a space where you can monitor proactively, with an embedded engagement and compliance strategy across all publications and data. 

For more information or if you’d like to set up time to speak to the Digital Science team about how Symplectic Elements or Figshare for Institutions can support and guide you in your journey to a fully embedded and mature Open Access strategy, please get in touch – we’d love to hear from you.

This blog post was originally published on the Symplectic website.




Why is it so difficult to understand the benefits of research infrastructure? – Digital Science


Persistent identifiers – or PIDs – are long-lasting references to digital resources. In other words, a PID is a unique label for an entity: a person, place, or thing. PIDs work by redirecting the user to the online resource, even if the location of that resource changes. They also carry associated metadata, which contains information about the entity and provides links to other PIDs. For example, many scholars already populate their ORCID records, linking themselves to their research outputs through Crossref and DataCite DOIs. As the PID ecosystem matures to include PIDs for grants (Crossref grant IDs), projects (RAiD), and organisations (ROR), the connections between PIDs form a graph that describes the research landscape. In this post, Phill Jones talks about the work the MoreBrains cooperative has been doing to show the value of a connected PID-based infrastructure.
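The graph that forms as PID metadata accumulates can be pictured as a tiny adjacency structure. All identifiers below are fabricated for illustration:

```python
# A toy PID graph: each PID's metadata records its type and its links to
# other PIDs; following those links walks the research landscape
pid_graph = {
    "orcid:0000-0000-0000-0000": {"type": "person",
                                  "authored": ["doi:10.5555/demo1"]},
    "doi:10.5555/demo1": {"type": "publication",
                          "funded_by": ["grant:demo-g1"],
                          "affiliation": ["ror:demo-org"]},
    "grant:demo-g1": {"type": "grant"},
    "ror:demo-org": {"type": "organisation"},
}

def linked_pids(pid):
    """All PIDs one metadata hop away from the given PID."""
    meta = pid_graph.get(pid, {})
    return [target for key, targets in meta.items()
            if key != "type" for target in targets]

print(linked_pids("doi:10.5555/demo1"))
```

Because every hop is a resolvable identifier rather than free text, questions like “which organisations benefited from this grant?” become graph traversals instead of manual rekeying – which is where the time savings discussed below come from.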

Over the past year or so, we at MoreBrains have been working with a number of national-level research supporting organisations to develop national persistent identifier (PID) strategies: Jisc in the UK; the Australian Research Data Commons (ARDC) and Australian Access Federation (AAF) in Australia; and the Canadian Research Knowledge Network (CRKN), Digital Research Alliance of Canada (DRAC), and Canadian Persistent Identifier Advisory Committee (CPIDAC) in Canada. In all three cases, we’ve been investigating the value of developing PID-based research infrastructures, and using data from various sources, including Dimensions, to quantify that value. In our most recent analysis, we found that investing in five priority PIDs could save the Australian research sector as much as 38,000 person days of work per year, equivalent to $24 million (AUD), purely in direct time savings from rekeying of information into institutional research management systems.
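For a sense of scale, the day rate implied by those two figures is easy to back out. This is a back-of-envelope check, not the report’s actual cost model:

```python
# Figures quoted above for the Australian research sector
person_days_saved = 38_000
estimated_saving_aud = 24_000_000

# Implied average cost of one person-day of rekeying work
implied_day_rate = estimated_saving_aud / person_days_saved
print(f"Implied cost per person-day: ~${implied_day_rate:,.0f} AUD")
```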

Investing in infrastructure makes a lot of sense, whether you’re building roads, railways, or research infrastructure. But wise investors also want evidence that their investment is worthwhile – that the infrastructure is needed, that it will be used, and, ideally, that there will be a return of some kind on their investment. Sometimes, all of this is easy to measure; sometimes, it’s not.

In the case of PID infrastructure, there has long been a sense that investment would be worthwhile. In 2018, in his advice to the UK government, Adam Tickell recommended:

Jisc to lead on selecting and promoting a range of unique identifiers, including ORCID, in collaboration with sector leaders with relevant partner organisations

More recently, in Australia, the Minister for Education, Jason Clare, wrote a letter of expectations to the Australian Research Council in which he stated:

Streamlining the processes undertaken during National Competitive Grant Program funding rounds must be a high priority for the ARC… I ask that the ARC identify ways to minimise administrative burden on researchers

In the same letter, Minister Clare even suggested that preparations for the 2023 ERA be discontinued until a plan to make the process easier has been developed. While he didn’t explicitly mention PIDs in the letter, organisations like ARDC, AAF, and ARC see persistent identifiers as a big part of the solution to this problem.

A problem of chickens and eggs?

With all the modern information technology available to us it seems strange that, in 2022, we're still hearing calls to develop basic research management infrastructure. Why hasn't it already been developed? Part of the problem is that very little work has been done to quantify the value of research infrastructure in general, or PID-based infrastructure in particular. Organisations like Crossref, DataCite, and ORCID are clear success stories but, aside from a few notable exceptions, not much has been done to make the benefits of investment clear at a policy level – until now.

It’s very difficult to analyse the costs and benefits of PID adoption without being able to easily measure what’s happening in the scholarly ecosystem. So, in these recent analyses that we were commissioned to do, we asked questions like:

  • How many research grants were awarded to institutions within a given country?
  • How many articles have been published based on work funded by those grants?
  • What proportion of researchers within a given country have ORCID IDs?
  • How many research projects are active at any given time?

All these questions proved challenging to answer because, fundamentally, it's extremely difficult to quantify the scale of research activity and the connections between research entities in the absence of universally adopted PIDs. In other words, we need a well-developed network of PIDs in order to easily quantify the benefits of investing in PIDs in the first place! (see Figure 1).

Luckily, the story doesn’t end there. Thanks to data donated by Digital Science, and other organisations including ORCID, Crossref, Jisc, ARDC, AAF, and several research institutions in the UK, Canada, and Australia, we were able to piece together estimates for many of our calculations.

Take, for example, the Digital Science Dimensions database, which provided us with the data we needed for our Australian and UK use cases. It uses advanced computation and sophisticated machine learning approaches to build a graph of research entities such as people, grants, publications, outputs, and institutions. While other similar graphs exist, some of which are open and free to use – for example, the DataCite PID graph (accessed through DataCite Commons), OpenAlex, and the Research Graph Foundation – the Dimensions graph is the most complete and accessible so far. It enabled us to estimate total research activity in both the UK and Australia.

However, all our estimates are… estimates, because they involve making an automated best guess of the connections between research entities, where those connections are not already explicit. If the metadata associated with PIDs were complete and freely available in central PID registries, we could easily and accurately answer questions like ‘How many active researchers are there in a given country?’ or ‘How many research articles were based on funding from a specific funder or grant program?’

The five priority PIDs

As a starting point towards making these types of questions easy to answer, we recommend that policy-makers work with funders, institutions, publishers, PID organisations, and other key stakeholders around the world to support the adoption of five priority PIDs:

  • DOIs for funding grants
  • DOIs for outputs (e.g. publications, datasets)
  • ORCIDs for people
  • RAiDs for projects
  • ROR IDs for research-performing organisations

We prioritised these PIDs based on research done in 2019, sponsored by Jisc and in response to the Tickell report, to identify the key PIDs needed to support open access workflows in institutions. Since then, thousands of hours of research and validation across a range of countries and research ecosystems have verified that these PIDs are critical not just for open access but also for improving research workflows in general.

Going beyond administrative time savings

In our work, we have focused on direct savings from a reduction in administrative burden because those benefits are the most easily quantifiable; they’re easiest for both researchers and research administrators to relate to, and they align with established policy aims. However, the actual benefits of investing in PID-based infrastructure are likely far greater.

Evidence given to the UK House of Commons Science and Technology Committee in 2017 stated that every £1 spent on Research and Innovation in the UK results in a total benefit of £7 to the UK economy. The same is likely to be true for other countries, so the benefit to national industrial strategies of increased efficiency in research is potentially huge.

Going a step further, the universal adoption of the five priority PIDs would also enable institutions, companies, funders, and governments to make much better research strategy decisions. At the moment, bibliometric and scientometric analyses to support research strategy decisions are expensive and time-consuming; they rely on piecing together information based on incomplete evidence. By using PIDs for entities like grants, outputs, people, projects, and institutions, and ensuring that the associated metadata links to other PIDs, it’s possible to answer strategically relevant questions by simply extracting and combining data from PID registries.
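To make the contrast concrete, here is a minimal sketch of what such an analysis looks like once the links exist. The records and identifiers are hypothetical stand-ins for what PID registries might return; the point is that a strategic question reduces to a simple join across registries rather than an expensive matching exercise.

```python
from collections import Counter

# Hypothetical registry records; all identifiers are invented for illustration.
publications = [
    {"doi": "doi:10.9999/a", "funded_by": "doi:10.9999/grant.1"},
    {"doi": "doi:10.9999/b", "funded_by": "doi:10.9999/grant.1"},
    {"doi": "doi:10.9999/c", "funded_by": "doi:10.9999/grant.2"},
]
grants = {
    "doi:10.9999/grant.1": {"funder": "ror:funder-x"},
    "doi:10.9999/grant.2": {"funder": "ror:funder-y"},
}

# "How many articles were based on funding from each funder?" becomes a
# one-line join across the publication and grant registries.
outputs_per_funder = Counter(
    grants[p["funded_by"]]["funder"] for p in publications
)
print(outputs_per_funder)  # Counter({'ror:funder-x': 2, 'ror:funder-y': 1})
```

With complete, openly available metadata in central registries, this kind of aggregation could be run at national scale without the guesswork that current bibliometric analyses depend on.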

Final thoughts

According to UNESCO, global spending on R&D has reached US$1.7 trillion per year, and with commitments from countries to address the UN sustainable development goals, that figure is set to increase. Given the size of that investment and the urgency of the problems we face, building and maintaining the research infrastructure makes sound sense. It will enable us to track, account for, and make good strategic decisions about how that money is being spent.


Phill Jones

About the Author

Phill Jones, Co-founder, Digital and Technology | MoreBrains Cooperative

Phill is a product innovator, business strategist, and highly qualified research scientist. He is a co-founder of the MoreBrains Cooperative, a consultancy working at the forefront of scholarly infrastructure and research dissemination. Phill has been the CTO at Emerald Publishing, Director of Publishing Innovation at Digital Science, and the Editorial Director at JoVE. In a previous career, he was a biophysicist at Harvard Medical School, and he holds a PhD in Physics from Imperial College London.

The MoreBrains Cooperative is a team of consultants that specialise in and share the values of open research with a focus on scholarly communications, and research information management, policy, and infrastructures. They work with funders, national research supporting organisations, institutions, publishers and startups. Examples of their open reports can be found here: morebrains.coop/repository




Open Access Monographs: Digital Scholarship as Catalyst – Digital Science


Bringing humanistic research into the digital environment – and supporting new and diverse voices and perspectives – is one of the great benefits of Open Access, write the authors of the latest in our OA books series.

How research is generated and shared can drive meaningful change across disciplines, organizations, and communities. Consider digital scholarship. Emerging tools and methodologies prompt new questions; resultant hypotheses and argumentation call for innovative presentations; interactivity and other enhanced user experiences bring about heightened awareness and agency; increased inclusivity leads to new, diverse perspectives. Combining digital scholarship with open access (OA) publishing models significantly expands the possibilities for impact by offering more equitable access to research, alongside new and powerful ways for authors to articulate complex arguments. In sum, the intersection between innovative forms of scholarship and revolutionary dissemination processes can benefit multiple stakeholders the world over.

Creating multimodal digital monographs, for many authors, is about making the humanities relevant and accessible to wider audiences who can both benefit from and contribute to scholarly production in tangible, meaningful ways. At the same time, open access publication provides not only wide distribution but also a mechanism by which digital scholarship may undergo formal development and evaluation with a university press. But the ability to create open multimodal publications is itself fraught with inequity, requiring collaboration partners, expertise, and funding not yet widely available to all scholars or to their publishers.

In an effort to take stock of the wide range of innovative practices and system-changing interventions that characterize a growing body of digital scholarly publications, Brown University and Emory University co-hosted a summit in spring 2021. The intention from the start was to call attention to the faculty-led experimentation that was taking place across a number of libraries and humanities centers, some of which already involved university presses. Shifting the focus away from tools and technology, as important as those discussions remain to the larger scholarly communications ecosystem, the summit emphasized author and audience needs and opportunities. As such, it highlighted the importance of investing in a people-centric, content-driven infrastructure.

“How can we encourage a shared vocabulary for these reimagined forms of humanities scholarship?”

Case studies of eight recently published or in-development OA works provided the basis for in-depth, evidence-based discussions among scholars, academic staff experts, and representatives from university presses: What models for publishing enhanced and interactive scholarly projects are emerging? What are the common challenges that remain and how do we address them? How can we encourage a shared vocabulary for these reimagined forms of humanities scholarship among the wider scholarly communications community?

While each of the projects, representing a broad disciplinary range and span of subject matter, offers a different perspective, when taken together they reveal lessons learned and clarify key priorities. All the projects demonstrate in myriad ways how digital content and affordances can enrich and deepen a scholarly argument. Some works provide distinct opportunities to examine the ethical implications of humanities research and to consider the new ways in which digital publication engages with audiences beyond the academy. Others foreground the powerful outcomes of collaborations between university presses and universities, modeling how such partnerships leverage resources and expertise to strengthen the humanities infrastructure and allow for innovation within it.

Although the summit focused on a selection of projects supported by the Mellon Foundation’s Digital Monographs Initiative, the presentations and generative discussions that followed raised important concerns and opportunities that extend well beyond the featured projects. These findings were released in July 2022 at the Association of University Presses 2022 Annual Meeting in Washington, D.C. A key objective of the report, “Multimodal Digital Monographs: Content, Collaboration, Community,” is to promote greater inclusion and equitable access of diverse voices as the development, validation, and dissemination of digital scholarship continues to unfold.

That digital scholarship developed and distributed as an OA monograph can transform how and where research is carried out and whom it reaches is undeniable. In the case of Feral Atlas: The More-than-Human Anthropocene (Stanford University Press, 2021), four project editors compiled the contributions of more than 100 scientists, humanists, artists, designers, programmers, and coders. The atlas contains 330,000 words and 600+ media assets, and had attracted 60,000 unique visitors in the six months between its publication and the date of the summit. The publication As I Remember It: Teachings from the Life of a Sliammon Elder (University of British Columbia Press, 2019), published on the RavenSpace platform, offers a model for including Indigenous communities in the creation of scholarship, while also addressing the needs of both public and academic audiences through a thoughtful interplay of text and multimedia.

“We need to continue putting pressure on what it means for scholarship to be open, to be digital, to be public.”

For all the successes noted in the report, we need to continue putting pressure on what it means for scholarship to be open, to be digital, to be public. Such scholarship has the potential to offer powerful counterpoints and alternatives to the disinformation that pervades current discourse on the web, and to bridge the gap between scholarly and public discourses. 

As the pathways for humanities scholarship expand in the digital era, “Multimodal Digital Monographs: Content, Collaboration, Community” serves as an invitation for all its practitioners to engage in conversation about the evolution of content itself, with the authors who create it, and with the audiences whom they seek to engage. The importance of sharing and learning together as a community, and of finding innovative and productive ways to share expertise and resources through collaborative models, emerges clearly from the summit and cannot be overestimated in these still early and formative days. We further hope that more universities will seek ways to support their own faculty, as well as the publishers of their faculty’s work, in efforts to bring vital humanistic research into the digital environment and to welcome new and diverse voices and perspectives throughout that process.


