Putting Data at the Heart of your Organizational Strategy

With the launch of Dimensions Research GPT and Dimensions Research GPT Enterprise, researchers the world over now have access to a solution far more powerful than could have been believed just a few years ago. Simon Linacre takes a look at a new solution that combines the scientific evidence base of Dimensions with the pre-eminent Generative AI from ChatGPT.


For many researchers, the ongoing hype around recent developments in Generative AI (GAI) has left them nonplussed, faced with so many new, unknown solutions to evaluate. Added to well-reported concerns over hallucinations and responsibly developed AI, the advantages GAI could offer have been offset by these doubts.

In response, Digital Science has developed its first custom GPT solution, which combines powerful data from Dimensions with ChatGPT’s advanced AI platform: introducing Dimensions Research GPT and Dimensions Research GPT Enterprise.

Dimensions Research GPT’s answers to research queries draw on data from tens of millions of Open Access publications, and access is free to anyone via OpenAI’s GPT Store. Dimensions Research GPT Enterprise provides results underpinned by all of the publications, grants, clinical trials and patents in Dimensions, and is available to organizations that hold both an organization-wide Dimensions subscription and a ChatGPT Enterprise account. Organizations keen to tailor Dimensions Research GPT Enterprise to specific use cases are also invited to work with our team of experts to define and implement these.

These innovative new research solutions enable ChatGPT users to discover more precise answers and generative summaries by grounding the GAI response in scientific data – data that comes from millions of publications in Dimensions – delivered through ChatGPT’s increasingly familiar conversational interface.

These new solutions have been launched to enable researchers – indeed anyone with an interest in scientific research – to find trusted answers to their questions quickly and easily through a combination of ChatGPT’s infrastructure and Dimensions’ well-regarded research-specific capabilities. They accelerate information discovery, and represent the first of many AI-grounded use cases to come from Digital Science in 2024.

How do they work?

Dimensions Research GPT and Dimensions Research GPT Enterprise are based on Dimensions, the world’s largest collection of linked research data, and supply answers to queries entered in OpenAI’s ChatGPT interface. Users can prompt ChatGPT with natural language questions and see AI-generated responses, with a notification each time content is based on Dimensions data and references to the source. These references appear as clickable links that take users directly to the Dimensions platform, where pages with further details on the source records let them continue their discovery journey.

Key features of Dimensions Research GPT Enterprise include: 

  • Answers to research queries drawing on publication, clinical trial, patent and grant data
  • Set up in the client’s private environment and available only to the client’s end users
  • Notifications each time generated content is based on Dimensions data, with references and citation details
Sample image of a query being run on Dimensions Research GPT.

What are the benefits to researchers?

The main benefit for users is that they can find scientifically grounded, inherently improved information on research topics of interest with little time and effort, thanks to the combination of ChatGPT’s interface and Dimensions’ highly regarded research-specific capabilities. This will save researchers significant time while also giving them peace of mind through easy access to source materials. There are also a number of additional key benefits for all users:

  • Dimensions AI solutions make ChatGPT research-specific, grounding answers in facts and providing the user with references to the relevant documents
  • They call on millions of publications to provide information specific and relevant to the query, reducing the risk of hallucination in the generative AI answer while providing an easy route to information validation
  • They can help overcome the challenges of the sheer volume of content available, the time-consuming tasks required in research workflows, and the need for trustworthy AI products.
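The grounding approach behind these benefits – retrieve matching source records first, then cite them alongside the generated answer – can be illustrated in miniature. A minimal sketch in Python, with all record IDs, titles and the scoring scheme entirely hypothetical (the real Dimensions retrieval is far more sophisticated):

```python
# Toy sketch of grounding a generative answer in source records.
# All records, IDs and the keyword-overlap scoring are hypothetical.
RECORDS = [
    {"id": "pub.1001", "title": "CRISPR off-target effects", "text": "crispr gene editing off-target"},
    {"id": "pub.1002", "title": "mRNA vaccine stability", "text": "mrna vaccine cold chain stability"},
]

def retrieve(query, records, k=2):
    """Rank records by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(r["text"].split())), r) for r in records]
    return [r for score, r in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def grounded_answer(query, records):
    """Return an answer stub plus the citations it is grounded in."""
    hits = retrieve(query, records)
    citations = [f"[{r['id']}] {r['title']}" for r in hits]
    return {"answer": f"Based on {len(hits)} source(s)...", "citations": citations}
```

The key property is that every generated claim can point back to a retrievable source record – precisely what the clickable Dimensions references provide.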

What’s next with AI and research?

The launch of Dimensions Research GPT and Dimensions Research GPT Enterprise reflects Digital Science’s broader commitment to open science and the responsible development of AI tools.

These new products are just the latest developments from Digital Science companies that harness the power of AI. In 2023, Dimensions launched a beta version of an AI Assistant, while ReadCube also released a beta version of its AI Assistant last year. Digital Science finished 2023 by completing its acquisition of AI-based academic language service Writefull. And 2024 is likely to see many more AI developments – with some arriving very soon! Dimensions Research GPT and Dimensions Research GPT Enterprise, alongside all Digital Science’s current and future developments with AI, exemplify our commitment to responsible innovation and bringing powerful research solutions to as large an audience as possible. If you haven’t tested ChatGPT yet as part of your research activities, why not give it a go today?

Simon Linacre

About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.


Putting Data at the Heart of your Organizational Strategy

‘Have you done your due diligence?’ These six words induce fear and dread in anyone involved in finance, with the underlying threat that huge peril may be about to engulf you if the necessary homework hasn’t been done. Due diligence in the commercial sphere is a hygiene factor – a basic, if detailed, audit of risk to ensure that all possible outcomes have been assessed so nothing comes out of the woodwork once an investment has been made.

The question, however, is just as important for academic institutions looking to check the data on their research programs: have you done your due diligence on that? If not, then a linked database such as Dimensions can help you.

Strategic Objectives

At a recent panel discussion hosted by Times Higher Education (THE) in partnership with Digital Science on optimizing research strategy, the question of due diligence was framed by looking at the academic research lifecycle and the challenges emanating from the increased amount of data now accessible to universities – more specifically, how universities can extract and utilize verified data from the ever-increasing number of sources at their disposal.

Speaking on the panel, Digital Science’s Technical Product Solutions Manager Ann Campbell believes there are numerous benefits to using new modes of data to overcome problems associated with data overload. “It’s important to think holistically, of not only the different systems that are involved here but also the different departments and stakeholders,” she said. “It’s better to have an overarching data model or a perspective from looking at the research life cycle instead of separate research silos or different silos of data that you find within these systems.”

The panel recognized that self-reporting by academics can leave gaps in the data, while impact data can also be missed due to a lack of knowledge or understanding on the part of faculty members.

Digital Science seeks to address these problems by adding some power to its Dimensions linked database in the shape of Google BigQuery. By marrying this computing power to the size and scope of Dimensions, academics and research managers are empowered to identify specific data from all stages of the research lifecycle. This allows researchers to seamlessly combine external data with their own internal datasets, giving them the holistic view of research identified by Ann Campbell in the discussion. 
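In practice, marrying internal data with Dimensions on Google BigQuery comes down to SQL over shared tables. A minimal sketch follows; the dataset path and field names (`research_orgs`, the GRID identifier) are assumptions based on typical Dimensions on BigQuery schemas, so check the official documentation for the exact names:

```python
# Sketch: count an institution's publications per year in Dimensions on
# Google BigQuery. Dataset path, field names and the GRID ID below are
# illustrative assumptions, not a verified schema.
INSTITUTION_GRID = "grid.1008.9"  # hypothetical institution identifier

query = f"""
SELECT p.year, COUNT(*) AS n_publications
FROM `dimensions-ai.data_analytics.publications` AS p
WHERE '{INSTITUTION_GRID}' IN UNNEST(p.research_orgs)
GROUP BY p.year
ORDER BY p.year
"""

# Execution would use the BigQuery client and valid credentials, e.g.:
# from google.cloud import bigquery
# rows = bigquery.Client().query(query).result()
```

Joining this against an internal grants or HR table is then an ordinary SQL `JOIN` on a shared identifier, which is what gives the holistic internal-plus-external view described above.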

Accessing Dimensions on Google BigQuery.

Data Savant

The theme of improving higher education institutions’ capabilities in data utilization was most vividly described by Ann Campbell in her presentation to the Times Higher Education Digital Universities conference in Barcelona. Memorably, she compared universities’ use of data to the plot of the popular TV drama Game of Thrones. Professors as dragons? Rival departments as warring families? Well, not quite – what Ann observed was that there are many competing elements within HEIs – research management, research information, academic culture, the library – and above them sits senior management with key questions that can only be answered using data and insights across all of them:

  • Which faculties have a high impact? Should we invest more in them?
  • Which faculties have high potential but are under-resourced?
  • How can we promote our areas of excellence?
  • How can we identify departments with strong links to industry?
  • What real-world research impact can we feed back into our curriculum?
  • Are we mitigating potential reputational risk through openness and transparency? 

Bringing these disparate challenges together requires a narrative, which is another reason the Game of Thrones analogy works so well: for all the moving parts to function, a coherent story is required. That story might be how an institution’s research culture strategy is working alongside a rise in early-career international collaborations, how an increase in new funding opportunities followed a drive for more interdisciplinary collaboration, or how a university’s improved global reputation lifted its position in the impact rankings thanks to increased SDG-related research.

Any good story needs to have the right ingredients, and where Digital Science can really help an institution is to bring together those ingredients from across an organization into viewable and manageable narratives. 

Telling Stories

But the big picture is not the whole story, of course. There are other, smaller narratives swirling through HEIs at any given time that reflect the different specialisms, hot topics or focus areas of a university. Three of the focus areas most commonly found in modern universities are research integrity, industry partnerships and research impact, and these were discussed recently at another collaborative webinar between THE and Digital Science: Utilising data to deliver research integrity, industry partnerships and impact.

This panel discussion was a little more granular, and teased out some specific challenges for institutions when it comes to data utilization. For research integrity, certain data relating to authorship, reproducibility and transparency can be used as ‘trust markers’. Representing Digital Science, Technical Product Solutions Manager Kathryn Weber-Boer walked through the trust markers that form the basis of the Dimensions Research Integrity solution for universities.
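In data terms, a trust marker is simply a checkable property of an article record. A toy sketch (the marker names and records below are invented for illustration and are not the actual Dimensions Research Integrity schema):

```python
# Toy trust-marker coverage over hypothetical article records.
# Marker names and records are invented illustrations.
TRUST_MARKERS = ("data_availability", "code_availability", "ethics_approval")

def trust_marker_coverage(articles):
    """Fraction of articles carrying each trust marker."""
    n = len(articles)
    return {m: sum(1 for a in articles if a.get(m)) / n for m in TRUST_MARKERS}

articles = [
    {"id": "pub.1", "data_availability": True, "code_availability": False, "ethics_approval": True},
    {"id": "pub.2", "data_availability": True, "code_availability": True, "ethics_approval": False},
]
coverage = trust_marker_coverage(articles)
```

Aggregated across a university’s output, coverage figures like these are what let funders and publishers compare integrity practices between institutions.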

But why are these trust markers important? The panel discussion also detailed that outside universities’ realm of interest, both funders and publishers were increasingly interested in research integrity and the provenance of research emanating from universities. As such, products like Dimensions Research Integrity were forming a key part of the data management arsenal that universities needed in the modern research funding environment.  

In addition, utilization and scrutiny of such data can help move the dial in other important areas, such as changing research culture and integrity. Stakeholders want to trust in the research that’s being done, know it can be reproduced, and also see there is a level of transparency. All of these factors then influence the promotion and implementation of more open research activities.

Another important aspect of research integrity and data utilization is not just knowing where and how data is being shared, but also whether it is being shared as recorded and where it is actually located. As pointed out in the discussion, Dimensions is a ‘dataset of datasets’, allowing these pieces of information to be cross-referenced to check whether research integrity data points align.

Dimensions Research Integrity trust markers.

Positive Outlook

Discussions around research integrity and data management can often be gloomy affairs, but there is some cause for optimism now that increasing numbers of products are on the market to help HEIs meet their goals and objectives in these spheres. Effective data utilization will undoubtedly be one of the critical success factors for universities in the future, and not just for managing issues like research integrity or reputation. With the lightning-fast development and adoption of Generative AI in the research space, and growing interest in issues like research security and international collaboration, data utilization – and who universities partner with to optimize it – has never been higher up the agenda.

You can view the webinars here on utilizing new modes of data and delivering research integrity.


From subversive to the new normal: 25 years of Open Access

As part of Open Access Week, Simon Linacre looks at 25 years of Open Access through the lens of Dimensions to help us better understand the growth of OA over a quarter of a century.

How old is Open Access? In some ways it is as old as research itself, as at least some results have always been shared publicly. However, since the first journals were published in 1665, accessibility has been an issue, with distribution of paper journals limiting potential readership. When the internet came along, it lowered the barriers to access considerably and opened up the pathway towards Open Access. But that process has been a gradual one.

As a tutor for ALPSP and course leader for some of its industry training modules, I have to be wary of approaching topics such as Open Access. Not because it is especially contentious or difficult, but because as someone who has been involved in scholarly communications for over 20 years, it still feels relatively ‘new’ to me, whereas for most attendees it is simply part of the modern furniture of publishing.

However, as Churchill once said, the longer you can look back, the farther you can look forward, so this year’s OA Week seems as good a time as any to review how its development has progressed over the years. Luckily, in Dimensions we have a tool which can look at millions of articles, both OA and closed access, published over the last quarter of a century.

Back story

Pointing to a specific time to say ‘this is when OA started’ is difficult, as experiments with OA publishing arrived with the internet in the late 1980s and early 1990s. Perhaps the first rallying cry in support of OA came in 1994 when Stevan Harnad published his Subversive Proposal. However, in 1998 several things happened which started to shape the way OA would develop, including the setting up of a number of support networks for authors to advise how to follow the OA path, as well as the founding of the Public Knowledge Project (PKP). New tools and services introduced then started to re-engineer how academic publishing operated, which were only amplified by the global adoption of the internet.

Such developments were followed in subsequent years by major declarations from academics and institutions in support of OA, mainly from European cities starting with ‘B’ – both Budapest and Berlin lent their names to declarations that propelled Open Access forward and firmly onto the agendas of all stakeholders. Some countries and academic cultures, such as Brazil, adopted OA principles quickly; however, it wasn’t until the 2010s that we started to see significant policy changes in Global North countries such as the US and the UK.

These OA policies have not only become commonplace, but have strengthened, with initiatives like Plan S in Europe and the OSTP (or Nelson) Memo in the US driving the transition towards fuller OA. It feels like the rate of change has increased in the last few years – but is this true, and what does the picture look like globally?

Ch-ch-ch-changes

As we can see in the chart below, built using Dimensions, growth in OA research article publications has been relatively steady over the last 25 years, with a steeper rise in recent years followed by a shallower rise in 2022. This can perhaps be attributed in part to the introduction of Plan S in 2018 and of funder mandates, but also to the impact of the Covid-19 pandemic, which drove OA publications upwards in 2020 and 2021, not least through the avenue of OA preprints.

Figure 1: Total Open Access research articles by year. Source: Dimensions.

However, appearances can be deceptive. While the chart may seem to plot a steady increase, the 12-fold rise over 25 years is significantly faster than the four-fold rise in research articles overall, with OA articles now making up well over half of all articles.
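The comparison is simple arithmetic: divide each series’ final-year count by its first-year count. A sketch with invented numbers (the article’s real 12-fold vs four-fold figures come from Dimensions itself):

```python
# Growth multiples and OA share from illustrative yearly counts.
# These counts are invented for demonstration only.
oa_articles = {1998: 100_000, 2022: 1_200_000}
all_articles = {1998: 500_000, 2022: 2_000_000}

def growth_multiple(series):
    """Ratio of the last year's count to the first year's count."""
    first, last = min(series), max(series)
    return series[last] / series[first]

oa_growth = growth_multiple(oa_articles)        # 12-fold
overall_growth = growth_multiple(all_articles)  # 4-fold
oa_share = oa_articles[2022] / all_articles[2022]  # "well over half"
```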

Looking more closely at the types of OA article recorded in Dimensions: if we look just at Gold OA research articles over time (i.e. those published in journals, typically after payment of an article processing charge (APC)), we see a similar development, albeit with a slower take-off and a steeper rise in recent times.

Figure 2: Gold Open Access research articles by year. Source: Dimensions.

However, if we look at Green OA research articles made available over the same period, we see a much more complex development, with higher rates of adoption in the early years of OA following a shallower trajectory before a huge spike in 2020, driven by the aforementioned pandemic. 

Figure 3: Green Open Access research articles by year. Source: Dimensions.

We can see the change more markedly below if we look at all publications (as opposed to just research articles) in more recent years, with Green and Gold running neck-and-neck until they diverged over the last decade or so. For the many early proponents of Green Open Access who were opposed to the high profit margins enjoyed by many publishers, this highlights how Green OA has failed in comparison to Gold Open Access.

Figure 4: Gold vs Green Open Access – all publications. Source: Dimensions.

Looking ahead

What do these data tell us about the next 25 years? Perhaps the key takeaway is that shifts in author behaviour can be driven by concerted policymaking. Indeed, even the commitment to future mandates can be a catalyst for change, as publishers quickly prepare the groundwork for upcoming changes. However, the biggest single shift towards OA happened during something wholly unforeseen (the pandemic), and as geopolitics is at its most volatile of the whole 25-year period, maybe the biggest changes in OA are just round the corner.



Discovering ‘galaxies’ of research within universities – Digital Science

University research data looks like something from outer space – let’s zoom in and see what’s there

Research institutions need the right tools to discover their strengths and weaknesses, to plan for the future, and to make a greater impact for the communities of tomorrow. In this post, Digital Science’s VP Research Futures, Simon Porter, uses a digital telescope to view the ‘galaxies’ of research within our best and brightest institutions – and explains why that matters.

When we see new images of our universe through the lens of the James Webb Space Telescope (JWST), we’re left in awe of the unique perspective we’ve witnessed, and something about our own universe – even the perception of our own existence – has altered as a result.

What we see are entirely new galaxies, and worlds of possibility.

That’s also what I see when I look at the research data spanning our many universities and research institutions globally. Each of these institutions represents its own unique universe of research.

For me, Dimensions – the world’s largest linked research database and data infrastructure provider – is like the JWST of research data. It enables us to see data in ways we hadn’t thought possible, and it opens up new worlds of possibility, especially for research strategy and decision-making.

What does a university look like?

We began our What does a University Look Like? project in 2019 and it’s rapidly evolved thanks to developments in 3D visualization technology, the expansion of data availability, and the combination of data sources, such as Dimensions and Google BigQuery.

By modelling data from Dimensions into a 3D visualization tool called Blender, we’ve been able to see right into the detail of university research data and capture it in a way that is analogous to the process of taking raw data from JWST and processing it to make a high-quality snapshot of space from afar.

To do this, we’ve used the 2020 Field of Research (FoR) codes, which were developed for research classification in Australia and New Zealand, and we’ve designated a color to each one of those codes (see Figure 2). Each single point of color represents an individual researcher coded by the 2-digit FoR they’re most associated with; researchers are depicted by a sphere, and the size of the sphere is based on the number of publications that researcher has produced.
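The colour and size mapping just described can be sketched in a few lines before any 3D rendering takes place. The FoR colour key below is invented for illustration (the real key is shown in Figure 2), and scaling radius with the cube root of output – so that sphere volume tracks publication count – is one reasonable choice, not necessarily the one used:

```python
from collections import Counter

# Hypothetical two-digit FoR colour key (invented; see Figure 2 for the real one).
FOR_COLORS = {"32": "red", "31": "cream", "34": "lightblue", "40": "gray"}

def researcher_sphere(publication_for_codes):
    """Map a researcher's tagged publications to a colour and sphere radius.

    publication_for_codes: one two-digit FoR code per publication.
    The colour comes from the researcher's most frequent code; the radius
    scales with the cube root of output so sphere volume tracks count.
    """
    dominant = Counter(publication_for_codes).most_common(1)[0][0]
    radius = len(publication_for_codes) ** (1 / 3)
    return {"color": FOR_COLORS.get(dominant, "white"), "radius": radius}

sphere = researcher_sphere(["32", "32", "31"])  # mostly code 32, so red
```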

We then apply algorithms developed by CWTS at Leiden University to determine research clusters – co-authorship networks – within a specific university. These clusters are then layered on top of each other by discipline, with Clinical Science clusters at the bottom, then moving up through Health Sciences, then Science and Engineering, and Linguistics at the top. This is the result.

Figure 1: A 3D visualization of research collaborations within the University of Melbourne. Source: Dimensions/Digital Science. (See also: Figure 2 – color code.)

In Figure 1, we see a 3D visualization of the University of Melbourne, a leading Group of Eight (Go8) research university in Australia. Within this image are 234 research clusters, comprising connections between more than 18,000 co-authoring researchers affiliated with the University of Melbourne from 2017 to 2022.

Figure 2: Network diagram color key, with colors assigned to each of the two-digit FoR codes. 
Figure 3: A zoomed-in portion of the University of Melbourne network showing overlapping clusters of Clinical Sciences (red), Biological Sciences (cream), Chemical Sciences (light blue), and Engineering (gray). Researchers from different disciplines can be seen collaborating within each cluster. Source: Dimensions/Digital Science.

The high quality of this visualization means we can zoom right in to the level of the individual sphere (i.e. researcher), or pull back to see the bigger picture of the research environment they’re connected to or surrounded by. We can see every research field and every individual or team a researcher is collaborating with at the university.

If the university has a biological sciences cluster, we can see whether there’s a mathematician interacting with that cluster, clinical scientists, engineers, or someone from the humanities or social sciences. It opens up a new level of understanding about the research landscape of an institution and its individuals.

On our Figshare, you can watch a video that takes you through the various research clusters found at the University of Melbourne, and follow the “What does a university look like?” Figshare project.

At Digital Science, we’ve created six of these visualizations – five universities from Australia and one from New Zealand – to help demonstrate Dimensions’ unparalleled capabilities for analyzing research data. While many institutions have similarities, some have completely different research collaboration structures (see Figure 4).

To see a brief video where I walk through all six of the visualizations, visit the Digital Science YouTube channel.

Looks great – but why does it matter?

These 3D visualizations aren’t just about producing a pretty picture; they’re an elegant and useful way of representing the richness of the research data Dimensions holds about each institution. This is particularly true for university administrations, where the ability to measure and promote internal institutional collaboration is just as important as measuring international collaboration.

To illustrate this point, consider the differences between the collaboration structures of the Australian National University (see Figure 4) and the University of Melbourne. Beyond the immediate differences of network size and discipline focus (the University of Melbourne is larger, and has a much larger medical and health sciences footprint), the two universities have very different collaboration shapes, with disciplines more distinctly separate in the ANU graph. That two prestigious research institutions can have such different shapes suggests that different external forces are at play, influencing the shape of collaboration.

Figure 4: A 3D visualization of research collaborations within the Australian National University (ANU). Source: Dimensions/Digital Science.

Figure 4 represents the Australian National University (ANU), with more than 5,600 co-authored researchers from 2017-2022 and 75 research clusters identified in the data. 

Two factors that might contribute significantly to ANU’s different shape are its funding model and its physical campus. ANU’s funding model is unique within Australian higher education, having been endowed with the National Institutes Grant, which provides secure and reliable funding for long-term pure and applied research. A key focus of the grant is maintaining and enhancing distinctive concentrations of excellence in research and education, particularly in areas of national importance to Australia – a focus perhaps reflected in the relative discipline concentration within the visualisation. ANU also has a relatively spread-out campus, at roughly three times the size of the University of Melbourne’s Parkville campus, making the physical collaboration distance between disciplines larger.

Identifying how factors such as campus size and funding models influence collaboration structures provides key insights for universities, governments and funders. The relative ease of creating these models from Dimensions data opens up the possibility of collaboration benchmarks that can be correlated with other external factors. These insights can in turn help shape interventions that maximise local collaboration, in line with the culture of the institution. As with stargazing, the more you look into the past, the better you can see the future.

Note: Simon Porter first shared these visualizations at the Digital Science Showcase in Melbourne, Australia (28 February to 2 March 2023).

About Dimensions

Part of Digital Science, Dimensions is the largest linked research database and data infrastructure provider, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. www.dimensions.ai 


For Scholars’ Eyes Only? – Digital Science

Unravelling the academic impact of 007

The first edition of Ian Fleming’s novel Casino Royale (inset) was published 70 years ago on 13 April 1953. Daniel Craig (pictured) portrayed James Bond in the 2006 film adaptation of the book. James Bond remains the property of Eon Productions and Ian Fleming Publications.

On the 70th anniversary of the publication of Ian Fleming’s first James Bond novel, Casino Royale, we ask the question: Why does James Bond have such a large footprint in scholarly literature? Our analysis reveals that Bond, James Bond, is about more than just espionage, vodka martinis and cinema studies.

Every so often a fictional character is so well drawn that even though they often embody the ideals or sensibilities of a non-contemporary era, with all the challenges that can present, they transcend their original zeitgeist to be constantly reinvented, renewed and, to use a modern term, rebooted for new generations.

In science fiction and fantasy, this is a familiar trope, with Doctor Who, Superman and Spider-Man all being prime examples of characters who receive frequent updates for contemporary audiences. Outside science fiction, you will be hard put to call to mind a character with the same enduring appeal and knack for self-reinvention.

The almost sole example of such a character is one Commander James Bond of the British Secret Service – a character who so thoroughly embodies Britishness (even Englishness) of a certain style and period that it is almost at odds with his seeming longevity. And yet, this month he celebrates 70 years since first jumping off the page of Casino Royale, Ian Fleming’s 1953 novel that introduced the world to the suave sophistication of Cold War international espionage.

This first novel introduced readers to Bond’s car, a 1930 Blower Bentley (it was not until Fleming’s 1959 novel Goldfinger that Bond gets an Aston Martin DB Mark III), the .25 Beretta (the Walther PPK was introduced in the novel Dr No in 1958), and the Vesper Martini (a vodka martini of the shaken rather than stirred variety that Fleming invented and named for Bond’s love interest Vesper Lynd).

Despite the challenges of Bond’s originally written misogyny, and references to race that the publisher says are being revised, he has become much loved around the world, and lays claim to one of the most successful film franchises in the history of cinema. A major cultural export for the UK, Bond films have featured and established icons of the British music scene, including singer Dame Shirley Bassey and composer John Barry. The films also highlighted both British and non-British brands, pioneering brand positioning in movies while making Q (no, not the one from Star Trek) a household name.

Bond has come to embody a certain brand of Britishness, a fact clearly acknowledged when Daniel Craig, as Bond, escorted Her Majesty the Queen to the London Olympics in a short film prepared for the 2012 opening ceremony. And, as life sometimes imitates art (and perhaps also gives an insight into the wry sense of humour of a particular member of the Royal family), a decade later Daniel Craig was awarded a CMG (the Most Distinguished Order of St Michael and St George) for his services to theatre and cinema in the Queen’s 2022 Birthday Honours – the same honour given to the fictional Bond by Fleming in 1957’s From Russia with Love.

Thousands of scholarly articles have been written about James Bond since his inception – but how do we know this, and what are they about?

A simple Dimensions search limited to titles and abstracts yields 674 references to James Bond, including the descriptively titled 2022 article "No Mr Bond, we expected you to die": a medical and psychotherapeutic analysis of trauma depiction in the James Bond films, and A Psychological Study of the Modern Hero: The Case of James Bond. Arguably these articles represent those where Bond is a central focus of the work, but even at this level a quick look at the ANZSRC article classifications (recently updated to the new ANZSRC Field of Research (FoR) 2020 codes, as described in our recent paper) is revealing: work classified under code 36 – Creative Arts and Writing (with 3605 – Screen and Digital Media accounting for much of the 2-digit-level assignment) accounts for only around 30% of research output. So although Bond has made his mark well beyond the creative arts, Bond-themed titles do appear to be more predictable (compare, for example, 2009's "Compute? No, Mr. Bond, I Expect You to Die!" with our earlier-mentioned paper).

Figure 1: Advanced search in Dimensions to locate an exact phrase in the Dimensions full-text catalog (including more than 80 million articles at the time of writing). Note the 32 authors that need to be removed as a result of their name containing the string “James Bond”.

Using Dimensions' advanced searching capabilities, we quickly find that James Bond's impact on research discourse is much larger than the apparently meagre 674 articles from the basic search above. If we broaden the search to use Dimensions' Exact Search (one of the advanced search tools that allows more powerful, fine-grained searches of the full-text corpus behind Dimensions), we can identify more than 28,000 articles that mention James Bond. Of course, because this more advanced search covers full text, we need to be more careful with our methodology: the query needs to be modified to remove all 32 authors who are fortunate (or indeed unfortunate) enough to have the string "James Bond" contained within their names.
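
This clean-up step can be sketched in a few lines of Python. The record layout below is an illustrative assumption for the sketch, not the actual Dimensions schema:

```python
# Sketch of the de-duplication step from the full-text search: drop
# publications whose "James Bond" hit comes from an author actually named
# James Bond. The record structure is illustrative, not the Dimensions schema.

def drop_author_false_positives(records, phrase="james bond"):
    """Keep only records where no author's name contains the phrase."""
    return [
        r for r in records
        if not any(phrase in name.lower() for name in r["authors"])
    ]

records = [
    {"id": "p1", "authors": ["A. Smith"]},        # genuine full-text mention
    {"id": "p2", "authors": ["James Bond III"]},  # author-name false positive
]
kept = drop_author_false_positives(records)
print([r["id"] for r in kept])  # ['p1']
```

Applied to the raw full-text results, this is the filter that trims the 28,000-plus matches down to genuine mentions of the fictional Bond.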

In this expanded dataset, references to Bond can be more tangential – for example, as a cultural reference: Bond as a relatable example, a gateway or a framing for a set of ideas, or to quickly orient the reader to a specific era, or a set of values. Indeed, in this expanded dataset, ANZSRC FoR code 36 – Creative Arts and Writing – is no longer the dominant category, with code 47 – Language, Communication and Culture – taking the top spot. 

However, even this new dominant category only occupies 12% of the “Bondverse”, with a much greater diversity of topics playing a role, including FoR 44 – Human Society with 7.7%, 43 – History, Heritage and Archaeology 4.0%, 35 – Commerce, Management, Tourism and Services 3.7%, and 46 – Information and Computing Sciences at 3.5%. Indeed, articles in the Bondverse have been written on Gender Studies, Built Environment and Design, Political Science, Philosophy and Religion, Psychology, Marketing, Biomedical Sciences and Law, all of which are able to use James Bond as a gateway to help people relate to their topic.

The brand of Bond is so powerful that it is often mentioned through other affiliations, such as those with particular artists as in “Man vs the machine: The Struggle for Effective Text Anonymisation in the Age of Large Language Models”, where singer/songwriter Adele is the principal focus of the commentary, but where Bond receives a collateral mention; or where Bond’s connection to those wonderful gadgets and cars from the long-suffering Q means that he is a natural point of reference as in Automated Driving in Its Social, Historical and Cultural Contexts. Each year, a consistent 1000 or so articles refer to James Bond (approximately the output of a medium-sized research institution). Outlets that regularly publish articles referring to James Bond include SSRN, the Journal of Cold War Studies (MIT Press Direct), Lecture Notes in Computer Science (Springer Nature), The Historian (Taylor & Francis Online) and Nature. It is perhaps of little (Quantum of?) solace to the Journal of British Cinema and Television and Film Quarterly that they are some way down the list.

Figure 2: References in the research literature to well-known fictional characters. Source: Dimensions.

Of course, Bond is not alone as a fictional figure who has made his mark in the research literature; there are other prominent fictional characters that we use as a shorthand for cultural references. James Bond fares well in these stakes, beating more recent characters such as Ethan Hunt from the Mission: Impossible franchise, and Jason Bourne. But he has not yet attained the same level of cultural embeddedness as more established figures such as Sherlock Holmes (who even has his own adjectival form, "Holmesian"), Batman (for which our analysis, perhaps unfairly, also includes mentions of Bruce Wayne, but does remove authors with the name Bruce Wayne as well as publications from Batman University in Turkey), or indeed Mary Poppins. The one modern fictional character who seems to defy all the rules is Harry Potter, but that is for another article and a different anniversary.

Figure 2 raises another question that goes beyond Bond: despite possessing either cult status or serious literary impact, it seems that women are not getting their due as cultural gateways to support narratives in research literature. Searches for Elizabeth Bennet (from Jane Austen's Pride and Prejudice) produce a mere 1,687 research outputs, and Jane Eyre does a little better, being mentioned in almost 12,000 outputs. Hermione Granger, a significant source of inspiration for many up-and-coming researchers, is mentioned in a mere 562 publications, not yet enjoying the same level of success as her literary school friend Harry, despite being the one who does all the research in the books! Anna Karenina has given her name to an "effect", "bias" or "principle" depending on the field, all of which have made the translation of her brand to the research environment successful.

This lack of reference to female characters from fiction in the research literature is not a surprise, even though female characters are just as well drawn as male ones, often more relatable, and hence well suited to performing these key roles in research narratives, helping render research itself more relatable. This is a complex sociological issue that deserves more research. At a high level, a simple explanation may be that the male-dominated media of the past is responsible for establishing male characters in the zeitgeist, and that a male-dominated research ecosystem (also of the past) is more apt to use male characters to make its points. However, the fact that these practices endure today is something that requires more analysis and attention, at least in the opinion of this author.

Whether or not this is “No time to die” for Bond is not in question from the perspective of research literature. It is, however, clear that references to Bond serve not only narrative or contextual use cases, but invite us instead to ask more challenging questions. In the final analysis, whether he will ultimately die another day or whether he will only live twice are questions only James Bond can answer.

About Dimensions

Part of Digital Science, Dimensions is the largest linked research database and data infrastructure provider, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. www.dimensions.ai


Zooming in on zoonotic diseases – Digital Science

This blog addresses the impact of climate change on infectious diseases, in particular infectious diseases with the potential to transmit from animals to humans, also known as zoonotic diseases. To set the scene for this, we first consider the wider context of how global warming has far-reaching consequences for humans and the planet. The global changes that we are currently experiencing have never happened before, with climate change representing one of the principal environmental and health challenges. We use Dimensions to explore published research, research funding, policy documents and citation data. To help us perform a deeper analysis of the data, we access the Dimensions data through its Google BigQuery (GBQ) provision. This allows us to integrate data from Dimensions with one of the  publicly available World Bank datasets on GBQ.  

We also look at the research in conjunction with two United Nations (UN) Sustainable Development Goals (SDGs) – SDG3 Good Health and Well-being and SDG13 Climate Action – and assess how they add to the narrative. Many of the health impacts associated with climate change are a particular threat to the poorest people in low- and middle-income countries where the burden of climate sensitive diseases is the greatest. This also suggests that the impact in these regions, based on the UN SDGs, may reach beyond climate (SDG13) and health (SDG3) to affect those who live in extreme poverty (SDG1) and/or those who experience food insecurity (SDG2).

“The climate crisis is a health crisis”

Introduction

1. Climate change and zoonotic diseases

Climate change has far-reaching implications for human health in the 21st century, with significant increases in temperature extremes, heavy precipitation, and severe droughts.1 It directly impacts health through long-term changes in rainfall and temperature, climatic extremes (heatwaves, hurricanes, and flash floods), air quality, sea-level rise in low-land coastal regions, and many different influences on food production systems and water resources.2

In terms of human health, climate change has an important impact on the transmission of vector-borne diseases (human illnesses caused by pathogens transmitted by vectors such as mosquitoes and ticks), and in particular zoonotic infectious diseases (infections transmitted from animals to humans, whether by the bite of an infected arthropod or via animal hosts such as bats). This has particular relevance given the recent COVID-19 and Zika virus outbreaks. Arthropods are of major significance due to their abundance, adaptability, and coevolution with different kinds of pathogens.3

Zoonotic infectious diseases are a global threat because they can become pandemics, as we have seen in the case of COVID-19, and they are currently considered one of the most important threats to public health globally. The COVID-19 pathogen spread worldwide, with 255,324,963 recorded cases and 5,127,696 deaths as of November 2021.4


The changes in climatic conditions have forced many pathogens and vectors to develop adaptation mechanisms. For example, in the case of Ebola in Africa, climate change is a factor in the rise in cases over the past two decades, with bats and other animal hosts of the virus being driven into new areas when temperatures change, potentially bringing them into closer contact with humans.

Examples highlighting how the acceleration of zoonotic pathogens is attributable to changes in climate and ecology due to human impact are common. According to the Centers for Disease Control and Prevention (CDC), almost six out of every 10 infectious diseases can be spread from animals to humans, and three out of every four emerging infectious diseases in humans originate in animals.5 Zoonotic diseases, such as those spread by mosquitoes and related vectors, have increased in recent years. This is because the rise in global temperatures has created favourable breeding conditions for the vectors of specific pathogens, especially in less developed countries, predominantly in the Global South.6 Further, climate change is causing people's general health to deteriorate, making it easier for zoonotic infections to spread, as seen with the Zika and dengue viruses.7

These adaptations have resulted in some diseases becoming resistant to conventional treatments, their augmented resilience and survival techniques further favouring the spread of infection.

Figure 1: Effect of climatic changes on infectious diseases.8

2. Exploring links between climate change and zoonotic diseases as evidenced by mentions in policy documents

Developments in policy are generally rooted in academic research. Applying research to policy-relevant questions is increasingly important in addressing potential problems, and can often identify what has or has not been successful elsewhere. Citations to the research that underpins policy documents are known to be an important (proxy) indicator of the quality of the research carried out. The awareness and course of action taken by governments, NGOs and other health-focused institutions are evident from their activity in this area; for example, in the UK the government has recently allocated £200 million to fight zoonotic diseases.9 The actions taken are communicated through, for example, policy documents which mention the research influencing public policy decision-making in this area. Policy documents provide us with a different perspective for analysis, allowing closer proximity to 'real world', society-facing issues.

3. The SDG3 and SDG13 crossover: research outputs associated with zoonotic diseases and climate change

The UN launched the 2030 Agenda for Sustainable Development to address an ongoing crisis: human pressure leading to unprecedented environmental degradation, climatic change, social inequality, and other negative planet-wide consequences.10 There is growing evidence that environmental change and infectious disease emergence are causally linked and there is an increased recognition that SDGs are linked to one another. Thus, understanding their dynamics is central to achieving the vision of the UN 2030 Agenda. But environmental change also has direct human health outcomes via infectious disease emergence, and this link is not customarily integrated into planning for sustainable development.11

Two of the 17 UN SDGs of most relevance to zoonotic diseases and climate change are SDG3 and SDG13.

Looking specifically at SDG3, reducing global infectious disease risk is one of the targets for the Goal (Target 3.3), alongside strengthening prevention strategies to identify early warning signals (Target 3.d).12 Given the direct connection between environmental change and infectious disease risk, actions taken to achieve other SDGs also have an impact on the achievement of SDG3. Moreover, strengthening resilience and adaptive capacity to climate-related hazards and natural disasters is one of the targets for SDG13 (Target 13.1).13 The two SDGs perhaps highlight two sides of the same coin: SDG3 focuses on preventing and reducing disease risks, while SDG13 focuses on strengthening resilience to climate-related hazards (infectious disease being an obvious hazard).

Exploring the crossover between SDG3 and SDG13 using Dimensions reveals interlinkages with other SDGs, namely SDG1 No Poverty and SDG2 Zero Hunger. We know that living in poverty has negative impacts on health, and, in respect of climate change, economic loss attributed to climate-related disasters is now a reality. Experiencing hunger can be a consequence of vulnerable agricultural practices that negatively impact food productivity and production. In 2020, between 720 and 811 million people worldwide were suffering from hunger, as many as 161 million more than in 2019.14 Moreover, climate change, extreme weather, drought, flooding and other disasters progressively deteriorate land and soil quality, severely affecting the cost of food.

4. Funding of research associated with SDG3 and SDG13 – increases in SDG research funding

Scientific studies provide empirical evidence of the association between climate change and shifts in infectious diseases. Using Dimensions we can examine this evidence by looking at the impact of climate change on zoonotic diseases. We can also track the science through the lens of research outputs associated with both SDG3 and SDG13.

Being able to assess publishing and funding behaviours by comparing the Global North and Global South countries provides us with an insight into where research is both funded and ultimately published. Moreover, one question we might ask is, given that the Global South is currently hardest hit by the consequences of climate change from an infectious disease perspective, will we see changes in publishing and funding practices in the future?

Furthermore, climate change has exacerbated many influencing factors. It has generated habitat loss, pushed wild animals from hotter to cooler climates where they can mix with new animals and more people, and it has lengthened the breeding season and expanded the habitats of disease-spreading mosquitoes, ticks, etc.,15 and so we could potentially see more zoonotic infectious diseases spreading to countries in the Global North. Given these factors, and the capability of Dimensions, we can make comparisons over time and geolocation to track where changes are occurring.

Dimensions search strategy and data investigation

i. Search strategies

Research data were retrieved using Digital Science's Dimensions database and Google BigQuery (GBQ). For the initial searches we created a specific search term to identify publications associated with zoonotic/infectious diseases and climate change. Two sets of terms were used to define the search keywords: the first comprised keywords associated with zoonotic and infectious diseases, and the second was simply one word, 'Climate', as follows:

(Zoonoses OR "zoonotic diseases" OR "parasitic diseases" OR "zoonotic pathogens" OR "vector borne diseases" OR "climate-sensitive infectious diseases" OR "infectious disease risk" OR "infectious diseases") AND Climate

Figure 2: Word cloud illustrating the strength of association of research that includes both climate change and zoonotic (infectious) diseases and their variants.
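
The logic of the combined query (at least one disease term AND 'Climate') can be sketched as a minimal matcher; plain substring matching here is a simplification of Dimensions' real full-text search engine, used only to illustrate the Boolean structure:

```python
# Minimal sketch of the search logic: a document qualifies when it contains
# at least one zoonotic/infectious-disease term AND the word "climate".
# Plain substring matching stands in for Dimensions' actual query engine.

DISEASE_TERMS = [
    "zoonoses", "zoonotic diseases", "parasitic diseases", "zoonotic pathogens",
    "vector borne diseases", "climate-sensitive infectious diseases",
    "infectious disease risk", "infectious diseases",
]

def matches_query(text):
    """True when the text contains any disease term AND the word 'climate'."""
    text = text.lower()
    return any(term in text for term in DISEASE_TERMS) and "climate" in text

print(matches_query("Climate change and vector borne diseases"))  # True
print(matches_query("Infectious diseases in urban settings"))     # False
```

Note that the disjunction must be grouped before the AND is applied; without that grouping, 'Climate' would constrain only the final disease term.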

Dimensions' inbuilt SDG classification system allowed research outputs associated with SDGs to be linked both individually and in combination. On this basis we were able to add SDG3 Good Health and Well-being and SDG13 Climate Action to the search, allowing us to include outputs associated with both Goals. The main focus of the search was on peer-reviewed articles and government policy documents between 2010 and 2022. A set of 1,436 research publications was retrieved and entered into further analyses. The research outputs retrieved shared a focus on the impact of climate change on the pathogens, hosts and transmission of human zoonotic/infectious diseases.

A dataset based on the research outputs retrieved from Dimensions was created within GBQ. This allowed integration with publicly available datasets from the World Bank to ascertain low- and high-income countries and regions. The Dimensions GBQ provision also facilitates in-depth targeted analyses, allowing us to look solely at the publications resulting from our search in order to identify trends in concepts, citations, policy documents and collaborations by geographic region.
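
The integration step can be illustrated with a plain-Python sketch; the country-to-income-group mapping below is a tiny made-up subset standing in for the World Bank dataset queried on GBQ, and the publication records are illustrative:

```python
# Plain-Python sketch of the join the blog performs in Google BigQuery between
# Dimensions records and World Bank income classifications. The mapping below
# is a tiny illustrative subset, not the real World Bank dataset.

INCOME_GROUP = {"US": "high", "GB": "high", "IN": "lower-middle", "ET": "low"}

def income_groups(publication):
    """Income groups represented among a publication's author countries."""
    return {INCOME_GROUP[c] for c in publication["author_countries"] if c in INCOME_GROUP}

pubs = [
    {"id": "p1", "author_countries": ["US", "ET"]},  # high/low-income collaboration
    {"id": "p2", "author_countries": ["IN"]},
]
print(sorted(income_groups(pubs[0])))  # ['high', 'low']
```

Tagging each publication with the income groups of its authors' countries is what enables the Global North / Global South comparisons in the analyses that follow.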

ii. Findings

a) Publication timeline trends for research outputs tagged in Dimensions jointly with SDG3 and SDG13 and associated with zoonotic/infectious diseases and climate change were plotted.

Figure 3: Publications on climate change and zoonotic diseases, and their variants that have been linked to both SDG3 and SDG13 using Dimensions’ SDG classification system

Figure 3 highlights the trajectory over a 13-year period for publications associated with both SDG3 and SDG13 in Dimensions. Of note, following the implementation of the UN SDGs in January 2016, the number of publications begins to rise sharply, continuing until the end of 2021, with a dip in 2022.

b) Co-authorship analysis: Collaboration by geographic region

Figure 4: 4a) One in 40 publications from researchers in high-income countries have been co-authored with researchers from a low-income country; 4b) Two in three publications from researchers in low-income countries have been co-authored with researchers from a high-income country.

Figure 4a reveals that for every 40 publications authored in a high-income country, one was a collaboration with a researcher based in a low-income country. Figure 4b reveals that two in three publications authored by researchers based in low-income countries were collaborations with researchers based in high-income countries. We conclude that researchers in low-income countries are proportionately more likely to collaborate with researchers in the Global North than researchers in the Global North are to collaborate with researchers in the Global South. However, it is important to note that the numbers of research outputs are disproportionate between the global regions (see Table 1 below).

Authors publishing 2010-2022, by geographic income region (percentages are shares of each row's total author count):

Global South
  Low-income countries: climate change and infectious (zoonotic) diseases research 52 (0.11%); SDG13 2,818 (6.22%); SDG3 26,649 (58.85%); total authors 45,285 (100%)
  Lower-middle-income countries: climate change and infectious (zoonotic) diseases research 468 (0.03%); SDG13 85,931 (6.07%); SDG3 409,355 (28.93%); total authors 1,415,019 (100%)

Global North
  High-income countries: climate change and infectious (zoonotic) diseases research 618 (0.01%); SDG13 365,917 (4.73%); SDG3 2,337,971 (30.22%); total authors 7,736,160 (100%)
  Upper-middle-income countries: climate change and infectious (zoonotic) diseases research 2,419 (0.06%); SDG13 194,187 (4.56%); SDG3 850,954 (19.97%); total authors 4,260,966 (100%)
Table 1: Number and proportion of authors by geographic income region publishing research on climate change and infectious (zoonotic) diseases, and SDG3 and SDG13

Table 1 outlines the combined total number of authors of published research in the Global South and Global North, including the proportion of researchers publishing on our topic against the total number of researchers in each region. The figures reveal that, proportionally, the share of researchers publishing on zoonotic diseases and climate change is higher in lower-income countries than in higher-income countries. We argue that this research focus is not necessarily a niche area for Global South countries (even though their number of research outputs and activity is low in real terms). Considering the number of authors publishing zoonotic diseases and climate change research papers against the numbers of authors publishing in areas associated more generally with SDG3 and SDG13 provides a glimpse of the breadth of sustainable development research, of which our topic area is just one component.

Although the crossover with SDG3 and SDG13 is not high, it shows that the engagement of researchers in low-income countries with zoonotic diseases research is notable and contributes to research progress in this area. The contribution is better represented if we look at it proportionally. For example, the 52 researchers in low-income countries represent 8% of the number of zoonotic disease researchers in high-income countries (618), but the total number of researchers publishing overall in low-income countries (45,285) represents just 0.5% of all researchers in high-income countries (7.7 million), making the proportional contribution of low-income country researchers around 14 times greater than that of high-income country researchers in this research area.
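
This back-of-the-envelope comparison can be recomputed directly from the Table 1 figures:

```python
# Recomputing the proportional comparison from the Table 1 figures.
low_income_topic, low_income_total = 52, 45_285
high_income_topic, high_income_total = 618, 7_736_160

low_share = low_income_topic / low_income_total       # share of low-income authors on this topic (~0.11%)
high_share = high_income_topic / high_income_total    # share of high-income authors on this topic (~0.01%)
ratio = low_share / high_share
print(round(ratio, 1))  # 14.4
```

So, relative to their overall publishing populations, low-income country authors are roughly 14 times more concentrated in this topic than high-income country authors (the exact figure depends on rounding).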

c) Research publications by geographic region

Figure 5: Research outputs by year of publication pre- and post-SDG time period.

Figure 5 above shows a total of 1,419 research publications captured by Dimensions across the pre- and post-SDG period (2010-2022), broken down by country income group. A publication is counted under every income grouping in which it has at least one author; to reflect collaborations, a publication therefore appears twice if it has authors in two income groups. This applies only to the analysis of country income groups, and allows us to see any increases or decreases in collaborative behaviour. In this respect, we note that the contribution (whether through collaboration or publications of their own) from low/lower-middle-income (Global South) countries has risen both in number and as a proportion of outputs since 2010.
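
The counting rule described above can be sketched in Python; the income_groups field here is an illustrative stand-in for the output of the World Bank income classification join:

```python
# Sketch of the counting rule behind Figure 5: a publication is counted once
# in every income group represented among its authors, so a high/low-income
# collaboration contributes to both groups' totals.
from collections import Counter

def count_by_income_group(pubs):
    """Count each publication once per income group among its authors."""
    counts = Counter()
    for pub in pubs:
        for group in set(pub["income_groups"]):  # dedupe within one publication
            counts[group] += 1
    return counts

pubs = [
    {"income_groups": ["high", "low"]},   # collaboration: counted in both groups
    {"income_groups": ["high", "high"]},  # counted once under 'high'
]
print(count_by_income_group(pubs))  # Counter({'high': 2, 'low': 1})
```

This double counting is intentional: it makes shifts in cross-group collaboration visible in the per-group totals over time.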

d) Citation analysis by geographic regions

Figure 6a: Number of publications and corresponding citation counts that include authors in low- and lower-middle-income countries.
Figure 6b: Number of publications and corresponding citation counts that include authors in high- and upper-middle-income countries.

The data in Figure 6a and 6b above reveal that:

1. South-East Asia as a producer of this research is dominant in the Global South (see Fig. 6b).

2. In the Global South, South-East Asia both publishes research and favourably cites research from the same region (see Fig. 6a).

3. Research output from South-East Asia is not as highly cited by the Global North (see Fig. 6b). What is notable, however, is the overall dominance of the Global North for both research output and citation counts. We conjecture that one reason for this might be that the Global South does not have access to the same level of funding or collaboration opportunities; differences in research focus could also account for the distinction. Interest from high-income country researchers in these areas may be less pronounced than in research areas elsewhere in the Global South (e.g. Africa) where there is more collaboration, or more 'gain' for Global North countries (e.g. Ebola, Zika). For example, if India's research focus were local to aspects of zoonotic diseases that only affect that country, then it might be less likely that higher income countries would cite the research. This warrants a deeper dive into the data, but that is outside the scope of this blog.

In conclusion, it is perhaps the case that areas which are most affected by climate change and zoonotic diseases have become publication ‘hotspots’ which are not yet attractive to researchers in Global North countries.

e) Funding – by income/geography; Funder type

Figure 7: Breakdown of Country groupings by income and type of funding organisation revealed by Dimensions. 

The general trend seen in Fig. 7 above reveals government funding to be the major driving force in zoonotic diseases and climate change research across all of the country groupings. What Dimensions reveals in this respect is that governments in the Global North provide 100% of the government funding held in the Dimensions database for research on these topics in the Global South. This would perhaps explain why low-income countries in the Global South, where research infrastructure is less well funded, receive less government funding: it is awarded by the Global North. Looking at funding from non-profit sources, which include organisations such as the Bill and Melinda Gates Foundation, the Wellcome Trust and the Science and Technology Development Fund, we note that such organisations provide nearly a quarter of all research funding held in Dimensions for the Global South. As with government funding, 98% of all non-profit research funding in both regions comes from non-profit organisations in the Global North. It is interesting to note, given the focus of the research, that only a very small proportion of funding across all funder types comes from the healthcare sector. Across all other funder types included in Fig. 7, 92.5% of funding comes from the Global North (healthcare funding is included in this figure).16

f) Policy documents and their citing publications

Figure 8: Top 12 publishers of policy documents citing research on climate change and zoonotic diseases (based on our Dimensions search criteria – see above in “Search strategies”). 

In Dimensions, policy sources and document types range from government guidelines, reports and white papers to independent policy institute publications, advisory committees on specific topics, research institutes, and international development organisations. The top 12 policy publishers outlined in Fig. 8 above are those whose policy documents cite research outputs associated with climate change and zoonotic diseases. It is perhaps not unexpected that the number of publications cited by the World Health Organization is high, given its global vision to eliminate the disease burden and to reverse climate change. Zoonotic diseases are very much on the radar of the agencies concerned with global health which, given climate change, means that the spread of these diseases in the Global North is more likely.

Takeaway findings

Using Dimensions' capability to take a deep dive into research exploring zoonotic diseases and climate change in the context of the SDGs has enabled us to uncover a number of findings that are illuminating from a global perspective.

Our investigations have revealed several interesting findings, including:

  • Research publications in this area have increased more than two-fold since the implementation of the SDGs.  
  • Collaboration patterns in the Global North and Global South reveal that researchers in Global South countries are more likely to collaborate with researchers in the Global North than vice versa.
  • The total number of authors publishing research on zoonotic diseases and climate change in the lowest-income countries represents 8% of the total number of zoonotic disease researchers in high-income countries (see Table 1). Expanding this out across all research publications, the total number of researchers publishing in low-income countries represents just 0.5% of all researchers in high-income countries, making the proportional representation of low-income country researchers around 14 times greater than that of high-income country researchers. Although the absolute numbers tell a different story, we believe that depicting the data in this way provides a balanced representation of the research output.
  • Research carried out on zoonotic diseases and climate change in the lower income countries is less well cited by higher income countries.
  • The data in Dimensions highlights that government organisations in the Global North award much of the funding for research in the Global South, and likewise for funding from non-profit agencies. One explanation is that numerous organisations in the Global North, such as the Bill and Melinda Gates Foundation and the SCI Foundation, along with governments, are committed to eliminating zoonotic diseases and to helping reduce carbon emissions to reverse climate change at a global level.

Conclusion

What is apparent is that governments around the world are investing large sums of money as part of the global mission to halt the spread of animal diseases and to protect the public against zoonotic disease outbreaks before they become pandemics that pose a risk globally.

Digital Science’s Dimensions database provided us with enormous opportunities for interrogating the data to gather insights on zoonotic diseases and climate change (many more than could be included in this blog). The comprehensiveness of the database – its coverage of publications, policy documents, grant funding and SDG-associated output (among others) in the Global North and Global South – is where the most value is created. As a linked research database, the possibilities it offers for generating downstream link- and flow-analyses across geographies make it an invaluable tool for the widest possible discovery across the research ecosystem.

About Dimensions

Part of Digital Science, Dimensions is the largest linked research database and data infrastructure provider, re-imagining research discovery with access to grants, publications, clinical trials, patents and policy documents all in one place. www.dimensions.ai


White House OSTP public access recommendations: Maturing your institutional Open Access strategy – Digital Science

While the global picture of Open Access remains something of a patchwork (see our recent blog post The Changing Landscape of Open Access Compliance), trends are nevertheless moving in broadly the same direction: over the past decade, global publishing has shifted from 70% of all outputs being closed access to 54% now being open access.

The White House OSTP’s new memo (aka the Nelson Memo) will see this trend advance rapidly in the United States, stipulating that federally-funded publications and associated datasets should be made publicly available without embargo.

In this blog post, Symplectic’s Kate Byrne and Figshare’s Andrew Mckenna-Foster start to unpack what the Nelson Memo means, along with some of the impacts, considerations and challenges that research institutions and librarians will need to address in the coming months.

Demystifying the Nelson Memo’s recommendations

The focus of the memo is on ensuring free, immediate, and equitable access to federally funded research.

The first clause of the memo is focused on working with the funders to ensure that they have policies in place to provide embargo-free, public access to research. 

The second clause encourages the development of transparent procedures to ensure scientific and research integrity is maintained in public access policies. This is a complex and interesting space, which goes beyond the remit of what we would perhaps traditionally think of as ‘Open Access’ to incorporate elements such as transparency of data, conflicts of interest, funding, and reproducibility (the latter of which is of particular interest to our sister company Ripeta, who are dedicated to building trust in science by benchmarking reproducibility in research).  

The third clause recommends that federal agencies coordinate with the OSTP in order to ensure equitable delivery of federally-funded research results in data. While the first clause mentions making supporting data available alongside publications, this clause takes a broader stance toward sharing results. 

What does this mean for institutions and faculty?

The Nelson memo introduces a clear set of challenges for research institutions, research managers, and librarians, who now need to consider how to put in place internal workflows and guidance that will enable faculty to easily identify eligible research and make it openly available, how to support multiple pathways to open access, and how to best engage and incentivize researchers and faculty. 

However, the OSTP has made very clear that this is not in fact a mandate, but rather a non-binding set of recommendations. While this certainly relieves some of the potential immediate pressure and panic around getting systems and processes in place, the move clearly signals the direction of travel that has been communicated to federal funders.

Funders will look at the Nelson Memo when reviewing their own policies, and seek alignment when setting their own policy requirements that drive action for faculty members across the US. So while the memo does not in itself mandate compliance for institutions, universities, and research organizations, it will have a direct impact on the activities faculty are asked to complete – increasing the need for institutions to offer faculty services and support to help them easily comply with their funders’ requirements.

How have funders responded so far? 

We are already seeing clear indications that funders are embracing the recommendations and preparing next steps. Rapidly after the announcement, the NIH published a statement of support for the policy, noting that it has “long championed principles of transparency and accessibility in NIH-funded research and supports this important step by the Biden Administration”, and over the coming months will “work with interagency partners and stakeholders to revise its current Public Access Policy to enable researchers, clinicians, students, and the public to access NIH research results immediately upon publication”. 

Similarly, the USDA tweeted their support for the guidance, noting that “rapid public access to federally-funded research & data can drive data-driven decisions & innovation that are critical in our fast-changing world.”

How big could the impact be?

While it will take some time for funders to begin to publish their updated OA Policies, there have been some early studies which seek to assess how many publications could potentially fall under such policies. 

A recent preprint by Eric Schares of Iowa State University [Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017–2021] used data from Dimensions to identify and analyse publications with federal funding sources. Schares found that:

  • 1.32 million publications in the US were federally funded between 2017-2021, representing 33% of all US research outputs in the same period. 
  • 32% of federally funded publications were not openly available to the public in 2021 (compared to 38% of worldwide publications during the same period). 

Schares’ study included 237 federal funding agencies – due to the removal of the $100m threshold, many more funders now fall under the Nelson Memo than under the previous 2013 Holdren Memo. This makes it likely that disciplines that previously were not impacted will now find themselves grappling with public access requirements.

Source: Impact of the 2022 OSTP Memo: A Bibliometric Analysis of U.S. Federally Funded Publications, 2017–2021: https://ostp.lib.iastate.edu

In Schares’ visualization, where each dot represents a research institution, two main groupings emerge. The first is a smaller group made up of the National Laboratories: they publish a smaller number of papers overall, but are heavily federally funded (80–90% of their works). The second group is a much larger cluster representing universities across the US. Those organisations have 30–60% of their publications federally funded, but build from a much larger base number of publications – meaning they will likely have many faculty members who now need support.

Where do faculty members need support?

According to the 2022 State of Open Data Report, institutions and libraries have a particularly essential role to play in meeting new top-down initiatives, not only by providing sufficient infrastructure but also support, training and guidance for researchers. It is clear from the findings of the report that the work of compliance is wearing on researchers, with 35% of respondents citing lack of time as a reason for not adhering to data management plans, and 52% citing finding time to curate data as the area where they need the most help and support. 72% of researchers indicated they would rely on an internal resource (either colleagues, the Library or the Research Office) were they to require help with managing or making their data openly available.

How to start?

Institutions that invest now in building capacity to support open access and data sharing for researchers will be better prepared for the OSTP’s 2025 deadline, avoiding any last-minute scramble to support their researchers in meeting this guidance.

Beginning to think about enabling open access can be a daunting task, particularly for institutions that don’t yet have internal workflows or appropriate infrastructure set up, so we recommend breaking down your approach into more manageable chunks:

1. Understand your own Open Access landscape 

  • Find out where your researchers are publishing and what OA pathways they are currently using. You can do this by reviewing your scholarly publishing patterns and the OA status of those works.
  • Explore the data you have for your own repositories – not only your own existing data sets, but also those from other sources such as data aggregators or tools like Dimensions.
  • Begin to overlay publishing data with grants data, to benchmark where you are now and work to identify the kinds of drivers that your researchers are likely to see in the future. 
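As a sketch of step 1, the pathway-by-year breakdown can be computed from any export of your publications that carries an OA status field. The records and field names below are illustrative placeholders, not a real Dimensions or repository schema:

```python
from collections import Counter, defaultdict

# Hypothetical publication records for an institution: (year, oa_status).
# The status values loosely follow common OA terminology (gold, green, closed).
pubs = [
    (2020, "gold"), (2020, "closed"),
    (2021, "green"), (2021, "gold"), (2021, "closed"),
    (2022, "gold"),
]

# Count outputs per OA pathway, per year.
by_year = defaultdict(Counter)
for year, status in pubs:
    by_year[year][status] += 1

# Convert counts to shares -- a first benchmark of which pathways
# your researchers currently use and how that changes over time.
oa_share = {
    year: {status: n / sum(counts.values()) for status, n in counts.items()}
    for year, counts in by_year.items()
}
```

The same grouping can then be overlaid with grants data (step 1, third bullet) by keying records on funder as well as year.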

2. Review your system capabilities

  • Is your repository ready for both publications and data?
  • Do you have effective monitoring and reporting capabilities that will help you track engagement and identify areas where your community may need more support? Are your systems researcher-friendly – how quickly and easily can a researcher make their work openly available?

3. Consider how you will support your research ecosystem 

  • Identify how you plan to support and incentivize researchers, considering how you will provide guidance about compliant ways of making work openly available, as well as practical support where relevant.
  • Plan communication points between internal stakeholders (e.g. Research Office, Library, IT) to create a joined-up approach that will provide a shared and seamless experience to your researchers.
  • Review institutional policies and procedures relating to publishing and open access, considering where you are at present and where you’d like to get to.

How can Digital Science help? 

Symplectic Elements was the first commercially available research information management system to be “open access aware”, connecting to institutional digital repositories to enable frictionless open access deposit for publications and accompanying datasets. Since 2009 – beginning with an initial integration with DSpace and later expanding our repository support to Figshare, EPrints, Hyrax, and custom home-grown systems – we have partnered with and guided many research institutions around the globe as they work to evolve and mature their approach to open access. We have deep experience in building out tools and processes that help universities meet mandates set by national governments or funders, report on fulfilment and compliance, and engage researchers in increasing levels of deposit.

Our sister company Figshare is a leading provider of cloud repository software and has been working for over a decade to make research outputs of all types more discoverable and reusable, and to lower the barriers to access. Meeting and exceeding many of the ‘desirable characteristics’ set out by the OSTP for repositories, Figshare is the repository of choice for over 100 universities and research institutions looking to ensure their researchers are compliant with the rising tide of funder policies.

Below is an example of the type of Open Access dashboard that can be configured and run using the various collated and curated scholarly data held within Symplectic Elements.

In this example, we are using Dimensions as a data source, building on data from Unpaywall about the open access status of works within an institution’s Elements system. Using the data visualizations within this dashboard, you can start to look at open access trends over time, such as the different sorts of open access pathways being used, and how that pattern changes when you look across different publishers or different journals, or for different departments within your organization. By gaining this powerful understanding of where you are today, you can begin to think about how to best prioritise your efforts for tomorrow as you continue to mature your approach to open access. 

Growing maturity of OA initiatives over time – not a “one and done”.

You might find yourself at Level 1 right now where you have a publications repository along with some metadata, and you’re able to track a number of deposits and do some basic reporting, but there are a number of ways that you can build this up over time to create a truly integrated OA solution. By bringing together publications and data repositories and integrating them within a research management solution, you can enter a space where you can monitor proactively, with an embedded engagement and compliance strategy across all publications and data. 

For more information or if you’d like to set up time to speak to the Digital Science team about how Symplectic Elements or Figshare for Institutions can support and guide you in your journey to a fully embedded and mature Open Access strategy, please get in touch – we’d love to hear from you.

This blog post was originally published on the Symplectic website.




Will we only ever dream of endless energy? – Digital Science

The recent nuclear fusion ignition event at the National Ignition Facility at the Lawrence Livermore National Laboratory in California is a triumph of modern science and of the persistence of scientists who continue to strive to solve some of the most difficult technical and engineering challenges of a generation. However, it is important to see this development in a broader context of global events as well as the research environment that has been created to support the nuclear energy developments upon which society is increasingly likely to depend in the coming years.

Did we vote for this?

It may be argued that geopolitics has been driven by an energy agenda since the late 19th century, when the industrial revolution had moved solidly beyond the borders of the UK and countries began competing for global resources to fuel their burgeoning industrial economies. As our economies have become larger so has our need for energy. Most recent wars (including the one in Ukraine) have been about control of energy resources – oil or gas. As supplies become more scarce or more expensive to extract, tensions will rise. While voters do not vote (in most cases) directly to support a specific energy-based geopolitical stance, in recent years energy has become a more overt topic in elections.

Even in countries where energy independence is a critical geopolitical issue, green parties do not command a large percentage of the vote, nor do mainstream political parties necessarily have well-articulated policies related to energy independence. In Germany, a country with significant foreign energy dependencies (63.7%) that have appeared in the news this year, the Greens garnered 20.5% of the vote in the 2021 federal elections. Meanwhile, in The Netherlands and Belgium next door, countries with even higher percentage dependencies on foreign energy (68.1% and 78% respectively) than Germany, green parties have begun to slowly gain ground.

This is perhaps due to the fact that our homes have, until this winter, remained warm at a reasonably affordable cost. However, the phase change that we have all experienced in 2022 (for some very painfully) is a sign of things to come. Indeed, if electorates were to cast their votes more directly based on the growing issues of energy dependence, we might see a significant change in the political landscape in the next few years. Trading blocs like the EU may become more robust in their energy policy – we have already seen the establishment of the EU Energy Platform to start to mitigate the effects of dependency on Russian gas. Being outside such a bloc in current times appears foolish at best.

Enter the apparent saviour of the day, courtesy of a nuclear fusion experiment from the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California. Hailed by a number of media outlets as a solution to our energy problems, we need to be careful about being overly optimistic. Anyone who has had an interest in nuclear fusion knows that we have been 30 years away from commercial nuclear fusion for the last 40 years. Indeed, it will come as a surprise to precisely no one who knows me that the seminar I gave in English class 31 years ago as a 14-year-old was on tokamak fusion. I clearly recall stating that nuclear fusion was 30 years away. Which just goes to show – I was wrong!

But, this all sounds a bit dangerous…

Perhaps unsurprisingly, some voters have been worried about the risks of developing nuclear solutions. Harnessing the energy source that, uncontrolled, underlies the most destructive weapons our species has ever produced – and which powers the Sun, and consequently our entire lives – is an elusive and sometimes perilous pursuit. Classic science fiction such as Asimov’s Robot series, and TV shows like the 1980s adaptation of Buck Rogers, have painted vivid post-apocalyptic pictures in our minds of both the promise and the peril of atomic power. For many, fusion is not just a technology but a cultural phenomenon. As a technology it looms large in our collective consciousness, partly because it has been in development for so long and holds such power for both positive and negative outcomes. For a young researcher, it is a beguiling field of study – some of the best minds on the planet, for several generations, have wrestled with taming nuclear fusion.

Figure 1: Timeline of the key developments in nuclear fusion research.

Our knowledge of both forms of nuclear energy – fission and fusion – originates in Einstein’s famous observation that energy and mass are equivalent: E = mc². In the case of nuclear fission (the process used in current nuclear power plants and in the earliest atomic weapons), heavy elements such as uranium and plutonium are used. A heavy element is one with many protons and neutrons in the nucleus of each atom. A configuration of many protons and neutrons (beyond 92 protons) is unstable, which means that the energy required to keep the nucleus together is more than if the atom were to split into two (or more) lighter elements. Just a little interaction with, say, a free neutron is enough to break the nucleus of some heavy elements into the nuclei of two or more lighter elements. As this process takes place, energy is given off, which can be converted to heat to turn a turbine. The downside of nuclear fission is that you end up with residual elements that, while more stable than the original atoms in the reaction, are still radioactive and remain so for many years. Such waste products require careful storage in locations where they cannot damage living organisms.

Figure 2: Nuclear Fission versus Nuclear Fusion processes. In the left pane, a heavy element is broken apart via interaction with a neutron into two smaller (but still radioactive) elements and an amount of energy. In the right pane, a deuterium nucleus (a proton and a neutron) and a tritium nucleus (a proton and two neutrons) are brought together to form helium (two protons and two neutrons), a “spare” neutron and energy. In both cases, the right side of each pane is “energetically favourable”, which is to say that the configuration of protons and neutrons on the right of the interaction requires less energy than the configuration on the left, which means that energy is released.

Nuclear fusion, however, is a process that takes place at the other end of the periodic table, with very light elements. Per unit of fuel mass, a fusion reaction releases roughly four times more energy than a fission reaction. In addition, the direct by-products are not radioactive – just helium, some neutrons, and energy. In essence, nuclear fusion is a clean energy source. Such is its promise that some of the best minds in physics have worked on nuclear fusion over the last century; today they are supplemented by AIs, which help to optimise calculations and design the next generation of test reactors.
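As a back-of-envelope illustration of E = mc² at work, the energy released by a single deuterium–tritium fusion event (the reaction in Figure 2) can be computed from the mass defect, using rounded published masses in atomic mass units:

```python
# D + T -> He-4 + n: the mass lost in the reaction appears as energy,
# per E = mc^2. One atomic mass unit is equivalent to ~931.494 MeV.
U_TO_MEV = 931.494

m_deuterium = 2.014102  # atomic mass units, rounded
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV  # energy released per fusion event
```

The result, about 17.6 MeV, matches the commonly quoted figure for D–T fusion.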

There are many approaches being developed as candidates for a commercial nuclear fusion reactor. The main ones include: magnetic confinement fusion (the type involving ring-style devices – probably the most famous until the recent announcement from NIF); inertial confinement fusion (the type reported on recently); laser-driven fusion; magnetised-target fusion; acoustic inertial confinement fusion; Z-pinch fusion; muon-catalysed fusion; and nuclear reaction control fusion. Each of these approaches has a different risk profile and different pros and cons, but a successful solution may well need learnings from several of these technologies.

While the experiment recently reported from the NIF is a significant step towards nuclear fusion, it is not actually a “break even” event – if you include all the energy used in creating the reaction, the reaction still didn’t produce more energy than was put in. There is still a long way to go, but there may be value in making something of this step. Returning science to the public consciousness in a positive way, especially in the face of recent developments in Ukraine and their fallout in the oil industry, may have its benefits. But it will be important not to overplay the hand – presenting fusion as “just around the corner” can backfire badly.

OK, so when will we have it?

Given the increasing importance of this technology to the future of humanity, one would expect to see a significant amount of research funding going into the various different routes to fusion. And while the amount is substantial it is, perhaps, less than might be expected.

Global competitive grant funding for fusion research is at the level of around USD $800 million per year. Put another way, the US spends around USD $45 billion per year on the total budget of the National Institutes of Health (NIH) and the world spends around USD $32 billion annually on Sustainable Development Goal-related competitive research grants.

To be clear, I am not arguing that health research is anything less than critical, or that SDG-related research is not an excellent way to spend public money. However, one might expect that an effectively limitless, clean energy source – one that would reduce global dependency on fossil fuels, contribute considerably to reducing both greenhouse gases and the cost of living, and also ease global geopolitical tensions – might warrant more than the roughly 1% share it currently receives of the combined annual funding for these other worthy and critical initiatives.
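Putting the funding figures above side by side:

```python
# Annual competitive funding figures quoted above, in USD billions.
fusion_funding = 0.8    # ~$800 million per year for fusion research
nih_budget     = 45.0   # total NIH budget
sdg_funding    = 32.0   # worldwide SDG-related competitive grants

# Fusion's share of the combined NIH + SDG figures -- roughly 1%.
share_of_combined = fusion_funding / (nih_budget + sdg_funding)
```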

I don’t want to address issues of lobbying in this piece as the point is well known, rather I want to finish by exploring two points that are closer to research. Firstly, the observation that metrics are powerful drivers of behaviour and, secondly, that links to immediacy seem to be critical in decision making.

Over the last few years, the global nuclear fusion community has consistently produced around 4,000–5,000 research papers per year. Over the same period, the biomedical research community has produced between 800k and 1.25m papers per year, and SDG communities have published between 400k and 1m articles per year. A naive argument would be that fusion papers look expensive relative to papers in either SDG-related research or biomedicine. While it is objectively clear that these areas of research are not comparable in nature, the incentives in the research world are heavily skewed toward paper production, which tends to disadvantage nuclear fusion research. Of course, papers are only one measure of research output. The recent announcement with which I started this blog is a very tangible output of research, and its media coverage is positive, but such events are few and far between and hence don’t easily feed a fast-moving research narrative.

At a more fundamental level, immediacy plays a critical role in this discussion. It took the better part of 20 years to build momentum for research and funding of SDG-related research, but similar levels of research output and funding were achieved for COVID research in just 24 months. The threat of not understanding the SDGs is not immediately evident in the lives of those with established advanced economies or large continental territories that are not so directly at risk from rising water levels or energy challenges – it has not been a burning platform for them. While the threat of COVID is not as existential or as long-lived for humanity as either SDGs or the emerging energy crisis, the immediacy of the issue in the G20 made the topic instantly appealing both for funding and for publication.

At its heart, nuclear fusion suffers from a perception problem – it is always 30 years away. Because we don’t associate everyday challenges such as energy prices, war, and economic stagnation with not having nuclear fusion among our power options, we don’t make research decisions or political choices based on funding and solving this problem. To change that, we need long-term alignment across the political spectrum that strives for nuclear fusion, with consistent funding and clear strategic intent.

If the NIF announcement leads to a broad realisation that we are getting closer, and voters – and hence politicians – take note of the seriousness of our situation, then perhaps another 30 years will not be needed.

Funding levels and publication counts in this article are sourced from Dimensions.


Pandemic exposes critical gaps in Japan’s health research – Digital Science

While Japan has weathered the COVID-19 storm better than most, new data shows Japan’s infectious diseases research effort has been lagging behind for years, drawing criticism from the country’s researchers.

“We are standing on the brink of a global crisis in infectious diseases. No country is safe from them. No country can any longer afford to ignore their threat.”

Dr Hiroshi Nakajima (1928–2013) Former Director-General of WHO (1996)

These prophetic words from the late Dr Hiroshi Nakajima headlined the release of the World Health Organization’s World Health Report 1996, warning of “fatal complacency among the international community” and urging preventative action in the face of impending crises for the globe. Just one generation later, all nations globally have been subjected to a one-in-100-year pandemic that has so far killed more than 6.6 million people and infected more than 650 million.

One wonders what Dr Nakajima would say of his home country, Japan, which has fared better during the COVID-19 pandemic compared with most nations, with 52,000 dead among more than 26 million cases (source: Johns Hopkins University). But new data and the voices of key researchers suggest Japan has been ignoring Dr Nakajima’s warning – and the threat – for too long, by not investing enough in infectious diseases research, despite Japan’s economic status and various strengths in research and innovation.

This exclusive analysis – using data from the Dimensions database of 130 million publications and journals included in the Nature Index – builds a picture of how infectious diseases research in Japan has stalled over the last few decades, and in particular in the years leading up to and including the start of the pandemic. It’s data that comes as no surprise to some of Japan’s leading researchers in the field.

“Cancer is king”

Concerns about the level of Japanese government funding for infectious diseases research have been held by scientists in Japan’s top universities, hospitals and research centres for years.

“Cancer is king, and the genome is queen. Infectious disease is just a pathogen,” quips Professor Makoto Suematsu, Dean of the School of Medicine at Keio University, one of Japan’s research hospital universities. Professor Suematsu, who is keenly interested in biology and public health, describes funding in Japan for infectious diseases research as being “very weak” and “very minor”, the majority of which goes to the government-controlled National Institute of Infectious Diseases (NIID) – with not enough to share around.

Exactly why “cancer is king” is a matter of demography. “The Japanese are suffering from an ageing population, so the budget has increased for taking care of old people. The budget for the elderly is huge – imagine it is a watermelon and one seed is the budget for infectious diseases research. But it [ageing] is a big problem – two-thirds of the Japanese population will be over 60 in 2040,” Professor Suematsu says.

He says funding is also hampered by regulations within Japan and a lack of private investment: “Unlike in the UK, there is no Wellcome Trust or similar bodies.

“Only prestigious institutions get funding from the government, so Tokyo University for example gets lots of funding. Keio and other private universities get limited government support so it’s quite tough for staff supporting COVID research.”

His comments are echoed by Dr Norio Ohmagari, Director of Disease Control and Prevention at Japan’s National Center for Global Health and Medicine (NCGM). He also is not surprised to learn that the data shows Japan lagging behind on infectious diseases research.

“There is little interest in infectious diseases in Japanese medical research,” says Dr Ohmagari, who is also Head of the WHO Collaborating Centre for Prevention, Preparedness and Response to Emerging Infectious Diseases.

“I have been an independent infectious disease physician for 18 years now. During this time, however, infectious disease research has been at a low ebb. The development of new drugs has gradually declined in activity.”

Dr Ohmagari confirms that the ageing population’s health is taking priority: “There is a high level of interest in regenerative medicine, genome medicine, cardiovascular disease, which has a large number of patients, lipid disorders and diabetes mellitus.”

Among the indicators of low research activity in the field, Dr Ohmagari points to a lack of collaboration between Japanese infectious disease researchers and colleagues internationally.

“I have the impression that there are not many researchers actively collaborating with foreign countries, perhaps because there are not many researchers in infectious diseases to begin with. Personally, I am conducting research in Vietnam, and I have exchanges and joint clinical trials with researchers in Europe and the United States,” he says.

Face masks on sale in Japan.

Professor Masanori Fukushima raises a further issue: the pandemic could have enabled Japanese researchers to better understand the impact on patients, but due to a lack of access to patients at research hospitals this hasn’t been possible on a large scale.

“COVID-19 patients are not concentrated in university hospitals with research capabilities, and the annual number of COVID-19 patients at university hospitals itself is small,” says Professor Fukushima, Representative Director of the Learning Health Society Institute (LHSI) and Professor Emeritus at Kyoto University.

“Patients admitted to university hospitals are referred from other hospitals, seriously ill, and typically emergency cases, making it difficult for university hospitals to establish a system for continuous research on them.

“COVID-19 patients admitted to university hospitals are not treated by specialists in infectious diseases but by specialists in respiratory medicine and cardiology, as respiratory management is the primary treatment for these patients. In addition, hematologists will be in charge of treating patients with thrombosis; COVID-19 is out of the scope of the study due to their expertise (respiratory medicine, cardiology, and hematology).”

Professor Fukushima says that according to the Ministry for Health and Welfare’s policy, patient samples and other data have been concentrated at the NIID, which is under the direct control of the Ministry. “This makes it difficult for university hospitals with research capabilities to plan and develop virological studies,” he says.

He also says that expert advice has also not always been followed. In spring 2021, Professor Fukushima published a paper (Asking about measures to combat the novel coronavirus – Clinical recommendations: COVID-19 control – Critical appraisal and proposals; Rinsho Hyoka (Clinical Evaluation), May 2021) in which he proposed that all strategic and practical measures against COVID-19 in Japan be left to medical associations and university hospitals, and that specialized hospitals be created or designated and patients concentrated there. “Together with Dr Yokokura, the former president of the Japan Medical Association, I submitted the report to the government, the heads of local governments, and the media, but there has been little response so far,” he says.

“Japan used to be at the forefront of vaccination”

Despite these concerns about the lack of support for infectious diseases research, some scientists are quick to point out that Japan has fared relatively well during the pandemic compared with many nations, and in some ways has handled it better.

Professor Suematsu says: “Despite the size of the limited budget, researchers have very actively investigated infectious diseases. Data sharing has been good with COVID, but it should have been much better with infectious diseases.” 

One leading researcher who was actively involved in the effort to prevent the spread of COVID-19 in Japan is Professor Hiroaki Kitano, President & CEO of Sony Computer Science Laboratories, Inc. and Professor at Okinawa Institute of Science and Technology Graduate School (OIST), who was contacted by the Japanese government to work with the Office for Promotion of Countermeasures against Novel Coronavirus Infections.

Professor Kitano assembled a team of researchers including Dr Makoto Tsubokura of RIKEN who carried out a series of high-tech simulations to better understand and predict the impact of the contagion on Japanese people within real-world environments, including some important work on the spread of the virus in indoor environments, such as restaurants and bars, and on trains. He has also been involved in international collaborations to produce a global “COVID-19 Disease Map”. See below: Research critical to Japan’s success.

But even Professor Kitano says Japan’s lack of infectious diseases research has impaired the country’s ability to respond to the COVID-19 pandemic. “We’ve failed to create any effective vaccine so far,” he says. “We haven’t got a domestically developed vaccine approved yet – even now.

“Japan used to be at the forefront of vaccination; we had a very strong vaccination program, and very strong companies that would create vaccinations. Many companies have actually withdrawn from the vaccine business, so that has substantially reduced the capability for manufacturing and quick response. At the same time, the research funding for infectious diseases has not been that abundant.”

Glove dispensers in a Japanese restaurant.

Perhaps recognizing that it had been slow to develop its own vaccines and needing to catch up, the Japanese government recently pledged US$2 billion for vaccine research against future epidemics.

Professor Kitano’s assessment was that it could take up to three years before Japan has its own approved and manufactured COVID-19 vaccine. On that front, he says: “The game is pretty much over, unless vaccines desired for the next stage of infection control – such as nasal vaccines potentially more effective for infection prevention – are to be developed.”

Nevertheless, Professor Kitano praised the Japanese government for its handling of vaccine contracts with the major pharmaceutical companies, and for its leadership in appointing Mr Taro Kono as a Minister in charge of vaccinations. “I think the end result is that their actions saved many people’s lives – I’m sure of that,” he says.

Japan falls behind – what the data shows

In early 2022, data from Nature Index and Dimensions started to point to a disparity between Japan’s reputation as one of the world’s leaders in research and its output on infectious diseases, which was surprisingly low compared with other leading nations. In the case of COVID-19, it was dramatically lower still.

But Japan itself is highly regarded for its research, so how did this occur?

Stung by criticism over a lack of research funds from high-profile researchers such as 2012 Nobel Prize winner Shinya Yamanaka – and perhaps cognisant of league tables that show Japan slipping behind arch-rivals South Korea and China in publications – the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced a major overhaul of research funding in 2017, followed in 2020 by the establishment of a ¥4.5 trillion (US$43 billion) research fund. However, researchers such as Yamanaka have pointed out that funding allocation can be uneven, with some areas losing out to other, hotter topics.

When we look at how these factors play out on the world stage, we can see in data from Nature Index that Japan’s overall research output declined steadily from 2015 to 2019, rose in 2020, then resumed its decline in 2021 and into 2022 (see Figure 1).

Figure 1: All research outputs from Japan 2015-2021 that are tracked by Nature Index. (2022 data is for a 12-month period to 30 September 2022.) Output is measured by Japan’s share of authorship of articles in the index.

Data derived from Dimensions shows that while Japan ranked fifth in the world in terms of all article outputs in 2019-2021 (see Figure 2), it was ranked below 11th globally for infectious diseases articles (Figure 3).

All research articles in Dimensions (2019-2021)

United States 2,357,592
China 2,141,367
United Kingdom 729,785
Germany 612,787
Japan 568,577
India 565,016
Italy 402,965
Russia 400,272
Canada 388,198
France 383,458
Figure 2: World ranking of all research outputs from 2019-2021. (Source: Dimensions.)

All infectious diseases publications* in Dimensions (2019-2021)

United States 179,465
China 74,010
United Kingdom 61,122
India 42,280
Italy 33,113
Germany 26,830
Brazil 26,053
Canada 25,252
France 24,312
Spain 23,013
Australia 22,557
Japan 18,737
Figure 3: World ranking of all infectious diseases research outputs from 2019-2021. (Source: Dimensions.)
* includes articles, preprints and conference proceedings.

To put Japan’s research output across all areas in context: between 2015 and 2021, Japan accounted for 3.8% of total publications (nearly 1.3 million, according to Dimensions data), making it the fifth biggest producer in the world. This position rises to fourth in cancer research, with 5.5% of publications, but drops markedly to below 11th for infectious disease research, at only 2.5%; for 2020 and 2021 alone, it drops further to 2.0% (see Figure 4).

Japan 2015-2021 – publications* and rank

Field                                  Japan publications    Global total    % of global    Rank
All fields                             1,279,452             34,108,770      3.8            5th
Cancer                                 105,924               1,926,313       5.5            4th
Infectious diseases                    31,613                1,268,300       2.5            <11th
Infectious diseases (2020-21 only)     15,206                745,496         2.0            <11th
Figure 4: Global ranking and comparison of all Japanese publications, compared with publications about cancer and infectious diseases. (Source: Dimensions.)
* includes articles, preprints and conference proceedings.

When we flesh this out with the performance of other countries in related areas, we can see that while China, the UK and Germany have surged ahead in recent years when it comes to the output of research across 90 different infectious diseases in Nature Index (tracked in Dimensions), Japan has fallen behind the likes of Switzerland and The Netherlands. Even more starkly, it has failed to match the huge spikes in coronavirus-related research seen in other major industrialized countries (in Figure 5, the US has been removed because it is so far ahead).

Figure 5: Global comparison (excluding the US) of infectious diseases research articles. (Source: Nature Index journals, tracked in Dimensions. NB: articles tracked in Nature Index journals in Dimensions include review articles and news, whereas Nature Index itself tracks only primary research articles; the trends for the two are nevertheless very similar.)

The search strings used to draw out infectious disease articles from Nature Index journals in Dimensions were based on those used in the 2021 Nature Index supplement on infectious disease: https://www.nature.com/nature-index/supplements/nature-index-2021-infectious-disease/tables/dimensions-search-strings

As stated earlier, much of the Japanese government’s funding for infectious diseases research is directed to the National Institute of Infectious Diseases (NIID). Yet despite being regarded by Japan’s Ministry of Health, Labour and Welfare (MHLW) as one of the country’s top institutions for infectious diseases, NIID does not even appear in the top 10 of Japanese institutions by number of publications on COVID in 2020 and 2021, with only 287 articles out of Japan’s total of 14,960 – just 1.9% of the country’s output – while the University of Tokyo had 1,417 articles, or 10% of the total.

Patents pending?

Further to the earlier criticism of Japan’s reduced vaccine development capacity, patent data from the first two calendar years of the pandemic shows that Japan’s activity has mirrored its research performance, ranking 11th in the total number of COVID-19 patents recorded. The countries and regions ahead of it, however, are quite different, with South Korea, India and Taiwan all well ahead of Japan (see Figure 6).

COVID-19 patents recorded in Dimensions (2020-2021)

United States 7,254
China 4,326
South Korea 1,883
India 1,600
Germany 804
Spain 524
United Kingdom 521
Taiwan 421
Canada 403
France 359
Japan 346
Figure 6: World ranking of all COVID-19-related patents recorded, 2020-2021. (Source: Dimensions.)

“We must prepare for the next pandemic.”

While COVID-19 isn’t showing any signs of going away, what lessons can Japan learn from its experience? And what does the future hold for its infectious diseases research and collaborations?

The recent announcement of a concerted vaccination research program to protect against future epidemics in Japan will see a US$2 billion injection of funds into this critical area, which is no doubt welcome news. And while the experts say there needs to be increased government funding for research, that’s not their sole focus.

Dr Ohmagari says despite the lack of infectious diseases research being conducted in Japan, the country already has a good base to build upon. “I think the level of Japanese research on infectious diseases, especially basic research, is high by global standards. However, epidemiological and clinical research is not so active. The number of researchers is small,” he says.

The pandemic might already be spurring on that change: “In recent years, young researchers have gradually become interested in clinical and research work on infectious diseases. I hope that they will quickly build up their strength and produce results.”

Professor Suematsu says Japan must learn from the research and healthcare systems in place in other countries. In particular, he’s “very impressed” with the UK’s approach to foster researchers with integrated biotechnology training, something he says “has never happened in Japan”. He also envies the UK government’s central information overview and a network of data sharing.

Professor Kitano agrees that improved data sharing needs to be an outcome from the pandemic. He also proposes that the government pool all of its experts and learn from their collective experience, “in case the next thing comes”.

“That structure is yet to be seen but I am proposing that we need to have this – a group of people who have gone through this kind of ‘wartime emergency’ and understand how chaotic things can be.”

He says this group would be “more of a permanent structure, to provide the government with expert advice next time we have a pandemic”.

“There is a stronger awareness that Japan may need to do better on this front for the benefit of the population,” he says.

Dr Ohmagari says Japan needs to be ready now for what’s next. “COVID-19 has revealed that there is room for improvement in research and development in the field of infectious diseases in Japan. We must prepare for the next pandemic,” he says.

“We have already started to build a system in terms of policy in Japan. However, the same problem was pointed out after the 2009 pandemic influenza, but no measures had been taken. We must reflect on this. We must continue to promote these policies without interruption.

“This will require political will backed by a deep understanding of the public. And our generation of researchers must do our best to ensure that this trend will never be halted,” he says.

In the words of the late Dr Nakajima, Japan must learn the lessons of its past or risk “fatal complacency”.

In his book How to Prevent the Next Pandemic, Bill Gates suggests some harsh lessons the world should learn from its collective experience of COVID-19. Gates famously published a paper in the New England Journal of Medicine in 2015 expressing concern that a worldwide pandemic could cost millions of lives and trillions of dollars. While Gates is critical of much of what happened before and during the pandemic, he reserves praise for some countries’ handling of the chaos. In particular he singles out Japan, referring to the country as the “King of masks”. Japan’s cultural differences have been its salvation.

In 2022, the world has, for the most part, tried to move on from the COVID-19 pandemic. In many countries, to walk around the streets of busy cities now, one would hardly notice that something so monumental had happened. One or two people wearing masks and some faded signs on shop windows pointing out abandoned policies for customers are all that is left of those days not so long ago when towns and cities were under lockdown.

In Japan, however, things are different. While restrictions are easing, many rules are still in effect, and mask-wearing is ubiquitous. Measures such as plastic panels between diners in restaurants and donning of plastic gloves when collecting food at buffets still persist in Tokyo and other major cities – measures that were ditched long ago in other countries, if they were ever adopted in the first place. Japan and its strict procedures have received more coverage than most in the global media, thanks in part to its relatively good record on COVID-19, but also because it hosted the single biggest global event of 2021 in the shape of the Tokyo Olympics, delayed from 2020 at the height of the pandemic, and held with hardly any spectators from outside Japan. The resolute approach to put on the Games no matter what, and the mandate of strict adherence to social distancing and other measures, meant that Japan was put under a huge amount of scrutiny in the Western media, which was intrigued by how the country and its government approached the event.

When it came to a critical test, the stereotypical image of Japan as an ordered, disciplined population, one that is prepared to comply with restrictions, has worked in its favour. Like many countries, Japan has also seen protests and political backlash. And like many, Japan has also been hit with additional waves of infections, continuing to test its resolve.

Despite criticism of a lack of research into infectious diseases and COVID-19 in Japan, the country also saw some outstanding examples of scientific and technological knowhow, helping to safeguard the community.

Professor Hiroaki Kitano, President and CEO of Sony Computer Science Laboratories Inc., played a central role in the early days of the pandemic to better understand the spread of the virus and how it could be protected against.

Professor Kitano was able to use the modelling his teams had produced to show the startling impact preventative measures had on the spread of the disease. In two areas – confined spaces, such as a karaoke bar (see image and video), and mask-wearing – Professor Kitano was able to show the efficacy of certain restrictions that could massively reduce infection of COVID-19 and its variants. This helped to justify lockdown procedures but also supported a measured opening up of society with certain behavioural guidelines, such as maintaining contact within your own community – known as the “Stay with your community” campaign in late 2021.

He also demonstrated that an optimal vaccination strategy, implemented from late spring to fall of 2021, possibly contributed to the very low number of COVID-19 cases in Japan in the fall of 2021.

The ability of Professor Kitano and his colleagues in Japan to translate the data they had collected into policy impact may have been crucial in keeping the number of deaths so low since the start of the COVID-19 pandemic; all the more remarkable in an environment in which the country has faced declining levels of funding and publications in infectious disease research.

About Dimensions

Part of Digital Science, Dimensions is a modern, innovative, linked research data infrastructure and tool, re-imagining discovery and access to research: grants, publications, citations, clinical trials, patents and policy documents in one place. www.dimensions.ai 

About Nature Index

The Nature Index is a database of author affiliations and institutional relationships. The index tracks contributions to research articles published in 82 high-quality natural-science journals, chosen by an independent group of researchers.

The Nature Index provides absolute and fractional counts of article publication at the institutional and national level and, as such, is an indicator of global high-quality research output and collaboration. Data in the Nature Index are updated regularly, with the most recent 12 months made available under a Creative Commons licence at natureindex.com. The database is compiled by Nature Portfolio, part of Springer Nature.

Image credits:
Main image: Masked commuters in Osaka, Japan. Source: Stock image.
Face masks on sale in Japan. Source: David Swinbanks.
Glove dispensers in a Japanese restaurant. Source: David Swinbanks.
Masked geisha dolls. Source: Rafael Randy Cardoso Garcia.
COVID-19 test and Tokyo 2020 Games concept. Source: Stock image.
Masked commuters in Tokyo, Japan. Source: Stock image.

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is also a COPE Trustee and ALPSP tutor, and holds Masters degrees in Philosophy and International Business. He lived and worked in Japan for three years in the 1990s.

David Ellis, Press, PR & Social Manager | Digital Science

David has 30 years’ experience in media and communications. With a background in broadcast journalism, his career focus has been in research communication – including science, health science and medicine – spanning 25 years of service in the university sector. His experience also includes both internal and external communications in the health and manufacturing sectors.

David Swinbanks, Chairman | Springer Nature Australia & NZ

David is Chairman for Springer Nature in Australia and New Zealand and Founder of Nature Index. He is also a Senior Advisor to Digital Science. Following a postdoc in deep-sea research at Tokyo University, David began his career with Nature as Tokyo Correspondent in 1986 and established Nature Japan KK in 1987 with two Japanese colleagues, which expanded to 120 employees by 2012 spanning the Asia-Pacific region.


Why is it so difficult to understand the benefits of research infrastructure? – Digital Science


Persistent identifiers – or PIDs – are long-lasting references to digital resources. In other words, they are a unique label for an entity: a person, place, or thing. PIDs work by redirecting the user to the online resource, even if the location of that resource changes. They also have associated metadata, which contains information about the entity and provides links to other PIDs. For example, many scholars already populate their ORCID records, linking themselves to their research outputs through Crossref and DataCite DOIs. As the PID ecosystem matures to include PIDs for grants (Crossref grant IDs), projects (RAiD), and organisations (ROR), the connections between PIDs form a graph that describes the research landscape. In this post, Phill Jones talks about the work that the MoreBrains cooperative has been doing to show the value of a connected PID-based infrastructure.
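The redirection and linking described above can be sketched with a toy registry. Note that every identifier and URL below is invented purely for illustration; real PID systems (DOI, ORCID) involve global resolver infrastructure that this sketch does not attempt to model.

```python
# Toy model of persistent identifier (PID) resolution.
# All identifiers and URLs are hypothetical, for illustration only.

class PIDRegistry:
    """Maps a stable identifier to the current location of a resource,
    plus metadata that can reference other PIDs."""

    def __init__(self):
        self._records = {}

    def register(self, pid, url, metadata=None):
        self._records[pid] = {"url": url, "metadata": metadata or {}}

    def update_location(self, pid, new_url):
        # The resource moved, but the PID itself never changes.
        self._records[pid]["url"] = new_url

    def resolve(self, pid):
        return self._records[pid]["url"]

    def metadata(self, pid):
        return self._records[pid]["metadata"]


registry = PIDRegistry()

# A hypothetical article DOI whose metadata links to other PIDs:
registry.register(
    "doi:10.9999/example-article",
    "https://old-publisher.example/article/123",
    metadata={
        "author": "orcid:0000-0000-0000-0001",   # person
        "funding": "doi:10.9999/example-grant",  # grant
    },
)

# The publisher migrates its platform; only the registry entry changes,
# so anyone holding the PID still reaches the resource.
registry.update_location(
    "doi:10.9999/example-article",
    "https://new-publisher.example/articles/123",
)

print(registry.resolve("doi:10.9999/example-article"))
print(registry.metadata("doi:10.9999/example-article")["funding"])
```

The key property is visible in the last two lines: the label survives the move, and its metadata still connects the article to its author and grant, which is exactly what makes a graph of PIDs possible.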

Over the past year or so, we at MoreBrains have been working with a number of national-level research supporting organisations to develop national persistent identifier (PID) strategies: Jisc in the UK; the Australian Research Data Commons (ARDC) and Australian Access Federation (AAF) in Australia; and the Canadian Research Knowledge Network CRKN, Digital Research Alliance of Canada (DRAC), and Canadian Persistent Identifier Advisory Committee (CPIDAC) in Canada. In all three cases, we’ve been investigating the value of developing PID-based research infrastructures, and using data from various sources, including Dimensions, to quantify that value. In our most recent analysis, we found that investing in five priority PIDs could save the Australian research sector as much as 38,000 person days of work per year, equivalent to $24 million (AUD), purely in direct time savings from rekeying of information into institutional research management systems.

Investing in infrastructure makes a lot of sense, whether you’re building roads, railways, or research infrastructure. But wise investors also want evidence that their investment is worthwhile – that the infrastructure is needed, that it will be used, and, ideally, that there will be a return of some kind on their investment. Sometimes, all of this is easy to measure; sometimes, it’s not.

In the case of PID infrastructure, there has long been a sense that investment would be worthwhile. In 2018, in his advice to the UK government, Adam Tickell recommended:

Jisc to lead on selecting and promoting a range of unique identifiers, including ORCID, in collaboration with sector leaders with relevant partner organisations

More recently, in Australia, the Minister for Education, Jason Clare, wrote a letter of expectations to the Australian Research Council in which he stated:

Streamlining the processes undertaken during National Competitive Grant Program funding rounds must be a high priority for the ARC… I ask that the ARC identify ways to minimise administrative burden on researchers

In the same letter, Minister Clare even suggested that preparations for the 2023 ERA be discontinued until a plan to make the process easier has been developed. While he didn’t explicitly mention PIDs in the letter, organisations like ARDC, AAF, and ARC see persistent identifiers as a big part of the solution to this problem.

A problem of chickens and eggs?

With all the modern information technology available to us it seems strange that, in 2022, we’re still hearing calls to develop basic research management infrastructure. Why hasn’t it already been developed? Part of the problem is that very little work has been done to quantify the value of research infrastructure in general, or PID-based infrastructure in particular. Organisations like Crossref, DataCite, and ORCID are clear success stories but, other than some notable exceptions like this, not much has been done to make the benefits of investment clear at a policy level – until now.

It’s very difficult to analyse the costs and benefits of PID adoption without being able to easily measure what’s happening in the scholarly ecosystem. So, in these recent analyses that we were commissioned to do, we asked questions like:

  • How many research grants were awarded to institutions within a given country?
  • How many articles have been published based on work funded by those grants?
  • What proportion of researchers within a given country have ORCID IDs?
  • How many research projects are active at any given time?

All these questions proved challenging to answer because, fundamentally, it’s extremely difficult to quantify the scale of research activity and the connections between research entities in the absence of universally adopted PIDs. In other words, we need a well-developed network of PIDs in order to easily quantify the benefits of investing in PIDs in the first place! (see Figure 1).

Luckily, the story doesn’t end there. Thanks to data donated by Digital Science, and other organisations including ORCID, Crossref, Jisc, ARDC, AAF, and several research institutions in the UK, Canada, and Australia, we were able to piece together estimates for many of our calculations.

Take, for example, the Digital Science Dimensions database, which provided us with the data we needed for our Australian and UK use cases. It uses advanced computation and sophisticated machine learning approaches to build a graph of research entities such as people, grants, publications, outputs, and institutions. While other similar graphs exist, some of which are open and free to use – for example, the DataCite PID graph (accessed through DataCite Commons), OpenAlex, and the ResearchGraph foundation – the Dimensions graph is the most complete and accessible so far. It enabled us to estimate total research activity in both the UK and Australia.

However, all our estimates are… estimates, because they involve making an automated best guess of the connections between research entities, where those connections are not already explicit. If the metadata associated with PIDs were complete and freely available in central PID registries, we could easily and accurately answer questions like ‘How many active researchers are there in a given country?’ or ‘How many research articles were based on funding from a specific funder or grant program?’

The five priority PIDs

As a starting point towards making these types of questions easy to answer, we recommend that policy-makers work with funders, institutions, publishers, PID organisations, and other key stakeholders around the world to support the adoption of five priority PIDs:

  • DOIs for funding grants
  • DOIs for outputs (eg publications, datasets, etc)
  • ORCIDs for people
  • RAiDs for projects
  • ROR for research-performing organisations
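To illustrate how these five PIDs could interlink in practice, here is a hypothetical metadata record for a single research output, together with a simple completeness check. The field names and identifier values are invented for this sketch and are not a real registry schema.

```python
# Hypothetical record linking the five priority PIDs for one output.
# Field names and identifier values are invented for illustration.

PRIORITY_PID_FIELDS = {
    "output_doi",   # DOI for the output itself
    "grant_doi",    # DOI for the funding grant
    "orcid",        # ORCID iD for the researcher
    "raid",         # RAiD for the project
    "ror",          # ROR ID for the host organisation
}

record = {
    "output_doi": "10.9999/example-dataset",
    "grant_doi": "10.9999/example-grant",
    "orcid": "0000-0000-0000-0001",
    "raid": "raid:example-project-42",
    "ror": "https://ror.org/00example0",
}

def missing_priority_pids(rec):
    """Return which of the five priority PIDs a record fails to supply."""
    supplied = {key for key, value in rec.items() if value}
    return sorted(PRIORITY_PID_FIELDS - supplied)

print(missing_priority_pids(record))                          # complete record
print(missing_priority_pids({"orcid": "0000-0000-0000-0001"}))  # partial record
```

A check like this, run across a registry, is one simple way an institution could measure how far its metadata is from the fully linked state the strategies described here aim for.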

We prioritised these PIDs based on research done in 2019, sponsored by Jisc and in response to the Tickell report, to identify the key PIDs needed to support open access workflows in institutions. Since then, thousands of hours of research and validation across a range of countries and research ecosystems have verified that these PIDs are critical not just for open access but also for improving research workflows in general.

Going beyond administrative time savings

In our work, we have focused on direct savings from a reduction in administrative burden because those benefits are the most easily quantifiable; they’re easiest for both researchers and research administrators to relate to, and they align with established policy aims. However, the actual benefits of investing in PID-based infrastructure are likely far greater.

Evidence given to the UK House of Commons Science and Technology Committee in 2017 stated that every £1 spent on Research and Innovation in the UK results in a total benefit of £7 to the UK economy. The same is likely to be true for other countries, so the benefit to national industrial strategies of increased efficiency in research are potentially huge.

Going a step further, the universal adoption of the five priority PIDs would also enable institutions, companies, funders, and governments to make much better research strategy decisions. At the moment, bibliometric and scientometric analyses to support research strategy decisions are expensive and time-consuming; they rely on piecing together information based on incomplete evidence. By using PIDs for entities like grants, outputs, people, projects, and institutions, and ensuring that the associated metadata links to other PIDs, it’s possible to answer strategically relevant questions by simply extracting and combining data from PID registries.
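As a minimal sketch of that idea, assuming two toy registries with entirely invented contents, a strategic question such as "how many outputs did each funder's grants produce?" reduces to a simple join over PID-keyed records:

```python
# Joining PID registry data to answer a strategy question.
# Registry contents below are invented for illustration.

from collections import Counter

# Grant registry: grant DOI -> funder's ROR ID
grants = {
    "10.9999/grant-a": "https://ror.org/0funder01",
    "10.9999/grant-b": "https://ror.org/0funder02",
}

# Output registry: output DOI -> the grant DOI it acknowledges
outputs = {
    "10.9999/article-1": "10.9999/grant-a",
    "10.9999/article-2": "10.9999/grant-a",
    "10.9999/dataset-1": "10.9999/grant-b",
}

# Follow each output's grant link to its funder and count.
outputs_per_funder = Counter(grants[grant] for grant in outputs.values())
print(outputs_per_funder)
```

With complete PIDs and metadata, this two-line join replaces the expensive, incomplete record-matching that bibliometric analyses currently depend on.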

Final thoughts

According to UNESCO, global spending on R&D has reached US$1.7 trillion per year, and with commitments from countries to address the UN sustainable development goals, that figure is set to increase. Given the size of that investment and the urgency of the problems we face, building and maintaining the research infrastructure makes sound sense. It will enable us to track, account for, and make good strategic decisions about how that money is being spent.


Phill Jones

About the Author

Phill Jones, Co-founder, Digital and Technology | MoreBrains Cooperative

Phill is a product innovator, business strategist, and highly qualified research scientist. He is a co-founder of the MoreBrains Cooperative, a consultancy working at the forefront of scholarly infrastructure, and research dissemination. Phill has been the CTO at Emerald Publishing, Director of Publishing Innovation at Digital Science and the Editorial Director at JoVE. In a previous career, he was a bio-physicist at Harvard Medical School and holds a PhD in Physics from Imperial College, London.

The MoreBrains Cooperative is a team of consultants that specialise in and share the values of open research with a focus on scholarly communications, and research information management, policy, and infrastructures. They work with funders, national research supporting organisations, institutions, publishers and startups. Examples of their open reports can be found here: morebrains.coop/repository
