Introduction
Funder, publisher, and institutional policies increasingly encourage effective data management practices to enhance the value of research outputs and promote open scholarship, research integrity, and reproducibility. In response, data management and sharing plans (DMSPs) have become a widely adopted tool for defining and assessing data management activities (Simms and Jones 2017). However, in their current static, narrative format, DMSPs are often thought of as merely a requirement researchers must fulfill as part of grant proposals to funding agencies (Simms et al. 2017; Feger et al. 2020). Further, because they are rarely updated, they quickly become outdated and inaccurate once research activities are underway (Miksa, Oblasser, and Rauber 2021). As such, the usefulness of DMSPs to researchers, funders, and institutions can be limited.
Therefore, to effectively support researchers in meeting emerging requirements, academic and research libraries of all sizes need to develop resilient infrastructures and services capable of supporting data management and data sharing. There is growing recognition that adopting machine-actionable Data Management and Sharing Plans (maDMSPs) can address the limitations of standard DMSPs. maDMSPs would support automated processing of the content contained in DMSPs, facilitating information exchange across research systems and enabling workflow integrations and automated actions (Miksa, Oblasser, and Rauber 2021). Yet, few documented examples exist of how institutions are implementing maDMSPs or integrating them into existing systems and workflows.
The California Digital Library (CDL) and the Association of Research Libraries (ARL), with support from the Institute of Museum and Library Services (IMLS), launched the Machine Actionable Plans (MAP) Pilot Project to support the adoption of maDMSPs at academic institutions (Association of Research Libraries, n.d.). Five pilot institutions were selected for the MAP Pilot Project, and this paper presents case studies from four of them that explore the integration of maDMSPs into existing research infrastructures. Each team led a local project focused on enhancing campus coordination to improve data management and sharing workflows, while also contributing to the testing of enhanced DMP Tool capabilities and persistent identifier registries. Approaches included prototyping AI-assisted plan feedback and implementation, interdepartmental notification and tracking, onboarding, outreach, capacity building, and assessment.
Case Studies
Arizona State University
Arizona State University (ASU) contributed to the project by prototyping a data management plan feedback system using generative AI, which would provide nearly immediate responses to requesters rather than the traditional three-to-four business days required for manual review. ASU has a small, dedicated support network serving thousands of researchers, and most DMSP feedback requests come in at the last minute, so the primary goal was to offer a streamlined, just-in-time response. The ASU team developed a user interface that allows users to paste DMSP content, upload a file, or submit a DMP ID (DataCite Support, n.d.) provided by DMP Tool, and retrieve near-immediate feedback on where the plan is effective and where improvement is needed. We note that this immediate guidance is generated by an AI agent guided by a rubric rather than trained on researchers’ plans, and it includes instructions for requesting additional human feedback, which follows the standard three-to-four-day turnaround.
The secondary goal was to create an administrative dashboard that collects the input for human intervention, allowing university support personnel (such as librarians, data managers, and others) to review the feedback, provide appropriate follow-up information to the researcher, refine the data model, and notify potential service providers. The AI model hosted through ASU is a general large language model (LLM) that requires no special training for the end user and is prompted with 18 pages of instructions, policies, and output formatting requirements.
Background
The ASU Research Data Management Office and the ASU Library have a long-term formal partnership, and we recognized a need for a scalable solution that could provide consistent DMSP feedback. Over the years, despite our outreach efforts, there has been a general lack of awareness about our support capabilities, resulting in data management plans that were either unrealistic or not implemented by the researchers as written. Additionally, because authors frequently requested feedback at the last minute, we struggled to provide feedback in time for the proposal submission deadlines. Therefore, our primary goal was to deliver more immediate DMSP feedback.
The library and the data management office use separate customer management systems, which presents challenges for consistent feedback support, and we needed a more effective way to route responses to requesters. A few weeks into the project, the team ruled out the Enterprise Research Administration (ERA) system because it has no established development environment, and experimenting with a system critical to grant proposals risked setting up false expectations.
We then considered the types of crosswalk integrations available for notifications, including DMP Tool, ORCIDs, and downstream data sharing service providers such as the ASU Research Data Repository, which typically do not receive notifications until the end of a project.
ASU DMSP AI Feedback Assistant Tool
Developed by Nathan Rollins and student worker Akash Agarwal (ASU Knowledge Enterprise 2024), the data management plan feedback system uses Google’s Gemini 2 Flash model (Google DeepMind, n.d.) to compare submitted plans against the DART rubric (Westra et al. 2016); it does not “train” on or compare submissions with other plans. Our goal was to enhance the existing process, not to work against it or create confusion for end users. The prompt points to the DART rubric and support documentation for services at ASU.
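To make this concrete, the sketch below illustrates what such a rubric-anchored review call might look like, assuming the google-generativeai Python client; the prompt, function name, and model string are illustrative stand-ins, not the production prompt, which spans roughly 18 pages of instructions, policies, and output formatting.

```python
# Minimal sketch of a rubric-anchored review call, assuming the
# google-generativeai Python client. The prompt and names below are
# illustrative; they are not ASU's production prompt or code.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key provisioned under institutional licensing
model = genai.GenerativeModel("gemini-2.0-flash")

def review_dmsp(plan_text: str, rubric_text: str, funder_template: str) -> str:
    """Ask the model to evaluate a DMSP against the rubric and funder template only."""
    prompt = (
        "You are a data management plan reviewer. Evaluate the plan below, "
        "section by section, against the rubric and funder template provided. "
        "Do not compare it to any other plan.\n\n"
        f"RUBRIC:\n{rubric_text}\n\n"
        f"FUNDER TEMPLATE:\n{funder_template}\n\n"
        f"PLAN:\n{plan_text}"
    )
    response = model.generate_content(prompt)
    return response.text
```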
Initial development included testing a potential integration using DMP IDs and the DMP Tool API as one option, with pasting DMSP content into a text box as a secondary approach. Because DMP IDs are created only for DMSPs published using DMP Tool, and users generally will not have published their plans before requesting feedback, the focus shifted primarily to the copy/paste and file-upload methods. Currently, the beta provides only a download or clipboard-copy option for saving the feedback. Human feedback is still essential, so a mechanism for email notifications to both the requester and the service providers was suggested but tabled for an expanded post-pilot phase, which is currently underway.
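For the DMP ID path, resolution can be illustrated with a short sketch against the public DataCite REST API, since DMP IDs are DOIs registered with DataCite (DataCite Support, n.d.); the DOI below is a placeholder, and the fields shown are only a subset of the full record.

```python
# Sketch of resolving a DMP ID (a DataCite-registered DOI) through the public
# DataCite REST API. The DOI below is a placeholder, not a real plan.
import requests

def fetch_dmp_metadata(dmp_doi: str) -> dict:
    """Return the DataCite metadata attributes for a DMP ID."""
    resp = requests.get(f"https://api.datacite.org/dois/{dmp_doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

meta = fetch_dmp_metadata("10.48321/EXAMPLE")  # placeholder DMP ID
print(meta.get("titles"), meta.get("descriptions"))
```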

Figure 1: Initial architecture overview for the AI DMSP Feedback Tool.
Figure 1 shows the basic architecture of the agent. The client/user interface (right) allows researchers to enter DMSP text, which the agent evaluates against the rubrics, rules, and funder templates processed by the AI DMSP Topics and DMSP Review bots; the tool then returns the finished feedback to the client interface.

Figure 2: Workflow diagram illustrating how user-entered information is processed through the APIs and how reports are returned to the end user.
Establishing trust was a crucial component of the project, and we are utilizing our licensing agreements with our AI vendors to safeguard user information from being used in publicly available AI models. By submitting pre-existing plans from our own projects, we found that the AI-generated suggestions were far more consistent in structure and delivery than the traditional process, which involved varied responses from data managers and librarians. That said, we cannot rely solely on automation when providing feedback.
Using the DART Rubric
The DART rubric, the product of a 2015 project aimed at standardizing the review of data management plans, serves as the framing device for this review. The DART score sheet provides a structured but basic breakdown of each section. Grounding the model in this rubric, rather than in other DMSPs, prevents poor-quality information from being reused. We used the DART rubric to evaluate individual sections of a plan against funder templates and to provide guidance on missing or incorrect information, or to suggest when content belongs in a different section of the plan.
We also used the 2023 FASEB DataWorks! update (FASEB 2021), which combines the Belmont Forum and DART rubrics and adds additional, more current elements to the scoring. The review assesses whether the plan answers each question according to the expectations outlined in the rubric, rather than whether the information aligns with a funded or unfunded plan.
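As an illustration only, a section-level feedback record of the kind this rubric-driven review produces might be structured as follows; the field names are hypothetical and are not drawn from the tool’s actual output format.

```python
# Hypothetical sketch of a per-section feedback record for a rubric-driven review.
# Field names are illustrative, not the ASU tool's actual output schema.
from dataclasses import dataclass

@dataclass
class SectionFeedback:
    section: str               # e.g., "Data Sharing and Access"
    addressed: bool            # does the plan answer the rubric question at all?
    meets_expectations: bool   # does the answer satisfy the rubric's criteria?
    misplaced_content: str     # content that belongs in a different section, if any
    suggestions: str           # concrete guidance for revision

example = SectionFeedback(
    section="Data Sharing and Access",
    addressed=True,
    meets_expectations=False,
    misplaced_content="",
    suggestions="Name the repository and the access conditions for shared data.",
)
```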
Tool Assessment
Initial tests performed in July 2024 provided only basic information, but by the fall of 2024, with additional rules and templates in place, the feedback had evolved into a much more useful form. Responses were broken into their respective sections, making them more readable and more reflective of the type of information the rubric-centered method provides.
Throughout the process, context, transparency, and human interaction were key. We want to ensure that researchers are aware of what happens when they click the “Request Feedback” button and that an opt-out option is always available. If they choose the more immediate AI-driven response, they will receive a message noting that it is an auto-generated AI response, which may not be accurate. The AI agent sends every request to our team for additional review and possible follow-up, whether corrections, further advice, or a heads-up that the relevant support providers may be reaching out. This model promotes accountability and lets researchers know we care about their projects. We will provide a human-centered review if they prefer, though it takes more time.
The Pennsylvania State University
As part of a broader institutional strategic initiative to enhance researcher support and reduce administrative and compliance burdens, Penn State participated in the MAP Pilot Project to explore how the adoption and integration of maDMSPs could streamline data management practices and improve the overall research experience. Penn State’s MAP Pilot Project focused on stakeholder engagement and collaboration, because the team identified a shared understanding of data needs, workflows, and institutional roles as critical to finding pathways toward technical solutions and scalable infrastructure. The project team aimed to enhance campus-wide coordination for research data management and explore the potential of maDMSPs to improve research support services. Specifically, the team focused on how information contained within maDMSPs could improve service efficiency, reduce researcher burden, foster collaboration, and proactively support evolving data management needs.
To advance this work, Penn State hosted a two-day, cross-campus workshop with 33 participants representing key stakeholder groups such as research administrators, research support staff, IT professionals, librarians, compliance officers, and researchers. The workshop sought to raise awareness of maDMSPs and explore their potential to support scalable infrastructure and more effective communication across institutional units and with researchers. Day one began with level-setting activities in which participants mapped the research project and data lifecycles and clarified their roles throughout. Participants also examined the structure and function of traditional DMSPs before transitioning to an introduction to maDMSPs, including real-world use case examples. Day two focused on institutional implementation. Participants reflected on Penn State’s current research data management environment and discussed which elements of maDMSPs would be most useful for their work and at what point in the research lifecycle that information would be most beneficial (Wham 2025).
The discussions and activities surfaced a shared interest across units in the information captured by maDMSPs, though the units’ priorities and interactions with that information differed. For instance, one participant described maDMSPs as a “list of promises” researchers make to funders about how their data will be documented, secured, stored, preserved, and shared. The ability to understand these commitments when they are made, and to track changes to them throughout the grant period, emerged as a critical need for units responsible for data storage, compliance, human subjects protections, and sponsored programs management. Mapping these distinct perspectives, including what information is needed, when, and by whom, is essential to integrating maDMSPs effectively into institutional workflows.
Overall, this educational and collaborative event facilitated dialogue among previously siloed units, enabling them to collectively explore how maDMSPs could be used to address shared challenges and align institutional strategies. Follow-up survey feedback confirmed the success of the workshop: all respondents reported increased understanding of maDMSPs and valued the opportunity to engage with colleagues from other units. Participants noted that the collaborative environment deepened their understanding of broader research data management challenges and helped lay the groundwork for future coordinated efforts.
University of Colorado Boulder
For the MAP Pilot Project, University of Colorado Boulder (CU Boulder) aimed to enhance post-award DMSPs by developing workflows, automating the tracking and connection of grant-related research outputs and data deposits, and streamlining data publication and grant compliance for researchers through metadata reuse. These automated systems would update DMSP-related metadata in different platforms, pull data from maDMSPs into existing campus systems, and incentivize researchers to deposit data in CU Boulder’s institutional repository.
A secondary goal was to create automated communication and alert systems to provide timely notification to the Center for Research Data & Digital Scholarship (CRDDS) when it or its infrastructure resources (e.g., the institutional repository, large-scale data storage system, and high-performance and cloud computing resources) were included in DMSPs from awarded proposals. In the past, researchers had named these resources in their DMSPs without contacting CRDDS, which led to confusion.
Our first step was to create a team featuring representatives from across campus, including CRDDS (a partnership between the University Libraries and the Office of Information Technology’s Research Computing group), the Research & Innovation Office (RIO), the Faculty Information System (FIS), and the Laboratory for Atmospheric and Space Physics (LASP).
We then identified the current CU Boulder systems used by researchers that already interact, or have the potential to interact, with maDMSPs and DMP Tool, including the CU Scholar repository, the CU Boulder Elements (CUBE) research information management system, the CU Experts researcher profile system, and the LASP scientific dataset and metadata repositories.
We also identified external sources in the research ecosystem that could be used to enhance machine-actionability and completeness of post-award DMSPs, including GitHub, ORCID, DataCite, funder databases, journals, and data repositories. After mapping the connections and potential connections between these systems, we began what we hoped would be the process of creating active connections between them. Unfortunately, we ran into several barriers.

Figure 3: Desired state of maDMSP-related information at CU Boulder.
First Barrier: Getting Access to DMSPs
The first issue was related to acquiring a large enough sample of DMSPs for testing. We had hoped to work with RIO, the campus research office, to access DMSPs from successful research grant applications. Unfortunately, we were informed that we would need to obtain permission from the PI for each research project, a much more challenging and labor-intensive process than we had anticipated.
We then searched for DMSPs created by CU Boulder-affiliated researchers available to us through DMP Tool. An immediate issue was the low quality of the metadata associated with these plans. While some researchers at our institution had published their DMSPs on the DMP Tool platform, without being able to compare these to the versions submitted with grants, there was no way for us to know the accuracy of the information they contained. We found that researchers were unlikely to return to published DMSPs after the proposal stage, meaning there was no way to tell if a published DMSP was part of a proposal that was awarded or whether a related proposal had even been submitted at all. For example, a researcher might create and publish a DMSP using DMP Tool and then edit it in other software before submitting it as part of a grant proposal, use it for a similar, but different, grant proposal, or never submit it at all.
An additional problem we encountered when looking for DMSPs to use in this project was related to high-security research projects that worked with sensitive data. The high system security standards for these projects meant we were unable to use these DMSPs due to campus data classification policies (University of Colorado, n.d.).
Facing these issues, we identified two DMSPs from large completed research projects that we could use to test what information we could identify from existing data sources to populate actual maDMSPs (even if this needed to be done manually). We chose DMSPs from the U.S. National Science Foundation-funded “NeuroNex: From Odor to Action: Discovering Principles of Olfactory-Guided Natural Behavior” project and the NASA-funded “Decoupling Solar Variability and Instrument Trends Over Solar Cycles 21 to 24 to Develop an Improved Solar Spectral Irradiance Composite Record.” These grants were chosen because of individual project team members’ familiarity with them, which allowed us to better evaluate how completely the data sources identified particular types of information related to each grant (e.g., publications).
Second Barrier: Datasets Do not Identify Grants
We uploaded and published these DMSPs on DMP Tool and encountered our second major challenge: the lack of quality grant-related metadata for published datasets. While publications such as journal articles often (though not always) include acknowledgement sections with information about grants and other awards, we found that published datasets (including software and code) rarely provide this information, and even when it is included, there is no standard for how or where it might be found.
When we knew a grant had been awarded and we had the associated grant ID information, we were able to identify some publications, such as journal articles, related to these grants. However, we found a lack of consistency and accuracy in the results across the available data sources we evaluated (e.g., CUBE, Dimensions, NSF Award Search, DMP Tool, etc.).
Datasets were much worse. Many datasets are published without good metadata or documentation, and there is limited awareness among researchers of when, where, or how a dataset should include grant information within its documentation. A dataset might seem to be related to a grant or an article because of a similar name, research area, or publication date, but researchers often work in the same field for many years, and a dataset may turn out to be connected to an older or more recent grant or publication than originally thought. Thus, even when working with the small number of projects we had, connecting a specific DMSP to a specific dataset was often impossible without asking the researchers associated with the project.1
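The check we hoped to automate amounts to asking a dataset’s DOI record whether it names a funder and award. A minimal sketch against the public DataCite REST API is shown below; the DOI is a placeholder, and the sketch assumes that grant links, when present, appear in the record’s fundingReferences field.

```python
# Sketch of checking a published dataset's DOI metadata for grant links via the
# public DataCite REST API. The DOI below is a placeholder, not a real dataset.
import requests

def grant_links_for_dataset(doi: str) -> list:
    """Return any fundingReferences recorded in the dataset's DataCite metadata."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"].get("fundingReferences") or []

for ref in grant_links_for_dataset("10.25810/EXAMPLE"):  # placeholder DOI
    print(ref.get("funderName"), ref.get("awardNumber"))
```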
Next Steps
Our pilot project demonstrated a surprising lack of integration and connection across different systems and sources of information related to DMSPs and grants. This lack of integration makes linking different systems much more challenging than we had originally thought.
However, the project allowed us to create or improve connections with other units across the campus and improve our understanding of how maDMSPs fit within the research information ecosystem. In doing so, we laid the groundwork for future projects by identifying metadata from maDMSPs that could be pulled into other campus systems. For example, we identified DataCite as a logical source of information about published datasets and have updated our local practices to include grant information (where available) in DOIs for datasets published in CU Scholar.
Northwestern University
Northwestern University contributed to the MAP Pilot Project through a holistic analysis of institutional workflows and needs, culminating in the development of a proof of concept for networked research data management (RDM) and DMSP workflows throughout the university. Through a detailed landscape analysis and in-depth interviews and conversations, we identified the wants and needs of researchers, administrators, and support staff; avenues for reducing duplication of effort and increasing synergy; and the most effective ways to develop and promote the MAP Pilot Project’s solutions to our audiences.
Background
Located on the Chicago campus of Northwestern University, Galter Health Sciences Library serves the faculty, staff, and students of the Feinberg School of Medicine. While as librarians we enjoy robust collaborations with our peers across the university, operationally we experience segmentation and siloing among our site, IT Research Computing, the Office for Research, and the university library on the main campus. This has led to instances of similar work being done concurrently by different units, resulting in duplication of effort and unclear scope. In addition, given the complexities of the NIH Data Management and Sharing Policy (National Institutes of Health, n.d.), established workflows have been slow to fully take root in the research community and in administrative operations, leading to questions such as “Where should research data be stored?”, “How many researchers need support for sharing human subjects data?”, and “How many projects will generate data that will not have a designated repository?”
While Northwestern University has maintained its membership and instance of DMP Tool for several years, full adoption has understandably been slow as a result of the confusion and lack of messaging around the questions above. Participation in the MAP Pilot Project has allowed us to advance these conversations and efforts around DMP Tool on campus, yielding critical information sharing and new bridges for further collaboration.
Building Campus-Wide Connections
During the MAP Pilot Project, we conducted a landscape analysis (Figure 4) of research administration and data services at Northwestern University to form a full picture of the workflows and services that could be affected by implementing the MAP Pilot Project’s objectives. Through interviews, we gained in-depth insights from those working within key organizational units that affect research data: the health sciences library and IT departments, researchers, the university library, IT Research Computing, the Office for Research, and Sponsored Research. These units’ administrations often have priorities that do not align, as well as differing financial and reporting structures. Yet, as a result of the interviews, we found several areas of aligned interest in RDM workflows. In several cases, we discovered effort inadvertently duplicated because of siloing.
As a first measure to address the siloing, we in the health sciences library partnered with the university library and IT Research Computing to launch a shared service portal. Through this shared platform, we have pooled expertise across our specialties, providing consultations across disciplines and supporting research data management needs throughout the research workflow. The impact of the portal has been notable and promising within its first year, and it has given us insight into how the MAP Pilot Project can help us streamline researchers’ data management workflows and collect meaningful metrics from them.
Second, with support from IT Research Computing, Northwestern University established a new centralized resource for RDM planning, data collection and storage, data sharing and archiving, and RDM resources. The site provides clarity to researchers on how to get support and training based on department or unit affiliation.
Third, our health sciences library team became co-administrators of Northwestern University's DMP Tool instance along with IT Research Computing and the university library. We now contribute to researcher support for the tool, update documentation, and incorporate DMP Tool guidance in our RDM classes.
Next Steps: Proof-of-Concept Projects
Through our landscape analysis and interviews with research services personnel, we identified a subset of interested parties, tools, and documents that are part of our research services landscape and could benefit from the structure and content of maDMSPs (Figure 4). From this subset, we identified interest holders with 1) interest in pursuing a collaboration and 2) the ability to contribute some effort toward the project.

Figure 4: Northwestern University identified a subset of interest holders, tools, and documents associated with research services at the university that could potentially benefit from content in DMSPs, either by extracting information using the DMP Tool API or by linking the DMP ID in existing documents.
During discussions with our partners, we decided to focus on creating well-defined proof-of-concept projects that demonstrate how a specific research services workflow could be improved by a targeted change. The focus of these projects will be to move information where it needs to be in a low-barrier, efficient way, with as little duplication as possible.
For example, our Institutional Review Board (IRB) requires a Data Security Plan (DSP) from IT for each funded project. It includes data classification, platforms to be used, where data will be stored and how it will be shared, and other details, some of which overlap with DMSP content. If we could connect these documents through persistent identifiers (PIDs), or notify IT and the IRB when a DMSP is created or updated so they can plan and allocate resources, we could remove some of the barriers between these data sources.
We are collaborating with our partners in the Office for Research and IT to demonstrate a workflow for the project described above. We are also exploring how DMSP data can be extracted using the DMP Tool API and analyzed to allow for targeted notifications for our partners. Any technical implementations will be followed up by outreach, communication, and training for interest holders.
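A minimal sketch of the polling-and-notification pattern we are prototyping is shown below. It assumes an institutional API client for DMP Tool; the endpoint, query parameter, and notify() stub are illustrative placeholders rather than the tool’s documented interface.

```python
# Hedged sketch of polling DMP Tool for recently created or updated plans and
# alerting IT and the IRB. Endpoint, parameter, and field names are assumptions.
import requests

DMPTOOL_PLANS_URL = "https://dmptool.org/api/v2/plans"  # assumed institutional endpoint

def recently_updated_plans(token: str, since: str) -> list:
    """Return plans modified since the given ISO date (parameter name illustrative)."""
    resp = requests.get(
        DMPTOOL_PLANS_URL,
        headers={"Authorization": f"Bearer {token}"},
        params={"modified_after": since},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

def notify(recipients: list, plan: dict) -> None:
    """Placeholder for an email or ticketing notification to campus partners."""
    print(f"Notify {recipients}: plan '{plan.get('title')}' was created or updated")

for plan in recently_updated_plans(token="...", since="2025-01-01"):
    notify(["it-security@example.edu", "irb@example.edu"], plan)
```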
The MAP Pilot Project supported our efforts to create a more complete picture of the RDM services landscape at Northwestern University and to identify opportunities for improvement. It allowed us to demonstrate the value of maDMSPs to campus partners in their workflows while engaging and establishing relationships with these partners around data management services. We plan to continue to build on this foundation and advocate for maDMSP integration with other campus tools like research information management systems and institutional repositories.
Conclusion
The results of these four pilot projects indicate that there are many areas where the use of maDMSPs would benefit both research institutions and the larger research community. However, there is no “one size fits all” solution for how maDMSPs can be used, and many different approaches still need to be explored.
One takeaway from our pilots is that cross-campus collaboration is key to the success of maDMSP-related projects. While the library plays a major role in the adoption and use of maDMSPs, involving other units on campus allows us to include domain-specific expertise from researchers and other university employees. This should help avoid unclear or siloed policies related to DMSPs and research data in general and allow for the creation of fully integrated systems.
Finally, we are closely monitoring changes at federal funding agencies that could impact how maDMSPs can be implemented. The idea that awarded and completed grants might disappear from government grant databases was not one that we had considered when starting this project but is something that researchers and libraries should now be aware of. It will be essential to stay up to date with changes to federal policies about public access to research data. Assuming that these policies remain priorities for funders, libraries should continue to explore new technologies and deeper integrations across campus units that would allow all researchers and interested stakeholders to take advantage of maDMSP functionality both pre- and post-award. This could result in more competitive grant proposals, more streamlined compliance efforts, and greater impact of publicly available research data.
Further Reading
This section includes a pilot project report from the University of Colorado Boulder, blog posts by members of the MAP Pilot Project team from the Association of Research Libraries and the California Digital Library concerning visits to individual institutions, and a GitHub repository for the AI assistant developed by ASU.
ASU Knowledge Enterprise. 2024. “ASU DMSP AI Assistant in Kubernetes with Cloud Build trigger.” Released October 17, 2024. Accessed November 21, 2025. https://github.com/ASU-KE/rtic-gcp-ai-dmsp-assistant.
Johnson, Andrew, Thea Lindquist, Matthew Murray, Aditya Ranganath, Layla Freeborn, Shelly Knuth, Barb Schnell, Samuel Oskar Klopsch, Vida Sabeti, Jamie Wittenberg, Doug Lindholm, Karen Regan, Don Elsborg, and Alex Viggio. 2025. “Machine Actionable Plans (MAP) Pilot Project Report.” University of Colorado Boulder. https://doi.org/10.25810/TKNV-JT07.
Hudson Vitale, Cynthia and Judy Ruttenberg. 2024. “Machine Actionable Plans (MAP) Pilot Team Visits Penn State University.” Association of Research Libraries. October 24, 2024. https://www.arl.org/blog/machine-actionable-plans-map-pilot-team-visits-penn-state-university.
———. 2024. “ARL/CDL Machine Actionable Plans (MAP) Pilot Team Visits University of Colorado Boulder.” Association of Research Libraries. November 26, 2024. https://www.arl.org/blog/arl-cdl-machine-actionable-plans-map-pilot-team-visits-university-of-colorado-boulder.
———. 2025. “Insights from Arizona State University: Advancing Research Support Through Open Science, Data Sovereignty, and Machine-Actionable Plans.” Association of Research Libraries. January 9, 2025. https://www.arl.org/blog/insights-from-arizona-state-university-advancing-research-support-through-open-science-data-sovereignty-and-machine-actionable-plans.
Ruttenberg, Judy. 2025. “UC Riverside Machine Actionable Plans (MAP) Team Builds Promising Communication Prototype.” Association of Research Libraries. January 8, 2025. https://www.arl.org/blog/uc-riverside-machine-actionable-plans-map-team-builds-promising-communication-prototype.
Taylor, Shawna. 2024. “ARL and California Digital Library Kick Off Machine Actionable Plans (MAP) Pilot Site Visits at Northwestern University.” Association of Research Libraries. October 18, 2024. https://www.arl.org/our-priorities/scholars-scholarship/arl-and-the-california-digital-library-kick-off-machine-actionable-plans-map-pilot-site-visits-at-northwestern-university-feinberg-school-of-medicine-galter-health-sciences-library.
References
Association of Research Libraries. n.d. “Machine Actionable Plans (MAP) Pilot: Building a Scalable Data-Management Infrastructure for Strategic Institutional Coordination.” Accessed November 15, 2025. https://www.arl.org/building-a-scalable-data-management-infrastructure-for-strategic-institutional-coordination.
ASU Knowledge Enterprise. 2024. “ASU DMSP AI Assistant in Kubernetes with Cloud Build trigger.” Released October 17, 2024. Accessed November 21, 2025. https://github.com/ASU-KE/rtic-gcp-ai-dmsp-assistant.
DataCite Support. n.d. “DMP IDs.” Accessed November 16, 2025. https://support.datacite.org/docs/dmp-ids.
FASEB. 2021. “DataWorks! Data Management Plan (DMP) Challenge Evaluation Rubric.” https://www.faseb.org/getmedia/cb681545-2ed5-4970-b167-e1b47b1f225e/Rubric-for-DataWorks-DMP-Challenge-12-14-21.pdf.
Feger, Sebastian S., Paweł W. Wozniak, Lars Lischke, and Albrecht Schmidt. 2020. “‘Yes, I comply!’ Motivations and Practices around Research Data Management and Reuse across Scientific Fields.” Proceedings of the ACM on Human-Computer Interaction 4 (CSCW2): 1-26. https://doi.org/10.1145/3415212.
Google DeepMind. n.d. “Gemini 2.5 Flash.” Accessed November 15, 2025. https://deepmind.google/models/gemini/flash.
Miksa, Tomasz, Simon Oblasser, and Andreas Rauber. 2021. “Automating research data management using machine-actionable data management plans.” ACM Transactions on Management Information Systems (TMIS) 13 (2): 1-22. https://doi.org/10.1145/3490396.
National Institutes of Health. n.d. “Data Management and Sharing Policy | Grants & Funding.” Accessed November 15, 2025. https://grants.nih.gov/policy-and-compliance/policy-topics/sharing-policies/dms.
Simms, Stephanie Renee, and Sarah Jones. 2017. “Next-generation data management plans: global, machine-actionable, FAIR.” International Journal of Digital Curation 12 (1): 36-45. https://doi.org/10.2218/ijdc.v12i1.513.
Simms, Stephanie, Sarah Jones, Daniel Mietchen, and Tomasz Miksa. 2017. “Machine-actionable data management plans (maDMPs).” Research Ideas and Outcomes 3: e13086. https://doi.org/10.3897/rio.3.e13086.
University of Colorado. n.d. “Data Classification.” Accessed November 19, 2025. https://www.cu.edu/data-governance/resources-support/data-classification.
Westra, Brian, Amanda Whitmire, Susan Wells Parham, Patricia Hswe, and Jake Carlson. 2016. “The DART Project: Using Data Management Plans as a Research Tool.” https://doi.org/10.17605/OSF.IO/KH2Y6.
Wham, Briana E. 2025. “Machine-Actionable Data Management & Sharing Plan Workshop: Exploring Opportunities to Develop Scalable Data Management Infrastructure.” ScholarSphere. https://doi.org/10.26207/a9hb-q125.
1. A related issue involved ORCID identifiers: we were not always able to correctly identify whether a given ORCID profile belonged to a CU Boulder researcher, as ORCID integration with the CU Experts research tracking system has not been possible since 2022.