Share your work
The University Digital Conservancy is home to open access articles, institutional documents, dissertations, datasets, university-produced publications, campus newspapers, podcasts, and more. Learn about the UDC.
- Openly share and provide access to your publications and scholarly works through the University's Open Access to Scholarly Articles policy.
- Publish, share, and preserve your digital data for long-term access and future use in the Data Repository.
- Make your thesis or dissertation openly accessible to share your work.
- Preserve core institutional documents and university publications as part of the University of Minnesota Archives.
Communities in the UDC
Select a community to browse its collections.
Recent Submissions
Research Brief: Managing urban pond vegetation to enhance water quality benefits (2026-03-25)
Carlson, Jessy

Transitioning to EV Fleets: Best Practices and a Decision Tool (Minnesota Department of Transportation, 2026-03)
Tork, Nastaran; Baek, Kwangho; Khani, Alireza; Bhandari, Sushmita; Ryan, Alyssa
New U.S. regulatory targets aim for an electric vehicle (EV) market share of at least 50% by 2030, a shift mirrored by manufacturers phasing out internal combustion engine (ICE) vehicles. While public organizations are increasingly eager to transition, the pathway is complicated by cold-climate range uncertainties, infrastructure gaps in rural areas, and the technical hurdles of transitioning medium- and heavy-duty fleets for large agencies like MnDOT. This study presents a comprehensive framework to help agencies navigate fleet transition and secure available funding. The research began by assessing human factors and organizational readiness through a survey of agencies and in-depth interviews with fleet managers to identify operational barriers and perceptions. Subsequent stages involved a detailed analysis of fleet composition and infrastructure capacity, gathering data on vehicle types, usage patterns, and parking facilities across various Minnesota agencies. These foundational data informed a 10-year life-cycle cost analysis comparing EVs against traditional ICE vehicles across multiple classes. Finally, the project developed an optimization model, applied to Minneapolis trip data, to demonstrate a strategic plan for EV acquisition and charging-infrastructure deployment that minimizes total cost while ensuring uninterrupted operation. The study's findings and strategic roadmap were disseminated via a statewide webinar to support Minnesota agencies in achieving a cost-effective transition.

Research Ethics Day 2025: "Oversight of AI in Research" (2025-03-05)
Consortium on Law and Values in Health, Environment, & the Life Sciences
Researchers across many disciplines are increasingly using artificial intelligence (AI), including large language models (LLMs) such as ChatGPT, to support empirical research and data analysis, academic writing, peer review, and the development of new tools. The broad reach of AI in research raises pressing ethical questions about scientific integrity, authorship, data privacy, bias, and equity. Related issues include how trainees and students should be instructed to use, and to acknowledge the use of, AI tools in their research. Ethical guidance from research institutions, professional organizations, journals, and governmental oversight authorities is only beginning to emerge, and ethical oversight of AI in research also remains in flux. This conference will bring together leading experts from a range of disciplines, from the biomedical sciences to the humanities, to confront the challenge of ethical use of AI in research. National leaders will discuss how AI is being used in research, the challenges to research ethics and integrity, and current guidance on using AI in research and publication, including how to address concerns that training sets for LLMs may not be sufficiently representative and may lead to biased models. Speakers will also debate how LLMs should be used in academic writing and peer review, and how students should use these tools. The conference will consider when and how researchers should seek informed consent for the use of AI in research protocols, and how IRBs can effectively provide oversight for research with AI tools.
The conference will offer recommendations for researchers, students, administrators, and IRB professionals on how to ensure ethical use of AI in research.

Research Ethics Day 2025: "Informed Consent" (2025-03-05)
Consortium on Law and Values in Health, Environment, & the Life Sciences

Research Ethics Day 2025: "Norms on AI/ML in Scholarship" (2025-03-05)
Consortium on Law and Values in Health, Environment, & the Life Sciences
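As an illustration of the 10-year life-cycle cost analysis described in the MnDOT fleet-transition item above, a minimal sketch of a discounted cost comparison is shown below. All cost figures, the discount rate, and the cost categories are hypothetical placeholders for one vehicle class, not values taken from the study.

```python
# Hypothetical 10-year life-cycle cost comparison for one vehicle class.
# Every number below is an illustrative placeholder, not a figure from
# the MnDOT study.

def life_cycle_cost(purchase, annual_energy, annual_maintenance,
                    resale, years=10, discount_rate=0.03):
    """Net present value of owning one vehicle over `years` years."""
    cost = purchase
    for t in range(1, years + 1):
        # Discount each year's operating cost back to present value.
        cost += (annual_energy + annual_maintenance) / (1 + discount_rate) ** t
    # Salvage value recovered when the vehicle is retired.
    cost -= resale / (1 + discount_rate) ** years
    return cost

ev = life_cycle_cost(purchase=55_000, annual_energy=1_200,
                     annual_maintenance=900, resale=12_000)
ice = life_cycle_cost(purchase=38_000, annual_energy=3_500,
                      annual_maintenance=2_000, resale=8_000)
print(f"EV  10-year life-cycle cost: ${ev:,.0f}")
print(f"ICE 10-year life-cycle cost: ${ice:,.0f}")
```

With these placeholder inputs the EV's higher purchase price is offset over the decade by lower energy and maintenance costs; the study's actual comparison spans multiple vehicle classes and data-derived cost inputs.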
