
2024-2025 SURE Research Projects in CSE

This page lists summer research opportunities in CSE that are available through the SURE Program. To learn more or apply, visit: https://sure.engin.umich.edu/.

Note: CSE does not require additional materials unless noted in the project description.

Directions

  • Please carefully consider each of the projects listed below before applying to the SURE Program.
  • You must indicate your top three project choices on your SURE application, in order of preference, using the associated CSE project number.
  • Questions regarding specific projects can be directed to the listed faculty mentor. 
  • Timeline: SURE applications will be reviewed throughout the month of March, and recipients will be notified in late March or early April.

Project descriptions

Project #1: Scalable Large Language Model Evaluation on Real-World Tasks
Faculty mentor: Lu Wang, [email protected]
Prerequisites: Fundamental knowledge of natural language processing and AI; proficiency in Python or another programming language.
Description: Our project addresses the need for expert-level evaluation benchmarks to improve LLM evaluation, especially on complex, practical tasks where current models lack alignment with expert judgment. To tackle these challenges, we aim to develop a comprehensive, multi-domain benchmark covering tasks that require domain expertise both to solve and to evaluate. Our objectives are as follows:
  • Create a benchmark of expert-level problems that require advanced knowledge to solve and evaluate.
  • Include real-world tasks across diverse disciplines, with an open framework that allows for community contributions.
  • Use expert-designed rubrics to systematically analyze AI evaluation gaps, biases, and reasoning limitations, comparing AI performance against expert standards across domains.
  • Explore modular evaluation methods that improve AI models’ grading accuracy on expert-level tasks.

Ultimately, this project aims to enhance AI’s capability to align with expert standards, making AI grading systems more reliable and impactful in high-stakes applications such as medical diagnosis, legal summarization, educational assessment, and scientific research. Through this, we will contribute to more trustworthy, expert-aligned AI evaluation systems in fields of significant societal impact.
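
The sketch below is only a toy illustration of rubric-based grading, not the project's actual framework: it compares scores from a stubbed-out LLM judge against hypothetical expert annotations, criterion by criterion. The rubric items, the scores, and the llm_score stub are all assumptions.

```python
# Toy rubric-based grading comparison; every name and value here is hypothetical.
from statistics import mean

RUBRIC = ["factual accuracy", "use of domain evidence", "reasoning soundness"]

def llm_score(response: str, criterion: str) -> int:
    """Stub for an LLM judge; a real system would prompt a model with the rubric item."""
    return 3  # placeholder 1-5 rating

expert_scores = {  # hypothetical expert annotations (1-5 per criterion)
    "factual accuracy": 4, "use of domain evidence": 2, "reasoning soundness": 5,
}

response = "..."  # a model's answer to an expert-level problem
gaps = {c: llm_score(response, c) - expert_scores[c] for c in RUBRIC}
print("per-criterion gap (LLM - expert):", gaps)
print("mean absolute gap:", mean(abs(g) for g in gaps.values()))
```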

Project #2: Moral Reasoning with Large Language Models
Faculty mentor: Lu Wang, [email protected]
Prerequisites: Interest in narrative understanding and experience in NLP or ML are preferred.
Description: The goal of our project is to develop an evaluation framework that provides deeper insights into LLMs’ moral reasoning abilities. Current benchmarks for evaluating moral reasoning in LLMs are mainly static, leading to issues such as data leakage and quickly becoming outdated. While some dynamic benchmarks exist, they often present only simple scenarios, which are insufficient to assess the complex moral reasoning abilities of LLMs. To address these limitations, our project aims to introduce a dynamic evaluation framework that generates complex, evolving synthetic narratives, enabling a more thorough assessment of moral reasoning.

Project #3: Wafer-scale networks for HPC and ML
Faculty mentor: Nathaniel Bleier, [email protected]
Prerequisites: EECS 370.
Description: HPC and machine learning applications use a variety of network topologies, including tori, Clos, and dragonfly networks, and their variants. However, wafer-scale networks on chip are typically implemented as mesh networks. Thus, wafer-scale systems running HPC and machine learning applications either use a suboptimal network topology, or must logically implement the preferred network on top of the physical mesh. This project will 1) establish the feasibility of implementing non-mesh networks-on-wafer, and 2) quantify the power and performance gains which machine learning and HPC applications can realize with non-mesh networks-on-wafer.
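
To make the topology mismatch concrete, here is a minimal sketch (with illustrative numbers, not results from this project) of the cost of emulating a torus's wraparound links on a physical mesh:

```python
# Why topology matters on a wafer: a logical torus link that "wraps around"
# must be routed as a long multi-hop path on a physical mesh.

def mesh_hops(a, b):
    """Manhattan distance between nodes a=(x, y) and b=(x, y) on a mesh."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

n = 8  # an n x n wafer-scale mesh
# In a logical torus, nodes (0, y) and (n-1, y) are direct neighbors (1 hop),
# but on the physical mesh that wraparound link dilates to n-1 hops.
print("torus neighbor: 1 hop; routed on mesh:", mesh_hops((0, 0), (n - 1, 0)), "hops")
```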

Project #4: Epidemic agent-based modeling
Faculty mentor: Alexander Rodríguez, [email protected]
Prerequisites: Machine learning, deep learning.
Description: We will develop agent-based models for epidemiology using AI agents.
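
For readers unfamiliar with the technique, the toy below shows what an agent-based epidemic model looks like in miniature: individual agents in S/I/R states meet at random and probabilistically transmit infection. All parameters are made up, and the project's models (including their use of AI agents) will go far beyond this sketch.

```python
# Toy agent-based SIR epidemic simulation with illustrative parameters.
import random

random.seed(0)
N, P_INFECT, P_RECOVER, CONTACTS = 1000, 0.05, 0.1, 10
state = ["S"] * N  # each agent is Susceptible, Infected, or Recovered
state[0] = "I"     # one initially infected agent

for day in range(60):
    infected = [i for i in range(N) if state[i] == "I"]
    for i in infected:
        # each infected agent meets a few random agents and may infect them
        for j in random.sample(range(N), CONTACTS):
            if state[j] == "S" and random.random() < P_INFECT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"
    if day % 10 == 0:
        print(f"day {day}:", {s: state.count(s) for s in "SIR"})
```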

Project #5: Sound Awareness Systems for Deaf and Hard of Hearing People
Faculty mentor: Dhruv Jain, [email protected]
Prerequisites: Some experience with implementing machine learning (ML) algorithms is preferred. Experience with designing front-end user interfaces and/or conducting user studies is a huge plus; if you have strong experience in these areas but are uncomfortable with ML, please still apply.
Description: We will research, build, and deploy systems that provide sound awareness to people who are deaf and hard of hearing. These could include, but are not limited to: (1) augmented-reality sound visualizations on head-mounted displays, (2) a smartwatch-based sound recognition app, and (3) a web-based personalizable sound recognition system. You will be part of a next-generation team that actively works with the Deaf/disabled population and has a history of successful product launches. Once your system is built, you will participate in conducting field studies with DHH users and help open-source your system, ultimately leading to real-world impact.
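
As a hedged sketch of what sound recognition involves at its simplest, the snippet below classifies a synthetic sound by comparing crude spectral features against stored class centroids; real systems in this project would use trained deep models on real recordings.

```python
# Minimal nearest-centroid sound recognition on synthetic signals.
import numpy as np

def features(x, bands=8):
    """Average spectral energy in a few frequency bands (a crude embedding)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.mean() for b in np.array_split(spec, bands)])

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
alarm = np.sin(2 * np.pi * 3000 * t)           # synthetic stand-in for an alarm tone
knock = np.exp(-40 * t) * np.random.randn(sr)  # synthetic stand-in for a door knock
centroids = {"alarm": features(alarm), "knock": features(knock)}

query = alarm + 0.1 * np.random.randn(sr)  # a noisy new sound to recognize
label = min(centroids, key=lambda k: np.linalg.norm(features(query) - centroids[k]))
print("recognized:", label)
```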

Project #6: Next-Generation Hearables
Faculty mentor: Dhruv Jain, [email protected]
Prerequisites: Software Track: Strong ML or NLP expertise, especially with speech or audio analysis, is required. Experience with designing front-end user interfaces and/or conducting user studies is a plus.
Hardware Track: Hands-on experience working with custom hardware (e.g., PCBs, small microphones, speakers) is required. You’ll be building fully usable and responsive earphones and/or hearing aids, and should feel very comfortable with analyzing, assembling, and testing the hardware components present in everyday earphones (e.g., Apple AirPods).
Description: Our lab is developing next-generation earphones or hearing aids capable of customizing audio delivery based on the user’s intent or the surrounding environment. Imagine, for example, earphones that can isolate a specific speaker’s voice from the crowd while canceling background noise. Or earphones that can adjust frequency response based on a user’s audio profile, or selectively tune audio according to the user’s current activity (e.g., focusing on a task vs. actively listening to music). Building such context-aware earphones requires advancements in both software algorithms and on-device hardware.
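
One small piece of such a system, adjusting frequency response to a user's audio profile, can be sketched as a simple FFT-domain equalizer; the band boundaries and gains below are invented for illustration, and production hearables would do this with low-latency filters on device.

```python
# Simple FFT-domain equalizer applying per-band gains from a (made-up) user profile.
import numpy as np

def equalize(x, sr, band_gains):
    """Scale each frequency band of x by the user's gain for that band."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    for (lo, hi), gain in band_gains.items():
        spec[(freqs >= lo) & (freqs < hi)] *= gain
    return np.fft.irfft(spec, len(x))

sr = 16000
x = np.random.randn(sr)  # stand-in for one second of microphone audio
profile = {(0, 500): 1.0, (500, 2000): 1.5, (2000, 8000): 2.0}  # boost high bands
y = equalize(x, sr, profile)
print(y.shape)  # -> (16000,)
```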

Project #7: Human Acoustics Modeling and Predicting Auditory Conditions
Faculty mentor: Dhruv Jain, [email protected]
Prerequisites: Strong data science and ML expertise related to speech and audio analysis is required.
Description: This project is at the intersection of machine learning and healthcare. We’ll be working with our collaborators in Michigan Medicine to collect human audio data, and subsequently use this data to model human acoustics and diagnose auditory-related medical conditions (e.g., dizziness, hearing loss). You will assist in data collection and in training acoustic machine learning models to model and predict auditory conditions.
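
Purely as an illustration of the modeling step (with synthetic data, not clinical recordings), the sketch below trains a classifier on per-recording acoustic features to predict a condition label; the feature layout and label rule are assumptions.

```python
# Illustrative classifier on synthetic "acoustic features"; not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))  # stand-in for per-recording features (e.g., MFCC means)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic condition label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```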

Project #8: Quantum computing: classical simulation, algorithms, error correction
Faculty mentor: Gokul Ravi, [email protected]
Prerequisites: Some background in quantum computing, Python programming, and one or more of compilers, computer architecture, or logic design is preferred.
Description: Multiple potential projects are available, such as: (1) advancing scalable classical simulation techniques for quantum computing (e.g., simulation of near-Clifford circuits); (2) building efficient decoding techniques for quantum error correction; (3) studying novel algorithms and applications that can bridge the gap between near-term and long-term quantum computing.
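
To make "classical simulation" concrete: a brute-force statevector simulator stores 2^n amplitudes for n qubits, which is exactly why specialized techniques (such as stabilizer methods for Clifford circuits) are needed at scale. The minimal sketch below applies one gate to a small statevector.

```python
# Brute-force statevector simulation: memory grows as 2**n, motivating
# stabilizer-based methods for (near-)Clifford circuits.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(np.tensordot(gate, np.moveaxis(state, qubit, 0), axes=1), 0, qubit)
    return state.reshape(-1)

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                    # |000>
state = apply_1q(state, H, 0, n)  # Hadamard on qubit 0
print(np.round(state, 3))         # amplitude 0.707 on |000> and |100>
```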

Project #9: Simplifying Cloud Management with Cloudless Computing
Faculty mentor: Ang Chen, [email protected]
Prerequisites: Coursework or experience in operating systems, networking, or security.
Description: Cloud computing has transformed the IT industry, but managing cloud infrastructures remains a difficult task. We make a case for putting today’s management practices, known as “Infrastructure-as-Code,” on firmer ground via a principled design. We call this end goal Cloudless Computing: it aims to simplify cloud infrastructure management tasks by supporting them “as-a-service,” analogous to serverless computing, which relieves users of the burden of managing server instances. By assisting tenants with these tasks, Cloudless Computing will make cloud resources available to users more readily, without the undue burden of complex control. In particular, we are exploring the power of AI/LLM agents for this task.

Project #10: Hazel: Live Functional Programming with Typed Holes
Faculty mentor: Cyrus Omar, [email protected]
Prerequisites: EECS 390 or EECS 490 or EECS 483 recommended, though strong students willing to pick up this background in an accelerated fashion can also apply.
Description: SURE students will contribute to the design, implementation, and theory of the Hazel programming environment. Hazel is a live programming environment, meaning that it maintains a running program at all times. To maintain liveness even in the presence of incomplete code, Hazel inserts holes into the program to maintain its syntactic structure. These holes are typed, and Hazel uses the type and context around the hole in the development of its AI assistant system. There are also a number of other editor services that take advantage of liveness to offer helpful feedback to students and others using the system. Specific student projects will be developed in line with student interests and skills.
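
As a loose analogy only (Hazel is not written in Python, and its type system is far richer), the sketch below shows the core idea that a hole is a first-class expression carrying the type its context expects:

```python
# Toy "typed hole": an incomplete program stays well-typed because the hole
# records the type its surrounding context demands.
from dataclasses import dataclass

@dataclass
class Hole:
    expected_type: str  # the type the surrounding context demands

@dataclass
class Plus:
    left: object
    right: object

def typecheck(e) -> str:
    if isinstance(e, int):
        return "int"
    if isinstance(e, Hole):
        return e.expected_type  # a hole checks against whatever is expected
    if isinstance(e, Plus):
        assert typecheck(e.left) == "int" and typecheck(e.right) == "int"
        return "int"
    raise TypeError(e)

# `1 + _` remains well-typed; the hole's expected type is exactly the kind of
# context an editor service or AI assistant can exploit.
print(typecheck(Plus(1, Hole(expected_type="int"))))  # -> int
```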

Project #11: Fully-Integrated and Skin-Conformal Wearable On-Body Motion Sensing System
Faculty mentor: Alanson Sample, [email protected]
Prerequisites: Experience in one of the following areas is preferred: Electronics, Embedded Systems, or Machine Learning.
Description: Accurate and untethered 3D tracking of body movements has the potential for applications in augmented/virtual reality, sports performance analysis, as well as rehabilitation and physical therapy. However, existing wearable tracking technologies face significant challenges: IMUs suffer from drifting errors, and optical or depth-based sensing is hindered by line-of-sight occlusions caused by moving body parts. This project seeks to develop a novel, fully integrated, skin-conformal wearable tracking system capable of reconstructing 3D body motions in real-time. The proposed system will leverage advanced sensing modalities, such as electromagnetic and ultrasonic acoustic sensing. The use of low-profile, flexible circuit designs will ensure comfort and enable long-term field deployment on various body joints. Students participating in this project will join a collaborative team of undergraduate and graduate researchers in the Interactive Sensing and Computing Lab in Computer Science and Engineering. They will contribute to the development of flexible, conformal sensing circuits that adhere to body surfaces and detect user motions in real-time. Participants will gain valuable hands-on experience in designing robust embedded systems and applying machine learning techniques to solve real-world challenges in wearable technology.
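
The IMU drift problem mentioned above is easy to demonstrate numerically: double-integrating even small accelerometer noise produces position error that grows without bound. The noise level below is an arbitrary illustrative value.

```python
# Double-integrating accelerometer noise: position error grows over time.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 6000                    # 60 seconds at 100 Hz
accel_noise = rng.normal(0, 0.02, steps)  # m/s^2 noise; true acceleration is zero

velocity = np.cumsum(accel_noise) * dt    # first integration
position = np.cumsum(velocity) * dt       # second integration
print(f"position error after 60 s: {abs(position[-1]):.2f} m")
```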

Project #12: Waves to Words: Extending the Power of LLMs to New Sensing Modalities
Faculty mentor: Alanson Sample, [email protected]
Prerequisites: EECS 281 or experience with ML and LLMs.
Description: Large language models (LLMs) have demonstrated exceptional abilities in understanding, reasoning, and connecting information, transforming the fields of text and image processing. However, their application to other sensing modalities remains underexplored. This project seeks to leverage the unique strengths of LLMs to design an advanced signal processing pipeline for sensor data, enabling more efficient and powerful analysis of complex sensory inputs. Students will join a team of undergraduate and graduate researchers in the Interactive Sensing and Computing Lab within the Computer Science and Engineering department. Participants will contribute to cutting-edge research at the intersection of artificial intelligence and signal processing. Responsibilities include implementing code for LLM-based models, designing and applying signal processing algorithms, developing machine learning techniques, training models, and conducting data analysis. This hands-on experience will equip students with invaluable skills in using LLMs for innovative applications, fostering expertise in AI-driven signal processing for real-world applications.
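
One hedged sketch of how sensor data might reach an LLM is SAX-style symbolic encoding: quantize a signal window into a short string of symbols that a language model can reason over. The binning scheme and prompt framing below are illustrative assumptions, not the project's design.

```python
# SAX-style symbolic encoding of a sensor window for LLM consumption.
import numpy as np

def to_tokens(window, symbols="abcdefgh"):
    """Map each sample to a letter by amplitude quantile."""
    edges = np.quantile(window, np.linspace(0, 1, len(symbols) + 1)[1:-1])
    return "".join(symbols[i] for i in np.digitize(window, edges))

t = np.linspace(0, 2, 64)
window = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(64)  # stand-in sensor window
prompt = f"Accelerometer window encoded as symbols: {to_tokens(window)}. Describe the activity."
print(prompt)
```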

Project #13: Building Tools to Improve Civic Discourse Online
Faculty mentor: Farnaz Jahanbakhsh, [email protected]
Prerequisites: Web development (full stack) and familiarity with APIs; familiarity with statistical analysis is a huge plus.
Description: We explore how computational tools can enhance interpersonal trust and improve the quality of online discourse. We aim to design, build, and evaluate systems that leverage theories from the Social Sciences and personalized feedback to foster more constructive and empathetic interactions in contexts such as civic discussions and misinformation correction. The project includes developing AI-powered systems that adapt users’ communication in real-time, personalizing fact-checking information, and conducting experiments to measure the impact of these tools on human behavior. By joining this project, you will contribute to creating technologies that address critical challenges in trust-building and online communication, with the potential for meaningful societal impact.

Project #14: Distributed Multi-Modal Wearable Foundation Models for Comprehensive Daily Life Logging
Faculty mentor: Ke Sun, [email protected]
Prerequisites: Mobile Development (iOS and Android), Machine Learning, Signal Processing, Embedded Systems.
Description: The growing demand for personalized and real-time insights into health and behavior is fueled by the rapid adoption of wearable technology. As users seek comprehensive and non-intrusive methods to track their activities, traditional single-device and single-sensor systems often fall short in delivering rich, multi-dimensional insights. This project envisions a usage scenario where individuals can leverage a network of wearable devices—such as smartwatches, smartphones, wireless earbuds, fitness bands, and smart rings—to continuously and comprehensively log and analyze daily activities.

The project focuses on the development of distributed multi-modal foundation models tailored for wearable devices, aimed at facilitating comprehensive daily life logging. By integrating data streams from a diverse range of wearable technologies, it seeks to build a distributed system capable of seamlessly capturing, processing, and interpreting user activities. The goal is to achieve a holistic understanding of daily patterns while optimizing energy efficiency and communication requirements and ensuring data privacy. Such a system could be especially valuable for individuals managing chronic health conditions, enhancing fitness routines, or monitoring stress and wellness during work and leisure activities, ultimately improving overall quality of life and paving the way for next-generation wearable intelligence.

Project #15: Encouraging Bridge-Building Content Assessments Online
Faculty mentor: Farnaz Jahanbakhsh, [email protected]
Prerequisites: Web development (full stack); familiarity with statistical analysis is a huge plus.
Description: This project explores how to encourage people to write assessments of online content that foster understanding and resonate across divides, such as political or demographic differences. Through the development of an experimental platform, we aim to study how gamified incentives and feedback mechanisms can promote thoughtful, inclusive evaluations of news and social media content. The goal is to create systems that facilitate constructive interactions and contribute to reducing polarization in online discourse.

Project #16: WorldScribe: Spatial and Temporal Awareness in Live Visual Descriptions for Long-Term Real-World Understanding
Faculty mentor: Anhong Guo, [email protected]
Prerequisites: Computer vision, natural language processing, and mobile app programming.
Description: This project extends WorldScribe (https://worldscribe.org/), a system that provides context-aware live visual descriptions for blind people to access the real world. However, WorldScribe only takes the current context into account for description generation and falls short in noting and tracing history (e.g., what has been described, where the user has visited, and what has changed), which can create information overload and confusion. In this project, we want to extend WorldScribe with memory so that it can remember visual, spatial, and temporal information and changes, and can provide the most relevant information for blind users. In this SURE project, you will contribute to a mobile real-world agent architecture, including the front-end (e.g., iOS) or back-end (e.g., LLM, CV, and NLP techniques).
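
A minimal sketch of the kind of memory layer described above might store each description with a time and place and suppress repeats; the data structures and the cooldown threshold are illustrative assumptions, not WorldScribe's actual design.

```python
# Toy description memory: skip re-describing what was just described nearby.
from dataclasses import dataclass

@dataclass
class Memory:
    label: str   # what was described (e.g., "red door")
    place: str   # coarse location tag (e.g., "hallway")
    time: float  # when it was last described, in seconds

memories: list[Memory] = []

def should_describe(label: str, place: str, now: float, cooldown: float = 60.0) -> bool:
    """Return False if this item was described at this place within the cooldown."""
    for m in memories:
        if m.label == label and m.place == place and now - m.time < cooldown:
            return False
    memories.append(Memory(label, place, now))
    return True

print(should_describe("red door", "hallway", 10.0))  # True: new information
print(should_describe("red door", "hallway", 30.0))  # False: just described
```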

Project #17: HandProxy: Speech-controlled Hand Input for Extended Input Bandwidth in Virtual Environments
Faculty mentor: Anhong Guo, [email protected]
Prerequisites: Unity, VR/AR, natural language processing, and programming experience with C# and Python.
Description: Hand interactions have been commonly used as the primary input modality in virtual environments. However, users may face challenges performing the required hand movements when their hands are occupied, fatigued, or when virtual objects are out of reach. This project explores the use of speech to command a virtual hand that performs hand interactions on behalf of the user. The system is able to understand the user’s dynamic ways of giving instructions, gather necessary information through context understanding, decompose it into executable steps, and generate the sequence of hand pose data to be used by the target XR system. In this project, you will contribute to the development of the system, including building the backend (integrating LLM, processing speech input, building processing pipeline), constructing Unity testing environment, and designing front-end overlay (designing visual feedback).
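
The pipeline can be sketched at a toy scale as: parse the utterance into an action and target, then emit a pose sequence. The keyword parser and placeholder keyframes below are hypothetical stand-ins; the actual system uses an LLM for understanding and generates full hand pose data.

```python
# Toy speech-command pipeline: utterance -> (action, target) -> pose keyframes.
KNOWN_ACTIONS = {"grab", "rotate", "press"}

def parse_command(utterance: str):
    """Crude keyword parse; the real system uses an LLM with context."""
    words = utterance.lower().split()
    action = next((w for w in words if w in KNOWN_ACTIONS), None)
    target = words[-1] if action else None
    return action, target

def pose_sequence(action: str, target: str):
    """Return placeholder keyframes; a real system generates joint-level data."""
    return [f"{action}:{target}:frame{i}" for i in range(3)]

action, target = parse_command("Please grab the lever")
print(pose_sequence(action, target))  # ['grab:lever:frame0', ...]
```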

Project #18: A11y Agents: Persona-based Multimodal Large Language Model Agent for Accessibility Research Role-playing
Faculty mentor: Anhong Guo, [email protected]
Prerequisites: Natural language processing, generative AI, and user research.
Description: Research on role-playing language agents (RPLAs) is growing rapidly, and these agents have emerged as a powerful tool for simulating social interactions and representing diverse personas. This project aims to design and develop multimodal large language model (MLLM) architectures that can effectively role-play and simulate the experiences of individuals with disabilities, particularly those with visual impairments. We will create and implement multimodal large language model pipelines. Then, we will evaluate these agents on their ability to perform tasks such as co-designing assistive tool prototypes, participating as research subjects, collaborating with humans or other agents, and undertaking other roles commonly performed by blind individuals. As part of this research, you will contribute to the design and development of the agents’ core architecture, assist in implementing multimodal LLM pipelines, and support user studies involving blind participants to evaluate the agents’ performance and usability.
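
At its simplest, persona conditioning amounts to building a structured system prompt from persona attributes; the persona fields and wording below are invented for illustration, and the project's MLLM pipelines and evaluations go well beyond this.

```python
# Sketch of persona conditioning: turn persona attributes into a system prompt.
persona = {
    "name": "Alex",
    "background": "screen reader user; onset of blindness at age 12",
    "goal": "assessing whether an app's image descriptions are useful",
}

system_prompt = (
    f"You are role-playing {persona['name']}, a blind participant in a study. "
    f"Background: {persona['background']}. Current goal: {persona['goal']}. "
    "Answer interview questions in the first person, grounded in this persona."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How do you usually explore a new mobile app?"},
]
print(messages[0]["content"])  # ready to send to any chat-style LLM API
```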

Project #19: IoT enabled Medication Adherence Monitoring
Faculty mentor: Alanson Sample, [email protected]
Prerequisites: EECS 373 or other embedded C experience.
Description: Glaucoma is an eye disease that damages the optic nerve, leading to irreversible vision loss. While medications can effectively slow or prevent disease progression, non-adherence remains a significant challenge, particularly among the populations most affected by glaucoma. This project aims to address this issue by developing low-power embedded devices that monitor and track medication adherence, providing care providers with detailed insights into patient care status. SURE students will work alongside a team of graduate and undergraduate researchers who have designed cellular and Bluetooth-enabled devices that collect and transmit fine-grained medication usage statistics. Students will work with hardware from the nRF52 and nRF91 families to enable firmware-over-the-air (FOTA) updates and to control deployed devices from an online portal. SURE students will gain hands-on experience in embedded systems development, wireless communication, and healthcare-focused technology design. This work has the potential to significantly enhance glaucoma management by empowering care providers to make more informed decisions based on real-time adherence data.

Project #20: Machine Learning for Healthcare
Faculty mentor: Jenna Wiens, [email protected]
Prerequisites: EECS 445.
Description: While ML algorithms often optimize for accuracy, the resulting models can exacerbate existing inequities in clinical care. In this project, you will investigate health outcomes in the CMS Medicare population and evaluate the equity of risk adjustment solutions, with the overall goal of developing more accurate and more equitable risk adjustment for the Medicare Advantage population.
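
As one illustration of an equity check (on synthetic data, not CMS data), the sketch below compares a model's predicted-to-observed ratio across two groups; systematic underprediction for one group is a miscalibration that equitable risk adjustment should avoid.

```python
# Group-wise calibration check on synthetic data: a ratio far from 1.0 for one
# group signals inequitable risk adjustment.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)               # e.g., a demographic split
observed = rng.normal(5000, 1000, size=1000)            # observed annual cost
predicted = observed + np.where(group == "A", 0, -800)  # model underpredicts for B

for g in ("A", "B"):
    m = group == g
    ratio = predicted[m].mean() / observed[m].mean()
    print(f"group {g}: predicted/observed = {ratio:.3f}")
```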