Undergraduate Project Winners

First Place

UC-144 Attack Surface Management and Analysis (Undergraduate Project) by McLemore, Danard S, Tanner, Nick A, Berry, Keshaun, Thairu, Nelson, Ciin, Niang
Abstract: Recent advancements in AI have made knowledge more accessible, but this also introduces risks, as vulnerabilities can now be quickly found and exploited. To address this, we developed a comprehensive, cloud-native attack surface monitoring suite in Google Cloud. Integrating open-source intelligence tools like OWASP Amass and Project Discovery, along with custom Python-based processing, we gather extensive security data—covering subdomain enumeration, open ports, HTTP responses, and DNS configurations. This data is stored in BigQuery, processed, and visualized in Looker Studio for easy client interpretation. A containerized, scalable backend with a Flask-based API ensures seamless tool integration and adaptability. BigQuery ML further classifies domains’ security, empowering organizations with proactive risk assessment and attack surface monitoring.
Department: Computer Science
Supervisor: Prof. Sharon Perry
Presentation | Poster | More Information
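
As a hedged illustration of the ingestion path described in the UC-144 abstract, the sketch below shows a Flask endpoint streaming scan records into BigQuery. The table name, record schema, and endpoint path are assumptions chosen for illustration, not the team's actual configuration.

```python
# Hypothetical sketch: a Flask endpoint that accepts scan results (e.g., from
# Amass or a Project Discovery tool) and streams them into BigQuery.
# The project/dataset/table name and the record fields are placeholders.
from flask import Flask, request, jsonify
from google.cloud import bigquery

app = Flask(__name__)
bq = bigquery.Client()
TABLE_ID = "my-project.attack_surface.scan_results"  # placeholder table

@app.route("/ingest", methods=["POST"])
def ingest():
    # Expected payload (assumed): [{"subdomain": ..., "port": ..., "status": ...}, ...]
    records = request.get_json(force=True)
    errors = bq.insert_rows_json(TABLE_ID, records)  # streaming insert
    if errors:
        return jsonify({"status": "error", "details": errors}), 500
    return jsonify({"status": "ok", "inserted": len(records)}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```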
 

Second Place

UC-226 Real-Time Bus Monitoring Using Kafka (Undergraduate Project) by Bostian, Samuel A, Rizig, Michael, McLarty, Charlie, Pruitt, Brian A, Roman, Allen
Abstract: The GCPS Real-Time Bus Monitoring System aims to enhance bus operations for Gwinnett County Public Schools by transitioning from a polling-based system to a real-time Kafka event-streaming architecture. This project processes telemetry data from over 2,000 buses, simulating a scalable, near-instantaneous data flow into an SQL Server database. Key features include real-time data validation, efficient data storage, and containerized deployment for consistency across environments. Using an Agile approach, our team handled evolving requirements from the sponsor, who is new to senior project collaborations. This system enables GCPS to monitor bus locations with reduced latency, enhanced accuracy, and improved resource management, laying a robust foundation for future scalability and analytics.
Department: Computer Science
Supervisor: Prof. Sharon Perry
Presentation | Poster | More Information
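
The sketch below is a minimal illustration of the event-streaming flow described in the UC-226 abstract: a Kafka consumer that validates telemetry and inserts it into SQL Server. The topic name, message fields, table, and connection string are assumptions, not GCPS's actual schema.

```python
# Illustrative sketch only: consume bus telemetry from Kafka, validate it,
# and write it to SQL Server. All names below are assumed placeholders.
import json
import pyodbc
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "bus-telemetry",                              # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=Buses;UID=app;PWD=secret;TrustServerCertificate=yes"
)
cursor = conn.cursor()

def is_valid(msg):
    """Basic real-time validation: required fields and plausible coordinates."""
    return all(k in msg for k in ("bus_id", "lat", "lon", "ts")) and \
        -90 <= msg["lat"] <= 90 and -180 <= msg["lon"] <= 180

for message in consumer:
    telemetry = message.value
    if not is_valid(telemetry):
        continue  # drop malformed events
    cursor.execute(
        "INSERT INTO BusLocations (BusId, Latitude, Longitude, RecordedAt) "
        "VALUES (?, ?, ?, ?)",
        telemetry["bus_id"], telemetry["lat"], telemetry["lon"], telemetry["ts"],
    )
    conn.commit()
```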
 

Third Place

UC-247 Using Dynamic Difficulty Adjustment (DDA) to Improve Health and Wellness Apps and Programs (Undergraduate Project) by Orfila, Fernando
Abstract: Physical inactivity, obesity and Type 2 Diabetes cost the United States’ economy more than $700 billion a year (CDC). Yet, individuals spend $137 billion a year on gym memberships to get in shape and feel better, only to drop out without attaining results. “…63% of new members will abandon activities before the third month, and less than 4% will remain for more than 12 months of continuous activity.” (Sperandei et al). Personal training apps don’t fare better, with 71% of users disengaging within 90 days (Amagai et al). The higher dropout rates are attributed to "a higher degree of discomfort and distress during exercise sessions" (Sperandei 919). Additionally, individuals with fewer than two training sessions per week have higher attrition rates (Garay et al 7). Our hypothesis is that Dynamic Difficulty Adjustment (DDA) could be used beyond video games to create positive habits and to increase the amount of physical exercise by adapting exercise intensity levels to the physical condition of the person exercising in real time. DDA is a technique used in video games to adaptively change the game's difficulty level in response to the player's performance, creating an engaging and tailored playing experience that keeps the player involved longer. We expect that the findings of this research can be applied to designs in other areas of healthcare and wellness programs to effectively improve adherence and reduce attrition, potentially reducing the national and personal costs of poorly designed digital health and wellness products.
Department: Software Engineering and Game Development
Supervisor: Dr. Lei Zhang
Presentation | Poster
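
As a rough illustration of the DDA idea described in the UC-247 abstract, the sketch below adjusts exercise intensity from a simulated heart-rate stream. The target zone and step sizes are illustrative assumptions, not values from the study.

```python
# A minimal, hypothetical DDA loop: adjust exercise intensity from the gap
# between the user's measured heart rate and a target zone.
def adjust_intensity(current_intensity, heart_rate, target_low=120, target_high=150,
                     step=0.05, min_i=0.2, max_i=1.0):
    """Return a new intensity in [min_i, max_i] (fraction of max resistance/speed)."""
    if heart_rate > target_high:        # user is overexerted -> ease off
        current_intensity -= step
    elif heart_rate < target_low:       # user is underchallenged -> push a bit
        current_intensity += step
    return max(min_i, min(max_i, current_intensity))

# Example: a workout session reacting to simulated heart-rate readings.
intensity = 0.5
for hr in [110, 118, 131, 149, 158, 162, 144]:
    intensity = adjust_intensity(intensity, hr)
    print(f"HR={hr:3d} -> intensity={intensity:.2f}")
```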
 

Graduate Project Winners

First Place

GMC-4190 CellNucleiRAG - Smart Search Tool for Cell Nuclei Research (Graduate Project) by Koganti, Sai Chandana
Abstract: CellNucleiRAG is a specialized tool developed to address a significant challenge in medical research: the rapid retrieval and synthesis of detailed information on cell nuclei. Understanding cell nuclei characteristics is crucial in fields like pathology, oncology, and diagnostics, where detailed cell analysis can guide disease identification and treatment planning. However, accessing relevant, organized information on specific cell nuclei types, datasets, models, and methods is often time-consuming, requiring manual searches through multiple, disparate sources. CellNucleiRAG solves this problem by acting as a smart search engine, designed specifically for cell nuclei research, combining traditional retrieval methods with advanced AI capabilities. Built with an underlying Retrieval-Augmented Generation (RAG) architecture, CellNucleiRAG leverages MinSearch for rapid data retrieval, pulling relevant records from a curated dataset that contains information on various nuclei types, datasets, and analytical models. Once relevant data is retrieved, it is processed by an LLM (Large Language Model) to generate contextually accurate, human-readable responses. This dual approach ensures both precision and clarity, allowing researchers to receive comprehensive answers rather than isolated data points. Key technologies used in this project include Docker, for environment consistency; Flask, for a streamlined user interface; PostgreSQL, for storing interactions and user feedback; and Grafana, for real-time system performance monitoring. User feedback is incorporated to continually refine the tool, enhancing the accuracy and relevance of responses.
Department: Computer Science
Supervisor: Dr. Coskun Cetinkaya
Presentation | Poster
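
The sketch below illustrates the general retrieve-then-generate flow the GMC-4190 abstract describes, with a naive keyword-overlap retriever standing in for MinSearch and a placeholder model name; it is an assumption-laden sketch, not the project's implementation.

```python
# Hedged RAG sketch: retrieve matching records, build a prompt, ask an LLM.
# The retrieval stub, prompt template, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; the project's LLM may differ

def retrieve(query, documents, k=5):
    """Placeholder retrieval: rank records by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_terms & set(d["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, documents):
    context = "\n\n".join(d["text"] for d in retrieve(query, documents))
    prompt = ("Answer the question about cell nuclei research using only this context:\n"
              f"{context}\n\nQuestion: {query}")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

docs = [{"text": "The PanNuke dataset contains labeled nuclei across 19 tissue types."},
        {"text": "HoverNet performs simultaneous nuclear segmentation and classification."}]
print(answer("Which dataset covers many tissue types?", docs))
```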
 

Second Place

GMC-2162 Prompt Engineering and Its Effects on AI and Human Relationships: A Contemporary Approach (Graduate Project) by Madu, Francis, Kadiyala, Naga Janaki Madhav, Thallapally, Nivesh
Abstract: A. Background: Prompt engineering refers to the process of designing and refining input prompts for AI models (especially language models like GPT) to improve their outputs. It has become a critical tool in maximizing the performance and utility of AI models in diverse applications, from customer service to content creation. Beyond technical aspects, the interaction between humans and AI is increasingly shaped by the effectiveness of these prompts. B. Motivation: As AI becomes more integrated into daily life, the way humans interact with AI models is profoundly influenced by prompt engineering. Misaligned prompts can lead to misunderstanding, confusion, or unintended outcomes, affecting both the utility of AI systems and the trust people place in them. Our project seeks to understand how different prompt strategies impact not only AI performance but also human perceptions and relationships with AI systems. By exploring these dynamics, we aim to develop best practices in prompt engineering that foster both efficient AI performance and positive human-AI relationships. C. Expected Results: We expect to demonstrate that well-constructed prompts not only improve AI output quality but also lead to more transparent, trustworthy, and meaningful human-AI interactions. This will be quantified through various metrics such as response accuracy, user satisfaction, and interaction smoothness.
Department: Computer Science
Supervisor: Dr. Chen Zhao
Presentation | Poster
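
As a hedged illustration of the kind of comparison the GMC-2162 study describes, the sketch below sends the same task to one model as a terse prompt and as a structured, context-rich prompt. The model name and prompt wording are assumptions chosen for illustration.

```python
# Illustrative prompt-strategy comparison: same task, two prompt styles.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

PROMPTS = {
    "terse": "Explain overfitting.",
    "structured": ("You are a tutor for first-year data science students. "
                   "In three short sentences, explain what overfitting is, "
                   "give one concrete example, and suggest one way to detect it."),
}

for label, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---\n{resp.choices[0].message.content}\n")
```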
 

Third Place

GMC-157 Text-to-Digital Person Video Generator: DigitalAvatarGen (Graduate Project) by Kesharwani, Akansha, Bagdwal, Nisha, Hossain, MD E, Patel, Drashti, Adigoppula, Nikhil
Abstract: The Text-to-Digital Person Video Generator (DigitalAvatarGen) project uses AI to create lifelike videos of 2D digital avatars from user text input. Users enter text, select a voice, select or upload an avatar, and generate a video through the DigitalAvatarGen web application, which uses Google TTS and SadTalker to synchronize voice, expressions, and lip movements. Key contributions include a customizable user interface, personalized voice and avatar options, and an optimized backend for efficient video generation. This tool provides an engaging, realistic solution for applications in education, media, and customer interaction.
Department: Information Technology
Supervisor: Dr. Ying Xie
Presentation | Poster | More Information
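
The sketch below illustrates a two-stage text-to-talking-avatar pipeline similar to the one described in the GMC-157 abstract: speech synthesis followed by SadTalker-driven lip sync. The gTTS call stands in for Google TTS, and the SadTalker command-line flags are assumptions based on its published inference script, not the team's actual integration.

```python
# Hedged sketch: text -> speech -> lip-synced avatar video.
# SadTalker flags below are assumed from its typical CLI and may differ.
import subprocess
from gtts import gTTS

def generate_avatar_video(text, avatar_image, out_dir="results"):
    # 1) Text -> speech (gTTS used here as a stand-in for Google TTS)
    audio_path = "speech.mp3"
    gTTS(text=text, lang="en").save(audio_path)

    # 2) Speech + avatar image -> talking-head video (assumed SadTalker CLI usage)
    subprocess.run(
        ["python", "inference.py",
         "--driven_audio", audio_path,
         "--source_image", avatar_image,
         "--result_dir", out_dir],
        check=True,
    )

generate_avatar_video("Welcome to DigitalAvatarGen.", "avatar.png")
```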
 

Undergraduate Research

First Place

UR-147 An 8-bit Digital Computer Design & Implementation (Team COA-WM1) (Undergraduate Research) by Sherard, Adrian L, Flores, Jesus, Lamsal, Biswash, Hammontree, Blake, Pitts, William
Abstract: An 8-bit digital computer design and implementation using NI Multisim.
Department: Computer Science
Supervisor: Prof. Waqas Majeed
Presentation | Poster
 

Second Place

UR-172 A Comparative Study of LLM Effectiveness in Mental Health Assistance (Undergraduate Research) by Prasad, Kris
Abstract: This study evaluates the effectiveness of LLMs in supporting mental health applications by analyzing their performance in understanding and categorizing mental health-related user inputs. We collected data from various mental health apps on the Google Play Store, including user reviews and app descriptions, and filtered content using a targeted mental health keyword bank. Sentiment analysis and keyword similarity scores were generated for reviews using RoBERTa-based models, showing how each review aligned with the mental health keywords advertised by the app and how users felt about the app. We prompted four modern LLMs: GPT-4o, Claude 3.5 Sonnet, Gemma 2, and GPT-3.5-Turbo, and provided Gemma 2 and GPT-3.5-Turbo with our dataset for more informed outputs. Our prompts covered five common mental health conditions (depression, anxiety, ADHD, PTSD, and insomnia), and we asked each model to provide up to five app recommendations. The results showed that our data-enhanced LLMs noticeably outperformed the other state-of-the-art LLMs in accuracy, quality, and variety of outputs while being much more cost-effective. This suggests that data-enhanced, low-cost LLMs can serve as an effective alternative to newer, more powerful, and more expensive models, achieving notably better results in interpreting nuanced text for mental health applications.
Department: Computer Science
Supervisor: Dr. Md Abdullah Al Hafiz Khan
Presentation | Poster
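
As a hedged sketch of the review-scoring step described in the UR-172 abstract, the code below pairs a RoBERTa-based sentiment pipeline with a naive keyword-overlap score. The model checkpoint and keyword bank are assumptions, not the study's exact setup.

```python
# Illustrative review scoring: RoBERTa sentiment + toy keyword-similarity score.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment-latest")  # assumed model

KEYWORDS = {"anxiety", "depression", "sleep", "stress", "therapy"}  # toy keyword bank

def keyword_similarity(review):
    """Fraction of keyword-bank terms that appear in the review (naive overlap score)."""
    tokens = set(review.lower().split())
    return len(KEYWORDS & tokens) / len(KEYWORDS)

review = "This app really helped my anxiety and my sleep improved."
print(sentiment(review)[0])        # e.g. {'label': 'positive', 'score': ...}
print(keyword_similarity(review))  # 0.4
```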
 

Third Place

UR-213 Generative AI & Cybersecurity (Undergraduate Research) by Canada, Seth G, Katare, Shreya
Abstract: This research project details the impact of Generative AI on cybersecurity through both its potential enhancements and its threats. Using advanced AI algorithms, the project explores how Generative AI can strengthen cybersecurity through systems such as anomaly detection, intrusion detection systems (IDS), and malware analysis. It also addresses the growing challenges posed by Generative AI, in particular deepfake phishing and polymorphic malware. Solutions to mitigate these issues are also provided to encourage further understanding of the field. The goal of this research is to offer practical solutions for addressing the growing field of AI-driven cybersecurity.
Department: Computer Science
Supervisor: Dr. Yong Shi
Project Advisor: Prof. Sharon Perry
Presentation | Poster | More Information
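
The sketch below illustrates one defensive technique the UR-213 project surveys, anomaly detection, using an Isolation Forest over synthetic network-flow features; the feature set and data are illustrative assumptions.

```python
# Toy anomaly detection over network-flow features with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [bytes_sent, packets_per_second, distinct_ports_contacted]
normal_traffic = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(500, 3))
suspicious = np.array([[50_000, 400, 120]])  # e.g. exfiltration or port scan

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(model.predict(suspicious))          # [-1] -> flagged as anomalous
print(model.predict(normal_traffic[:3]))  # mostly [1] -> normal
```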

 

Master's Research

First Place

GMR-4234 Evaluating Instance Segmentation Models on Histopathology Datasets (Master's Research) by Koganti, Sai Chandana
Abstract: Instance segmentation is transforming digital pathology by enhancing the speed and accuracy of tissue sample analysis through advanced image processing techniques. Whole Slide Imaging (WSI) converts traditional microscope slides into high-resolution digital formats, enabling detailed examinations. This paper presents a brief experimental survey of instance segmentation models on two prominent histopathology datasets: PanNuke and NuCLS. Unlike previous surveys that merely describe deep learning models for general pathology images, we conduct experiments using state-of-the-art models including Mask R-CNN, Detectron2, YOLOv8, YOLOv9, and HoverNet on both datasets. Our study evaluates these models for both binary and multiclass instance segmentation tasks. The NuCLS dataset, featuring over 220,000 annotated nuclei from breast cancer histopathology images, is used for multiclass segmentation across 13 distinct nuclear classes. The PanNuke dataset, comprising 205,343 labeled nuclei across 19 tissue types, is employed for both multiclass and binary instance segmentation of five cell types: neoplastic, inflammatory, soft tissue, dead, and epithelial. We assess each model's performance using metrics such as mean average precision (mAP), F1 score, and Dice coefficient, providing a comprehensive evaluation of their strengths and limitations. The results of our study offer valuable insights into the capabilities of different instance segmentation models in histopathology image analysis. We observe varying performance across tissue types and cell categories, highlighting the importance of model selection based on specific histopathology tasks. Our findings aim to guide researchers in choosing appropriate models for their specific needs, ultimately contributing to the advancement of digital pathology and improving diagnostic accuracy in clinical practice. The study also provides a foundation for future research in instance segmentation for histopathology images.
Department: Computer Science
Supervisor: Dr. Sanghoon Lee
Presentation | Poster
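
As a small illustration of one of the metrics reported in the GMR-4234 abstract, the sketch below computes the Dice coefficient between a predicted nucleus mask and its ground-truth mask on toy data.

```python
# Dice coefficient between binary segmentation masks (toy 8x8 example).
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

gt = np.zeros((8, 8), dtype=np.uint8); gt[2:6, 2:6] = 1   # ground-truth nucleus
pr = np.zeros((8, 8), dtype=np.uint8); pr[3:7, 2:6] = 1   # slightly shifted prediction
print(round(dice_coefficient(pr, gt), 3))  # 0.75
```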
 

Second Place

GMR-229 Semantic Search using Sentence Transformers (Master's Research) by Satish, Roshni, Challa, Arpana
Abstract: Traditional keyword-based search engines struggle to accurately capture the semantics of user queries across today's enormous digital resources. Our research study focuses on creating a semantic search engine that uses Sentence Transformers to improve information retrieval by understanding the context of queries and documents. Our method creates sentence embeddings for documents and user queries, allowing retrieval based on semantic similarity rather than keyword matching. The project involves data collection and preprocessing, feature extraction with Sentence Transformers, and implementation of a search engine that ranks documents based on cosine similarity to query embeddings. Preliminary testing shows that this method greatly improves search relevance and accuracy; we compared the results with a baseline algorithm, BM25, to assess the effectiveness of Sentence Transformers in enhancing retrieval relevance. This work opens the door for future refinements in retrieval systems based on natural language processing and shows how semantic search engines can deliver results that are more contextually aligned.
Department: Computer Science
Supervisor: Dr. Md Abdullah Al Hafiz Khan
Presentation | Poster
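
The sketch below shows the core retrieval step described in the GMR-229 abstract: embedding documents and a query with a Sentence Transformer and ranking by cosine similarity. The model checkpoint is a common default and an assumption here.

```python
# Minimal semantic search: embed, then rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

docs = [
    "The library opens at 9 am on weekdays.",
    "Photosynthesis converts sunlight into chemical energy.",
    "Our search engine ranks documents by meaning, not keywords.",
]
doc_emb = model.encode(docs, convert_to_tensor=True)

query = "How do plants make food from light?"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]   # cosine similarity to each document
best = scores.argmax().item()
print(docs[best], float(scores[best]))
```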
 

Third Place

GMR-7175 Enhancing Alzheimer’s Diagnosis through Spontaneous Speech Recognition: A Deep Learning Approach with Data Augmentation (Master's Research) by Mutala, Venkata Sai Bhargav
Abstract: Alzheimer’s disease (AD) is a growing public health issue due to its progressive nature and rising prevalence. This study explores a neural network model trained on speech data from the ADReSS2020 Challenge dataset to distinguish AD patients from healthy individuals, using log-Mel spectrogram features. To improve accuracy, five data augmentation methods, including pitch and time shifting, were used. The results highlight deep learning, combined with data augmentation, as a promising, scalable, and noninvasive approach for early AD diagnosis.
Department: Information Technology
Supervisor: Dr. Seyedamin Pouriyeh
Presentation | Poster
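
As a hedged illustration of the feature and augmentation pipeline described in the GMR-7175 abstract, the sketch below computes log-Mel spectrograms and applies pitch-shift and time-stretch augmentation with librosa. The parameter values and input file are assumptions, not the study's settings.

```python
# Log-Mel features with two of the augmentations mentioned above (illustrative).
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)   # placeholder recording

def log_mel(signal, sr, n_mels=64):
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

features = [
    log_mel(y, sr),                                                  # original
    log_mel(librosa.effects.pitch_shift(y, sr=sr, n_steps=2), sr),   # pitch shifted
    log_mel(librosa.effects.time_stretch(y, rate=1.1), sr),          # time stretched
]
print([f.shape for f in features])
```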
 

PhD Research

First Place

GPR-185 A Multimodal Approach to Quiz Generation: Leveraging RAG Models for Educational Assessments (PhD Research) by Kunuku, Mourya Teja
Abstract: Crafting quiz questions that effectively assess students’ understanding of lectures and course materials, such as textbooks, poses significant challenges. Recent AI-based quiz generation efforts have predominantly concentrated on static resources, like textbooks and slides, often overlooking the dynamic and interactive elements of live lectures—contextual cues, discussions, and interactions—that contribute to the learning experience. In this work, we propose a Retrieval-Augmented Generation (RAG) model that processes multimodal inputs by combining text, audio, and video to produce quizzes that capture a fuller context. Our method incorporates Whisper for audio transcription and utilizes a Large Vision-Language Model (LVLM) to extract essential visual data from lecture videos. By integrating both spoken and visual elements, our model generates quizzes that more closely represent the lecture environment. We evaluate the model’s impact on quiz relevance, diversity, and engagement, showing that this multimodal approach fosters a more dynamic and immersive learning experience. Performance metrics, including hit rate and mean reciprocal rank (MRR), are used to assess question relevance and accuracy. A high hit rate indicates the model’s reliability in producing pertinent questions, while MRR highlights ranking quality, demonstrating the prompt appearance of relevant questions. Strong results in these metrics confirm our model’s effectiveness, though current limitations include challenges in handling abstract concepts absent in the lecture material—a gap we aim to bridge in future developments by integrating external knowledge sources.
Department: Computer Science
Supervisor: Dr. Nasrin Dehbozorgi
Presentation | Poster
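
The sketch below illustrates the two retrieval metrics mentioned in the GPR-185 abstract, hit rate and mean reciprocal rank (MRR), computed over toy ranked results; the example data is illustrative only.

```python
# Hit rate and MRR over toy ranked retrieval results.
def hit_rate(ranked_lists, relevant_ids):
    """Fraction of queries whose relevant item appears anywhere in the ranked list."""
    hits = sum(1 for ranks, rel in zip(ranked_lists, relevant_ids) if rel in ranks)
    return hits / len(ranked_lists)

def mrr(ranked_lists, relevant_ids):
    """Mean of 1/rank of the first relevant item (0 if it never appears)."""
    total = 0.0
    for ranks, rel in zip(ranked_lists, relevant_ids):
        total += 1.0 / (ranks.index(rel) + 1) if rel in ranks else 0.0
    return total / len(ranked_lists)

# Three queries; each list holds IDs of retrieved lecture chunks, best first.
retrieved = [["q12", "q7", "q3"], ["q5", "q12", "q9"], ["q1", "q4", "q8"]]
relevant  = ["q12", "q12", "q2"]
print(hit_rate(retrieved, relevant))  # 2/3 ≈ 0.667
print(mrr(retrieved, relevant))       # (1 + 0.5 + 0)/3 = 0.5
```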
 

Second Place

GPR-1194 Computer Vision-Enhanced Spectroscopy for Glucose Prediction: An In Vitro Validation Study (PhD Research) by Belfarsi, El Arbi
Abstract: This study introduces a novel computer vision-based spectral approach for non-invasive glucose detection using synthetic blood samples. We developed an experimental setup with glucose concentrations from 70 to 120 mg/dL, using two dye methods. Light sources tested included an 850 nm LED, 850 nm laser, 808 nm laser, and 650 nm laser, with image capture via a 1080p IR camera. Data augmentation, including Gaussian noise, contrast and brightness adjustments, rotations, and zooming, produced seven variants per image. Three machine learning models—CNN, AdaBoost, and ResNet—were evaluated, with the 850 nm light source yielding the best results: 87.5% of predictions fell within Zone A of the Clarke Error Grid. Findings support the potential of this approach for non-invasive glucose monitoring.
Department: Computer Science
Supervisor: Dr. Maria Valero
Presentation | Poster | More Information
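
As a hedged illustration of the evaluation reported in the GPR-1194 abstract, the sketch below computes the share of predictions falling in Zone A of the Clarke Error Grid using the common Zone A rule (within 20% of the reference value, or both values below 70 mg/dL); the sample values are made up.

```python
# Zone A percentage under the usual Clarke Error Grid criterion (assumed rule).
def in_zone_a(reference, predicted):
    if reference < 70 and predicted < 70:
        return True
    return abs(predicted - reference) <= 0.2 * reference

def zone_a_percentage(references, predictions):
    hits = sum(in_zone_a(r, p) for r, p in zip(references, predictions))
    return 100.0 * hits / len(references)

refs  = [70, 85, 95, 100, 110, 120, 90, 105]   # reference glucose (mg/dL), toy values
preds = [75, 80, 99, 125, 112, 150, 93, 101]   # predicted glucose (mg/dL), toy values
print(zone_a_percentage(refs, preds))  # 75.0 -> 6 of 8 within Zone A
```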
 

Third Place

GPR-6126 Utilizing ML techniques for a Quantum Augmented HTTP Protocol (PhD Research) by Jha, Nitin
Abstract: Over the past decade, several small-scale quantum key distribution (QKD) networks have been implemented worldwide. However, achieving scalable, large-scale quantum networks relies on advancements in quantum repeaters, channels, memories, and network protocols. To enhance the security of current networks while utilizing available quantum technologies, integrating classical networks with quantum elements appears to be the next logical step. In this study, we propose modifications to the HTTP protocol's data packet structure, adjustments to end-to-end encryption methods, and optimized bandwidth distribution between quantum and classical channels for high-traffic network routes.
Department: Computer Science
Supervisor: Dr. Abhishek Parakh (KSU), Dr. Mahadevan Subramaniam (University of Nebraska Omaha)
Presentation | Poster
 

Audience Favorite Presenter

UC-131 Karah Khronicles (Undergraduate Project) by Green, Dion, Stipetich, Jake, Bowe, Grace, Israel, Jesse, Malatker, Vedasri
Abstract: Karah is a thief with a heart of gold. You raid enemy camps and dungeons to steal back the money stolen from towns and villages, and upgrade enchanted items to deal with dangerous foes. After successfully returning the wealth to the local town, you must face down and defeat a general of the evil king.
Department: Software Engineering and Game Development
Supervisor: Dr. Sungchul Jung
Presentation | Poster | More Information