First Place
GMC-4190 CellNucleiRAG - Smart Search Tool for Cell Nuclei Research (Graduate Project)
by Koganti, Sai Chandana
Abstract: CellNucleiRAG is a specialized tool developed to address a significant challenge
in medical research: the rapid retrieval and synthesis of detailed information on
cell nuclei. Understanding cell nuclei characteristics is crucial in fields like pathology,
oncology, and diagnostics, where detailed cell analysis can guide disease identification
and treatment planning. However, accessing relevant, organized information on specific
cell nuclei types, datasets, models, and methods is often time-consuming, requiring
manual searches through multiple, disparate sources. CellNucleiRAG solves this problem
by acting as a smart search engine, designed specifically for cell nuclei research,
combining traditional retrieval methods with advanced AI capabilities. Built with
an underlying Retrieval-Augmented Generation (RAG) architecture, CellNucleiRAG leverages
MinSearch for rapid data retrieval, pulling relevant records from a curated dataset
that contains information on various nuclei types, datasets, and analytical models.
Once relevant data is retrieved, it is processed by an LLM (Large Language Model)
to generate contextually accurate, human-readable responses. This dual approach ensures
both precision and clarity, allowing researchers to receive comprehensive answers
rather than isolated data points. Key technologies used in this project include Docker,
for environment consistency; Flask, for a streamlined user interface; PostgreSQL,
for storing interactions and user feedback; and Grafana, for real-time system performance
monitoring. User feedback is incorporated to continually refine the tool, enhancing
the accuracy and relevance of responses.
Department: Computer Science
Supervisor: Dr. Coskun Cetinkaya
Presentation | Poster
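To make the retrieve-then-generate flow in this abstract concrete, the minimal sketch below indexes a small JSON dataset with MinSearch and passes the top hits to an LLM. The documents.json filename, the field names, and the use of OpenAI's chat API in place of the unnamed LLM are illustrative assumptions, not the project's actual dataset schema or backend.

```python
# Minimal sketch of a MinSearch + LLM RAG loop (assumptions noted in comments).
import json

from minsearch import Index          # MinSearch: lightweight in-memory text search
from openai import OpenAI

with open("documents.json") as f:    # curated nuclei dataset (assumed filename)
    docs = json.load(f)

index = Index(
    text_fields=["nuclei_type", "dataset", "model", "description"],  # assumed schema
    keyword_fields=[],
)
index.fit(docs)

client = OpenAI()

def answer(question: str) -> str:
    # Retrieval step: pull the most relevant records for the question.
    hits = index.search(question, num_results=5)
    context = "\n".join(json.dumps(h) for h in hits)
    # Generation step: have the LLM compose a readable answer from the hits.
    prompt = (
        "Answer the question about cell nuclei using only the context below.\n\n"
        f"CONTEXT:\n{context}\n\nQUESTION: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("Which datasets are available for epithelial nuclei segmentation?"))
```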
Second Place
GMC-2162 Prompt Engineering and Its Effects on AI and Human Relationships: A Contemporary Approach (Graduate Project)
by Madu, Francis; Kadiyala, Naga Janaki Madhav; Thallapally, Nivesh
Abstract: A. Background: Prompt engineering refers to the process of designing and refining
input prompts for AI models (especially language models like GPT) to improve their
outputs. It has become a critical tool in maximizing the performance and utility of
AI models in diverse applications, from customer service to content creation. Beyond
technical aspects, the interaction between humans and AI is increasingly shaped by
the effectiveness of these prompts. B. Motivation: As AI becomes more integrated into
daily life, the way humans interact with AI models is profoundly influenced by prompt
engineering. Misaligned prompts can lead to misunderstanding, confusion, or unintended
outcomes, affecting both the utility of AI systems and the trust people place in them.
Our project seeks to understand how different prompt strategies impact not only AI
performance but also human perceptions and relationships with AI systems. By exploring
these dynamics, we aim to develop best practices in prompt engineering that foster
both efficient AI performance and positive human-AI relationships. C. Expected Results:
We expect to demonstrate that well-constructed prompts not only improve AI output
quality but also lead to more transparent, trustworthy, and meaningful human-AI interactions.
This will be quantified through various metrics such as response accuracy, user satisfaction,
and interaction smoothness.
Department: Computer Science
Supervisor: Dr. Chen Zhao
Presentation | Poster
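As an illustration of the kind of comparison this abstract describes, the sketch below runs the same task under two prompt strategies and records a simple per-response measurement. The templates, model name, and metric are assumptions for illustration, not the team's actual experimental protocol.

```python
# Illustrative comparison of two prompt strategies on the same task.
from openai import OpenAI

client = OpenAI()

STRATEGIES = {
    "bare": "Summarize this support ticket:\n{ticket}",
    "structured": (
        "You are a concise, polite support assistant.\n"
        "Summarize the ticket below in three bullet points, then state the "
        "customer's main request in one sentence.\n\nTicket:\n{ticket}"
    ),
}

def run_strategy(name: str, ticket: str) -> dict:
    prompt = STRATEGIES[name].format(ticket=ticket)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    # Stand-in metric; user satisfaction would come from survey ratings instead.
    return {"strategy": name, "chars": len(text), "response": text}

ticket = "My March invoice was charged twice; please refund the duplicate payment."
for name in STRATEGIES:
    result = run_strategy(name, ticket)
    print(result["strategy"], "->", result["chars"], "characters")
```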
Third Place
GMC-157 Text-to-Digital Person Video Generator: DigitalAvatarGen (Graduate Project)
by Kesharwani, Akansha; Bagdwal, Nisha; Hossain, MD E; Patel, Drashti; Adigoppula, Nikhil
Abstract: The Text-to-Digital Person Video Generator (DigitalAvatarGen) project uses AI to
create lifelike videos of 2D digital avatars from user text input. Users enter text,
select a voice, select or upload an avatar, and generate a video through the DigitalAvatarGen
web application, which uses Google TTS and SadTalker to synchronize voice, expressions,
and lip movements. Key contributions include a customizable user interface, personalized
voice and avatar options, and an optimized backend for efficient video generation.
This tool provides an engaging, realistic solution for applications in education,
media, and customer interaction.
Department: Information Technology
Supervisor: Dr. Ying Xie
Presentation | Poster | More Information
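A minimal sketch of the text-to-video pipeline this abstract describes, assuming the gTTS package for Google TTS and SadTalker's public command-line inference script; the file names, paths, and flags are illustrative, not the project's actual backend.

```python
# Sketch: text -> speech (gTTS) -> talking-head video (SadTalker CLI).
import subprocess

from gtts import gTTS

def generate_avatar_video(text: str, avatar_image: str, out_dir: str = "results") -> None:
    # 1. Synthesize speech for the user's text with Google TTS.
    audio_path = "speech.mp3"
    gTTS(text=text, lang="en").save(audio_path)

    # 2. Drive the avatar image with the audio using SadTalker's inference
    #    script (assumed to be checked out locally as SadTalker/inference.py).
    subprocess.run(
        [
            "python", "SadTalker/inference.py",
            "--driven_audio", audio_path,
            "--source_image", avatar_image,
            "--result_dir", out_dir,
        ],
        check=True,
    )

generate_avatar_video("Welcome to our demo!", "avatar.png")
```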