BIOS Hackathon

Incognito

TruShield PRO

Problem statement: OC PS 8: AI-Powered Cybersecurity Tool for Detecting Fake and AI-Generated Content. Updated Nov 28, 2025, 13:32 IST.

Project overview

TruShield PRO is a next-gen AI deepfake forensics engine built to defend creators, brands, and governments against the rise of hyper-realistic synthetic media. Unlike traditional detectors that rely on single-model heuristics, TruShield PRO uses a multi-modal transformer pipeline combining visual forgery detection, voice-clone identification, lip-sync analysis, semantic drift tracking, and diffusion fingerprinting. The system extracts signals from frames, audio, spectrograms, gestures, semantics, and LLM-based linguistic patterns, fuses them with a weighted trust-score engine, and produces a complete forensic report with heatmaps, radar diagnostics, and explainability.

We built two versions:

- Creator Edition: a simple interface for influencers, journalists, and everyday users.
- National Security Edition: an expanded, high-sensitivity model designed to evolve into a government-grade forensic toolkit.

Our philosophy: "India needs trustworthy media verification. As citizens first, we built TruShield PRO to contribute to digital safety at scale." Jai Hind 🇮🇳
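To make the fusion output concrete, here is a minimal sketch of what a per-video forensic report could look like as a data structure. The class and field names (`ForensicReport`, `ModalityFinding`, `evidence`) are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModalityFinding:
    """One axis of the radar diagnostics (hypothetical schema)."""
    name: str            # e.g. "visual_forgery", "lip_sync"
    score: float         # authenticity confidence in [0, 1]
    evidence: list[str] = field(default_factory=list)  # heatmap/spectrogram paths

@dataclass
class ForensicReport:
    """Fused result for a single analyzed video (hypothetical schema)."""
    trust_score: float               # unified 0-100 score
    findings: list[ModalityFinding]  # one entry per modality
    explanation: str                 # plain-language explainability summary

# Illustrative example: a low lip-sync score backed by a heatmap artifact.
report = ForensicReport(
    trust_score=74.0,
    findings=[ModalityFinding("lip_sync", 0.4, ["heatmaps/lipsync_f0123.png"])],
    explanation="Lip movement lags the audio track across a run of frames.",
)
assert report.findings[0].score < 0.5
```

Keeping per-modality evidence paths alongside each score is what lets a single report drive both the radar chart and the heatmap views.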

Inspiration

The rise of highly convincing deepfakes—political, celebrity, and misinformation-driven—showed us that current detectors are outdated and unreliable. We wanted to build something actually useful in 2025: a multi-modal, transformer-powered forensic engine that creators, journalists, and authorities can trust. As citizens first, we felt responsible to build a tool that protects digital truth.

What it does

TruShield PRO analyzes a video across five modalities: visual forgery, audio realism, lip-sync, semantic coherence, and speaker identity. It generates heatmaps, spectrograms, cross-consistency checks, and a unified Trust Score (0–100). It outperforms popular online detectors by catching inconsistencies they completely miss.
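A weighted fusion of the five modality scores into a single 0–100 Trust Score can be sketched as follows. The weights and modality keys here are illustrative assumptions, not the project's tuned configuration:

```python
# Hypothetical per-modality weights (must sum to 1.0); the real engine
# would tune these, and scores would come from the respective models.
MODALITY_WEIGHTS = {
    "visual_forgery": 0.30,
    "audio_realism": 0.25,
    "lip_sync": 0.20,
    "semantic_coherence": 0.15,
    "speaker_identity": 0.10,
}

def trust_score(scores: dict[str, float]) -> float:
    """Fuse per-modality authenticity scores in [0, 1] into a 0-100 Trust Score."""
    total = sum(MODALITY_WEIGHTS[m] * scores[m] for m in MODALITY_WEIGHTS)
    return round(100 * total, 1)

# Example: strong visual/audio signals, but weak lip-sync agreement.
print(trust_score({
    "visual_forgery": 0.9,
    "audio_realism": 0.8,
    "lip_sync": 0.4,
    "semantic_coherence": 0.7,
    "speaker_identity": 0.85,
}))  # → 74.0
```

A weighted sum keeps the score interpretable: each radar axis contributes a known share, so a drop in the final score can be traced back to the offending modality.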

How we built it

We built a custom pipeline using ViT/ConvNeXt for visual embeddings, Wav2Vec2 + ECAPA-TDNN for spectral audio forensics, GPT-based perplexity for semantic drift, and multi-agent explainers. We fused everything using FAISS-based similarity scoring and built a futuristic React + Vite frontend for real-time reports. The backend is fully Python + PyTorch.
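The cross-modal consistency idea can be sketched with cosine similarity between aligned embedding windows. Random vectors stand in for real ViT/ConvNeXt frame embeddings and ECAPA-TDNN speaker embeddings, and a brute-force loop stands in for the FAISS index (which accelerates the same search at scale); all names here are illustrative:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_modal_consistency(frame_embs, audio_embs):
    """Mean pairwise similarity between time-aligned frame and audio
    embeddings; low values flag lip-sync or voice-clone inconsistencies.
    (A FAISS index would replace this brute-force pass in production.)"""
    sims = [cosine_sim(f, a) for f, a in zip(frame_embs, audio_embs)]
    return sum(sims) / len(sims)

# Stand-in embeddings: 8 aligned windows of dimension 192.
rng = np.random.default_rng(0)
frames = [rng.standard_normal(192) for _ in range(8)]
audio = [rng.standard_normal(192) for _ in range(8)]

score = cross_modal_consistency(frames, audio)
assert -1.0 <= score <= 1.0
```

In a real run, genuine footage should yield consistently high window-level similarity, while a voice-swapped or lip-synced fake drifts low in exactly the windows where the tampering occurs.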

Individual contributions

None

Challenges

- Making multiple heavy transformer models run efficiently together
- Fixing broken audio pipelines on Windows
- Aligning audio–video embeddings for lip-sync checks
- Building a clean, futuristic frontend from scratch
- Processing diverse video formats reliably

Accomplishments

- Built a production-grade deepfake forensic engine in one hackathon
- Outperformed existing online detectors
- Delivered fully working heatmaps, spectrograms, and consistency graphs
- Created a unified Trust Score instead of a bare "real/fake" output
- Achieved smooth frontend–backend integration

Learnings

We learned how unreliable traditional deepfake detectors are and how essential multimodal forensics is. We also learned advanced PyTorch model handling, cross-consistency reasoning, and how to build a professional UI that feels like a real product—not a hackathon demo.

Next steps

We are building two versions:

- A creator-friendly version for everyday content safety.
- A government-grade, hardened forensic version with stronger fingerprinting and legal-grade evidence trails.

Our mission: empower creators and protect the nation. Jai Hind 🇮🇳
