Demo Night Recap: Fear Detection to Production-Ready AI Agents
By Chandan Kumar

Dec 14, 2025 · 4 min read · Updated: Dec 16, 2025

Date: December 9, 2025
Location: Shift Labs, Toronto
Community: TorontoAI

Recordings from the Demo Night:
Full Recording, Demo 1: https://youtu.be/cqzwpBPW83s
Full Recording, Demo 2: https://youtu.be/HWs4XAO75XA
Last week, the TorontoAI community came together for another hands-on Demo Night—this time focused on practical, real-world AI applications rather than hype. Despite snowy weather and a smaller in-person turnout, the evening delivered deep technical insights, candid startup stories, and live demos that showcased how AI can move from experimentation to meaningful impact.
The event featured two main demos:
A live walkthrough of FearSense, an application built using Falcons.AI’s fear-mongering detection model
A deep dive into Moorcheh.ai, a platform designed to help teams build scalable, production-grade AI assistants and agents using Retrieval-Augmented Generation (RAG).
Why TorontoAI Exists

The evening opened with an introduction to TorontoAI, a community founded to bridge the gap between developers, startups, and applied AI use cases. With over 6,000 members across platforms, TorontoAI focuses on demo nights, panel discussions, and applied learning—especially for people building and deploying AI systems, not just talking about them.
A special thanks was given to Shift Labs for hosting the event and supporting the local AI ecosystem by opening their space to the community.
Demo #1: FearSense – Detecting Fear-Mongering with Falcons AI
The Problem Being Explored
Fear-driven content is everywhere—news, political speech, social media, and even children’s content. While sentiment analysis is common, fear detection is a more nuanced and underexplored area, especially when considering its psychological and societal impact.
This demo explored a simple but powerful question:
Can we quantify fear in media content, and can that signal be used responsibly—especially in healthcare and research contexts?
About Falcons.AI: Among the Most Downloaded Models on Hugging Face
Falcons.AI is a lean, developer-focused AI company whose models consistently rank among the most downloaded on Hugging Face, despite being built by a small, bootstrapped team. Their success comes from solving specific, real problems, not chasing generic AGI narratives.
Hugging Face profile: https://huggingface.co/Falconsai

The FearSense demo leveraged a DistilBERT-based model, fine-tuned specifically for fear-mongering detection. Unlike large LLMs, it:
Runs efficiently on CPUs
Requires no GPU acceleration
Can be deployed locally or on small cloud instances
Prioritizes determinism and explainability
Live Demo Highlights
The FearSense application was built as a Streamlit app that:

Accepts YouTube URLs or raw text transcripts
Extracts and processes content
Breaks text into chunks
Scores each chunk for fear intensity
Visualizes fear peaks and distributions
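The chunk-and-score loop at the heart of the app can be sketched in a few lines. This is an illustrative sketch, not the demo's actual code: `score_chunk` here is a toy lexicon-based stand-in for the real fine-tuned DistilBERT classifier, and the chunk size is arbitrary.

```python
# Sketch of the FearSense scoring loop: split a transcript into
# fixed-size word chunks, score each one, and keep a per-chunk profile
# that the app can plot as fear peaks over time.

def chunk_text(text: str, words_per_chunk: int = 50) -> list[str]:
    """Break raw transcript text into roughly equal word chunks."""
    words = text.split()
    return [
        " ".join(words[i : i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

def score_chunk(chunk: str) -> float:
    """Toy scorer: fraction of words drawn from a small fear lexicon.
    The real app would call the fine-tuned classifier here instead."""
    fear_words = {"danger", "crisis", "threat", "panic", "disaster"}
    words = [w.strip(".,!?").lower() for w in chunk.split()]
    return sum(w in fear_words for w in words) / max(len(words), 1)

def fear_profile(text: str) -> list[float]:
    """Score every chunk so the app can visualize peaks and distribution."""
    return [score_chunk(c) for c in chunk_text(text)]

transcript = "The crisis is a real threat. Stay calm and read the facts."
print(fear_profile(transcript))  # one score per chunk
```

In the real app, the scores feed a Streamlit chart; the interesting signal is usually the peaks, not the average.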
Despite some real-world demo friction (cloud tunnels, missing Python dependencies, and live debugging), the audience got a realistic look at what actual AI development looks like: not polished slides, but real systems being deployed and fixed on the fly.
Healthcare & Research Implications
One of the most compelling discussions centered around healthcare applications:
Correlating fear-heavy media consumption with heart rate or stress data from wearables
Studying impacts on vulnerable populations, including seniors and children
Providing researchers with tools, not conclusions—allowing them to explore correlation vs causation responsibly
The key takeaway: this was not a “medical diagnosis” tool, but a research-enabling prototype designed to spark deeper investigation.
Demo #2: Moorcheh.ai – Building Scalable AI Assistants and Agents
The second half of the evening shifted from model-level demos to production AI systems.
The Problem Moorcheh.ai Solves
Many teams struggle when moving from:
A prototype chatbot
To a scalable, accurate, and cost-efficient AI assistant
Common pain points include:
Complex RAG pipelines
High latency
Hallucinations
Expensive vector databases and re-rankers
Difficulty exporting prototypes into real applications
Moorcheh.ai positions itself as an infrastructure abstraction layer for AI assistants—reducing complexity while maintaining performance and accuracy.
Case Study: DoctorPal AI (Healthcare)
One featured use case was DoctorPal AI, a healthcare assistant built on top of thousands of pages of medical and nutrition documents.
Using Moorcheh.ai, the team was able to:
Upload large document sets (PDFs, websites, structured data)
Automatically chunk, embed, summarize, and index content
Enforce strict relevance thresholds to prevent hallucinations
Provide citation-backed responses
Control which questions the AI is allowed to answer (kiosk mode)
The result: a domain-specific AI assistant that only answers based on verified source material, not general internet knowledge.
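The "strict relevance threshold" idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the Moorcheh.ai API: the class names, the 0.75 threshold, and the refusal message are all invented to show the pattern of declining to answer rather than hallucinating.

```python
# Sketch of strict relevance gating: only answer from retrieved chunks
# whose score clears a threshold; if nothing qualifies, refuse outright.

from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    text: str
    source: str   # kept so responses can be citation-backed
    score: float  # relevance score from the retriever / re-ranker

def answerable(chunks: list[RetrievedChunk],
               threshold: float = 0.75) -> list[RetrievedChunk]:
    """Keep only chunks above the relevance threshold."""
    return [c for c in chunks if c.score >= threshold]

def respond(chunks: list[RetrievedChunk]) -> str:
    relevant = answerable(chunks)
    if not relevant:
        # Refusing beats hallucinating when no verified source matches.
        return "I can only answer from the verified documents, and none cover this."
    cites = ", ".join(sorted({c.source for c in relevant}))
    return f"Answer grounded in: {cites}"

hits = [RetrievedChunk("...", "nutrition.pdf", 0.82),
        RetrievedChunk("...", "blog.html", 0.41)]
print(respond(hits))  # cites only nutrition.pdf
```

The design choice worth noting is the explicit refusal path: a low-confidence retrieval produces a "can't answer" response instead of letting the LLM improvise from general knowledge.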
Key Technical Capabilities Demonstrated
Namespace-based knowledge isolation
Built-in re-ranking and relevance scoring
Toggleable kiosk mode to block irrelevant questions
Model flexibility (Claude, LLaMA, Bedrock-native models)
API-first design for embedding assistants into real products
Serverless, cloud-native architecture for cost efficiency
A major differentiator highlighted was the ability to export AI assistants into production apps, unlike tools that remain locked inside notebooks or playgrounds.
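Kiosk mode, as described, is a gate in front of the pipeline: off-topic questions never reach the model. The sketch below is a guess at the shape of that gate, not the SDK's implementation; the topic list and keyword matching are deliberately simplistic stand-ins for real relevance scoring.

```python
# Illustrative "kiosk mode" toggle: when enabled, only questions that
# touch the assistant's allowed topics pass through to the RAG pipeline.

ALLOWED_TOPICS = {"nutrition", "medication", "appointments"}

def kiosk_gate(question: str, kiosk_mode: bool = True) -> bool:
    """Return True if the assistant is allowed to answer the question."""
    if not kiosk_mode:
        return True  # open mode: everything goes through
    words = {w.strip("?.,!").lower() for w in question.split()}
    return bool(words & ALLOWED_TOPICS)

print(kiosk_gate("What medication interactions should I know?"))  # True
print(kiosk_gate("Who won the game last night?"))                 # False
```

A production system would use embedding similarity rather than keyword overlap, but the control flow is the same: the toggle decides whether the gate runs at all.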
From Chatbots to AI Agents
The demo concluded with an advanced walkthrough showing how Moorcheh.ai can be used to build dynamic AI agents, not just static Q&A bots.
Example shown:
A legal intake AI agent
Dynamically changes questions based on user responses
Uses RAG not for documents, but for decision rules and instructions
Produces structured summaries for human review
The entire workflow—from knowledge base to UI—was assembled in hours, not weeks.
Here is where developers can access Moorcheh.ai and its integrations:
Slide Deck from the Event -
GitHub (Official Repositories & SDKs): https://github.com/moorcheh-ai/moorcheh-python-sdk
Moorcheh-LangChain Integration: https://docs.langchain.com/oss/python/integrations/vectorstores/moorcheh
Moorcheh-LlamaIndex Integration: https://developers.llamaindex.ai/python/examples/vector_stores/moorchehdemo/
Moorcheh MCP Server: https://github.com/moorcheh-ai/moorcheh-mcp
Moorcheh-N8N Integration (Low-Code/No-Code), Official Verified Node: https://n8n.io/integrations/moorcheh/
Key Takeaways from the Night
Small, focused models still matter—especially when accuracy, cost, and deployability are critical.
AI demos should reflect reality: debugging, tradeoffs, and iteration.
RAG is no longer optional for serious AI products—but it must be done carefully.
The future isn’t just chatbots—it’s context-aware, task-driven AI agents.
Community-driven learning accelerates real innovation far more than polished marketing.
Thank You & What’s Next
A big thank you to:
Falcons.AI for sharing their models and philosophy
Moorcheh.ai for an in-depth, transparent technical walkthrough
Shift Labs for hosting
Everyone who attended and participated in discussions
TorontoAI will continue hosting demo nights, panels, and hands-on sessions focused on applied AI, platform engineering, and real-world deployments.
Stay tuned for upcoming events—and if you’re building something interesting in AI, TorontoAI is your platform.
