Robust workflow

This workflow is designed to help us produce river knowledge and media quickly, with AI assistance, while keeping them deeply genuine and grounded in real sources. NotebookLM sits at the core, but humans remain fully responsible for what we publish.


Layer 0 – Break Big Topics Into Small Topics

Before any research or AI work, we first break a large theme (like “Mahanadi”) into many small, tightly focused topics. For example, instead of “Everything about Mahanadi,” we use sub‑topics such as “Origin at Sihawa,” “Hirakud’s submerged temples,” or “Bhitarkanika mangroves.” This makes our research more precise and reduces the chance of vague or speculative answers, because NotebookLM is answering narrow questions from a well‑defined slice of sources.

For Mahanadi, say we create a 17‑day video series: each day covers one specific angle, allowing us to dig deeper and verify facts thoroughly for that one piece before moving on. Smaller topics also make it easier to spot gaps in our knowledge and consciously gather missing sources.


Layer 1 – Human‑First Research (Source Curation)

We always start from a focused topic from Layer 0 (for example, “Rajim as the Prayagraj of Chhattisgarh” rather than “Mahanadi’s religious importance”). Narrow questions are easier to research and easier for NotebookLM to answer accurately from given documents.

Step 1.2 – Manual Source Hunt (No NotebookLM Yet)

For that topic, we perform old‑school research first:

  • Search for government reports, academic papers, historical or scriptural references, and high‑quality journalism.
  • Open suggested articles and videos manually (including anything AI recommended earlier) and decide consciously whether each source deserves to be in our “canon.”
  • Save PDFs, screenshots, and notes into a topic‑specific folder.

In this phase, humans—not AI—decide which voices and documents are allowed to shape our understanding. This preserves ownership of the research and filters out low‑quality or biased material.
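The topic-specific source folder described above can be sketched as a simple directory layout. This is only an illustrative convention, not a prescribed structure; the topic name, subfolder names, and file names below are hypothetical examples.

```shell
# Illustrative layout for one micro-topic's curated source folder
# (topic and file names are hypothetical examples).
mkdir -p "mahanadi/rajim-prayagraj"/{pdfs,web-pages,field-notes,videos}

# Example: record why a saved source was admitted to the canon,
# so the human curation decision stays visible later.
echo "Kept: state irrigation report; primary data on the Rajim confluence." \
  > "mahanadi/rajim-prayagraj/field-notes/why-kept.txt"

ls "mahanadi/rajim-prayagraj"
```

Keeping one folder per micro-topic means the exact set of files uploaded to NotebookLM for that topic is always reproducible.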

Once we have enough material, we assemble a small but trustworthy source set for that topic:

  • PDFs (government docs, books, papers).
  • Exported or copied web pages.
  • Our own field notes and observations.
  • Any previous Nadikosh articles on the same subject.
  • Verified YouTube videos.

Only after this curation step do we upload these documents to NotebookLM as the grounding corpus for that specific topic.


Layer 2 – AI as Analyst & Writer (Inside NotebookLM)

With our vetted documents loaded, we use NotebookLM to help us read and connect the dots faster:

  • Ask it to summarize each source and highlight key facts with citations.
  • Ask specific, constrained questions like
    “Using only these sources, explain why Rajim is compared to Prayagraj in 600 words. Include citations for every factual claim.”

NotebookLM is designed to reduce hallucinations by answering only from the uploaded sources and showing explicit citations, but we still treat every answer as a draft, not as final truth.

From the NotebookLM drafts, we:

  • Select or merge the best passages into a single article.
  • Manually verify each citation by opening the original documents and checking the quoted ideas or data.
  • Correct misinterpretations, over‑generalizations, or missing context directly in our own editor.

The output of this step is a human‑approved “Final Article – v1” for that micro‑topic. This article is our reference standard for all downstream content.

Step 2.3 – Use NotebookLM to Derive Scripts and Posts

We then add “Final Article – v1” back into the notebook as another source and instruct NotebookLM to work only from it (plus field notes if needed). From this trusted base, we ask for:

  • A 5‑minute video script (with clear structure and scene suggestions).
  • A 30‑second short script or hook.
  • Caption drafts and social media posts tailored to different platforms.

Because the model is now transforming our verified article instead of raw, messy sources, the risk of new hallucinations is much lower and easier to spot.

Step 2.4 – Video Overviews as Internal Storyboards Only

NotebookLM’s Video Overview feature can then auto‑generate a rough video from our article:

  • We use this for storyboard and pacing ideas—how the topic might be visually organized and narrated.
  • We do not treat this AI video as the final public product, because we cannot surgically fix voiceover or visuals inside NotebookLM.

Layer 3 – Human‑Controlled Production (Outside NotebookLM)

We export the 5‑minute script and perform one more focused check:

  • Mark every hard fact (dates, distances, names, numbers, quotes).
  • Confirm each one against the original documents from Layer 1, not just NotebookLM’s answer.
  • Make any last clarifications or additions (local stories, field experiences, devotional insights).

Once this pass is done, we freeze a “Script v2 – Fact Checked” that will be used exactly as written for recording and subtitles.

To avoid tonal and trust issues:

  • We record the entire narration in one consistent voice—either a human narrator from our team or a single TTS voice profile.
  • We do not mix NotebookLM’s built‑in AI hosts with our own audio, so the viewer never experiences abrupt voice changes that suggest patching or hidden edits.

This keeps our storytelling coherent and clearly “ours,” even if AI helped us write the script.

Using the storyboard as a guide, we assemble visuals in a normal video editor (CapCut, Premiere, DaVinci, etc.):

  • Real footage and photographs wherever possible.
  • Carefully prompted AI images or animations when needed, matching real geography and culture.
  • Branded elements for Nadi Stuti (logo, typography, color palette).

We then add subtitles, on‑screen labels, and optional source mentions like “Data: [Report Name, Year]” to make the content transparent and educational.

Step 3.4 – Shorts and Social Media Posts

From “Script v2 – Fact Checked” and the same article, we cut:

  • 30–60 second short video segments with strong hooks.
  • Platform‑specific captions (YouTube Shorts, Instagram, Facebook, Reddit, etc.).

If needed, we can ask NotebookLM (still grounded in the same Final Article) to help rewrite or adapt these snippets for each platform, while we manually approve tone and correctness.


  • Narrow topics first: Breaking a big river into many small, focused themes gives us better control and reduces AI drift.
  • Humans choose sources: We never outsource the canon; AI operates inside a human‑curated document set.
  • AI drafts, humans decide: NotebookLM accelerates reading and drafting, but every final article and script is checked and owned by us.
  • AI video as reference, not product: We use NotebookLM videos as internal inspiration, while real publishing happens through our own editing pipeline.
  • One consistent voice: Narration and final edit are controlled by Nadi Stuti, so the content feels authentic, not generic AI output.

This layered workflow lets us move at AI speed without surrendering the soul, credibility, and responsibility that our river movement demands.