Reddit Stories AI Course Full Access
🤖 Full AI Automation · No Camera · No Voice

Reddit Stories YouTube Channel
Full Automation Guide

From zero to a fully automated channel that generates AI scripts, voiceovers, and videos — then posts them automatically with a YouTube API token.

📚 6 Modules
🤖 100% AI Production
Automated Posting
💰 High CPM Niche

📋 Course Contents

1
Why Reddit Stories? Niche Analysis & Channel Setup
Understand why this niche prints money and set up your channel the right way from day one.

Why Reddit Stories Channels Work

Reddit story channels are one of the most proven faceless YouTube formats. The content writes itself — Reddit generates millions of viral stories daily. Your job is to turn them into videos automatically.

  • High CPM: $8–25 (English audience, Tier-1 countries). Subreddits like r/AmItheAsshole, r/relationship_advice, and r/tifu are goldmines.
  • Algorithm-friendly: High watch time — people stay to hear the full story + comments. 60–80% retention is common.
  • Scalable: Once the pipeline is built, one script runs 50 videos/day if you want.
  • No face, no voice required: 100% AI handles both.

Best Subreddits for High Views

🔥
r/AmItheAsshole (AITA)
Most popular format. Viewers love voting. "AITA for…" titles get insane CTR (6–12%). Perfect for Shorts and long-form.
💔
r/relationship_advice
Emotional stories = high retention. Viewers watch until the end to find out what happened. Great for 5–15 min videos.
😱
r/tifu (Today I F*cked Up)
Funny and relatable. Very shareable. Good for building subscribers quickly.
🕵️
r/MaliciousCompliance / r/ProRevenge
Satisfying endings keep viewers watching until the end, and high comment engagement adds an extra algorithm boost.

Channel Setup Checklist

  1. Create a new Google account Use a fresh Gmail. Don't use your personal account — keep things separate for scaling.
  2. Channel name formula Use something like: StoryTime Daily, Reddit Confessions, AITAHub, RevengeStories. Short, memorable, niche-specific.
  3. Channel art & branding Use Canva AI — generate a banner and icon in 5 minutes. Dark background, bold text, simple icon. Canva has free Reddit story templates.
  4. Channel description & keywords Include: "reddit stories", "aita", "relationship advice", "storytime", "reddit readings". This helps the suggested-videos algorithm categorize your channel.
  5. Set channel to "Made for Kids: No" Required for personalized ads and full monetization features.
  6. First 3 videos manually Before automation, post 3 videos manually so YouTube indexes your channel type correctly.
💡 Pro tip: Don't add your real name or personal info anywhere on the channel. Use a fictional persona. This is a business, not a personal brand.
2
AI Script Generation — Auto Reddit Story Writer
Automatically fetch top Reddit posts and turn them into ready-to-voice scripts using AI.

Step 1 — Fetch Reddit Stories via API

Reddit exposes free public JSON endpoints — no API key is needed for light, rate-limited access. The script below fetches the top posts from any subreddit and saves them as text files.

Python — fetch_reddit.py
import requests, os

SUBREDDIT = "AmItheAsshole"
LIMIT = 10  # stories per run
OUTPUT_DIR = "scripts_raw"
os.makedirs(OUTPUT_DIR, exist_ok=True)

headers = {"User-Agent": "Mozilla/5.0"}
url = f"https://www.reddit.com/r/{SUBREDDIT}/top.json?limit={LIMIT}&t=day"
res = requests.get(url, headers=headers)
posts = res.json()["data"]["children"]

for i, post in enumerate(posts):
    d = post["data"]
    title = d["title"]
    body = d.get("selftext", "")
    score = d["score"]
    if len(body) < 200:  # skip short posts
        continue
    filename = f"{OUTPUT_DIR}/story_{i+1}.txt"
    with open(filename, "w", encoding="utf-8") as f:
        f.write(f"TITLE: {title}\n\nSCORE: {score}\n\n{body}")
    print(f"Saved: {filename}")
💡 Filter stories: Only take posts with score > 500 and > 200 characters. These are proven viral stories.
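The fetch script above only checks length; the score threshold from the tip can be applied with a small predicate. A minimal sketch — field names follow Reddit's public JSON schema ("score", "selftext"), and the removed/deleted check is an extra safeguard I've added:

```python
def is_viable(post_data, min_score=500, min_length=200):
    """Return True if a post matches the 'proven viral' thresholds."""
    body = post_data.get("selftext", "")
    # Removed/deleted posts keep a placeholder body — skip them
    if body in ("[removed]", "[deleted]"):
        return False
    return post_data.get("score", 0) > min_score and len(body) > min_length
```

Drop this into the fetch loop in place of the bare length check, e.g. `if not is_viable(d): continue`.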

Step 2 — AI Script Formatting with GPT

Raw Reddit posts are messy. The AI reformats them into a clean, engaging YouTube narration script with intro hook, story, and outro.

Python — format_scripts.py
from openai import OpenAI
import os

client = OpenAI(api_key="YOUR_OPENAI_KEY")

SYSTEM_PROMPT = """You are a YouTube script writer for a Reddit stories channel.
Format the story as:
1. HOOK (1-2 sentences that grab attention — do NOT reveal the ending)
2. STORY (rewrite in engaging second-person narration, keep all key details)
3. OUTRO (ask viewers: "What would YOU have done? Comment below!")
Rules:
- Keep names (use OP, NTA, YTA etc.)
- Sound like a human narrator, not a robot
- No hashtags, no bullet points in output
- Output ONLY the script text, nothing else"""

def format_script(raw_story):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # cheap and good
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_story},
        ],
    )
    return response.choices[0].message.content

# Process all raw stories
os.makedirs("scripts_ready", exist_ok=True)
for file in os.listdir("scripts_raw"):
    raw = open(f"scripts_raw/{file}", encoding="utf-8").read()
    script = format_script(raw)
    out_path = f"scripts_ready/{file}"
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(script)
    print(f"✅ Script ready: {out_path}")

Prompt Engineering for High-Retention Scripts

The key to high watch time is the script structure. Use this proven formula:

  • Hook (0–5 sec): "She found out her husband had been lying for 3 years. What she did next shocked everyone." — Never reveal the ending in the title or hook.
  • Setup (5–30 sec): Introduce characters. Keep it simple: OP, husband, best friend, coworker.
  • Escalation (30 sec – 80%): Build tension. The story unfolds. Keep sentences short for the AI voice to sound natural.
  • Payoff (last 20%): The twist or resolution. Make it satisfying.
  • Outro CTA: "Was OP right? Drop your verdict below — NTA or YTA?" This drives comments = algorithm boost.
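One practical check on the structure above: AI voices read at roughly 150 words per minute, so a script's word count maps directly to video length. A minimal estimator — the 150 wpm figure is a rough assumption for TTS at ~1.1x speed, not a measured constant:

```python
def estimate_duration_minutes(script_text, words_per_minute=150):
    """Rough narration length from word count."""
    word_count = len(script_text.split())
    return word_count / words_per_minute

def fits_target(script_text, min_minutes=3, max_minutes=15):
    """Check a script lands in the 3-15 minute long-form window."""
    return min_minutes <= estimate_duration_minutes(script_text) <= max_minutes
```

Run this on each formatted script before voiceover; scripts that are too short can be merged into a compilation, and too-long ones trimmed.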
⚠️ Copyright note: You are not copying Reddit posts. You are using them as inspiration and rewriting them with AI. The output is a new, original script. This is standard practice across hundreds of YouTube channels.

Batch Processing — Run 10 Scripts Automatically

Python — full_pipeline_step1.py
# Run this once per day to get 10 fresh scripts
import subprocess

subprocess.run(["python", "fetch_reddit.py"], check=True)
subprocess.run(["python", "format_scripts.py"], check=True)
print("📝 10 scripts ready for voiceover!")
3
AI Voiceover — Automated Audio Production
Turn your scripts into professional-sounding narration using AI TTS — fully automated.

Best AI Voiceover Tools for Reddit Channels

🎙️
ElevenLabs $5/mo starter
Best quality AI voice. "Rachel" and "Adam" voices sound almost human. Has API for full automation. Most Reddit channels use this.
🔊
Google Cloud TTS Free tier
WaveNet voices are very good. Free up to 1M characters/month. Perfect for starting out. Use en-US-Wavenet-D (male) or en-US-Wavenet-F (female).
🎤
Microsoft Azure TTS Free tier
500k characters free/month. Neural voices are excellent. Good fallback option.

ElevenLabs API Automation Script

Python — voiceover.py
import requests, os

ELEVEN_API_KEY = "YOUR_ELEVENLABS_KEY"
VOICE_ID = "21m00Tcm4TlvDq8ikWAM"  # Rachel voice
OUTPUT_DIR = "audio"
os.makedirs(OUTPUT_DIR, exist_ok=True)

def text_to_speech(text, output_file):
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    headers = {
        "xi-api-key": ELEVEN_API_KEY,
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "model_id": "eleven_monolingual_v1",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    resp = requests.post(url, json=body, headers=headers)
    resp.raise_for_status()  # fail loudly on bad API key / quota errors
    with open(output_file, "wb") as f:
        f.write(resp.content)
    print(f"🎙️ Audio saved: {output_file}")

# Process all ready scripts
for file in sorted(os.listdir("scripts_ready")):
    script_text = open(f"scripts_ready/{file}", encoding="utf-8").read()
    audio_out = f"{OUTPUT_DIR}/{file.replace('.txt', '.mp3')}"
    text_to_speech(script_text, audio_out)

Google TTS (Free Option)

Python — voiceover_google.py
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

def tts_google(text, output_file):
    synthesis_input = texttospeech.SynthesisInput(text=text)
    voice = texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-F",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        speaking_rate=1.1,  # slightly faster = more engaging
        pitch=0.0,
    )
    response = client.synthesize_speech(
        input=synthesis_input, voice=voice, audio_config=audio_config
    )
    with open(output_file, "wb") as f:
        f.write(response.audio_content)
    print(f"✅ {output_file}")
💡 Speaking rate tip: Set speaking rate to 1.05–1.15. Slightly faster narration feels more engaging and keeps viewers from clicking away.

Background Music (Optional but Recommended)

Add subtle ambient music under the narration — it improves watch time by ~15%. Use royalty-free sources:

  • YouTube Audio Library — free, YouTube-approved
  • Pixabay Music — free, good quality
  • Epidemic Sound — $15/mo, professional quality

The script handles mixing automatically in Module 4.
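If you want to mix music and narration with FFmpeg directly rather than MoviePy, note that a 10–15% linear volume translates to roughly -16.5 to -20 dB of gain. A sketch that builds the ffmpeg command — filenames are placeholders and it assumes ffmpeg is on the PATH:

```python
import math, shlex

def music_gain_db(fraction):
    """Convert a linear volume fraction (e.g. 0.10) to decibels for ffmpeg."""
    return 20 * math.log10(fraction)

def build_mix_command(narration, music, output, music_volume=0.12):
    """ffmpeg command mixing narration with quiet looped background music."""
    gain = music_gain_db(music_volume)
    return (
        f"ffmpeg -y -i {shlex.quote(narration)} "
        f"-stream_loop -1 -i {shlex.quote(music)} "  # loop music indefinitely
        f'-filter_complex "[1:a]volume={gain:.1f}dB[m];'
        f'[0:a][m]amix=inputs=2:duration=first[out]" '  # cut at narration end
        f'-map "[out]" {shlex.quote(output)}'
    )
```

`duration=first` makes the mix stop when the narration (first input) ends, so a short music track looped with `-stream_loop -1` never pads the video.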

4
Automated Video Assembly — The Full Pipeline
Combine AI script + voiceover + background footage into a finished video automatically using Python + FFmpeg.

What the Finished Video Looks Like

  • Background: Minecraft parkour / GTA driving / Subway Surfers / nature footage (vertical for Shorts, horizontal for long-form)
  • Text overlay: Story title + subtitles synced to voice
  • Audio: AI narration + background music at 10–15% volume
  • Duration: 3–15 min for long-form, 30–59 sec for Shorts

Required Tools

  1. FFmpeg Free, open-source video processing. Install: winget install ffmpeg (Windows) or brew install ffmpeg (Mac)
  2. Python 3.10+ For the automation scripts. Install from python.org
  3. MoviePy Python library: pip install moviepy
  4. Background videos Download a 1-hour loop of Minecraft parkour / Subway Surfers from YouTube (search "gameplay no commentary"). Store locally.
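Downloading the background loops can be scripted too. A sketch using the yt-dlp CLI — it assumes yt-dlp is installed (pip install yt-dlp), and you should check that you have the right to reuse any footage you download:

```python
import subprocess

def ytdlp_command(url, output_dir="backgrounds", max_height=1080):
    """Build a yt-dlp invocation that grabs a video-only mp4 for MoviePy."""
    return [
        "yt-dlp",
        "-f", f"bestvideo[height<={max_height}][ext=mp4]",  # no audio track needed
        "-o", f"{output_dir}/%(title)s.%(ext)s",
        url,
    ]

def download_background(url):
    subprocess.run(ytdlp_command(url), check=True)
```

Grabbing video-only keeps the files smaller and saves MoviePy from stripping audio later.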

Video Assembly Script

Python — assemble_video.py
from moviepy.editor import *
import os, random

BACKGROUNDS_DIR = "backgrounds"  # folder with .mp4 background loops
AUDIO_DIR = "audio"
OUTPUT_DIR = "videos_ready"
MUSIC_FILE = "music/ambient.mp3"  # optional background music
os.makedirs(OUTPUT_DIR, exist_ok=True)

def make_video(audio_file, output_file, title_text):
    # Load voiceover
    narration = AudioFileClip(audio_file)
    duration = narration.duration

    # Pick random background, trim to duration
    # (assumes each background loop is longer than the narration)
    bg_files = [f for f in os.listdir(BACKGROUNDS_DIR) if f.endswith(".mp4")]
    bg_path = os.path.join(BACKGROUNDS_DIR, random.choice(bg_files))
    bg = VideoFileClip(bg_path).without_audio()

    # Random start point in background
    max_start = max(0, bg.duration - duration - 5)
    start_t = random.uniform(0, max_start)
    bg = bg.subclip(start_t, start_t + duration)

    # Resize to 1920x1080 (or 1080x1920 for Shorts)
    bg = bg.resize((1920, 1080))

    # Optional: add background music at low volume
    try:
        music = AudioFileClip(MUSIC_FILE).subclip(0, duration)
        music = music.volumex(0.10)
        final_audio = CompositeAudioClip([narration, music])
    except OSError:  # music file missing — narration only
        final_audio = narration

    # Title text overlay
    txt = TextClip(title_text, fontsize=52, color='white', font='Arial-Bold',
                   stroke_color='black', stroke_width=2,
                   size=(1600, None), method='caption')
    txt = txt.set_position(('center', 80)).set_duration(min(5, duration))

    # Compose
    video = CompositeVideoClip([bg, txt])
    video = video.set_audio(final_audio)
    video.write_videofile(output_file, fps=30, codec='libx264',
                          audio_codec='aac', threads=4, logger=None)
    print(f"✅ Video ready: {output_file}")

# Process all audio files
for audio_file in sorted(os.listdir(AUDIO_DIR)):
    if not audio_file.endswith(".mp3"):
        continue
    base_name = audio_file.replace(".mp3", "")
    script_file = f"scripts_ready/{base_name}.txt"
    output_file = f"{OUTPUT_DIR}/{base_name}.mp4"
    if os.path.exists(output_file):
        continue  # already done

    # Get title from script file
    title = "Reddit Story"
    if os.path.exists(script_file):
        first_line = open(script_file, encoding="utf-8").readline().strip()
        title = first_line[:60]

    make_video(f"{AUDIO_DIR}/{audio_file}", output_file, title)

Auto-Generate Subtitles (Optional — Big CTR Boost)

Python — add_subtitles.py
# pip install openai-whisper
import whisper

model = whisper.load_model("base")

def transcribe_audio(audio_path):
    result = model.transcribe(audio_path, word_timestamps=True)
    return result["segments"]

# Use segments to add word-by-word text overlay with MoviePy
# (each segment has start/end time + text — add as TextClip per segment)
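The fiddly part of subtitling is turning Whisper segments into readable on-screen captions. A pure-Python helper that splits long segments into short chunks and spreads the segment's time span evenly across them — the 40-character limit is an assumption, tune it to your font size:

```python
def segments_to_captions(segments, max_chars=40):
    """Turn Whisper segments into (start, end, text) caption tuples."""
    captions = []
    for seg in segments:
        text = seg["text"].strip()
        start, end = seg["start"], seg["end"]
        # Greedily pack words into chunks of at most max_chars
        chunks, current = [], ""
        for w in text.split():
            if current and len(current) + 1 + len(w) > max_chars:
                chunks.append(current)
                current = w
            else:
                current = f"{current} {w}".strip()
        if current:
            chunks.append(current)
        # Distribute the segment's duration evenly across its chunks
        span = (end - start) / max(len(chunks), 1)
        for i, chunk in enumerate(chunks):
            captions.append((start + i * span, start + (i + 1) * span, chunk))
    return captions
```

Each tuple maps directly onto a MoviePy `TextClip` via `set_start` and `set_duration`.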
💡 Performance: On a modern PC, one 8-minute video takes about 2–4 minutes to render. You can run multiple processes in parallel. 10 videos = ~40 minutes unattended.

Full One-Click Pipeline Script

Python — run_all.py
import subprocess, datetime

print(f"\n🚀 Starting pipeline — {datetime.datetime.now()}\n")

print("📡 Step 1: Fetching Reddit stories...")
subprocess.run(["python", "fetch_reddit.py"], check=True)

print("✍️ Step 2: Formatting scripts with AI...")
subprocess.run(["python", "format_scripts.py"], check=True)

print("🎙️ Step 3: Generating voiceovers...")
subprocess.run(["python", "voiceover.py"], check=True)

print("🎬 Step 4: Assembling videos...")
subprocess.run(["python", "assemble_video.py"], check=True)

print("📤 Step 5: Uploading to YouTube...")
subprocess.run(["python", "upload_youtube.py"], check=True)

print("\n✅ All done! Videos uploaded to YouTube.")
5
Auto-Posting via YouTube API Token
Upload videos automatically to YouTube with optimized titles, descriptions, and tags — no manual work required.

Step 1 — Set Up YouTube Data API

  1. Go to Google Cloud Console Visit console.cloud.google.com and create a new project called "YouTubeBot"
  2. Enable YouTube Data API v3 APIs & Services → Library → search "YouTube Data API v3" → Enable
  3. Create OAuth 2.0 credentials APIs & Services → Credentials → Create Credentials → OAuth Client ID → Desktop App
  4. Download client_secret.json Save it in your project folder. Keep this file private — never share it.
  5. First run — authorize Run the upload script once manually. A browser window will open asking you to authorize. After that, the token is saved and future runs are fully automatic.

YouTube Upload Script

Python — upload_youtube.py
# pip install google-api-python-client google-auth-oauthlib
import os, pickle
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]
CLIENT_SECRETS = "client_secret.json"
TOKEN_FILE = "token.pickle"
VIDEOS_DIR = "videos_ready"
UPLOADED_LOG = "uploaded.txt"

def get_youtube_client():
    creds = None
    if os.path.exists(TOKEN_FILE):
        with open(TOKEN_FILE, "rb") as f:
            creds = pickle.load(f)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS, SCOPES)
            creds = flow.run_local_server(port=0)
        with open(TOKEN_FILE, "wb") as f:
            pickle.dump(creds, f)
    return build("youtube", "v3", credentials=creds)

def upload_video(youtube, video_path, title, description, tags):
    body = {
        "snippet": {
            "title": title,
            "description": description,
            "tags": tags,
            "categoryId": "22",  # People & Blogs
            "defaultLanguage": "en",
        },
        "status": {
            "privacyStatus": "public",
            "selfDeclaredMadeForKids": False,
        },
    }
    media = MediaFileUpload(video_path, chunksize=-1, resumable=True)
    request = youtube.videos().insert(
        part="snippet,status", body=body, media_body=media
    )
    response = request.execute()
    print(f"✅ Uploaded! ID: {response['id']} — {title}")
    return response["id"]

def generate_metadata(script_path):
    """Generate SEO title, description, tags from script"""
    script = open(script_path, encoding="utf-8").read()
    first_line = script.split("\n")[0][:80]
    title = f"{first_line} #reddit #aita #storytime"
    description = (
        f"{script[:300]}...\n\n"
        "🔔 Subscribe for daily Reddit stories!\n"
        "👍 Like if you think OP was right!\n"
        "💬 Comment your verdict below — NTA or YTA?\n\n"
        "#reddit #aita #redditstories #storytime #relationshipadvice"
    )
    tags = ["reddit", "aita", "reddit stories", "storytime",
            "relationship advice", "am i the asshole", "reddit reading",
            "reddit narration", "tifu", "reddit compilation",
            "best reddit posts"]
    return title, description, tags

# Main upload loop
youtube = get_youtube_client()
uploaded = set(
    open(UPLOADED_LOG).read().splitlines() if os.path.exists(UPLOADED_LOG) else []
)
for video_file in sorted(os.listdir(VIDEOS_DIR)):
    if not video_file.endswith(".mp4"):
        continue
    if video_file in uploaded:
        continue  # already uploaded
    video_path = f"{VIDEOS_DIR}/{video_file}"
    script_path = f"scripts_ready/{video_file.replace('.mp4', '.txt')}"
    title, desc, tags = generate_metadata(script_path)
    video_id = upload_video(youtube, video_path, title, desc, tags)
    # Log as uploaded
    with open(UPLOADED_LOG, "a") as f:
        f.write(f"{video_file}\n")

# YouTube API quota: ~6 uploads/day on free tier
# Upgrade to paid or space uploads across multiple channels
⚠️ YouTube API Quota: Free tier allows ~6 video uploads per day per project. To upload more, either create multiple Google Cloud projects (each has its own quota) or apply for quota increase in Google Cloud Console — usually approved within 48 hours.

Scheduling Uploads (Best Times)

Set a Windows Task Scheduler or cron job to run run_all.py at the optimal time:

  • Best upload times (US audience): 12:00 PM – 3:00 PM EST
  • Frequency: 1–2 videos/day is optimal. More than that can hurt individual video performance.
  • Consistency is key: Upload at the same time every day. YouTube algorithm rewards consistent channels.
Windows Task Scheduler (run daily at 1 PM)
# In PowerShell (run as admin):
$action = New-ScheduledTaskAction -Execute "python" -Argument "run_all.py" -WorkingDirectory "C:\YourProjectFolder"
$trigger = New-ScheduledTaskTrigger -Daily -At "1:00PM"
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "YouTubeAutoPost" -RunLevel Highest
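On Linux or macOS, cron does the same job. A sketch of the crontab entry — the project path ~/youtube-bot is a placeholder, and it assumes python3 is on the PATH:

```shell
# Edit the crontab with: crontab -e
# Run the full pipeline every day at 1:00 PM local time
0 13 * * * cd "$HOME/youtube-bot" && python3 run_all.py >> pipeline.log 2>&1
```

Redirecting output to pipeline.log lets you check later whether any step failed overnight.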
6
Monetization, Scaling & Agency Partnership
From first AdSense check to $10k+/month — the roadmap and how TubeSos scales with you.

Timeline to Monetization

  1. Week 1–2: Setup Channel created, scripts running, first 5 manual videos uploaded to test the system.
  2. Week 3–4: Automation live Pipeline running daily. 1 video/day = 14+ videos. Start seeing impressions.
  3. Month 1–2: Growth phase 100–500 subscribers. Some videos may get picked up by algorithm. Tweak titles and thumbnails based on CTR data.
  4. Month 1.5–3: Monetization threshold 1,000 subscribers + 4,000 watch hours. Apply for YouTube Partner Program. Approval in 1–4 weeks.
  5. Month 3+: First checks First AdSense payment ($100 threshold). Typical CPM for English Reddit channels: $8–25. A video with 100k views = $800–2,500.

Revenue Streams

  • AdSense (primary): $8–25 CPM on English audience. Relationship/advice content has one of the highest CPMs on YouTube.
  • Channel memberships: Once at 1,000+ subscribers. Offer "early access" to stories.
  • Sponsored integrations: At 50k+ subscribers, brands in relationships/lifestyle will pay $500–$2,000 per integration.
  • Multiple channels: Once the pipeline is built, run 3–5 channels simultaneously. Each runs on autopilot.

Scaling with TubeSos Agency

Once your channel is monetized, TubeSos takes over growth:

  • SEO optimization on every video
  • Paid traffic campaigns to boost new videos
  • Thumbnail A/B testing
  • Analytics review and strategy adjustments
  • Access to the agency network — cross-promotion between partner channels
✅ Partnership tiers: <$10k/mo → 50/50 split. $10k–$100k/mo → 75% you / 25% us. $100k–$1M/mo → 88% you / 12% us. The more you earn, the better your share.
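The tiers above can be sanity-checked with a small calculator. A sketch — how amounts exactly on a boundary are treated is my assumption, so confirm the edge cases with the agency:

```python
def creator_share(monthly_revenue):
    """Creator's fraction under the listed tiers.
    Boundary handling (strict < at each cutoff) is an assumption."""
    if monthly_revenue < 10_000:
        return 0.50
    if monthly_revenue < 100_000:
        return 0.75
    return 0.88

def creator_payout(monthly_revenue):
    """Dollars kept by the creator at a given monthly revenue."""
    return monthly_revenue * creator_share(monthly_revenue)
```

For example, at $200k/month the creator keeps 88%, i.e. about $176k.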

Minimum Requirements to Join TubeSos Agency

  • ✔ YouTube Partner Program approved (monetized)
  • ✔ At least 1,000 subscribers
  • ✔ Consistent upload schedule (minimum 3 videos/week)
  • ✔ English-language content preferred (higher CPM)
  • ✔ Channel in good standing (no strikes)
💡 Fast track: Students who complete this course and reach monetization within 90 days get priority placement in the agency review queue. Message @inlintra on Telegram with your channel link when you're ready.

🚀 Ready to Build Your Channel?

You have the full system. Start today — run the scripts, post the first video, and let automation do the work.