BXL2010 Timed text exercises

From ActiveArchives

Day 1: Timed Text Exercises

Code

Example (example.sh)

This command (example.sh) plays the movie with the subtitle file named OriginAndTheoryOfTheTapeCut-Ups.srt (if present) and the settings file named OriginAndTheoryOfTheTapeCut-Ups.flv.conf (if present):

mplayer OriginAndTheoryOfTheTapeCut-Ups.flv -use-filedir-conf

OriginAndTheoryOfTheTapeCut-Ups.flv.conf

The <movie-filename>.conf file (in this case 'OriginAndTheoryOfTheTapeCut-Ups.flv.conf') lets you specify in a single document what would usually be given as command-line switches (-ass-color, -overlapsub, -subfont, etc.). The .conf file is only read when '-use-filedir-conf' is passed, but it significantly reduces the complexity of the command ("one switch to rule them all"). Lines beginning with '#' are treated as comments and are not used to configure mplayer. This makes it easy to toggle options while working out the optimal configuration: instead of deleting an option, simply insert or remove the '#'.

# settings used by aaplay / mplayer
#
# overlapsub=1
font="Bitstream Charter"
ass=1
ass-color=00FF0000
subalign=0
# subfont-text-scale=5
fs=1

audio2movie

This script takes a sound-file (of any format supported by mplayer) and generates a movie file by adding a blank video track. This is necessary to utilize subtitle files, which require a video track to be 'projected' on.

The following generates a 15 frames-per-second Matroska file (FLV video + Vorbis audio). Using a codec in which every frame is a keyframe, such as motion JPEG, allows the videogrep script (mplayer + edl) to cut very precisely.

#!/bin/bash
INPUT=$1
OUTPUT=${INPUT%.*}.mkv
 
# imagemagick to generate an image to be used for the film
convert -size 640x640 xc:green image.jpg
 
ffmpeg -loop_input -f image2 -i image.jpg -r 15 -vcodec flv \
       -i "$INPUT" -ar 44100 -acodec vorbis \
       -shortest -y "$OUTPUT"

The following generates a 15 frames-per-second AVI (FLV video + uncompressed PCM audio; large file size!).

#!/bin/bash
INPUT=$1
OUTPUT=${INPUT%.*}.avi
 
# imagemagick to generate an image to be used for the film
convert -size 640x640 xc:green image.jpg
 
ffmpeg -loop_input -f image2 -i image.jpg -r 15 -vcodec flv \
       -i "$INPUT" -ar 44100 -acodec pcm_s16le \
       -shortest -y "$OUTPUT"

The following uses mp3-encoded audio, producing a much smaller file, but it requires ffmpeg to be compiled with libmp3lame support.

#!/bin/bash
INPUT=$1
OUTPUT=${INPUT%.*}.avi
 
# imagemagick to generate an image to be used for the film
convert -size 640x640 xc:green image.jpg
 
ffmpeg -loop_input -f image2 -i image.jpg -r 15 -vcodec flv \
       -i "$INPUT" -ar 44100 -acodec libmp3lame \
       -shortest -y "$OUTPUT"

videogrep

This script implements a form of video search that returns snippets of video which contain words matching the search term. This is done by searching the subtitle file and returning the timestamps of video corresponding to the word matches.

The script requires the MicroDVD '.sub' format, so videogrep first runs mplayer for one second to dump the '.srt' file as '.sub'. Other formats should be easy to add by extending the 'if' statements that run when no '.sub' file is available.
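For reference, MicroDVD '.sub' entries are frame-based rather than time-based: each line carries a start frame, an end frame, and the text, which is exactly the pattern the sed expression in the script below picks apart. A sketch (frame numbers invented for illustration; at 15 fps, frame 52 is about 3.5 seconds):

```
{0}{52}failure
{56}{108}a good piece
```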

#!/bin/bash
# videogrep
# Usage:
#   ./videogrep <movie_file> <search_term> (optional: <subtitle_to_display>)
#
# The subtitle file to be searched is implicit--it is always named the same 
# as the movie file, but with a .sub extension. We convert automatically 
# to .sub when another format (right now only .srt) is used.
#
# The optional <subtitle_to_display> argument allows a separate .sub file
# to be displayed, such that a user can search in one language .sub and
# display a separate (for instance, search for English and display Spanish).
 
framerate=15
 
vid=$1
term=$2
base=${vid%.*}	# strip only the last extension, so filenames containing dots survive
 
if [ ! -e "$base.sub" ]
then
  if [ -e "$base.srt" ]
  then
    echo "Dumping $base.sub from $base.srt..."
    mplayer "$vid" -sub "$base.srt" -dumpmicrodvdsub -endpos 1
    mv dumpsub.sub "$base.sub"
  fi
fi
 
subtitle=$base.sub		# look for subtitles as (moviename - ext) + ".sub"
 
dstart=-0.5	# seconds to add to the start time (negative = earlier)
dend=2		# seconds to add to the end time
 
echo "searching '$vid' for '$term' using '$subtitle'..."
 
cat "$subtitle" | \
egrep -i "$term" | \
sed -e 's/{\([[:digit:]]*\)}{\([[:digit:]]*\)}.*/\1 \2/' | \
awk '{ print ($1/'$framerate') " " ($2/'$framerate') }' | \
awk 'BEGIN { cur=0 }
{
if ($1 > cur) print cur " " $1 " 0"
cur = $2
}
END { print cur " 10000 0" }
' > videogrep.edl
 
showsub=${3:-$subtitle}
mplayer -sub "$showsub" -edl videogrep.edl -fs "$vid"
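The EDL file that the script writes is a plain-text list of "start stop action" lines, with times in seconds and action 0 meaning "skip this span". mplayer skips every listed span, so only the snippets between them (the matches) are played. A sketch of what a generated videogrep.edl might look like (times invented for illustration):

```
0 12.4 0
15.2 31.7 0
33.1 10000 0
```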

Typography

Using an abstract font

mplayer -ass -ass-color FFFFFF00 -subfont "Broodthaers" -subfont-text-scale 5 -overlapsub \
  soundMarkTompkinsAboutSigiKeil.flv

Red text, different font

mplayer -ass -ass-color FF333300 -subfont "W\-Droge\-itinerary\-beta" -subfont-text-scale 14 \
  soundMarkTompkinsAboutSigiKeil.flv

Different tracks

mplayer -ass -ass-color FFCC0055  -sub tracks.srt -font 'Bitstream Vera Serif:style=Bold' -subfont-text-scale 10 -overlapsub soundMarkTompkinsAboutSigiKeil.flv

This last command uses an srt file (tracks.srt) in which timecodes overlap. With -overlapsub, mplayer displays the overlapping entries at the same time.

1
00:00:00,000 --> 00:00:03,500
failure

2
00:00:03,750 --> 00:00:07,250
a good piece

3
00:00:05,700 --> 00:00:11,200
what happened

4
00:00:08,528 --> 00:00:13,250
yes, let's go to the first night

5
00:00:11,500 --> 00:00:17,000
because that for me

6
00:00:14,850 --> 00:00:20,050
what happened

7
00:00:18,200 --> 00:00:24,500
from the outside

8
00:00:22,950 --> 00:00:28,250
eruption

9
00:00:29,600 --> 00:00:38,300
Tell your witness story. You're now a real witness.

Three tracks

mplayer -ass -ass-color FFFFFF00  -sub eruptionH.srt -font 'Gentium Basic:style=Bold' \
-subfont-text-scale 6 -ass-border-color FFFFFFFF -overlapsub soundMarkTompkinsAboutSigiKeil.flv

After Lunch Meeting

Femke, Myriam, Pieter etc.

- change the font, color, size -> `ass` is the way to go! red font on a green square!

- overlapping -> makes two sentences that happen at the same time indeed appear at the same time on the screen.

- use abstract font: white blocks instead of letters

Alex & Marijke

edlout/edl: select a piece of video to censor. Play the video with -edlout, press "i" once at the beginning of the scene, and press "i" a second time at its end. The timestamps are saved in a text file, which -edl then uses to play the censored version. edlout can also be used to mark the timestamps of sentences, making it easier to produce subtitles.
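The edlout/edl workflow above, sketched as two commands (movie and EDL file names are our own):

```shell
mplayer -edlout cuts.edl movie.avi    # press 'i' at the start and again at the end of each scene
mplayer -edl cuts.edl movie.avi       # plays the movie with the marked scenes skipped
```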

Change the font size while mplayer is playing: make the more frequent words bigger. Michael: the more frequent words can also be the least important. You could repeat a word to talk about repetition.

An, Nicolas, Ivan

Mark Tompkins: see which words are repeated. In the beginning he repeats "first night" and "failure"; after a while the repetition disappears. We made a list of the repeated words and wanted to order them by number of repetitions, isolate the parts where the repeated words occur, and replace the repeated words with their definitions.
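Listing the repeated words and ordering them by number of repetitions can be sketched with standard text tools; the function name word_freq and the input file name interview.srt are our own:

```shell
# word_freq: read srt text on stdin, print "count word" pairs,
# most frequent first (index and timecode lines are stripped)
word_freq() {
  sed -e '/^[0-9][0-9]*$/d' -e '/-->/d' \
    | tr -cs '[:alpha:]' '\n' \
    | tr '[:upper:]' '[:lower:]' \
    | sed '/^$/d' \
    | sort | uniq -c | sort -rn
}

# usage: word_freq < interview.srt
```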

Sound files from Merriam Webster (are there open alternatives?)
stage > http://media.merriam-webster.com/soundc11/s/stage001
yes > http://media.merriam-webster.com/soundc11/y/yes00001
witness > http://media.merriam-webster.com/soundc11/w/witnes01

An, Nicolas, and Ivan show their work in progress to combine:

  1. Subtitles where keywords are replaced / augmented with definitions.
  2. Augmenting the audio to overlay the pronunciation of the keyword over the locations where the word is used (according to the srt file).

sox, mp3splt, SoundConverter

The imagined process

  • Split the interview file to extract the repeated words (videogrep applied to mp3 file of the interview)
  • Download the files with voices pronouncing the words
  • Convert these files to mp3 (An which software are you using?)
  • Sox to mix the file with the pronunciation over the interview fragment
  • mp3wrap to reassemble all fragments in a file for listening
  • adapt the srt file to "subtitle" the new file
mp3splt sound.mp3 3.46 3.52 44.mp3 3.54 3.59 45.mp3
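The mixing and reassembly steps above might look like this with sox and mp3wrap (all file names hypothetical):

```shell
# mix the dictionary pronunciation over an interview fragment
sox -m fragment44.mp3 en-us-stage.mp3 mixed44.mp3
# reassemble all mixed fragments into a single file
mp3wrap result.mp3 mixed44.mp3 mixed45.mp3
```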

Sound files from Wikipedia dictionary
stage > http://en.wiktionary.org/wiki/File:en-us-stage.ogg
yes > http://en.wiktionary.org/wiki/File:en-us-yes.ogg
witness > http://en.wiktionary.org/wiki/File:en-us-witness.ogg


SSA: style your subtitles!

Advanced SubStation Alpha (ASS) is a subtitle format more advanced than SSA; technically it is SSA v4+. It can produce anything from simple text to the elaborate graphic styling used in karaoke, and a few programs are designed to create these scripts. The main feature of ASS is that it offers more than plain SSA, for example in style programming.

http://www.aegisub.org/

See Cookbook#Convert .srt subtitles into .ssa allowing styling

Script generated by Aegisub

[Script Info]
; http://www.aegisub.net
Title: Neon Genesis Evangelion - Episode 26 (neutral Spanish)
Original Script: RoRo
Script Updated By: version 2.8.01
ScriptType: v4.00+
Collisions: Normal
PlayResY: 600
PlayDepth: 0
Timer: 100,0000
Video Aspect Ratio: 0
Video Zoom: 6
Video Position: 0
 
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: DefaultVCD, Arial,28,&H00B4FCFC,&H00B4FCFC,&H00000008,&H80000008,-1,0,0,0,100,100,0.00,0.00,1,1.00,2.00,2,30,30,30,0
 
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.18,0:00:06.85,DefaultVCD, NTP,0000,0000,0000,,{\pos(400,570)}Like an Angel with pity on nobody

Results of the day

Stephanie showed some super styled SSA examples.
