Data radio (notes)


Schedule

radio = fallback([ request.queue(id="request"),
                    switch([({ 6h-22h }, day),
                            ({ 22h-6h }, night)]),
                    default])
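
A minimal liquidsoap sketch of this schedule could look like the following; the playlist names, host, port and password are placeholders, not the actual setup:

set("log.stdout",true)

# manual/live requests get priority
requests = request.queue(id="request")

# day / night programmes as playlists (filenames are placeholders)
day = playlist("day.pls")
night = playlist("night.pls")
default = playlist.safe("default.pls")

# cut in as soon as a request is available, otherwise follow the clock
radio = fallback(track_sensitive=false,
                 [ requests,
                   switch([({ 6h-22h }, day),
                           ({ 22h-6h }, night)]),
                   default ])

# placeholder icecast credentials
output.icecast(%vorbis,
  host="localhost", port=8000, password="hackme",
  mount="dataradio.ogg", radio)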

Day programme

  • 41:56 min. interview with Perttu Rastas about the archive
  • 07:39 min. interview with Kathy
  • 01:00 h average audio
  • 03:58 min. Associative Memory
  • 2048 Random fragments of 11 mins.
  • Grep
  • All minutes ordered by amplitude
  • All minutes gradually added
  • All seconds permuted
  • Grep


Night programme

  • average length cassette 1h00
  • Interesting to produce manually
  • Tag Selection
  • Singing
  • film audio

Notes

singing: C4142.aif, 40:00 - 44:00

Cookbook

Using liquidsoap and festival (for text-to-speech)
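
(festival is what liquidsoap's say: protocol uses for text-to-speech, cf. the protocols.say log further down; a minimal sketch, the phrase being just an example:)

# synthesize a phrase with festival via the say: protocol, played on repeat
jingle = single("say:This is data radio")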

Starting points:

  • relay: set up a server with a simple program to relay a mount (in-out), with a fallback playlist
  • "audio grep": example -- sent from PC ?!
  • ability to play extracts from a larger file (cassette tapes)
  • chat connection?
  • test pagekite to control tiny

Relay

Following the liquidsoap "complete case" example, using fallback.

Discovery: the fallback stream is properly interruptible (i.e. when a new track is available, it cuts in, and when it ends/fails, the fallback stream continues where it left off). Now the question becomes: how to access metadata / narrate what is currently being played (especially in the case of the fallback content).
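
A minimal sketch of such a relay, assuming a harbor mount called "live" for the incoming stream and a local fallback playlist (mount name, filenames, ports and passwords are placeholders):

set("log.stdout",true)

# incoming relay mount
live = input.harbor("live", port=8005, password="hackme")

# local fallback playlist, kept safe so the output never fails
backup = playlist.safe(mode="normal", "fallback.pls")

# the live mount cuts in whenever it is available
radio = fallback(track_sensitive=false, [live, backup])

output.icecast(%vorbis,
  host="localhost", port=8000, password="hackme",
  mount="relay.ogg", radio)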

Audio Grep

http://ubuntuforums.org/showthread.php?t=751169

aha, audio_to_stereo
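
Presumably the point is that the grep fragments are mono while the stream output is stereo; a one-line sketch (the filename is a placeholder):

# convert a mono fragment to stereo before it is mixed into the %vorbis output
fragment = audio_to_stereo(single("fragment.wav"))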

Toward a "chatty" stream

Liquidsoap + Icecast have a standard notion of metadata in the stream (see Metadata). Historical projects like Geekradio used IRC to allow dynamic "requests" to be made via chat. The Radio PI page (based on an older version of the software) is interesting in its use of the on_metadata source filter. Interesting how functional programming concepts (like a function that returns a function as a notion of "source") lend themselves so well to the pipeline way of working with a stream.
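
For instance, an operator is just a function that takes a source and returns a new one, so operators chain naturally into a pipeline (names and values here are purely illustrative):

# a source "operator" is just a function from source to source
def quieter(s) =
  amplify(0.5, s)
end

radio = quieter(audio_to_stereo(playlist("day.pls")))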

Using Transmit.liq

set("log.stdout",true)
input = playlist.safe(mode="normal", "transmit.pls")
output.icecast(%vorbis, 
  host="activearchives.org",port=2048,password="thelibraryofbabel",
  mount="remote.ogg",input)


Added various metadata to an OGG file with Audacity's export; checking it with ffmpeg shows: Dataradio FFMPEG Metadata 20130105-142026.png

Now, when transmitting this via the relay, icecast shows the artist + track title (only) in the stream metadata: IcecastStreamMetadata 20130105-140324.png

Note: when metadata is missing, "Current Song" is still visible and shows "Unknown".

In the extended (admin) view, you can see that in fact artist and title have been included, while comment, date, album, ... are not: Dataradio IceCastMetadataExtended 20130105-145907.png

So where does the ogg's metadata get filtered / passed on to the Icecast stream, and is this process customizable (to, for instance, include more/custom information)?
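
One way to customize it from the liquidsoap side would be to rewrite the metadata just before the output with map_metadata; a rough sketch (field names and values are made up):

# add/override fields before they reach the icecast output
def add_fields(m) =
  [("genre", "Dataradio"), ("comment", "see the chat for context")]
end
input = map_metadata(add_fields, input)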

Audiogrep would be really good if it could (just) include the text in the metadata as a kind of subtitle... How would it be possible to have time-based metadata?!

Todo: look for a better-looking IRC web client. For the install: have a proper IRC client + mplayer to play the stream!

Test with the annotate protocol in playlist... does "arbitrary" metadata get sent to icecast? Maybe not.

Challenge: Transmit ALL metadata to the stream so that the resulting "stream" is "complete"?... though this isn't strictly necessary given that the chat itself will not (necessarily) be in the stream...

Adding:

input = say_metadata(input)

Results in the following during the preparation of the stream:

2013/01/05 14:36:11 [protocols.say:3] Synthetizing "It was Erkki Kurenniemi, Audiogrep: Newton." to "/tmp/say23254d.wav"

Which then plays *after* the audiogrep ogg file.

store_metadata is an operator that makes the last 5 metadata packets available via the telnet server.
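
A minimal use, assuming it is simply wrapped around the existing input:

# keep recent metadata packets queryable via the telnet/server interface
input = store_metadata(id="history", input)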

So, after some more testing, it seems that the following metadata from an OGG gets passed through icecast:

ON_META
album: In 2048
genre: Dataradio
artist: Erkki Kurenniemi
encoder: Liquidsoap/1.0.1 (Unix; OCaml 3.12.1)
vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
date: 2013
title: Audiogrep: Newton

while others are ignored (this file had a "comments" meta field set, as well as some custom metadata added by an annotation protocol).

So it is possible to add/override metadata that does get passed through, as in this script, which sets/overrides the genre value of the ogg:

set("log.stdout",true)
 
# src = 'annotate:nick="nickname":/home/murtaugh/projects/kurenniemi/audiogrep/newton_inverted.ogg'
src = 'annotate:genre="viral":/home/murtaugh/projects/kurenniemi/audiogrep/newton_inverted.ogg'
input = mksafe(single(src))
 
# input = playlist.safe(mode="normal", "transmit.pls")
 
def on_meta (meta)
    print("ON_META")
    list.iter(fun (i) -> print(fst(i)^": "^snd(i)), meta)
end
input = on_metadata(on_meta, input)
 
#output.icecast(%vorbis, 
#  host="localhost",port=2048,password="thelibraryofbabel",
#  mount="remote.ogg",input)
output.icecast(%vorbis, 
  host="localhost",port=2048,password="thelibraryofbabel",
  mount="remote.ogg",
  icy_metadata="guess",
  input
)

And on the server stream (separate liquidsoap instance):

ON_META
album: In 2048
genre: viral
artist: Erkki Kurenniemi
encoder: Liquidsoap/1.0.1 (Unix; OCaml 3.12.1)
vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
date: 2013
title: Audiogrep: Newton

Eventually

  • explore midi track rendering -- audio to midi ?!!

Image to Text

With the help of pyexiv2, humanize, and num2eng

(Eventually, to report on colors; see the xkcd Color Survey Results.)

Network States

These HTML5 media element states are defined as follows (first the networkState values, then the readyState values):

NETWORK_EMPTY (0) Not yet initialized
NETWORK_IDLE (1) Network is not being used now (video is completely loaded for example)
NETWORK_LOADING (2) Browser is loading data from the net
NETWORK_LOADED (3) Data has been loaded
NETWORK_NO_SOURCE (4) Video resource could not be found/loaded
 
The documentation of the network states is not correct on most sites, so it is better to refer to the states above.
 
HAVE_NOTHING (0) No data available
HAVE_METADATA (1) Duration and dimensions are available
HAVE_CURRENT_DATA (2) Data for the current position is available
HAVE_FUTURE_DATA (3) Data for the current and future position is available, so playback could start
HAVE_ENOUGH_DATA (4) Enough data to play the whole video is available

Live Stream

gst-launch autoaudiosrc ! audioconvert ! vorbisenc ! oggmux ! shout2send
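
An equivalent sketch with liquidsoap instead of gstreamer, assuming an ALSA sound card input and placeholder icecast credentials:

# live input from the sound card (requires liquidsoap's alsa support)
mic = mksafe(input.alsa())
output.icecast(%vorbis,
  host="localhost", port=8000, password="hackme",
  mount="live.ogg", mic)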
