2014-06-26

Wrote a small shell script to download specific files linked on
webpages, and the webpages themselves. I use this for a specific
website with podcasts. These days I try to automate such stuff.

    #!/bin/sh

    for i
    do
        channel="${i#*.}"; channel="${channel%%.*}"
        track="${i##*/}"; track="${track%.html}"
        out="${channel}_$track"

        echo "Downloading $out ..."

        # dump the webpage
        w3m -dump "$i" >"$out.txt"

        # extract the link and get the content
        file=`w3m -dump_source "$i" | gunzip | \
            sed -n '/podcast/s,.*"\(.*\)".*,\1,p'`
        wget -O "$out.mp3" "$file"
    done

The `gunzip' call is necessary because w3m sometimes doesn't unzip
the source. This is an annoying behavior of w3m.
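To show what the parameter expansions do, here they are applied to a
made-up URL (the real site's addresses look different, of course):

    i='http://www.somechannel.example.org/feed/episode42.html'

    channel="${i#*.}"        # strip up to the first dot:
                             #   somechannel.example.org/feed/episode42.html
    channel="${channel%%.*}" # keep everything before the next dot:
                             #   somechannel
    track="${i##*/}"         # strip the directory part: episode42.html
    track="${track%.html}"   # drop the .html suffix: episode42
    out="${channel}_$track"  # gives: somechannel_episode42

Called with that URL as its argument, the script would thus write
somechannel_episode42.txt (the page dump) and somechannel_episode42.mp3
(the extracted podcast file).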