Grab from a remote page the list of all resources that a browser would probably download when visiting it
Let's say you need to run the HTTP load testing and benchmarking utility siege against a web page, and you also want to use its --internet option to best simulate the behavior of a web browser.
When a web browser loads a page, it also loads all the resources referenced inside that page: so you need a list of all the URLs found in that page.
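To make the idea concrete, here is a minimal sketch (in Python, purely illustrative, not youlldownload's actual implementation) of collecting resource URLs from a page: it scans tags whose attributes point at downloadable resources (img/script "src", link "href") and resolves relative references against the page URL.

```python
# Illustrative sketch only: collect resource URLs from an HTML page,
# the way a browser (or youlldownload) would have to.
from html.parser import HTMLParser
from urllib.parse import urljoin

class ResourceCollector(HTMLParser):
    # tag -> attribute that points at a downloadable resource
    RESOURCE_ATTRS = {"img": "src", "script": "src", "link": "href"}

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        wanted = self.RESOURCE_ATTRS.get(tag)
        if wanted:
            for name, value in attrs:
                if name == wanted and value:
                    # resolve relative references against the page URL
                    self.urls.append(urljoin(self.base_url, value))

page = ('<html><head><link href="/style.css">'
        '<script src="app.js"></script></head>'
        '<body><img src="img/logo.png"></body></html>')
collector = ResourceCollector("http://host.com/section/page")
collector.feed(page)
for url in collector.urls:
    print(url)
# prints:
# http://host.com/style.css
# http://host.com/section/app.js
# http://host.com/section/img/logo.png
```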
This utility (its name means “You Will Download”) simply creates that list for you.
You just need to redirect the utility's output to a file, then pass that file to siege with its --file option.
$ youlldownload http://host.com/section/page
Usage with siege:
$ youlldownload http://host.com/section/page > list.txt
$ siege -i -f list.txt [other options]
Also: CSS sources are analyzed in depth to find additional resources referenced inside them (like background images, fonts, …).
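A sketch of what that CSS pass might look like (again illustrative, not the tool's real code): pull url(...) references out of a stylesheet and resolve them against the stylesheet's own URL, since background images and fonts are usually given relative to it.

```python
# Hedged sketch: extract url(...) references (background images, fonts, ...)
# from a stylesheet, the way a CSS-aware downloader might.
import re
from urllib.parse import urljoin

CSS_URL = re.compile(r'url\(\s*["\']?([^"\')\s]+)["\']?\s*\)')

def css_resources(css_text, css_url):
    # resolve each reference against the stylesheet's own URL
    return [urljoin(css_url, ref) for ref in CSS_URL.findall(css_text)]

css = """
body { background: url("../img/bg.png"); }
@font-face { src: url(fonts/site.woff2); }
"""
print(css_resources(css, "http://host.com/css/style.css"))
# prints:
# ['http://host.com/img/bg.png', 'http://host.com/css/fonts/site.woff2']
```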