Looking at it now, it's obvious how web crawlers were able to get past it easily.
But no more!!
I even ran a web crawler myself to try and grab all the scans (which is where most of the bandwidth has been going), and all I got back was page after page of the same error message, over and over.

That should sort the bastards out!! Now I'm going to edit my server config files again to block other stuff.
P.S. If you have any problems viewing scans with the new code, please let me know! I've set a limit of 60 scan views every 4 hours.
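For the curious, a limit like 60 views per 4 hours can be done with a simple fixed-window counter keyed by visitor IP. This is just a sketch of the general idea, not the actual code running on this site; the function name `allow_scan_view` and the in-memory dictionary are my own inventions for illustration:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 4 * 60 * 60   # 4-hour window
MAX_VIEWS = 60                 # scan views allowed per window

# Hypothetical per-visitor state: (window start time, views so far)
_counters = defaultdict(lambda: (0.0, 0))

def allow_scan_view(visitor_ip, now=None):
    """Return True if this visitor may view another scan, False if over the limit."""
    now = time.time() if now is None else now
    window_start, count = _counters[visitor_ip]
    if now - window_start >= WINDOW_SECONDS:
        # Window has expired: start a fresh one, counting this view
        _counters[visitor_ip] = (now, 1)
        return True
    if count >= MAX_VIEWS:
        return False
    _counters[visitor_ip] = (window_start, count + 1)
    return True
```

A real setup would persist the counters (so restarting the server doesn't reset everyone's limit) and probably live in the server config or a middleware layer rather than application code.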