Title: Internet geeks here! Who can determine how many web spiders/crawlers are on 4um??
Source: none
URL Source: http://none
Published: Nov 20, 2009
Author: X-15
Post Date: 2009-11-20 03:04:08 by X-15
Keywords: None
Views: 2754
Comments: 170
"Web spiders/crawlers: programs that search websites looking for specific words or patterns to compile into a database."
A popular gun website I visit had 20 running; if 4um has fewer, then I assume it has a lower profile in the eyes of FedGov.
Internet geeks here! Who can determine how many web spiders/crawlers are on 4um??
You'd need access to christine's server logs to get a good idea. However, there are many kinds of spiders, some quite difficult to detect.
Here is a quickie spider that I wrote. It runs on the Mac, OS X 10.5 Leopard. However, it is a standard Bash script and should work easily on Linux or Unix systems, probably in a Cygwin setup on Windows too.
The script uses Lynx, a venerable text-only browser, to fetch my Comments page to a file called htmlsource1. It then uses the stream editor Sed to parse this captured HTML file by scanning the right column for news stories, capturing the thread names and URLs at 4um to a file called htmlsource2.
It then uses Lynx to capture each thread to a separate file by thread number in a subdirectory called '4um'.
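The script itself isn't reproduced here, but a minimal sketch of the approach described (Lynx fetch, Sed extraction, per-thread capture) might look like the following. The base URL, the comments-page path, and the `ArtNum=` link pattern are all placeholder assumptions for illustration, not 4um's actual markup:

```shell
#!/bin/bash
# Sketch of the described spider. BASE, the page path, and the
# ArtNum= link pattern are assumed placeholders, not the real markup.
BASE="http://www.freedom4um.com"

# Step 1: fetch the comments page HTML with Lynx.
lynx -source "$BASE/comments-page" > htmlsource1

# Step 2: use Sed to pull "threadnumber url" pairs from anchor tags.
sed -n 's/.*href="\([^"]*ArtNum=\([0-9]*\)[^"]*\)".*/\2 \1/p' \
    htmlsource1 > htmlsource2

# Step 3: capture each thread as rendered text, one file per thread.
mkdir -p 4um
while read -r num url; do
    lynx -dump "$BASE$url" > "4um/$num.txt"
    sleep 2   # pause between fetches; a fast loop stands out in logs
done < htmlsource2
```

The `sleep` between fetches is the part a stealthy spider would tune: a slow, spread-out crawl is far harder to spot in the logs than a burst.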
You could build a database or use text search tools like grep to mine the stored threads for info.
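For the mining step, grep alone goes a long way. Assuming the captured threads landed in a `4um/` subdirectory as described, and with "keyword" standing in for whatever term is of interest:

```shell
# Case-insensitive search of the captured threads.
grep -ril "keyword" 4um/        # which files mention the term
grep -ci "keyword" 4um/*.txt    # how many hits per thread
```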
No doubt, various federal agencies and people like ADL or SPLC use scripts like this to capture many forums and use grep and other search tools to scan each thread captured for relevant keywords to flag them for review by human beings.
I'm presenting this so that 4um folks can get some idea of how these agencies and busybodies operate. Geeks know this stuff but the average person has no idea how easy it is. People should know how easy it is to database their every remark since we no longer live in a free country.
BTW, by changing only a few lines in the above code, I could slowly download your entire database and reconstruct each poster's remarks. Essentially, your MySQL database could be replicated by downloading all the threads and parsing the user comments into a new MySQL database. I'm sure Neil could point this out as well, probably better than I can. This is why it's good practice to watch your server logs for an IP that downloads every thread, or one that walks through the whole database sequentially.
Anyway, this seemed a good thread to point this stuff out.
You only have to parse for the HTML tags and CSS classes. Not at all difficult.
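As a concrete illustration: one Sed substitution per element of interest is enough. The `<td class="post">` markup here is invented for the example; the real tag and class names would come from viewing the page source:

```shell
# Pull the text of each comment cell out of saved HTML.
# The tag and class name here are assumptions about the page's markup.
sed -n 's/.*<td class="post">\(.*\)<\/td>.*/\1/p' htmlsource1
```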
I once wrote a Firefox extension that allowed me to entirely replace the look and feel of TOS, add backgrounds, insert YouTubes to replace the YouTube links, implement my own browser-based bozo filter, etc.
It's quite easy. You need good server-log analytics software to find out if your site is being mined. Now, 4um isn't really high traffic, so you can probably get a good idea by looking at IP addresses. You should watch for IP addresses that only read threads (and that read every thread) and never post. Those lurkers can just as easily be spiders for ADL, FBI, SPLC, NetNanny, Google, etc. In fact, you should assume that you are being spidered this way until you can prove otherwise.
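That kind of check can be done rough-and-ready from the raw access log. Assuming an Apache-style log named `access.log` (both the filename and the log format are assumptions):

```shell
# Requests per IP, busiest first -- heavy readers float to the top.
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head

# IPs that only ever GET pages and never POST: lurker/spider candidates.
awk '/"GET /  {gets[$1]++}
     /"POST / {posts[$1]++}
     END { for (ip in gets) if (!(ip in posts)) print ip }' access.log
```

Of course, as noted below, a crawl spread across many IP addresses won't show up this way; this only catches the lazy single-address spiders.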
And the spidering of your threads could just as easily come from multiple IP addresses. You can deter some of this by requiring the use of cookies but a competent programmer can fake that too.
You should assume that every word you put online will be recorded. The Feds are building huge new datacenters to store the content of the entire internet and all cell phones and landlines. They may make no use of it unless and until they detect (or need to chase) a domestic threat for terrorism or hate crimes. It is at that point that you will receive a subpoena for your server logs to obtain IP addresses (if they don't already have them by sitting on the backbone and sniffing everything), along with an order that you tell no one that they are assembling evidence. After that, the ISPs for those IP addresses will get national security directives and subpoenas to provide their logs and identify the subscribers behind each address, and they would be silenced as well.
This is the new Soviet Amerika. Welcome to the gulag, comrade.
She owns both domain names and points both of them to her site. If you use libertypost.net as your URL, it mostly works but you won't have the same browser cookies (it seems) so it may look or act a little differently on a few screens. You might have to sign in again or whatnot.
Libertypost.net points to the server for libertypost.org. Libertypost.com and libertypost.us are owned by the USPS and that isn't sinister either.
Pointing multiple domains at a single server or server farm is a very common practice. This is how places like Amazon and Newegg work too.
Libertypost.com and libertypost.us are owned by the USPS and that isn't sinister either.
Huh? Who would have known that LP is owned by the Post Office. So there is no real Goldi, it's some person sitting in a post office somewhere running the show. Well that explains why "Goldi" goes "postal" at times I guess.
What sort of detective work did you do to arrive at that conclusion?
Huh? Who would have known that LP is owned by the Post Office. So there is no real Goldi, it's some person sitting in a post office somewhere running the show. Well that explains why "Goldi" goes "postal" at times I guess.
USPS owns libertypost.us and libertypost.com. I can't figure out why but I assume it is historical.
Goldi owns libertypost.org and libertypost.net.
What sort of detective work did you do to arrive at that conclusion?
Umm...I looked them up at GoDaddy.com?
Just enter the domain name and hit Search. When the page comes up to tell you the name is already taken, click the link for info on who owns the domain.
Quite revealing. It's easy to find this stuff out.
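The same lookup works from the command line without going through a registrar's website; the `whois` tool queries the public registry records directly. (The grep pattern is a guess at the record layout, which varies by registrar.)

```shell
# Look up a domain's registration record and pick out the owner lines.
whois libertypost.org | grep -i 'registrant'
```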