I'm a Python beginner; sharing an approach to gathering information about a website, along with a Python script. For reference only~
Collect common backup-file suffixes plus a few fixed file names; feel free to add or modify these:
# Common backup-file suffixes and fixed base names; extend freely.
suffixList = ['.rar', '.zip', '.sql', '.gz', '.tar', '.bz2', '.tar.gz', '.bak', '.dat']
keyList = ['install', 'INSTALL', 'index', 'INDEX', 'ezweb', 'EZWEB', 'flashfxp', 'FLASHFXP']

# Read the target URL, then strip the scheme and any trailing path.
print("Please input the URL:")
url = input().strip()
if url[:5] == 'http:':
    url = url[7:].strip()
if url[:6] == 'https:':
    url = url[8:].strip()
numT = url.find('/')
if numT != -1:
    url = url[:numT]  # keep only the host part (the original `url - url[:numT]` was a bug)

# Derive some targeted file names from the URL, e.g. for www.test.com:
num1 = url.find('.')
num2 = url.find('.', num1 + 1)
keyList.append(url[num1 + 1:num2])                        # test
keyList.append(url[num1 + 1:num2].upper())
keyList.append(url)                                       # www.test.com
keyList.append(url.upper())
keyList.append(url.replace('.', '_'))                     # www_test_com
keyList.append(url.replace('.', '_').upper())
keyList.append(url.replace('.', ''))                      # wwwtestcom
keyList.append(url.replace('.', '').upper())
keyList.append(url[num1 + 1:])                            # test.com
keyList.append(url[num1 + 1:].upper())
keyList.append(url[num1 + 1:].replace('.', '_'))          # test_com
keyList.append(url[num1 + 1:].replace('.', '_').upper())

# Build the wordlist and write it to a txt file:
tempList = []
for key in keyList:
    for suff in suffixList:
        tempList.append(key + suff)
fobj = open("success.txt", 'w')
for each in tempList:
    each = '/' + each
    fobj.write('%s%s' % (each, '\n'))
fobj.flush()
fobj.close()
print('OK!')
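As a side note, the manual scheme/path stripping and the index arithmetic can be done more robustly with the standard library. Below is a minimal sketch of the same idea using `urllib.parse` and `itertools.product`; the function name `build_wordlist` and the `www.test.com` example are my own assumptions, and it assumes a `www.`-style host just like the original `num1` logic does:

```python
from urllib.parse import urlparse
from itertools import product

def build_wordlist(target):
    # urlparse only fills in netloc when a scheme is present, so add one if missing.
    if '://' not in target:
        target = 'http://' + target
    host = urlparse(target).netloc            # e.g. 'www.test.com'
    parts = host.split('.')                   # ['www', 'test', 'com']
    bases = {host, host.replace('.', '_'), host.replace('.', '')}
    if len(parts) >= 2:
        bases.add(parts[1])                   # 'test' (assumes a www-style prefix)
        bases.add('.'.join(parts[1:]))        # 'test.com'
        bases.add('_'.join(parts[1:]))        # 'test_com'
    bases |= {b.upper() for b in bases}       # add the uppercase variants
    suffixes = ['.rar', '.zip', '.sql', '.gz', '.tar', '.bz2', '.tar.gz', '.bak', '.dat']
    return ['/' + b + s for b, s in product(sorted(bases), suffixes)]

words = build_wordlist('http://www.test.com/index.php')
```

Using a set for the base names also deduplicates entries automatically (e.g. when a name and its uppercase form coincide), which the list-based version above does not.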
This article was written by Mr.Bai; please credit the author when reposting.