
Introduction

Information gathering, also called footprinting, means collecting all kinds of target information: open ports, DNS records, employee email addresses, and so on.
Information gathering is the most important phase of a penetration test; it is often said to account for 60% of the whole engagement, which shows just how much it matters. The useful information you collect can greatly raise the odds of a successful test.




Let's Start Gathering

1. 灯塔 (ARL)

Personally I find a few of 灯塔's features really handy, for example:
(1) Subdomain collection

Subdomains

(2) C-segment collection

C-segment

The other modules, like site identification and file-leak detection, usually don't turn up much, but you may as well enable them when you scan; there could be a pleasant surprise. I mostly stick to (1) and (2) myself. For another quick source of subdomains beyond 灯塔, see the sketch below.
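If you don't have a 灯塔 instance handy, certificate-transparency logs are another quick source of subdomains. Here is a minimal sketch of the idea, assuming crt.sh's public JSON endpoint and Python 3 (this is my own supplement, not a 灯塔 feature):

import json
import urllib.request

def crtsh_subdomains(domain):
    # crt.sh returns every cert whose name matches %.<domain>
    url = 'https://crt.sh/?q=%25.' + domain + '&output=json'
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    data = urllib.request.urlopen(req, timeout=15).read()
    names = set()
    for entry in json.loads(data.decode('utf-8')):
        # name_value can pack several hostnames separated by newlines
        for name in entry['name_value'].split('\n'):
            if not name.startswith('*.'):  # skip wildcard entries
                names.add(name.lower())
    return sorted(names)

if __name__ == '__main__':
    for sub in crtsh_subdomains('example.com'):
        print(sub)

The output feeds straight into the same export workflow described next.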

2. Checking the Subdomains

Once you have the subdomains you can export them, which is extremely useful. The exported list carries no scheme prefix, though, so to make later tools easier to feed I wrote two scripts that prepend the scheme and probe which subdomains actually respond. They are identical except for the scheme and output filenames:

(1) http://

import threading
import sys

is_py2 = (sys.version_info[0] == 2)
if is_py2:
    import Queue
    import urllib2
    workQueue = Queue.Queue()
    QueueEmpty = Queue.Empty
else:
    import queue
    import urllib.request
    workQueue = queue.Queue()
    QueueEmpty = queue.Empty

output_file_200=open('http_checked_200.txt','a+')
output_file_302=open('http_checked_302.txt','a+')
error_file=open('http_error.txt','a+')

queueLock = threading.Lock()
thread_num = 50
threads = []


class MyThread(threading.Thread):
    def __init__(self, q, id):
        threading.Thread.__init__(self)
        self.q = q
        self.id = id

    def run(self):
        # get_nowait() avoids blocking forever if another thread
        # drains the queue between an empty() check and a get()
        while True:
            try:
                url = self.q.get_nowait()
            except QueueEmpty:
                break
            check_online(url)


def check_online(url):
    url = 'http://' + url
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'}
        if is_py2:
            request = urllib2.Request(url=url, headers=headers)
            html = urllib2.urlopen(request)
        else:
            request = urllib.request.Request(url=url, headers=headers)
            html = urllib.request.urlopen(request)
        # urlopen follows redirects by default, so most live hosts come
        # back as 200; a bare 302 only surfaces if the redirect fails
        status_code = html.code
        if status_code == 200:
            queueLock.acquire()
            output_file_200.write(url + '\n')
            output_file_200.flush()
            print_color("[+] %s 200 ok" % url, 'blue')
            queueLock.release()
        elif status_code == 302:
            queueLock.acquire()
            output_file_302.write(url + '\n')
            output_file_302.flush()
            print_color("[+] %s 302 ok" % url, 'gray')
            queueLock.release()
    except Exception as e:
        error_file.write('%s    %s\n' % (url, str(e)))
        error_file.flush()
        print(str(e))

def print_color(data,color="white"):
    if color == 'green': print('\033[1;32m%s\033[1;m' % data)
    elif color == 'blue' : print('\033[1;34m%s\033[1;m' % data)
    elif color=='gray' : print('\033[1;30m%s\033[1;m' % data)
    elif color=='red' : print('\033[1;31m%s\033[1;m' % data)
    elif color=='yellow' : print('\033[1;33m%s\033[1;m' % data)
    elif color=='magenta' : print('\033[1;35m%s\033[1;m' % data)
    elif color=='cyan' : print('\033[1;36m%s\033[1;m' % data)
    elif color=='white' : print('\033[1;37m%s\033[1;m' % data)
    elif color=='crimson' : print('\033[1;38m%s\033[1;m' % data)
    else : print(data)

logo='''
               __                  ___                     __              __  
  ____ _____ _/ /     ____  ____  / (_)___  ___      _____/ /_  ___  _____/ /__
 / __ `/ __ `/ /_____/ __ \/ __ \/ / / __ \/ _ \    / ___/ __ \/ _ \/ ___/ //_/
/ /_/ / /_/ / /_____/ /_/ / / / / / / / / /  __/   / /__/ / / /  __/ /__/ ,<   
\__, /\__, /_/      \____/_/ /_/_/_/_/ /_/\___/____\___/_/ /_/\___/\___/_/|_|  
  /_//____/                                  /_____/                                                                

An adaptive URL online checker for python2 and python3
'''


def main():
    print_color(logo,'green')
    if len(sys.argv)!=2:
        print_color("Usage: python online-checker.py filename",'blue')
        exit()

    with open(sys.argv[1], 'r') as f:
        for line in f:
            workQueue.put(line.strip())
    for i in range(thread_num):
        thread = MyThread(workQueue, i)
        thread.start()
        threads.append(thread)
    for t in threads:
        t.join()

if __name__ == '__main__':
    main()

(2) https://

import threading
import sys

is_py2 = (sys.version_info[0] == 2)
if is_py2:
    import Queue
    import urllib2
    workQueue = Queue.Queue()
    QueueEmpty = Queue.Empty
else:
    import queue
    import urllib.request
    workQueue = queue.Queue()
    QueueEmpty = queue.Empty

output_file_200=open('https_checked_200.txt','a+')
output_file_302=open('https_checked_302.txt','a+')
error_file=open('https_error.txt','a+')

queueLock = threading.Lock()
thread_num = 50
threads = []


class MyThread(threading.Thread):
    def __init__(self, q, id):
        threading.Thread.__init__(self)
        self.q = q
        self.id = id

    def run(self):
        # get_nowait() avoids blocking forever if another thread
        # drains the queue between an empty() check and a get()
        while True:
            try:
                url = self.q.get_nowait()
            except QueueEmpty:
                break
            check_online(url)



def check_online(url):
    url = 'https://' + url
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'}
        if is_py2:
            request = urllib2.Request(url=url, headers=headers)
            html = urllib2.urlopen(request)
        else:
            request = urllib.request.Request(url=url, headers=headers)
            html = urllib.request.urlopen(request)
        # urlopen follows redirects by default, so most live hosts come
        # back as 200; a bare 302 only surfaces if the redirect fails
        status_code = html.code
        if status_code == 200:
            queueLock.acquire()
            output_file_200.write(url + '\n')
            output_file_200.flush()
            print_color("[+] %s 200 ok" % url, 'blue')
            queueLock.release()
        elif status_code == 302:
            queueLock.acquire()
            output_file_302.write(url + '\n')
            output_file_302.flush()
            print_color("[+] %s 302 ok" % url, 'gray')
            queueLock.release()
    except Exception as e:
        error_file.write('%s    %s\n' % (url, str(e)))
        error_file.flush()
        print(str(e))

def print_color(data,color="white"):
    if color == 'green': print('\033[1;32m%s\033[1;m' % data)
    elif color == 'blue' : print('\033[1;34m%s\033[1;m' % data)
    elif color=='gray' : print('\033[1;30m%s\033[1;m' % data)
    elif color=='red' : print('\033[1;31m%s\033[1;m' % data)
    elif color=='yellow' : print('\033[1;33m%s\033[1;m' % data)
    elif color=='magenta' : print('\033[1;35m%s\033[1;m' % data)
    elif color=='cyan' : print('\033[1;36m%s\033[1;m' % data)
    elif color=='white' : print('\033[1;37m%s\033[1;m' % data)
    elif color=='crimson' : print('\033[1;38m%s\033[1;m' % data)
    else : print(data)

logo='''
               __                  ___                     __              __  
  ____ _____ _/ /     ____  ____  / (_)___  ___      _____/ /_  ___  _____/ /__
 / __ `/ __ `/ /_____/ __ \/ __ \/ / / __ \/ _ \    / ___/ __ \/ _ \/ ___/ //_/
/ /_/ / /_/ / /_____/ /_/ / / / / / / / / /  __/   / /__/ / / /  __/ /__/ ,<   
\__, /\__, /_/      \____/_/ /_/_/_/_/ /_/\___/____\___/_/ /_/\___/\___/_/|_|  
  /_//____/                                  /_____/                           

An adaptive URL online checker for python2 and python3
'''
def main():
    print_color(logo,'green')
    if len(sys.argv)!=2:
        print_color("Usage: python online-checker.py filename",'blue')
        exit()

    with open(sys.argv[1], 'r') as f:
        for line in f:
            workQueue.put(line.strip())
    for i in range(thread_num):
        thread = MyThread(workQueue, i)
        thread.start()
        threads.append(thread)
    for t in threads:
        t.join()

if __name__ == '__main__':
    main()

After exporting, paste the list into 1.txt and you can happily check which hosts respond, e.g. python online-checker.py 1.txt

Export

The results land in the corresponding txt files.

Results

After that you can run EHole (a fingerprinting tool) over the results to flag high-value assets. But no tool is as smart as a human; I like to open every discovered URL one by one, since some unremarkable corner might hide a logic flaw. A sketch of the favicon-hash trick such tools rely on follows below.
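For a sense of what tools like EHole key on, one common signal is the favicon's MurmurHash3 value, the same hash Shodan indexes. A minimal sketch of the trick, assuming the third-party mmh3 and requests packages; the lookup table is a tiny illustrative sample, not EHole's real fingerprint set:

import base64
import mmh3      # pip install mmh3
import requests

# Illustrative sample only: mmh3 favicon hash -> product guess
KNOWN_FAVICONS = {
    116323821: 'Spring Boot',
}

def favicon_hash(site):
    # Shodan-style hash: mmh3 over the base64 of the favicon body,
    # keeping the newlines that base64.encodebytes inserts
    resp = requests.get(site.rstrip('/') + '/favicon.ico', timeout=5, verify=False)
    return mmh3.hash(base64.encodebytes(resp.content))

if __name__ == '__main__':
    h = favicon_hash('https://example.com')  # placeholder target
    print(h, KNOWN_FAVICONS.get(h, 'unknown'))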

3. Scanning the C-Segment

When a bunch of IPs like these sit in the same C-segment (/24), you can hand the whole segment to goby; it usually turns up lots of neighboring sites, which is very useful. (A sketch for picking out the busiest /24s follows the screenshot.)

Many IPs
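Before firing up goby, it helps to know which /24s hold the most targets. A minimal standard-library sketch for picking out the busiest segments, assuming a file ip.txt with one IP per line (e.g. from the 灯塔 export):

import ipaddress
from collections import Counter

counts = Counter()
with open('ip.txt') as f:          # one IP per line
    for line in f:
        line = line.strip()
        if line:
            # Map each IP to its surrounding /24 network
            net = ipaddress.ip_network(line + '/24', strict=False)
            counts[str(net)] += 1

# Segments holding several of your targets are the ones worth scanning
for net, n in counts.most_common():
    print('%-18s %d ip(s)' % (net, n))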

4. Port Scanning

Port scanning matters just as much. Two recommendations:

0x01 The industry champion: Nmap
Nmap is the oldest port scanner and still the most widely used one today; security practitioners, ops, and developers alike reach for it to verify that a remote service is up and a port is open. Over years of development it has grown well past simple port checks: it sends port-specific payloads to grab banners and fingerprint whatever service is listening, and its built-in script engine can even probe for vulnerabilities, covering the whole flow from port scan to vulnerability detection.
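The combined script further down drives Nmap through the third-party python-nmap bindings. As a minimal standalone taste of that API (Python 3; host and ports are placeholders):

import nmap  # pip install python-nmap

nm = nmap.PortScanner()
# -sV asks Nmap to identify the service/version behind each open port
ret = nm.scan('192.0.2.1', '80,443', arguments='-sV')
for port, info in ret['scan'].get('192.0.2.1', {}).get('tcp', {}).items():
    print(port, info['name'], info.get('product', ''))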

0x02 The rising star: masscan
masscan was built to scan every port on the entire Internet, and it is blazingly fast. Its core idea is asynchronous scanning: where Nmap scans synchronously, masscan can transmit and track many probes at once. In theory it can handle up to 10 million packets per second; the practical ceiling is the TCP/IP stack and the horsepower of the host running the scan. Its strengths are raw speed and its distinctive probe randomization; its weakness is that it only accepts IPs or IP ranges, never domain names.

So what if we combine the two? A guaranteed killer combo, fast and accurate (note the script below is written for Python 2):


#!/usr/bin/python
# coding: utf-8


import nmap
import datetime
import time
import threading
import requests
import chardet
import re
import json
import os
import sys
import socket
import Queue

requests.packages.urllib3.disable_warnings()

reload(sys)
sys.setdefaultencoding('utf-8')

final_url = []
ips = []


class PortScan(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        while not self._queue.empty():
            scan_ip = self._queue.get()
            try:
                # masscan finds the open ports, nmap identifies the services
                found_ports = Masportscan(scan_ip)
                Nmapscan(scan_ip, found_ports)
            except Exception as e:
                print e


# Run masscan against one IP and return the open ports it finds
def Masportscan(scan_ip):
    temp_ports = []
    out_file = 'masscan_%s.json' % scan_ip  # per-IP file so threads don't clash
    os.system('masscan ' + scan_ip + ' -p 1-65535 -oJ ' + out_file + ' --rate 1000')
    # Pull the port numbers out of masscan's JSON output
    with open(out_file, 'r') as f:
        for line in f:
            if line.startswith('{ '):
                temp = json.loads(line[:-2])
                temp_ports.append(str(temp["ports"][0]["port"]))
    os.remove(out_file)
    if len(temp_ports) > 50:
        # More than 50 open ports usually means a firewall answering on
        # everything, so treat the host as a false positive and discard it
        return []
    return temp_ports


# Run nmap service detection on the ports masscan found
def Nmapscan(scan_ip, found_ports):
    nm = nmap.PortScanner()
    try:
        for port in found_ports:
            ret = nm.scan(scan_ip, port, arguments='-sV')
            service_name = ret['scan'][scan_ip]['tcp'][int(port)]['name']
            print '[*] Host ' + scan_ip + ' port ' + str(port) + ' service: ' + service_name
            # nmap labels some HTTP services as sun-answerbook
            if 'http' in service_name or service_name == 'sun-answerbook':
                if service_name == 'https' or service_name == 'https-alt':
                    scan_url_port = 'https://' + scan_ip + ':' + str(port)
                else:
                    scan_url_port = 'http://' + scan_ip + ':' + str(port)
                Title(scan_url_port, service_name)
            else:
                with open('result.txt', 'ab+') as f:
                    f.writelines(scan_ip + '\t\t' + 'port: ' + str(port) + '\t\t' + service_name + '\n')
    except Exception as e:
        print e


# Grab the web server banner and the page title
def Title(scan_url_port, service_name):
    try:
        r = requests.get(scan_url_port, timeout=3, verify=False)
        # Detect the page encoding so non-ASCII titles decode correctly
        actual_encode = chardet.detect(r.content)['encoding']
        response = re.findall(r'<title>(.*?)</title>', r.content, re.S)
        if not response:
            with open('result.txt', 'ab+') as f:
                f.writelines('[*] Website: ' + scan_url_port + '\t\t' + service_name + '\n')
        else:
            # Re-encode the title as utf-8 before writing it out
            res = response[0].decode(actual_encode).encode('utf-8')
            banner = r.headers.get('server', 'unknown')
            with open('result.txt', 'ab+') as f:
                f.writelines('[*] Website: ' + scan_url_port + '\t\t' + banner + '\t\t' + 'Title: ' + res + '\n')
    except Exception as e:
        print e


# Deduplicate the scan results
def Removedup():
    if not os.path.exists('result.txt'):
        return
    for line in open('result.txt', 'rb'):
        if line not in final_url:
            final_url.append(line)
            with open('final_result.txt', 'ab+') as f:
                f.writelines(line)
    time.sleep(1)
    os.remove('result.txt')
    # Extract just the URLs into url.txt for later tooling
    for line in open('final_result.txt', 'rb'):
        if 'Website' in line:
            line = line.strip('\n\r\t').split('\t\t')[0].replace('[*] Website: ', '')
            with open('url.txt', 'ab+') as f:
                f.writelines(line + '\n')


# Resolve each subdomain to its IP
def Get_domain_ip():
    f = open(r'subdomain.txt', 'rb')
    for line in f.readlines():
        try:
            domain = line.strip('\n\r\t')
            # Resolve without the www. prefix so the bare domain is used
            ip = socket.gethostbyname(domain.replace('www.', ''))
            print domain, ip
            with open('subdomain-ip.txt', 'ab+') as out:
                out.writelines(domain + '\t\t' + ip + '\n')
        except Exception as e:
            print e
    f.close()
    time.sleep(1)
    # Deduplicate the resolved IPs into ip.txt
    ip_temps = []
    l = open(r'subdomain-ip.txt', 'rb')
    for line in l.readlines():
        ips.append(line.strip('\n\t\r').split('\t\t')[-1])
    l.close()
    for ip_temp in ips:
        if ip_temp not in ip_temps:
            ip_temps.append(ip_temp)
    with open('ip.txt', 'ab+') as out:
        for ip in ip_temps:
            out.writelines(ip + '\n')
    time.sleep(1)


# Feed the resolved IPs into a queue and fan out the scanner threads
def Multithreading():
    queue = Queue.Queue()
    f = open(r'ip.txt', 'rb')
    for line in f.readlines():
        final_ip = line.strip('\n')
        queue.put(final_ip)
    threads = []
    thread_count = 200
    for i in range(thread_count):
        threads.append(PortScan(queue))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    f.close()


# If ip.txt already exists, scan it directly; otherwise resolve the subdomains first
def main():
    try:
        if os.path.exists('ip.txt'):
            Multithreading()
        else:
            Get_domain_ip()
            Multithreading()
    except Exception as e:
        print e
        pass


if __name__ == '__main__':
    start_time = datetime.datetime.now()
    main()
    Removedup()
    spend_time = (datetime.datetime.now() - start_time).seconds
    print 'Total running time: ' + str(spend_time) + ' seconds'

5. Directory Scanning

Once you've settled on a target site, it's time to scan for directories: tools first, manual digging after. (A minimal sketch of the idea follows the screenshot.)

Directory scan
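dirsearch and its peers do the heavy lifting, but the underlying idea is simple enough to sketch. A minimal version, assuming a wordlist dirs.txt (one path per line), the third-party requests package, and a placeholder target:

import requests

TARGET = 'https://example.com'   # placeholder target
HEADERS = {'User-Agent': 'Mozilla/5.0'}

with open('dirs.txt') as f:
    for path in f:
        path = path.strip().lstrip('/')
        if not path:
            continue
        url = '%s/%s' % (TARGET, path)
        try:
            r = requests.get(url, headers=HEADERS, timeout=5,
                             verify=False, allow_redirects=False)
        except requests.RequestException:
            continue
        # 404s are noise; anything else deserves a manual look
        if r.status_code != 404:
            print(r.status_code, url)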

6. The Wappalyzer plugin for middleware, the many online fingerprinting sites for the CMS

Middleware | CMS
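A fair share of what these tools report comes straight from response headers, which you can check yourself. A minimal sketch; the header list is a small illustrative subset, not Wappalyzer's actual ruleset:

import requests

def quick_fingerprint(url):
    # Print the response headers that most often leak the stack
    r = requests.get(url, timeout=5, verify=False)
    for header in ('Server', 'X-Powered-By', 'X-AspNet-Version',
                   'X-Generator', 'Set-Cookie'):
        if header in r.headers:
            print('%s: %s' % (header, r.headers[header]))

quick_fingerprint('https://example.com')  # placeholder target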

7. Google Dorks

Dork examples 1

Dork examples 2

Others:
site:xxx.com inurl:file|load|editor|Files
site:xxx.com inurl:ewebeditor|editor|uploadfile|eweb|edit
...and lots more; it depends on what you're looking for.