Nothing fancy, just a casual experiment: jotting down the code, and taking a look at CDP as a replacement for Selenium along the way.


I deliberately captured a run that fails once before succeeding: https://clericpy.github.io/blog/images/bilibili_demo.gif

Approach

  1. Vote for the most likely left boundary based on edge/corner features around the target box
    1. The feature crops cover the left edge / top-left / bottom-left corners, whose matches give the left boundary directly, and the right edge / top-right / bottom-right corners, whose matches give the left boundary after subtracting the 42px box width
    2. The feature images are tiny (around 2 * 5 px), so embedding them as base64 is enough
    3. The left-boundary result needs a small extra negative correction (around -5; the code below uses -7), because the slider does not start at position 0
    4. Group candidates by a small tolerance: values within 3px of each other count as one group, and the group's median becomes its result
    5. Vote by group: the three groups with the most votes are closest to the real position, so just try dragging to each of them in order
  2. If all three offsets fail, refresh the page and start over.
  3. For a human-like drag trajectory, pyautogui is used, which means the browser window must stay visible in the foreground
    1. Chrome CDP actually has an equivalent, https://chromedevtools.github.io/devtools-protocol/tot/Input#method-dispatchMouseEvent , but it works by dispatching mouse events without moving the real pointer, so it gets detected essentially 100% of the time.
    2. pyautogui is far less likely to be flagged as a bot and is fast too. CDP-simulated drags do support headless background operation, but are almost always identified as machine input; making them pass would require forging a human drag trajectory.
    3. Comparing mouse operation via CDP and via pyautogui: the former produces dozens of mousemove events, while the latter produces only two (a counting sketch follows this list).
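
Item 3.3 is easy to verify yourself. A minimal sketch using the same ichrome tab API as the code further down (the counter name __mm and this helper are made up for illustration):

async def count_mousemoves(tab):
    # Count how many mousemove events the page actually sees during one drag
    await tab.js(
        "window.__mm = 0;"
        "document.addEventListener('mousemove', () => { window.__mm += 1 })")
    # ... perform the drag here, e.g. await cdp_drag(x, tab) or drag(x) ...
    return await tab.get_variable('window.__mm')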

Code

import asyncio
import base64
import json
import random
import numpy as np
import pyautogui
from ichrome import AsyncChrome, AsyncChromeDaemon, AsyncTab
from pyscreeze import locateAll

import cv2

BUTTON_IMG_B64 = b'iVBORw0KGgoAAAANSUhEUgAAACMAAAAdCAYAAAAgqdWEAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAJ6SURBVEiJ7ZdNaxNRFIafM5nJ2LQpljZSKtpWqIgiuBN3bsS9/8GVv8KV+BN0405B8GulgkLVXVV0YSmtFDREtF8G6XQSm4/jopNk7sydkp1Z5IXAPee999x35j3MvZFms6kMCJz/LSCOoZgsuNrtmM5AAEWMbBSJ9qbFIdFEIcGbCbGn0Sh209XVkAZQDX+xE1Yo+OOUCrN4uXyX2wrKbIcVJgszHBub6+Zb7QYbu9/Za1SZGj3BxMg0qedOxH3Z9LHygicrt3n97Q5BY8fg3qw/4OHyTRYr94x8rRHwcu0uj1du8XnzeT/b9CdmeXORteAtn4JHhK2qwS39eEp5f4n3u/eNfL0Z8uHnM9b/vmM1fNWXGFe142XangNvhbD1m7a/R+hXacs+oKgerNptbOCM7xPkyyiKRAWUJnu6hTdap+ZtoFFzSlTT7NFIDKi1J3vKFEQRV3G8NgiodklwQDxFvDYk+9sBx1Vwe1ljbeLx3SwdBiTSLxlc9LNxZHEWDN53pqtc469PUOsTqem/QR0UU02UitbZIVHNTs9Y5/UaQDsftITe5KZqTNDMeVkiB8qmoZgsuJptphWqvQa2cbaxLbZhoN5MX2Jy4iIIOXySXz7XyUfHv3/IOq8vMQmbUhcSAGaKp9muf2HKL5F3CsYrnz96gbL8Ye7IOSPvis/J8fMEzjrT+bMWm9J7JY4Du68Xj19jMpyiNDZLMVcyuCunrvO1vsDCxCUjP+IVuTp/g+32KmeKly1V03tJrVYb/juwwVXVWEuKecsQ6F0xpBMC0ZUkxsRODANGvW4Qy8aGbnxOamQYqMkwxRzmt32tebANlE1DMVn4Bx8DGiTD+XldAAAAAElFTkSuQmCC'
BUTTON_IMG_DATA = np.frombuffer(base64.b64decode(BUTTON_IMG_B64), np.uint8)
BUTTON_IMG = cv2.imdecode(BUTTON_IMG_DATA, cv2.IMREAD_UNCHANGED)
BUTTON_IMG = cv2.cvtColor(BUTTON_IMG, cv2.COLOR_BGR2GRAY)


def get_image_b64():
    # Dev-time helper: dump each feature PNG as a (name, offset, base64) tuple,
    # ready to paste into the hardcoded lists in get_x_positions below.
    names = (
        ('zuo', 1),
        ('zuoxiazuo', 1),
        ('zuoxiaxia', 1),
        ('zuoshangshang', 1),
        ('zuoshangzuo', 1),
        ('you', 1),
        ('you2', 1),
        ('youxiaxia', 9),
        ('youxiayou', 1),
    )
    for name, offset in names:
        with open(f'{name}.png', 'rb') as f:
            b64_string = base64.b64encode(f.read())
        print((name, offset, b64_string))


def get_most_common(items, diff=5):
    # Group values whose gaps stay within `diff` pixels, then sort the groups
    # by size: a simple voting strategy.
    result = []
    chunk = []
    s = 0
    for i in sorted(items):
        if i - s > diff:
            result.append(chunk)
            chunk = []
        chunk.append(i)
        s = i
    if chunk:
        # don't drop the final group
        result.append(chunk)
    result.sort(key=lambda x: len(x), reverse=True)
    print('Coordinate groups:', result)
    # Take the median of each group, then keep only plausible x positions
    result = [i[len(i) // 2] for i in result if i]
    result = [i for i in result if 220 > i > 0]
    return result
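
# A quick sanity check of the voting above, on made-up toy data: gaps within
# `diff` pixels collapse into one group, the biggest groups win, and each
# group contributes its median:
#   get_most_common([10, 11, 12, 50, 51, 99], diff=2)  # -> [11, 51, 99]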


def get_x_positions(pic=None):
    if pic:
        np_data = np.frombuffer(base64.b64decode(pic), np.uint8)
        bg = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
    else:
        bg = cv2.imread('bg.png')
    bg = cv2.cvtColor(bg, cv2.COLOR_BGR2GRAY)
    ret, bg = cv2.threshold(bg, 150, 255, cv2.THRESH_BINARY)
    items = []
    # The images below are tiny edge/corner feature crops of the black target
    # box; if curious, prepend data:image/png;base64, and view one in a browser.
    # A match position plus its offset gives the actual left vertical edge of
    # the square. First, the left-side features:
    for name, offset, b64_string in (
        (
            'zuo',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAIAAAAICAYAAADTLS5CAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAXSURBVAiZY/j///9/BgaG/0wMUEAeAwBW+QUKzThPQQAAAABJRU5ErkJggg==',
        ),
        (
            'zuoxiazuo',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAIAAAAKCAYAAACe5Y9JAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAbSURBVAiZY/j///9/BgaG/0wMUEANxv///xkAutoICy4BHW8AAAAASUVORK5CYII=',
        ),
        (
            'zuoxiaxia',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAA4AAAACCAYAAABoiu2qAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAeSURBVAiZY/z///9/RkZGBlIBCwMDA8P///9J1ggAV9MHAkyrA+YAAAAASUVORK5CYII=',
        ),
        (
            'zuoshangshang',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAABIAAAAFCAYAAABIHbx0AAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAmSURBVBiVrc4BDQAgAMOwnuDf8hHBJ6BZ2tags0DgQpJvaHY0gx6xZAQL5ARySgAAAABJRU5ErkJggg==',
        ),
        (
            'zuoshangzuo',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAIAAAAKCAYAAACe5Y9JAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAfSURBVAiZY/z///9/BgYGBiYGBgYGRkZGCAMuQj4DAGFwBBPQx9kIAAAAAElFTkSuQmCC',
        ),
    ):
        # front = cv2.imread(f'{name}.png')
        np_data = np.frombuffer(base64.b64decode(b64_string), np.uint8)
        front = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
        front = cv2.cvtColor(front, cv2.COLOR_BGR2GRAY)
        result = locateAll(front, bg)
        xs = set((i.left + offset for i in result))
        items.extend(xs)
        # print(name, xs)
    # Right-side features: subtract the 42px box width to get the left boundary
    for name, offset, b64_string in (
        (
            'you',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAMAAAAMCAYAAACnfgdqAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAbSURBVAiZY2RgYPj///9/BgYGBgYmBiQw8BwAX7AEFGOnJnMAAAAASUVORK5CYII=',
        ),
        (
            'you2',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAIAAAAXCAYAAAAhrZ4MAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAXSURBVAiZY2RgYPj///9/BiYGKBjSDAA2bwQqmibIMQAAAABJRU5ErkJggg==',
        ),
        (
            'youxiaxia',
            9,
            b'iVBORw0KGgoAAAANSUhEUgAAAAsAAAACCAYAAACOoybuAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAYSURBVAiZY2RgYPjPQAD8/w9RwgJjEAMAcwQG/zbp0osAAAAASUVORK5CYII=',
        ),
        (
            'youxiayou',
            1,
            b'iVBORw0KGgoAAAANSUhEUgAAAAIAAAAICAYAAADTLS5CAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAEXRFWHRTb2Z0d2FyZQBTbmlwYXN0ZV0Xzt0AAAAeSURBVAiZlcYxAQAwDMAgOv+e06cGxsWgynP+UoEFIHgHCV1SfSIAAAAASUVORK5CYII=',
        ),
    ):
        # front = cv2.imread(f'{name}.png')
        np_data = np.frombuffer(base64.b64decode(b64_string), np.uint8)
        front = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
        front = cv2.cvtColor(front, cv2.COLOR_BGR2GRAY)
        result = locateAll(front, bg)
        xs = set((i.left - 42 + offset for i in result))
        items.extend(xs)
        # print(name, xs)
    items = [i for i in items if i > 0]
    items.sort()
    print('All coordinates:', items)
    results = get_most_common(items, 2)
    print('Coordinate list:', results)
    # cv2.imshow('', bg)
    # cv2.waitKey(0)
    return results
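

# For reference: with OpenCV installed, pyscreeze.locateAll boils down to
# cv2.matchTemplate. A minimal hypothetical equivalent (not used above; the
# 0.95 threshold is a guess):
def locate_all_cv2(needle_gray, haystack_gray, threshold=0.95):
    res = cv2.matchTemplate(haystack_gray, needle_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res >= threshold)
    # each (x, y) is the top-left corner of one candidate match
    return list(zip(xs.tolist(), ys.tolist()))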


def drag(x):
    # Drag roughly like a human: fast at first, then slow down with small corrections
    button_position = pyautogui.locateOnScreen(BUTTON_IMG, grayscale=True)
    pyautogui.moveTo(button_position)
    pyautogui.mouseDown()
    pyautogui.moveRel(x, 12, duration=0.5)
    pyautogui.moveRel(-10, 1, duration=0.2)
    pyautogui.moveRel(10, -5, duration=0.2)
    # pyautogui.moveRel(-1, duration=0.5)
    pyautogui.mouseUp()
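

# If the straight-line drag above ever starts getting flagged, a noisier
# trajectory is one option: move in shrinking random steps, overshoot a few
# pixels, then drift back. A hypothetical sketch (step sizes are guesses):
def drag_human_like(x):
    button_position = pyautogui.locateOnScreen(BUTTON_IMG, grayscale=True)
    pyautogui.moveTo(button_position)
    pyautogui.mouseDown()
    overshoot = random.randint(4, 9)
    remaining = x + overshoot
    while remaining > 0:
        step = max(1, int(remaining * random.uniform(0.4, 0.7)))
        pyautogui.moveRel(step, random.randint(-2, 2),
                          duration=random.uniform(0.05, 0.15))
        remaining -= step
    # slow correction back onto the target, like a human settling
    pyautogui.moveRel(-overshoot, random.randint(-2, 2), duration=0.4)
    pyautogui.mouseUp()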


async def cdp_drag(x, tab: AsyncTab):
    # Drag to the given offset by dispatching CDP mouse events (kept for comparison)
    print('Trying offset:', x)
    rect = (await tab.get_element_clip('.geetest_slider_button'))
    # print(rect)
    start_x, start_y = rect['left'], rect['top']
    # await tab.drag(start_x, start_y, start_x + x, start_y, duration=2)
    await tab.mouse_move(start_x, start_y)
    # (await tab.mouse_press(x=start_x, y=start_y))
    await tab.mouse_drag_rel_chain(start_x, start_y).move(x + 15, 3).move(-25, -5, 0.5).move(10, 1, 0.5)
    # await tab.mouse_release(start_x + x, start_y)


async def bilibili(user=123, password=123, close_after_checking=True):
    async with AsyncChrome() as chrome:
        tabs = await chrome.tabs
        tab: AsyncTab = tabs[0]
        async with tab():
            await tab.clear_browser_cache()
            await tab.clear_browser_cookies()
            while 1:
                await tab.set_url('https://passport.bilibili.com/login')
                # Fill in the username/password and fire input events so the page reacts
                await tab.js(
                    "var u = document.querySelector('.username >input');u.value = '"
                    + str(user) +
                    "';var p=document.querySelector('.password >input');p.value = '"
                    + str(password) +
                    "';var event = new Event('input', { bubbles: true });u.dispatchEvent(event);p.dispatchEvent(event);setTimeout(()=>{document.querySelector('.btn-login').click()}, 1000)"
                )
                while 1:
                    # Wait for the captcha to show up
                    if 'geetest_canvas_slice' in (await tab.html):
                        break
                    await asyncio.sleep(1)
                # # Alternative: hide the slider to screenshot the background; the bg rect is needed before capturing.
                # rect = (await tab.js(
                #     'document.querySelector(".geetest_canvas_slice").hidden = true;JSON.stringify(document.querySelector(".geetest_canvas_bg").getBoundingClientRect())'
                # ))['result']['result']['value']
                # rect = json.loads(rect)
                # # Screenshot as base64
                # pic = (await tab.send(
                #     'Page.captureScreenshot',
                #     clip=dict(
                #         x=rect['x'],
                #         y=rect['y'],
                #         width=rect['width'],
                #         height=rect['height'],
                #         scale=1,
                #     )))['result']['data']
                # print(rect)
                # Grab the canvas as a base64 PNG; note this image may be rescaled
                pic = (await tab.get_variable(
                    "document.querySelector('.geetest_canvas_bg ').toDataURL('image/png')"
                ))[22:]
                # Saving to a file is only for offline debugging; in practice just keep the base64 in memory
                # with open('bg.png', 'wb') as f:
                #     f.write(base64.b64decode(pic))
                # Get candidate offsets via image matching
                offsets = get_x_positions(pic)[:3]
                print('Offsets to try:', offsets)
                # Wait 1 second, otherwise the screenshot keeps a ghost of the animation
                await asyncio.sleep(1)
                # The top three candidates are enough
                for x in offsets:
                    # Compensate for the slider's starting offset
                    x = float(x - 7)
                    if x < 5:
                        continue
                    # Make sure Chrome is still alive
                    assert (await chrome.ok)
                    # CDP drags only dispatch events and always get caught; driving the real mouse with pyautogui passes far more easily
                    # await cdp_drag(x, tab)
                    drag(x)
                    # Judge success from traffic: a passport.bilibili.com/login request fires when verification passes
                    ok = await tab.wait_response(
                        filter_function=
                        lambda r: 'https://passport.bilibili.com/login' in str(r['params']['response']['url']),
                        timeout=2)
                    err = (await tab.get_variable(
                        "document.querySelector('.geetest_panel_error').style.display"
                    ))
                    if err != 'none':
                        print('Bot detected:', err)
                        break
                    if ok:
                        print('Verification succeeded')
                        # print(ok)
                        # Gracefully close the browser
                        # if close_after_checking:
                        #     await chrome.close_browser()
                        return
                print('Verification failed, retrying')


async def bilibili_with_daemon(user, password):
    async with AsyncChromeDaemon():
        await bilibili(user, password)


def main_with_daemon():
    asyncio.run(bilibili_with_daemon('user', 'password'))


def main_without_daemon():
    # You can start the daemon separately, which makes debugging easier:
    # python -m ichrome --extra_config="--window-size=1212,800"
    asyncio.run(bilibili('user', 'password'))


# get_image_b64()
# get_x_positions()
# Also launches a Chrome daemon; for day-to-day debugging use main_without_daemon
main_with_daemon()
# main_without_daemon()

Log:

All coordinates: [1, 2, 42, 42, 46, 57, 57, 68, 71, 71, 71, 72, 82, 91, 96, 96, 108, 108, 123, 123, 124, 124, 148, 175, 180, 185, 187, 196, 202, 215, 215, 220, 225, 225, 225]
Coordinate groups: [[68, 71, 71, 71, 72], [123, 123, 124, 124], [1, 2], [42, 42], [57, 57], [96, 96], [108, 108], [185, 187], [215, 215], [46], [82], [91], [148], [175], [180], [196], [202], [220]]
Coordinate list: [66, 119, 37, 52, 91, 103, 182, 210, 41, 77, 86, 143, 170, 175, 191, 197, 215]
Trying offset: 66
Trying offset: 119
Verification succeeded

Summary

Recognition isn't especially accurate: the first-try success rate (out of the usual three candidate offsets, the first one being correct) is around 70%, and roughly 90% within two tries. When there are too many interfering straight lines, all three attempts can fail; then just refresh and start over.

The drag-related code in ichrome was copied wholesale from pyautogui, yet it still can't dodge bot detection the way pyautogui does: the monster swallows the slider every single time. I fiddled with this for a long while without finding a good answer; after all, it's just a stream of mouse events, not real mouse motion. pyautogui's drag, by contrast, goes straight through, "faster than 98% of other users".

This code is for learning purposes only; please don't use it commercially. More accurate implementations are a quick search away anyway. If anything here infringes, open an issue and I'll take this post down.