First, a note on the test environment used here; there is no guarantee that other environments will behave the same way.
To be clear, what we are doing is connecting to scrcpy-server directly. If you simply want to display and control the phone screen, the author publishes scrcpy.exe on GitHub which can be run as-is; here we are effectively building another scrcpy so that we can implement our own custom control. Connecting to scrcpy-server is also described in the development documentation on GitHub: https://github.com/Genymobile/scrcpy/blob/master/doc/develop.md. This article goes into the concrete details of establishing that connection. To describe those details more precisely, all the operations below are demonstrated with Qt code.
Everything needed to start scrcpy-server goes through adb, so if you are not familiar with adb commands it is worth learning them first. Before connecting a device, make sure the "USB debugging" switch is enabled on the phone. Devices are connected with adb connect. In Qt, adb commands are executed via QProcess; here we wrap an adb utility class to make running commands convenient:
// Header file
#pragma once

#include <qobject.h>
#include <qprocess.h>

/**
 * @brief Wrapper class for executing adb commands
 */
class AdbCommandRunner {
public:
    explicit AdbCommandRunner(const QString& deviceName = QString());
    ~AdbCommandRunner();

    /**
     * @brief Run an adb command
     * @param cmds argument list
     * @param waitForFinished whether to wait for the command to finish
     */
    void runAdb(const QStringList& cmds, bool waitForFinished = true);

    /**
     * @brief Get the error output of the last command
     * @return
     */
    QString getLastErr();

    QString lastFeedback; // stdout returned by the last command

private:
    QProcess process;
    QString deviceName;
};
//cpp
#include "adbcommandrunner.h"

#include <qdebug.h>

AdbCommandRunner::AdbCommandRunner(const QString &deviceName)
    : deviceName(deviceName)
{}

AdbCommandRunner::~AdbCommandRunner() {
    if (process.isOpen()) {
        process.kill();
        process.waitForFinished();
    }
}

void AdbCommandRunner::runAdb(const QStringList &cmds, bool waitForFinished) {
    if (deviceName.isEmpty()) {
        process.start("adb/adb", cmds);
    } else {
        process.start("adb/adb", QStringList({"-s", deviceName}) + cmds);
    }
    qDebug() << "do adb execute command:" << "adb " + cmds.join(' ');
    if (waitForFinished) {
        process.waitForFinished();
    }
    lastFeedback = process.readAllStandardOutput();
}

QString AdbCommandRunner::getLastErr() {
    QString failReason = process.readAllStandardError();
    if (failReason.isEmpty()) {
        failReason = lastFeedback;
    }
    return failReason;
}
Note that the adb server runs in the background. We could run adb connect directly and adb would start the server automatically, but starting the server takes a few seconds, so running it straight through QProcess introduces a noticeable wait. The better approach is to run adb start-server first; this step can be done in a worker thread:
QThread::create([] {
    QProcess process;
    process.start("adb/adb", {"start-server"});
    process.waitForFinished();
    if (process.exitCode() == 0 && process.exitStatus() == QProcess::NormalExit) {
        qDebug() << "adb server start finished!";
    } else {
        qDebug() << "adb server start failed:" << process.readAll();
    }
})->start();
If the server has started successfully and the device exists, connecting takes almost no time:
bool connectToDevice() {
    AdbCommandRunner runner;
    runner.runAdb({"connect", deviceAddress});
    if (runner.lastFeedback.contains("cannot connect to")) {
        qDebug() << "connect device:" << deviceAddress << "failed, error:" << runner.getLastErr();
        return false;
    }
    qInfo() << "connect device:" << deviceAddress << "success!";
    return true;
}
Pushing the server file is naturally done with adb push; it is recommended to push it to the temporary directory /data/local/tmp:
bool pushServiceToDevice() {
    auto scrcpyFilePath = QDir::currentPath() + "/scrcpy/scrcpy-server";
    qDebug() << "scrcpy path:" << scrcpyFilePath;
    AdbCommandRunner runner;
    runner.runAdb({"-s", deviceAddress, "push", scrcpyFilePath, "/data/local/tmp/scrcpy-server.jar"});
    if (!runner.lastFeedback.contains("1 file pushed")) {
        qDebug() << runner.getLastErr();
        return false;
    }
    return true;
}
By default, scrcpy-server acts as a client and connects through an adb tunnel to a local TCP server on the PC. As described in the developer documentation, this role can be reversed by adding tunnel_forward=true to the server start command (note: this is not a command-line option of scrcpy.exe). In the default role, the tunnel is opened with adb reverse. Note that the tunnel name must carry an 8-character string, the scid, as an identifier; here we can simply derive it from a timestamp:
scid = QString::asprintf("%08x", (uint)QDateTime::currentSecsSinceEpoch());
AdbCommandRunner runner;
runner.runAdb({"-s", deviceAddress, "reverse", "localabstract:scrcpy_" + scid, "tcp:27183"});
Remember this port, 27183: the QTcpServer below listens on exactly this port for the server's connection.
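If you want the reversed role mentioned above instead (the PC connects to the device), the idea is to add tunnel_forward=true to the server start options and replace adb reverse with adb forward, then connect actively with QTcpSocket. A minimal sketch under those assumptions; note that in this mode scrcpy-server also writes a dummy byte on the first socket (which the client should read and discard), and the exact behaviour should be checked against the scrcpy version you ship:
// Hypothetical sketch of the tunnel_forward=true mode
AdbCommandRunner runner;
// forward a local TCP port to the device-side abstract socket
runner.runAdb({"-s", deviceAddress, "forward", "tcp:27183", "localabstract:scrcpy_" + scid});
// ...start scrcpy-server with the extra option "tunnel_forward=true"...
// then connect from the PC instead of listening:
auto videoSocket = new QTcpSocket;
videoSocket->connectToHost("127.0.0.1", 27183);
if (!videoSocket->waitForConnected(3000)) {
    qWarning() << "connect to forwarded tunnel failed:" << videoSocket->errorString();
}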
scrcpy-server itself is an executable jar; it is started with adb shell:
serverRunner = new AdbCommandRunner;
QStringList scrcpyServiceOpt;
scrcpyServiceOpt << "-s" << deviceAddress << "shell";
scrcpyServiceOpt << "CLASSPATH=/data/local/tmp/scrcpy-server.jar";
scrcpyServiceOpt << "app_process";
scrcpyServiceOpt << "/";
scrcpyServiceOpt << "com.genymobile.scrcpy.Server";
scrcpyServiceOpt << SCRCPY_VERSION;
scrcpyServiceOpt << "scid=" + scid;
scrcpyServiceOpt << "audio=false"; // disable audio streaming
scrcpyServiceOpt << "max_fps=" + QString::number(maxFrameRate); // maximum frame rate
scrcpyServiceOpt << "max_size=1920"; // maximum video frame size
serverRunner->runAdb(scrcpyServiceOpt, false);
Note that this runner (and its QProcess) must be kept around: when stopping the server, the corresponding adb shell child process has to be killed. Among the arguments above, scid and everything before it are mandatory; if the version number or scid does not match, the server will not start. More control options can be found in the source file scrcpy\app\src\server.c starting around line 212, and their default values are in scrcpy\app\src\options.c. Once started successfully, the server immediately connects back to the local server on the PC through adb.
To stop the server, first terminate the shell process, then remove the tunnel:
if (serverRunner) {
    delete serverRunner;
    serverRunner = nullptr;
}
AdbCommandRunner runner;
runner.runAdb({"-s", deviceAddress, "reverse", "--remove", "localabstract:scrcpy_" + scid});
After the server is stopped, scrcpy-server removes itself from the device, so restarting it means going back to the file-push step above.
As mentioned above, by default the PC side acts as the TCP server and scrcpy-server connects to it as a client, so all we need is a QTcpServer listening on the local adb tunnel port:
ScrcpyServer::ScrcpyServer(QObject *parent)
    : QObject(parent)
{
    // tcp server
    tcpServer = new QTcpServer(this);
    connect(tcpServer, &QTcpServer::acceptError, this, [] (QAbstractSocket::SocketError socketError) {
        qCritical() << "scrcpy server accept error:" << socketError;
    });
    connect(tcpServer, &QTcpServer::newConnection, this, &ScrcpyServer::handleNewConnection);
}

void ScrcpyServer::handleNewConnection() {
    auto socket = tcpServer->nextPendingConnection();
    // the first socket carries the video stream
    if (!videoSocket) {
        videoSocket = socket;
        connect(socket, &QTcpSocket::readyRead, this, &ScrcpyServer::receiveVideoBuffer);
        qInfo() << "video socket pending connect...";
    } else if (!controlSocket) {
        controlSocket = socket;
        connect(socket, &QTcpSocket::readyRead, this, &ScrcpyServer::receiveControlBuffer);
        qInfo() << "control socket pending connect...";
    } else {
        qWarning() << "unexpected socket appending...";
    }
    connect(socket, &QTcpSocket::stateChanged, this, [=] (QAbstractSocket::SocketState state) {
        qDebug() << "socket state changed:" << state;
        if (state == QAbstractSocket::UnconnectedState) {
            socket->deleteLater();
        }
    });
}
bool ScrcpyServer::start() {
    if (!tcpServer->isListening()) {
        bool success = tcpServer->listen(QHostAddress::AnyIPv4, 27183);
        if (!success) {
            qDebug() << "tcp server listen failed:" << tcpServer->errorString();
        }
    }
    return tcpServer->isListening();
}
According to the developer documentation, after scrcpy-server connects to the QTcpServer there are three TCP connections, carrying the video stream, the audio stream and the control commands respectively. Since we passed audio=false at startup to disable audio, the second socket here is the control socket.
We have covered starting scrcpy-server and accepting its connection with QTcpServer; in practice these steps have to happen in the right order: the QTcpServer must already be listening on port 27183 before scrcpy-server is launched (and the reverse tunnel must already be in place), otherwise the server's connection attempt will fail.
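Putting the pieces together, a minimal sketch of the startup sequence; startTcpServer(), setupReverseTunnel() and runScrcpyServer() are illustrative names for wrappers around the snippets shown above, not functions from the original article:
// Illustrative startup order, assuming the helpers above are collected in one place
bool startAll() {
    if (!startTcpServer())       return false; // 1. listen on tcp:27183 first
    if (!connectToDevice())      return false; // 2. adb connect
    if (!pushServiceToDevice())  return false; // 3. adb push scrcpy-server.jar
    if (!setupReverseTunnel())   return false; // 4. adb reverse localabstract:scrcpy_<scid> tcp:27183
    runScrcpyServer();                         // 5. adb shell app_process ... com.genymobile.scrcpy.Server
    return true;                               // scrcpy-server now connects back through the tunnel
}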
The documentation describes the layout of the video stream in detail. The stream starts with 64 bytes containing the device name, followed by 4 bytes for the codec id, 4 bytes for the frame width and 4 bytes for the frame height; after that the video frames follow. Each frame consists of a header and the frame data: the header carries a PTS/flags field (8 bytes) and the frame data length (4 bytes); read that many bytes of frame data, then wait for the next frame. The default video codec is H.264, which can be changed via the server start parameters; here we use FFmpeg to decode the frames.
Since decoding is time-consuming it has to run in its own thread, which means synchronizing with the data received on the QTcpSocket. To let the decoder thread read the data as if it were synchronous, we write a small helper class that buffers the bytes delivered by QTcpSocket:
// Header file
#pragma once

#include <qobject.h>
#include <qmutex.h>
#include <qwaitcondition.h>

#include "byteutil.h"

class BufferReceiver : public QObject {
public:
    explicit BufferReceiver(QObject *parent = nullptr);

    void sendBuffer(const QByteArray& data);

    void endCache();

    template<typename T>
    T receive() {
        enum {
            T_Size = sizeof(T)
        };
        T value = T();
        receive((void*)&value, T_Size);
        ByteUtil::swapBits(value);
        return value;
    }

    void receive(void* data, int len);

    bool isEndReceive() const {
        return endBufferCache;
    }

private:
    QByteArray receiveBuffer;
    QMutex mutex;
    QWaitCondition receiveWait;
    bool endBufferCache;
};
//cpp
#include "bufferreceiver.h"

BufferReceiver::BufferReceiver(QObject *parent)
    : QObject(parent)
    , endBufferCache(false)
{}

void BufferReceiver::sendBuffer(const QByteArray &data) {
    QMutexLocker locker(&mutex);
    receiveBuffer.append(data);
    receiveWait.notify_all();
}

void BufferReceiver::endCache() {
    QMutexLocker locker(&mutex);
    endBufferCache = true;
    receiveWait.notify_all();
}

void BufferReceiver::receive(void *data, int len) {
    mutex.lock();
    if (endBufferCache) {
        mutex.unlock();
        return;
    }
    while (receiveBuffer.size() < len && !endBufferCache) {
        receiveWait.wait(&mutex);
    }
    if (!endBufferCache) {
        memcpy(data, receiveBuffer.data(), len);
        receiveBuffer = receiveBuffer.mid(len);
    }
    mutex.unlock();
}
In the main thread, whenever video stream data arrives it is simply cached into the BufferReceiver:
void ScrcpyServer::receiveVideoBuffer() {
    if (videoDecoder) {
        videoDecoder->appendBuffer(videoSocket->readAll());
    }
}
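appendBuffer itself is not shown in this article; a minimal sketch, assuming it merely forwards the bytes to the decoder's BufferReceiver, and that a hypothetical stop() calls endCache() when the stream is closed so the decoder thread can exit:
void VideoDecoder::appendBuffer(const QByteArray &data) {
    bufferReceiver.sendBuffer(data); // wakes up the decoder thread blocked in receive()
}

void VideoDecoder::stop() {
    bufferReceiver.endCache();       // unblocks receive() and lets run() return
    wait();                          // QThread::wait(), join the decoder thread
}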
The decoder thread then reads the packets in protocol order:
void VideoDecoder::run() {
    QByteArray remoteDeviceName(64, '\0');
    bufferReceiver.receive(remoteDeviceName.data(), remoteDeviceName.size());
    auto name = QString::fromUtf8(remoteDeviceName);
    if (!name.isEmpty()) {
        qInfo() << "device name received:" << name;
    }
    if (bufferReceiver.isEndReceive()) {
        return;
    }
    if (codecCtx == nullptr) {
        auto codecId = bufferReceiver.receive<uint32_t>();
        auto width = bufferReceiver.receive<int>();
        auto height = bufferReceiver.receive<int>();
        if (!codecInit(codecId, width, height)) {
            codecRelease();
            qCritical() << "video decode init failed!";
            return;
        }
    }
    qInfo() << "video decode is running...";
    for (;;) {
        if (!frameReceive()) {
            break;
        }
        if (!frameMerge()) {
            av_packet_unref(packet);
            break;
        }
        frameUnpack();
        av_packet_unref(packet);
    }
    // release resources
    codecRelease();
    qInfo() << "video decoder exit...";
}
Note the order in which the decoder thread reads the stream: as soon as the codec id and the frame size have been read, the decoder can be initialized:
// initialize the decoder
auto codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec) {
    qDebug() << "find codec h264 fail!";
    return false;
}
// initialize the decoder context
codecCtx = avcodec_alloc_context3(codec);
if (!codecCtx) {
    qDebug() << "allocate codec context fail!";
    return false;
}
codecCtx->width = width;
codecCtx->height = height;
codecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
int ret = avcodec_open2(codecCtx, codec, nullptr);
if (ret < 0) {
    qDebug() << "open codec fail!";
    return false;
}
packet = av_packet_alloc();
if (!packet) {
    qDebug() << "alloc packet fail!";
    return false;
}
decodeFrame = av_frame_alloc();
if (!decodeFrame) {
    qDebug() << "alloc frame fail!";
    return false;
}
When a frame arrives, read the PTS field and the frame data length in turn and fill them into the AVPacket:
bool VideoDecoder::frameReceive() {
    auto ptsFlags = bufferReceiver.receive<uint64_t>();
    auto frameLen = bufferReceiver.receive<int32_t>();
    if (bufferReceiver.isEndReceive()) {
        return false;
    }
    Q_ASSERT(frameLen != 0);
    if (av_new_packet(packet, frameLen)) {
        qDebug() << "av new packet failed!";
        return false;
    }
    bufferReceiver.receive(packet->data, frameLen);
    if (bufferReceiver.isEndReceive()) {
        return false;
    }
    if (ptsFlags & SC_PACKET_FLAG_CONFIG) {
        packet->pts = AV_NOPTS_VALUE;
    } else {
        packet->pts = ptsFlags & SC_PACKET_PTS_MASK;
    }
    if (ptsFlags & SC_PACKET_FLAG_KEY_FRAME) {
        packet->flags |= AV_PKT_FLAG_KEY;
    }
    packet->dts = packet->pts;
    return true;
}
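The SC_PACKET_* constants used above mirror those in the scrcpy source (app/src/demuxer.c): the top bit of the 8-byte field marks a config packet, the next bit marks a key frame, and the remaining bits hold the PTS:
#define SC_PACKET_FLAG_CONFIG    (UINT64_C(1) << 63)
#define SC_PACKET_FLAG_KEY_FRAME (UINT64_C(1) << 62)
#define SC_PACKET_PTS_MASK       (SC_PACKET_FLAG_KEY_FRAME - 1)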
Use the PTS to decide whether this is a config packet that has to be merged into the next frame:
bool VideoDecoder::frameMerge() {
    bool isConfig = packet->pts == AV_NOPTS_VALUE;
    if (isConfig) {
        free(mergeBuffer);
        mergeBuffer = (uint8_t*)malloc(packet->size);
        if (!mergeBuffer) {
            qDebug() << "merge buffer malloc failed! required size:" << packet->size;
            return false;
        }
        memcpy(mergeBuffer, packet->data, packet->size);
        mergedSize = packet->size;
    } else if (mergeBuffer) {
        int mediaSize = packet->size; // save the media size before growing the packet
        if (av_grow_packet(packet, mergedSize)) {
            qDebug() << "av grow packet failed!";
            return false;
        }
        memmove(packet->data + mergedSize, packet->data, mediaSize);
        memcpy(packet->data, mergeBuffer, mergedSize);
        free(mergeBuffer);
        mergeBuffer = nullptr;
    }
    return true;
}
Decoding itself uses avcodec_send_packet and avcodec_receive_frame. The code below shows how to drain the decoder in a loop and convert each frame into a QVideoFrame (used later for rendering); note that the image format here is YUV420P:
void VideoDecoder::frameUnpack() {
    if (packet->pts == AV_NOPTS_VALUE) {
        return;
    }
    int ret = avcodec_send_packet(codecCtx, packet);
    if (ret < 0 && ret != AVERROR(EAGAIN)) {
        qCritical() << "send packet error:" << ret;
    } else {
        // drain all decoded frames
        for (;;) {
            ret = avcodec_receive_frame(codecCtx, decodeFrame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                break;
            }
            if (ret) {
                qCritical() << "could not receive video frame:" << ret;
                break;
            }
            QVideoFrame cachedFrame(codecCtx->width * codecCtx->height * 3 / 2,
                                    QSize(codecCtx->width, codecCtx->height),
                                    codecCtx->width, QVideoFrame::Format_YUV420P);
            int imageSize = av_image_get_buffer_size(codecCtx->pix_fmt, codecCtx->width, codecCtx->height, 1);
            if (cachedFrame.map(QAbstractVideoBuffer::WriteOnly)) {
                uchar *dstData = cachedFrame.bits();
                av_image_copy_to_buffer(dstData, imageSize, decodeFrame->data, decodeFrame->linesize,
                                        codecCtx->pix_fmt,
                                        codecCtx->width, codecCtx->height, 1);
                cachedFrame.unmap();
                emit frameDecoded(cachedFrame);
            }
            av_frame_unref(decodeFrame);
        }
    }
}
The best way to display the video is to render it with OpenGL, which avoids burning CPU and also lets the YUV420P frames coming out of the decoder be converted on the GPU. Using OpenGL in Qt naturally means subclassing QOpenGLWidget. Qt happens to ship a video display widget, QVideoWidget, it just provides no way to feed it a raw video stream directly. Reading the QVideoWidget sources in the Multimedia module shows that, when GLSL is used, rendering goes through a QPainterVideoSurface instance and is ultimately performed by QVideoSurfaceGlslPainter, which supports rendering many frame formats including YUV420P; the YUV420P-to-RGB conversion uses the BT.709 standard. Copy the three files multimediawidgets/qmediaopenglhelper_p.h, multimediawidgets/qpaintervideosurface_p.h and multimediawidgets/qpaintervideosurface.cpp, define a custom VideoWidget that instantiates a QPainterVideoSurface, and refresh the image with QPainterVideoSurface::present:
//.h
#pragma once

#include <qwidget.h>
#include <qopenglwidget.h>

#include "qpaintervideosurface_p.h"

class VideoWidget : public QOpenGLWidget {
public:
    explicit VideoWidget(QWidget *parent = nullptr);
    ~VideoWidget();

    QPainterVideoSurface *videoSurface() const;

    QSize sizeHint() const override;

public:
    void setAspectRatioMode(Qt::AspectRatioMode mode);

protected:
    void hideEvent(QHideEvent *event) override;
    void resizeEvent(QResizeEvent *event) override;
    void paintEvent(QPaintEvent *event) override;

private slots:
    void formatChanged(const QVideoSurfaceFormat &format);
    void frameChanged();

private:
    void updateRects();

private:
    QPainterVideoSurface *m_surface;
    Qt::AspectRatioMode m_aspectRatioMode;
    QRect m_boundingRect;
    QRectF m_sourceRect;
    QSize m_nativeSize;
    bool m_updatePaintDevice;
};
//.cpp
#include "videowidget.h"

#include <qevent.h>
#include <qvideosurfaceformat.h>

VideoWidget::VideoWidget(QWidget *parent)
    : QOpenGLWidget(parent)
    , m_aspectRatioMode(Qt::KeepAspectRatio)
    , m_updatePaintDevice(true)
{
    m_surface = new QPainterVideoSurface(this);
    connect(m_surface, &QPainterVideoSurface::frameChanged, this, &VideoWidget::frameChanged);
    connect(m_surface, &QPainterVideoSurface::surfaceFormatChanged, this, &VideoWidget::formatChanged);
}

QPainterVideoSurface *VideoWidget::videoSurface() const {
    return m_surface;
}

VideoWidget::~VideoWidget() {
    delete m_surface;
}

void VideoWidget::setAspectRatioMode(Qt::AspectRatioMode mode)
{
    m_aspectRatioMode = mode;
    updateGeometry();
}

QSize VideoWidget::sizeHint() const
{
    return m_surface->surfaceFormat().sizeHint();
}

void VideoWidget::hideEvent(QHideEvent *event)
{
    m_updatePaintDevice = true;
}

void VideoWidget::resizeEvent(QResizeEvent *event)
{
    updateRects();
}

void VideoWidget::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    if (testAttribute(Qt::WA_OpaquePaintEvent)) {
        QRegion borderRegion = event->region();
        borderRegion = borderRegion.subtracted(m_boundingRect);
        QBrush brush = palette().window();
        for (const QRect &r : borderRegion)
            painter.fillRect(r, brush);
    }
    if (m_surface->isActive() && m_boundingRect.intersects(event->rect())) {
        m_surface->paint(&painter, m_boundingRect, m_sourceRect);
        m_surface->setReady(true);
    } else {
        if (m_updatePaintDevice && (painter.paintEngine()->type() == QPaintEngine::OpenGL
                                    || painter.paintEngine()->type() == QPaintEngine::OpenGL2)) {
            m_updatePaintDevice = false;
            m_surface->updateGLContext();
            if (m_surface->supportedShaderTypes() & QPainterVideoSurface::GlslShader) {
                m_surface->setShaderType(QPainterVideoSurface::GlslShader);
            } else {
                m_surface->setShaderType(QPainterVideoSurface::FragmentProgramShader);
            }
        }
    }
}

void VideoWidget::formatChanged(const QVideoSurfaceFormat &format)
{
    m_nativeSize = format.sizeHint();
    updateRects();
    updateGeometry();
    update();
}

void VideoWidget::frameChanged()
{
    update(m_boundingRect);
}

void VideoWidget::updateRects()
{
    QRect rect = this->rect();
    if (m_nativeSize.isEmpty()) {
        m_boundingRect = QRect();
    } else if (m_aspectRatioMode == Qt::IgnoreAspectRatio) {
        m_boundingRect = rect;
        m_sourceRect = QRectF(0, 0, 1, 1);
    } else if (m_aspectRatioMode == Qt::KeepAspectRatio) {
        QSize size = m_nativeSize;
        size.scale(rect.size(), Qt::KeepAspectRatio);
        m_boundingRect = QRect(0, 0, size.width(), size.height());
        m_boundingRect.moveCenter(rect.center());
        m_sourceRect = QRectF(0, 0, 1, 1);
    } else if (m_aspectRatioMode == Qt::KeepAspectRatioByExpanding) {
        m_boundingRect = rect;
        QSizeF size = rect.size();
        size.scale(m_nativeSize, Qt::KeepAspectRatio);
        m_sourceRect = QRectF(
            0, 0, size.width() / m_nativeSize.width(), size.height() / m_nativeSize.height());
        m_sourceRect.moveCenter(QPointF(0.5, 0.5));
    }
}
Before the video stream starts, initialize the surface, select OpenGL (GLSL) rendering, and specify the video format as YUV420P:
videoWidget->videoSurface()->setShaderType(QPainterVideoSurface::GlslShader);
videoWidget->videoSurface()->start(QVideoSurfaceFormat(QSize(1920, 1080), QVideoFrame::Format_YUV420P));
When the VideoDecoder emits a decoded frame, present it to the surface:
connect(decoder, &VideoDecoder::frameDecoded, this, [&](const QVideoFrame& frame) {
    videoWidget->videoSurface()->present(frame);
});
When the stream is stopped, stop the surface rendering as well:
videoWidget->videoSurface()->stop();
Control commands are sent over the second socket. The message encoding protocol is defined and implemented in the source files scrcpy\app\src\control_msg.h and scrcpy\app\src\control_msg.c. For example, to send a touch event:
namespace ByteUtil {
    /**
     * @brief Swap byte order
     * @tparam T numeric type
     * @param data value to convert
     * @param size number of bytes to swap
     */
    template<typename T>
    static void swapBits(T& data, size_t size = sizeof(T)) {
        for (size_t i = 0; i < size / 2; i++) {
            char* pl = (char*)&data + i;
            char* pr = (char*)&data + (size - i - 1);
            if (*pl != *pr) {
                *pl ^= *pr;
                *pr ^= *pl;
                *pl ^= *pr;
            }
        }
    }

    /**
     * @brief Convert raw bytes to/from a numeric value (big-endian)
     * @tparam T numeric type
     * @param data destination value
     * @param src source byte array
     * @param srcSize size of the source byte array
     */
    template<typename T>
    static void bitConvert(T& data, const void* src, int srcSize = sizeof(T)) {
        memcpy(&data, src, srcSize);
        swapBits(data, srcSize);
    }
}
class ControlMsg {
public:
    static QByteArray injectTouchEvent(android_motionevent_action action, android_motionevent_buttons actionButton,
                                       android_motionevent_buttons buttons, uint64_t pointerId,
                                       const QSize& screenSize, const QPoint& point, float pressure)
    {
        char bytes[32];
        bytes[0] = SC_CONTROL_MSG_TYPE_INJECT_TOUCH_EVENT;
        bytes[1] = action;
        ByteUtil::bitConvert(*(uint64_t*)(bytes + 2), &pointerId);
        uint32_t x = point.x();
        ByteUtil::bitConvert(*(uint32_t*)(bytes + 10), &x);
        uint32_t y = point.y();
        ByteUtil::bitConvert(*(uint32_t*)(bytes + 14), &y);
        uint16_t w = screenSize.width();
        ByteUtil::bitConvert(*(uint16_t*)(bytes + 18), &w);
        uint16_t h = screenSize.height();
        ByteUtil::bitConvert(*(uint16_t*)(bytes + 20), &h);
        uint16_t pressureValue = sc_float_to_u16fp(pressure);
        ByteUtil::bitConvert(*(uint16_t*)(bytes + 22), &pressureValue);
        ByteUtil::bitConvert(*(uint32_t*)(bytes + 24), &actionButton);
        ByteUtil::bitConvert(*(uint32_t*)(bytes + 28), &buttons);
        return { bytes, 32 };
    }
};
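Other message types are built the same way. As a hedged example, a key event (for instance BACK) could be encoded like this; the 14-byte layout below follows sc_control_msg_serialize in control_msg.c for recent scrcpy versions, so verify the offsets against the version you ship:
// Hypothetical helper, assuming the INJECT_KEYCODE layout:
// type(1) action(1) keycode(4) repeat(4) metastate(4) = 14 bytes
static QByteArray injectKeycode(android_keyevent_action action, uint32_t keycode,
                                uint32_t repeat, uint32_t metastate)
{
    char bytes[14];
    bytes[0] = SC_CONTROL_MSG_TYPE_INJECT_KEYCODE;
    bytes[1] = action;
    ByteUtil::bitConvert(*(uint32_t*)(bytes + 2), &keycode);
    ByteUtil::bitConvert(*(uint32_t*)(bytes + 6), &repeat);
    ByteUtil::bitConvert(*(uint32_t*)(bytes + 10), &metastate);
    return { bytes, 14 };
}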
Install an event filter on videoWidget and translate mouse events into simulated touch events:
bool App::eventFilter(QObject *watched, QEvent *event) {
    if (watched == videoWidget) {
        if (auto mouseEvent = dynamic_cast<QMouseEvent*>(event)) {
            auto dstPos = QPoint(qRound(mouseEvent->x() * framePixmapRatio.width()), qRound(mouseEvent->y() * framePixmapRatio.height()));
            if (mouseEvent->type() == QEvent::MouseButtonPress) {
                scrcpyServer->sendControl(ControlMsg::injectTouchEvent(AMOTION_EVENT_ACTION_DOWN, AMOTION_EVENT_BUTTON_PRIMARY,
                                                                       AMOTION_EVENT_BUTTON_PRIMARY, 0,
                                                                       frameSrcSize, dstPos, 1.0));
            } else if (mouseEvent->type() == QEvent::MouseButtonRelease) {
                scrcpyServer->sendControl(ControlMsg::injectTouchEvent(AMOTION_EVENT_ACTION_UP, AMOTION_EVENT_BUTTON_PRIMARY,
                                                                       AMOTION_EVENT_BUTTON_PRIMARY, 0,
                                                                       frameSrcSize, dstPos, 0.0));
            } else if (mouseEvent->type() == QEvent::MouseMove) {
                scrcpyServer->sendControl(ControlMsg::injectTouchEvent(AMOTION_EVENT_ACTION_MOVE, AMOTION_EVENT_BUTTON_PRIMARY,
                                                                       AMOTION_EVENT_BUTTON_PRIMARY, 0,
                                                                       frameSrcSize, dstPos, 1.0));
            }
        }
    }
    return QObject::eventFilter(watched, event);
}
//ScrcpyServer
void ScrcpyServer::sendControl(const QByteArray &controlMsg) {
    if (controlSocket) {
        controlSocket->write(controlMsg);
    }
}
Note that the screenSize parameter must be the size of the original video frame sent by the device; if the widget on screen is scaled, the click position has to be mapped back to the original frame coordinates for the tap to land in the right place.
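frameSrcSize and framePixmapRatio used in the event filter are not defined in this article; a minimal sketch of how they might be maintained, assuming framePixmapRatio is a QSizeF and, for simplicity, that the video fills the whole widget (with Qt::KeepAspectRatio you would use the actual bounding rectangle instead):
// Hypothetical bookkeeping: map widget coordinates back to the original frame
void App::updateFrameMapping(const QVideoFrame &frame) {
    frameSrcSize = frame.size(); // size of the frame sent by the device
    // ratio between the original frame and the widget area it is drawn into
    framePixmapRatio = QSizeF(frameSrcSize.width() / qreal(videoWidget->width()),
                              frameSrcSize.height() / qreal(videoWidget->height()));
}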
Source code of the demo program: https://github.com/daonvshu/qt-scrcpyservice