
Other articles (110)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all kinds.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;
-
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators configure those menus in detail.
Menus created when the site is initialized
By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
-
Customizable form
21 June 2013
This page presents the fields available in the form for publishing a media item and indicates the various fields that can be added.
Media creation form
For a media-type document, the default fields are: Text; Enable/Disable the forum (the invitation to comment can be disabled for each article); License; Add/remove authors; Tags
This form can be modified under:
Administration > Configuration des masques de formulaire. (...)
On other sites (9580)
-
How does ffmpeg do MPEG-TS streaming?
14 April 2020, by xybrek
Here's how I stream MPEG-TS to a relay using ffmpeg:



ffmpeg -re -i out.ts -f mpegts -vcodec copy -acodec copy http://localhost:8081/secret




My question is about the internals of ffmpeg: I want to understand the core of how ffmpeg streams MPEG-TS. What does it do to the file in order to stream it? Does it manipulate the bytes it streams, or does it stream them as-is?
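For what it's worth, with `-vcodec copy -acodec copy` ffmpeg does not decode or re-encode anything: it demuxes the input, re-multiplexes the packets into a fresh MPEG-TS (rewriting container-level fields such as PIDs and continuity counters), and `-re` throttles reading to the input's native rate, so the compressed audio/video payload passes through essentially untouched. A rough packet-level sketch in Python (illustrative only, not ffmpeg's actual code; the 188-byte packet size and the 0x47 sync byte come from the MPEG-TS specification):

```python
import io

TS_PACKET_SIZE = 188  # an MPEG-TS packet is always 188 bytes
SYNC_BYTE = 0x47      # every packet starts with this sync byte

def iter_ts_packets(stream):
    """Yield raw 188-byte TS packets from a byte stream, unmodified."""
    while True:
        pkt = stream.read(TS_PACKET_SIZE)
        if len(pkt) < TS_PACKET_SIZE:
            return  # end of stream (or a truncated tail packet)
        if pkt[0] != SYNC_BYTE:
            raise ValueError("lost TS sync: packet must start with 0x47")
        yield pkt

# Four fake but well-formed packets; a streamer would write each one to the
# HTTP connection as-is, pacing itself on the stream's timestamps (-re).
fake_ts = (bytes([SYNC_BYTE]) + b"\x00" * 187) * 4
packets = list(iter_ts_packets(io.BytesIO(fake_ts)))
```

So broadly the answer is "remux plus pacing", not manipulation of the compressed payload bytes.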


-
Launch Symfony 4 command from controller works in dev but not in prod environment
14 August 2019, by JoakDA
When the application loads, I make 2 AJAX requests to start 2 processes needed for showing an RTSP video stream on the website.
It works great in the DEV environment, but in my tests in PROD it only works if the page is loaded in the server's web browser (the same host where the application is installed).
If I use an external browser installed on another machine, it doesn’t launch the video.
/**
* Start transcoding video.
* @param Request $request
* @return Response
* @Route("devices/show/videotranscoding", name="start_video_transcoding", methods={"POST"})
* @IsGranted("ROLE_OPERATOR")
*/
public function startTranscodingVideo(Request $request)
{
$value = '';
try {
//Setup needed variables
$this->initialize();
$this->logger->info('Start Video transcoding: Ok. Video started successfully');
//Get device id from POST data
$deviceid = $request->request->get('deviceid');
//Find device to show from system
$deviceToShow = $this->repository->find($deviceid);
if ($deviceToShow) {
$this->logger->info('Start Video transcoding: . Device has been found. Delete it... Data: ' . $deviceToShow->__toString());
$realHost = $this->getRealHost($_SERVER['HTTP_HOST']);
$tcpHost = (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] === 'on' ? "https" : "http") . "://{$realHost}";
//Launch transcoding command
$transcodingCommand = 'php ' . $this->getParameter('kernel.project_dir') . '/bin/console device:videotranscoding ' .
'rtsp://' . $deviceToShow->getUsername() . ':' . $deviceToShow->getPassword() . '@' . str_replace('http://', '', $deviceToShow->getHost()) . ':' . $deviceToShow->getRTSPPort() . ' ' .
str_replace(' ', '', $deviceToShow->getName()) . ' ' . $tcpHost . ' ' . $deviceToShow->getVideoHTTPPort();
$transcodingProcess = \Symfony\Component\Process\Process::fromShellCommandline($transcodingCommand);
$transcodingProcess->start();
$success = true;
$message = '';
} else {
$message = $this->translator->trans('Device with identifier %deviceid% was not found.',
['%deviceid%' => $deviceid]);
$success = false;
$this->addFlash('error', $message);
$this->logger->error('Start Video transcoding: Ko. Device with identifier ' . $deviceid . ' was not found.');
}
} catch (Throwable $exception) {
$message = $this->translator->trans('Error while executing action. Error detail: %detail%.',
['%detail%' => $exception->getMessage()]);
$this->addFlash(
'error', $message
);
$success = false;
$this->logger->critical('Start Video transcoding: Ko. Exception catched. Error detail: ' . $exception->getMessage());
}
$this->logger->info('Start Video transcoding: Ok. Video started successfully');
return new JsonResponse(array(
'success' => $success,
'message' => $message,
'value' => $value
));
}

I have a Node.js script running in the background that listens on a specific port and broadcasts the data arriving on that TCP port to a WebSocket server.
The ffmpeg command transcodes the RTSP stream and sends it to TCP port 8102, and the relay broadcasts the data to a WebSocket server listening on port 8105.
The transcoding command code:
/**
* @param InputInterface $input
* @param OutputInterface $output
* @return int|void|null
* @throws \Exception
*/
protected function execute(InputInterface $input, OutputInterface $output)
{
try {
$this->logger->info('Start video transcoding: Setup video transcoding...');
$io = new SymfonyStyle($input, $output);
$now = new \DateTime();
$io->title('Start video transcoding at ' . $now->format('d-m-Y G:i:s') . '...');
//Get input parameters
$rtspUri = $input->getArgument('rtsp');
$secret = $input->getArgument('secret');
$portsString = $input->getArgument('tcp_port');
$tcpHost = $input->getArgument('tcp_host');
$this->logger->debug('Start video transcoding: RTSP: "' . $rtspUri . '". TCP Port: ' . $portsString);
//Absolute path to logs
$logPath = $this->path . DIRECTORY_SEPARATOR . 'var' . DIRECTORY_SEPARATOR . 'log';
$stdOutPath = $logPath . DIRECTORY_SEPARATOR . 'transcoding_out.log';
$stdErrrorPath = $logPath . DIRECTORY_SEPARATOR . 'transcoding_error.log';
//FFMPEG
$arguments = '-nostdin -t 00:01:00 -rtsp_transport tcp -i ' . $rtspUri . ' -f mpegts -codec:v mpeg1video -s 1920x1080 -b:v 800k -r 30 -bf 0 ' . $tcpHost . ':' . $portsString . '/' . $secret . ' > '
. $stdOutPath . ' 2> ' . $stdErrrorPath . ' &';
$ffmpegParams = '/usr/bin/ffmpeg ' . $arguments;
//$ffmpegProcess = new Process($ffmpegParams);
$ffmpegProcess = \Symfony\Component\Process\Process::fromShellCommandline($ffmpegParams);
$ffmpegProcess->setTimeout(60);
$ffmpegProcess->setIdleTimeout(60);
try {
$ffmpegProcess->start();
$this->logger->info('Start video transcoding: OK. Video streaming successfully started...');
$io->success('Start video transcoding: OK. Video streaming successfully started...');
}catch (ProcessTimedOutException $timedOutException){
$ffmpegProcess->stop(3, SIGINT);
$this->io->success('Start video transcoding: Ko. Transcoding finished with error.');
}
} catch (Throwable $exception) {
$message = 'Start video transcoding: Ko. Exception catched. Error detail: ' . $exception->getMessage();
$this->logger->critical($message);
$io->error($message);
}
}

The Node.js code (taken from JSMpeg – MPEG1 Video & MP2 Audio Decoder in JavaScript):
// Use the websocket-relay to serve a raw MPEG-TS over WebSockets. You can use
// ffmpeg to feed the relay. ffmpeg -> websocket-relay -> browser
// Example:
// node websocket-relay yoursecret 8081 8082
// ffmpeg -i <some input="input"> -f mpegts http://localhost:8081/yoursecret
var fs = require('fs'),
http = require('http'),
WebSocket = require('ws');
if (process.argv.length < 3) {
console.log(
'Usage: \n' +
'node websocket-relay.js <secret> [<stream-port> <websocket-port>]'
);
console.log(process.cwd());
process.exit();
}
var STREAM_SECRET = process.argv[2],
STREAM_PORT = process.argv[3] || 8081,
WEBSOCKET_PORT = process.argv[4] || 8082,
RECORD_STREAM = false;
// Websocket Server
var socketServer = new WebSocket.Server({port: WEBSOCKET_PORT, perMessageDeflate: false});
socketServer.connectionCount = 0;
socketServer.on('connection', function(socket, upgradeReq) {
socketServer.connectionCount++;
console.log(
'New WebSocket Connection: ',
(upgradeReq || socket.upgradeReq).socket.remoteAddress,
(upgradeReq || socket.upgradeReq).headers['user-agent'],
'('+socketServer.connectionCount+' total)'
);
socket.on('close', function(code, message){
socketServer.connectionCount--;
console.log(
'Disconnected WebSocket ('+socketServer.connectionCount+' total)'
);
});
});
socketServer.broadcast = function(data) {
socketServer.clients.forEach(function each(client) {
if (client.readyState === WebSocket.OPEN) {
client.send(data);
}
});
};
// HTTP Server to accept incoming MPEG-TS Stream from ffmpeg
var streamServer = http.createServer( function(request, response) {
var params = request.url.substr(1).split('/');
if (params[0] !== STREAM_SECRET) {
console.log(
'Failed Stream Connection: '+ request.socket.remoteAddress + ':' +
request.socket.remotePort + ' - wrong secret.'
);
response.end();
}
response.connection.setTimeout(0);
console.log(
'Stream Connected: ' +
request.socket.remoteAddress + ':' +
request.socket.remotePort
);
request.on('data', function(data){
socketServer.broadcast(data);
if (request.socket.recording) {
request.socket.recording.write(data);
}
});
request.on('end',function(){
console.log('close');
if (request.socket.recording) {
request.socket.recording.close();
}
});
// Record the stream to a local file?
if (RECORD_STREAM) {
var path = 'recordings/' + Date.now() + '.ts';
request.socket.recording = fs.createWriteStream(path);
}
}).listen(STREAM_PORT);
console.log('Listening for incoming MPEG-TS Stream on http://127.0.0.1:'+STREAM_PORT+'/<secret>');
console.log('Awaiting WebSocket connections on ws://127.0.0.1:'+WEBSOCKET_PORT+'/');
I am using PHP 7.3 and Symfony 4.3.
I am able to get a successful response from the controller, but I can't watch the video stream on an external computer.
UPDATED: I don't know if it is related to the issue, but when I switch to DEV and then back to PROD using:
composer dump-env prod
If I try to clear the cache with:
php bin/console cache:clear
the following appears:
joaquin@dev-computer:/var/www/example.com/html$ composer dump-env prod
09:15:07 ERROR [console] Error thrown while running command "cache:clear". Message: "Failed to remove file "/var/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg": unlink(/var/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg): Permission denied." ["exception" => Symfony\Component\Filesystem\Exception\IOException]8;;file:///var/www/example.com/html/vendor/symfony/filesystem/Exception/IOException.php\^]8;;\ { …},"command" => "cache:clear","message" => "Failed to remove file "/var/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg": unlink(/var/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg): Permission denied."]
In Filesystem.php line 184:
Failed to remove file "/var/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg": unlink(/va
r/www/example.com/html/var/cache/pro~/pools/WBCr1hDG8d/-/R/iW4Vq0vqfrjVsp2Gihwg): Permission denied.
cache:clear [--no-warmup] [--no-optional-warmers] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-e|--env ENV] [--no-debug] [--] <command>
Thanks
-
Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage
19 June 2019, by DRONE_6969
I am currently writing an application that captures a lot of RTSP streams (12 in my case) and displays them in Qt widgets. The problem arises when I go beyond around 6-7 streams: CPU usage spikes and there is visible stutter.
The reason I don't think it is the Qt draw function is that I measured how long it takes to draw an incoming camera image (and sample images I had): it is always well under 33 milliseconds, even with 12 widgets being updated.
I also ran the OpenCV capture loop without drawing and got pretty much the same CPU consumption as when drawing the frames (about 10% less CPU at most, and GPU usage went to zero).
IMPORTANT: I am using RTSP streams, which are H.264 streams.
IF IT MATTERS, MY SPECS:
Intel Core i7-6700 @ 3.40 GHz (8 CPUs)
Memory: 16 GB
GPU: Intel HD Graphics 530 (I also ran my code on a computer with a dedicated graphics card; it eliminated some stutter, but CPU usage is still pretty high)
I am currently using OpenCV 4.1.0 with GStreamer enabled and built; I also have the OPENCV-WORLD version, and there is no difference in performance.
I have created a class called Camera that holds frame size constraints and various control functions, as well as the stream function. The stream function runs on a separate thread; whenever stream() is done with the current frame, it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget's lastImage variable. This way I can update the image in a more thread-safe way.
I have tried to manipulate the VideoCapture.set() values, but it didn't really help.
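The listener/event design described above can be condensed into a small sketch (illustrative Python, not the author's C++; the names mirror the Camera/FrameListener classes pasted below, and the string stands in for the Mat-to-QPixmap conversion, which notably runs on the capture thread, not the UI thread):

```python
class FrameListener:
    """Interface: widgets implement on_new_frame to receive frames."""
    def on_new_frame(self, frame):
        raise NotImplementedError

class Widget(FrameListener):
    def __init__(self):
        self.last_image = None
    def on_new_frame(self, frame):
        # stand-in for the cv::Mat -> QPixmap conversion; note this
        # executes on whichever thread the camera calls it from
        self.last_image = f"pixmap({frame})"

class Camera:
    def __init__(self):
        self.clients = []
    def add_listener(self, listener):
        self.clients.append(listener)
    def stream_one_frame(self, frame):
        # one iteration of the capture loop: fan the frame out to clients
        for c in self.clients:
            c.on_new_frame(frame)

w = Camera(), Widget()
cam, w = w
cam.add_listener(w)
cam.stream_one_frame("frame0")
```

With this shape, every captured frame costs one conversion per subscribed widget on the capture thread, which matters once many streams run at once.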
This is my stream function (ignore the bool return; it doesn't do anything, it is a remnant from a couple of minutes ago when I was trying to use std::async):
bool Camera::stream() {
/* This function is meant to run on a separate thread and fill up the buffer independently of
the main stream thread */
//cv::setNumThreads(100);
/* Rules for these slightly changed! */
Mat pre; // Grab initial undoctored frame
//pre = Mat::zeros(size, CV_8UC1);
Mat frame; // Final modified frame
frame = Mat::zeros(size, CV_8UC1);
if (!pre.isContinuous()) pre = pre.clone();
ipCam.open(streamUrl, CAP_FFMPEG);
while (ipCam.isOpened() && capture) {
// If the camera is opened we'll need to capture and process the frame
try {
auto start = std::chrono::system_clock::now();
ipCam >> pre;
if (pre.empty()) {
/* Check for blank frame, return error if there is a blank frame*/
cerr << id << ": ERROR! blank frame grabbed\n";
for (FrameListener* i : clients) {
i->onNotification(1); // Notify clients about this shit
}
break;
}
else {
// Only continue if frame not empty
if (pre.cols != size.width && pre.rows != size.height) {
resize(pre, frame, size);
pre.release();
}
else {
frame = pre;
}
dPacket* pack = new dPacket{id,&frame};
for (auto i : clients) {
i->onPNewFrame(pack);
}
frame.release();
delete pack;
}
}
catch (int e) {
cout << endl << "-----Exception during capture process! CODE " << e << endl;
}
// End camera manipulations
}
cout << "Camera timed out, or connection is closed..." << endl;
if (tryResetConnection) {
cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
for (FrameListener* i : clients) {
i->onNotification(-1); // Notify clients about this shit
}
this_thread::sleep_for(chrono::milliseconds(3000));
stream();
}
return true;
}

This is my onPNewFrame function. The conversion is still done on the camera's thread, because it is called within stream() and is therefore within that scope (I also checked):
void GLWidget::onPNewFrame(dPacket* inPack) {
lastFlag = 0;
if (bufferEnabled) {
buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
}
else {
if (playing) {
/* Only process if this widget is playing */
frameProcessing = true;
lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
frameProcessing = false;
}
}
if (lastFlag != -1 && !lastImage.isNull()) {
connecting = false;
}
else {
connecting = true;
}
}

This is my Mat-to-QImage conversion:
QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
}

NOTE: not converting does not result in a CPU gain (at least not a significant one).
Minimal verifiable example
This program is large, so I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put a GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste a URL into the VideoCapture.
I have also supplied CamUtils, just in case.
Camera.h :
#pragma once
#include <iostream>
#include <vector>
#include <fstream>
#include <map>
#include <string>
#include <sstream>
#include <algorithm>
#include "FrameListener.h"
#include <opencv2/opencv.hpp>
#include <thread>
#include "CamUtils.h"
#include <ctime>
#include "dPacket.h"
using namespace std;
using namespace cv;
class Camera
{
/*
CLEANED UP!
Camera now is only responsible for streaming and echoing captured frames.
Frames are now wrapped into dPacket struct.
*/
private:
string id;
vector<FrameListener*> clients;
VideoCapture ipCam;
string streamUrl;
Size size;
bool tryResetConnection = false;
//TODO: Remove these as they are not going to be used going on:
bool isPlaying = true;
bool capture = true;
//SECRET FEATURES:
bool detect = false;
public:
Camera(string url, int width = 480, int height = 240, bool detect_=false);
bool stream();
void setReconnectable(bool newReconStatus);
void addListener(FrameListener* client);
vector<bool> getState(); // Returns current state: [0] playing state, [1] stream open state. TODO: remove, as this should no longer control behaviour
void killStream();
bool getReconnectable();
};

Camera.cpp:
#include "Camera.h"
Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
{
streamUrl = url; // Prepare url
size = Size(width, height);
detect = detect_;
}
void Camera::addListener(FrameListener* client) {
clients.push_back(client);
}
/*
TEST CAMERAS(Paste into cameras.dViewer):
{"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
{"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
{"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
{"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
*/
bool Camera::stream() {
/* This function is meant to run on a separate thread and fill up the buffer independently of
the main stream thread */
//cv::setNumThreads(100);
/* Rules for these slightly changed! */
Mat pre; // Grab initial undoctored frame
//pre = Mat::zeros(size, CV_8UC1);
Mat frame; // Final modified frame
frame = Mat::zeros(size, CV_8UC1);
if (!pre.isContinuous()) pre = pre.clone();
ipCam.open(streamUrl, CAP_FFMPEG);
while (ipCam.isOpened() && capture) {
// If the camera is opened we'll need to capture and process the frame
try {
auto start = std::chrono::system_clock::now();
ipCam >> pre;
if (pre.empty()) {
/* Check for blank frame, return error if there is a blank frame*/
cerr << id << ": ERROR! blank frame grabbed\n";
for (FrameListener* i : clients) {
i->onNotification(1); // Notify clients about this shit
}
break;
}
else {
// Only continue if frame not empty
if (pre.cols != size.width && pre.rows != size.height) {
resize(pre, frame, size);
pre.release();
}
else {
frame = pre;
}
auto end = std::chrono::system_clock::now();
std::time_t ts = std::chrono::system_clock::to_time_t(end);
dPacket* pack = new dPacket{ id,&frame};
for (auto i : clients) {
i->onPNewFrame(pack);
}
frame.release();
delete pack;
}
}
catch (int e) {
cout << endl << "-----Exception during capture process! CODE " << e << endl;
}
// End camera manipulations
}
cout << "Camera timed out, or connection is closed..." << endl;
if (tryResetConnection) {
cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
for (FrameListener* i : clients) {
i->onNotification(-1); // Notify clients about this shit
}
this_thread::sleep_for(chrono::milliseconds(3000));
stream();
}
return true;
}
void Camera::killStream(){
tryResetConnection = false;
capture = false;
ipCam.release();
}
void Camera::setReconnectable(bool reconFlag) {
tryResetConnection = reconFlag;
}
bool Camera::getReconnectable() {
return tryResetConnection;
}
vector<bool> Camera::getState() {
vector<bool> states;
states.push_back(isPlaying);
states.push_back(ipCam.isOpened());
return states;
}

GLWidget.h:
#ifndef GLWIDGET_H
#define GLWIDGET_H
#include <QOpenGLWidget>
#include <QMouseEvent>
#include "FrameListener.h"
#include "Camera.h"
#include "CamUtils.h"
#include "dPacket.h"
#include <chrono>
#include <ctime>
#include <thread>
#include <queue>
#include "FullScreenVideo.h"
#include <QMovie>
#include "helper.h"
#include <iostream>
#include <QPainter>
#include <QTimer>
class Helper;
class GLWidget : public QOpenGLWidget, public FrameListener
{
Q_OBJECT
public:
GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
void killStream();
~GLWidget();
public slots:
void animate();
void setBufferEnabled(bool setState);
void setCameraRetryConnection(bool setState);
void GLUpdate(); // Call to update the widget
void onRightClickMenu(const QPoint& point);
protected:
void paintEvent(QPaintEvent* event) override;
void onPNewFrame(dPacket* frame);
void onNotification(int alert_code);
private:
// Objects and resourses
Helper* helper;
Camera* cam;
CamUtils* camUtils;
QTimer* timer; // Keep track of update
QPixmap lastImage;
QMovie* connMov;
QMovie* test;
QPixmap logo;
// Control fields
int width;
int height;
int camUtilsAddr;
int elapsed;
std::thread* camThread;
std::string camId;
bool denyFullScreen = false;
bool playing = true;
bool streaming = true;
bool debug = false;
bool connecting = true;
int lastFlag = 0;
// Debug fields
std::chrono::high_resolution_clock::time_point lastFrameAt;
std::chrono::high_resolution_clock::time_point now;
std::chrono::duration<double> painTime; // time took to draw last frame
//Buffer stuff
std::queue<QPixmap> buffer;
bool bufferEnabled = false;
bool initialBuffer = false;
bool buffering = true;
bool frameProcessing = false;
//Functions
QImage toQImageFromPMat(cv::Mat* inFrame);
void mousePressEvent(QMouseEvent* event) override;
void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
};
#endif

GLWidget.cpp:
#include "glwidget.h"
#include <future>
FullScreenVideo* fullScreen;
GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_, bool denyFullScreen_, bool detectFlag_, QWidget* parent)
: QOpenGLWidget(parent), helper(helper)
{
cout << "Player for CAMERA " << camId_ << endl;
/* Underlying properties */
camUtils = cUtils;
cout << "GLWidget Incoming CamUtils addr " << camUtils << endl;
cout << "GLWidget Set CamUtils addr " << camUtils << endl;
camId = camId_;
elapsed = 0;
width = width_ + 5;
height = height_ + 5;
helper = new Helper();
setFixedSize(width, height);
denyFullScreen = denyFullScreen_;
/* Camera capture thread */
cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
cam->addListener(this);
/* Sync states */
vector<bool> initState = cam->getState();
playing = initState[0];
streaming = initState[1];
cout << "Initial states: " << playing << " " << streaming << endl;
camThread = new std::thread(&Camera::stream, cam);
cout << "================================================" << endl;
// Right click set up
setContextMenuPolicy(Qt::CustomContextMenu);
/* Loading gif */
connMov = new QMovie("establishingConnection.gif");
connMov->start();
QString url = R"(RLC-logo.png)";
logo = QPixmap(url);
QTimer* timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
timer->start(1000/30);
playing = true;
}
/* SYSTEM */
void GLWidget::animate()
{
elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
std::cout << elapsed << "\n";
}
void GLWidget::GLUpdate() {
/* Process descisions before update call */
if (bufferEnabled) {
/* Process buffer before update */
now = chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;
if (timeSinceLastUpdate.count() > 25) {
if (buffer.size() > 1 && playing) {
lastImage.swap(buffer.front());
buffer.pop();
lastFrameAt = chrono::high_resolution_clock::now();
}
}
//update(); // Update
}
else {
/* No buffer */
}
repaint();
}
/* EVENTS */
void GLWidget::onRightClickMenu(const QPoint& point) {
cout << "Right click request got" << endl;
QPoint globPos = this->mapToGlobal(point);
QMenu myMenu;
if (!denyFullScreen) {
myMenu.addAction("Open Full Screen");
}
myMenu.addAction("Toggle Debug Info");
QAction* selected = myMenu.exec(globPos);
if (selected) {
string optiontxt = selected->text().toStdString();
if (optiontxt == "Open Full Screen") {
cout << "Chose to open full screen of " << camId << endl;
fullScreen = new FullScreenVideo(bufferEnabled, this);
fullScreen->setUpView(camUtils, camId);
fullScreen->show();
playing = false;
}
if (optiontxt == "Toggle Debug Info") {
cout << "Chose to toggle debug of " << camId << endl;
debug = !debug;
}
}
else {
cout << "Chose nothing!" << endl;
}
}
void GLWidget::onPNewFrame(dPacket* inPack) {
lastFlag = 0;
if (bufferEnabled) {
buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
}
else {
if (playing) {
/* Only process if this widget is playing */
frameProcessing = true;
lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
frameProcessing = false;
}
}
if (lastFlag != -1 && !lastImage.isNull()) {
connecting = false;
}
else {
connecting = true;
}
}
void GLWidget::onNotification(int alert) {
lastFlag = alert;
}
/* Paint events*/
void GLWidget::paintEvent(QPaintEvent* event)
{
QPainter painter(this);
if (lastFlag != 0 || connecting) {
drawOnStatus(lastFlag, &painter, event, elapsed);
}
else {
/* Actual frame drawing */
if (playing) {
if (!frameProcessing) {
drawImageGLLatest(&painter, event, elapsed);
}
}
else {
drawOnPaused(&painter, event, elapsed);
}
}
painter.end();
}
/* DRAWING STUFF */
void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {
QString str;
QFont font("times", 15);
bgPaint->eraseRect(QRect(0, 0, width, height));
if (!lastImage.isNull()) {
bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
}
/* Test background painting */
if (connecting) {
string k = "Connecting to " + camUtils->getIp(camId);
str.append(k.c_str());
}
else {
switch (statusFlag) {
case 1:
str = "Blank frame received...";
break;
case -1:
if (cam->getReconnectable()) {
str = "Connection lost, will try to reconnect.";
bgPaint->setOpacity(0.3);
}
else {
str = "Connection lost...";
bgPaint->setOpacity(0.3);
}
break;
}
}
bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
bgPaint->setPen(Qt::red);
bgPaint->setFont(font);
QFontMetrics fm(font);
const QRect kek(0, 0, fm.width(str), fm.height());
QRect bound;
bgPaint->setOpacity(1);
bgPaint->drawText(bgPaint->viewport().width()/2 - kek.width()/2, bgPaint->viewport().height()/2 - kek.height(), str);
bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);
}
void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
painter->eraseRect(0, 0, width, height);
QFont font = painter->font();
font.setPointSize(18);
painter->setPen(Qt::red);
QFontMetrics fm(font);
QString str("Paused");
painter->drawPixmap(QRect(0, 0, width, height),lastImage);
painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);
if (debug) {
QFont font = painter->font();
font.setPointSize(25);
painter->setPen(Qt::red);
string camMess = "CAMID: " + camId;
QString mess(camMess.c_str());
string camIp = "IP: " + camUtils->getIp(camId);
QString ipMess(camIp.c_str());
QString bufferSize("Buffer size: " + QString::number(buffer.size()));
QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
painter->drawText(QPoint(10, 50), mess);
painter->drawText(QPoint(10, 60), ipMess);
QString bufferState;
if (bufferEnabled) {
bufferState = QString("Experimental BUFFER is enabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10, 80), currentBufferSize);
}
else {
bufferState = QString("Experimental BUFFER is disabled!");
}
painter->drawText(QPoint(10, 70), bufferState);
painter->drawText(QPoint(10, height - 25), lastFrameText);
}
}
void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
auto start = chrono::high_resolution_clock::now();
painter->drawPixmap(QRect(0, 0, width, height), lastImage);
if (debug) {
QFont font = painter->font();
font.setPointSize(25);
painter->setPen(Qt::red);
string camMess = "CAMID: " + camId;
QString mess(camMess.c_str());
string camIp = "IP: " + camUtils->getIp(camId);
QString ipMess(camIp.c_str());
QString bufferSize("Buffer size: " + QString::number(buffer.size()));
QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
painter->drawText(QPoint(10, 50), mess);
painter->drawText(QPoint(10, 60), ipMess);
QString bufferState;
if(bufferEnabled){
bufferState = QString("Experimental BUFFER is enabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10,80), currentBufferSize);
}
else {
bufferState = QString("Experimental BUFFER is disabled!");
QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
painter->drawText(QPoint(10, 80), currentBufferSize);
}
painter->drawText(QPoint(10, 70), bufferState);
painter->drawText(QPoint(10, height - 25), lastFrameText);
}
auto end = chrono::high_resolution_clock::now();
painTime = end - start;
}
/* END DRAWING STUFF */
/* UI EVENTS */
void GLWidget::mousePressEvent(QMouseEvent* e) {
if (e->button() == Qt::LeftButton) {
if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
playing = !playing;
}
}
if (e->button() == Qt::RightButton) {
onRightClickMenu(e->pos());
}
}
/* Utilities */
QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
}
/* State control */
void GLWidget::killStream() {
cam->killStream();
camThread->join();
}
void GLWidget::setBufferEnabled(bool newBufferState) {
cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
bufferEnabled = newBufferState;
buffer.empty();
}
void GLWidget::setCameraRetryConnection(bool newState) {
cam->setReconnectable(newState);
}
/* Destruction */
GLWidget::~GLWidget() {
cam->killStream();
camThread->join();
}

CamUtils.h:
#pragma once
#include <iostream>
#include <vector>
#include <fstream>
#include <map>
#include <string>
#include <sstream>
#include <algorithm>
#include <nlohmann/json.hpp>
using namespace std;
using json = nlohmann::json;
class CamUtils
{
private:
string camDb = "cameras.dViewer";
map<string, vector<string>> cameraList; // Legacy
json cameras;
ofstream dbFile;
bool dbExists(); // Always hard coded
/* Old IMPLEMENTATION */
void writeLineToDb_(const string& content, bool append = false);
void loadCameras_();
/* JSON based */
void loadCameras();
public:
CamUtils();
string generateRandomString(size_t length);
string getCameraStreamURL(string cameraId) const;
string saveCamera(string ip, string username, string pass); // Return generated id
vector<string> listAllCameraIds();
string getIp(string cameraId);
};

CamUtils.cpp:
#include "CamUtils.h"
#pragma comment(lib, "rpcrt4.lib") // UuidCreate - Minimum supported OS Win 2000
#include <rpc.h>
#include <iostream>
CamUtils::CamUtils()
{
if (!dbExists()) {
ofstream dbFile;
dbFile.open(camDb);
cameras["cameras"] = json::array();
dbFile << cameras << std::endl;
dbFile.close();
}
else {
loadCameras();
}
}
vector<string> CamUtils::listAllCameraIds() {
vector<string> ids;
cout << "IN LIST " << endl;
for (auto& cam : cameras["cameras"]) {
ids.push_back(cam["id"].get<string>());
//cout << cam["id"].get<string>() << std::endl;
}
return ids;
}
string CamUtils::getIp(string id) {
vector<string> camDetails = cameraList[id];
string ip = "NO IP WILL DISPLAYED UNTIL I FIGURE OUT A BUG";
for (auto& cam : cameras["cameras"]) {
if (id == cam["id"]) {
ip = cam["ip"].get<string>();
}
}
return ip;
}
string CamUtils::getCameraStreamURL(string id) const {
string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it
for (auto& cam : cameras["cameras"]) {
if (id == cam["id"]) {
if (cam["username"].get<string>() == "null") {
url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
}
else {
url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
}
}
}
return url; // Dont forget to check for err when using this shit
}
string CamUtils::saveCamera(string ip, string username, string password) {
UUID uid;
UuidCreate(&uid);
char* str;
UuidToStringA(&uid, (RPC_CSTR*)&str);
string id = str;
cout << "GEN: " << id << endl;
json cam = json({}); //Create empty object
cam["id"] = id;
cam["ip"] = ip;
cam["username"] = username;
cam["password"] = password;
cameras["cameras"].push_back(cam);
std::ofstream out(camDb);
out << cameras << std::endl;
cout << cameras["cameras"] << endl;
cout << "Saved camera as " << id << endl;
return id;
}
bool CamUtils::dbExists() {
ifstream dbFile(camDb);
return (bool)dbFile;
}
void CamUtils::loadCameras() {
cout << "Load call" << endl;
ifstream dbFile(camDb);
string line;
string wholeFile;
while (std::getline(dbFile, line)) {
cout << line << endl;
wholeFile += line;
}
try {
cameras = json::parse(wholeFile);
//cout << cameras["cameras"] << endl;
}
catch (exception e) {
cout << e.what() << endl;
}
dbFile.close();
}
/*
LEGACY CODE, TO BE REMOVED!
*/
void CamUtils::loadCameras_() {
/*
LEGACY CODE:
This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
*/
ifstream dbFile(camDb);
string line;
while (std::getline(dbFile, line)) {
/*
This function load camera data to the map:
The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
Always delimited with | no spaces between!
*/
if (!line.empty()) {
stringstream ss(line);
string item;
vector<string> splitString;
while (std::getline(ss, item, '|')) {
splitString.push_back(item);
}
if (splitString.size() > 0) {
/* Dont even parse if the program didnt split right*/
//cout << "Split string: " << splitString.size() << "\n";
for (int i = 0; i < (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
}
}
}
}
void CamUtils::writeLineToDb_(const string & content, bool append) {
ofstream dbFile;
cout << "Creating?";
if (append) {
dbFile.open(camDb, ios_base::app);
}
else {
dbFile.open(camDb);
}
dbFile << content.c_str() << "\r\n";
dbFile.flush();
}
/* JSON Reworx */
string CamUtils::generateRandomString(size_t length)
{
const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const size_t charmapLength = strlen(charmap);
auto generator = [&]() { return charmap[rand() % charmapLength]; };
string result;
result.reserve(length);
generate_n(back_inserter(result), length, generator);
return result;
}

End of example
How would I go about decreasing CPU usage when dealing with a large number of streams?
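One direction that is often suggested for this kind of setup (a sketch of the idea, not a benchmarked fix for this code): decouple capture from display with a single-slot "latest frame" mailbox, so a slow UI drops stale frames instead of queueing a conversion for every captured one. Combined with doing the Mat-to-QPixmap conversion only for the frame actually drawn, this bounds per-widget work at the repaint rate rather than the camera rate. Illustrative Python; the `FrameMailbox` name is made up here, and the real code would store a cv::Mat or QPixmap instead of an integer:

```python
import threading

class FrameMailbox:
    """Single-slot handoff: the producer overwrites, the consumer
    always sees only the most recent frame (older ones are dropped)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame  # overwrite: never blocks on a slow consumer

    def take(self):
        with self._lock:
            frame, self._frame = self._frame, None
            return frame  # None means "nothing new since the last take"

mailbox = FrameMailbox()
for i in range(10):          # the capture thread produces frames 0..9
    mailbox.put(i)
latest = mailbox.take()      # the UI tick processes only the newest one
```

The trade-off is deliberate frame dropping under load, which is usually acceptable for live monitoring views and avoids the per-frame heap allocation and fan-out conversion cost growing with the number of streams.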