
Media (1)
-
The conservation of net art in museums: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (30)
-
MediaSPIP: modifying the rights for object creation and final publication
11 November 2010
By default, MediaSPIP allows the creation of 5 types of objects.
Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, but they can of course be configured by the webmasters.
These rights are locked for several reasons: because authorizing publication should be the webmaster's decision rather than a platform-wide default, and because having an account can also serve other purposes, (...)
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
The statuses of pooled (mutualisation) instances
13 March 2010
For reasons of general compatibility between the pooling-management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
The possible statuses are: prepa (requested), corresponding to an instance requested by a user (if the site was already created in the past, it is switched to disabled mode); publie (validated), corresponding to an instance validated by a (...)
On other sites (5296)
-
Handling ffmpeg & ffprobe correctly with PHP
29 September 2014, by cocco
Maybe not relevant, but the final goals:
- upload the clip with ajax
- get info via ajax from ffprobe, using PHP to return JSON and executing ffprobe once only (no ffmpeg)
- handle all calculations with javascript
- maybe an extra PHP script/tool that can create gifs, extract frames (thumbs), or build a video grid preview
- when ready, send the conversion info via ajax to the final PHP conversion script, executing ffmpeg once only (just the final ffmpeg string).
I'm trying to write my own local ffmpeg web video editor that converts all formats to mp4 automatically, since mp4 is currently the most compatible container and h264 video with aac (or ac3) audio is among the best compressions. I also want to be able to cut, crop, resize, remove streams, add streams and more. I'm stuck on some simple problems:
1. HOW TO GET THE INFO?
I'm using ffprobe to get the file information as JSON with the following command:
ffprobe -v quiet -print_format json -show_format -show_streams -show_packets "$video"
This gives you a lot of information, but some relevant values are not always present. I need the duration (in milliseconds), the fps (as a float) and the total frames (as an integer).
I know that these values can sometimes be found inside this array:
format.duration //Total duration
streams[0].duration //Video duration
streams[1].duration //Audio duration
streams[0].avg_frame_rate //Average framerate
streams[0].r_frame_rate //Video framerate
streams[0].nb_frames //Total frames
But most of the time nb_frames is missing; avg_frame_rate also differs from r_frame_rate, which is not always available either. I know that I could use multiple commands to increase the chance of getting the correct values... but seriously?
//fps
ffmpeg -i INPUT 2>&1 | sed -n "s/.*, \(.*\) fp.*/\1/p"
//duration
ffmpeg -i INPUT 2>&1 | awk '/Duration/ {split($2,a,":");print a[1]*3600+a[2]*60+a[3]}'
//frames
ffmpeg -i INPUT -vcodec copy -f rawvideo -y /dev/null 2>&1 | tr ^M '\n' | awk '/^frame=/ {print $2}' | tail -n 1
I don't want to execute ffmpeg 3 times to get this information; I'd prefer to just use ffprobe.
So... is there an elegant way to get the extra info that is not always present in the ffprobe output (fps, frames, duration)?
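One way to avoid juggling three ffmpeg calls (a sketch of mine, not from the post): run ffprobe once and compute fallbacks for whatever is missing. The frame-rate fields are fractions like "30000/1001", and when nb_frames is absent the count can be estimated as duration times fps (ffprobe also has a -count_frames option that fills nb_read_frames by actually decoding, at the cost of a much slower probe). The probeVideo helper name is mine:
<?php
// Sketch: one ffprobe call, with computed fallbacks for missing fields.
function probeVideo($path) {
    $json = shell_exec("ffprobe -v quiet -print_format json -show_format -show_streams "
        .escapeshellarg($path));
    $info = json_decode($json, true);
    if (!$info || empty($info['streams'])) return null;
    $video = null;
    foreach ($info['streams'] as $s) {
        if ($s['codec_type'] === 'video') { $video = $s; break; }
    }
    // fps: prefer avg_frame_rate, fall back to r_frame_rate; both are fractions
    $fps = 0.0;
    foreach (array('avg_frame_rate', 'r_frame_rate') as $key) {
        if (!empty($video[$key]) && $video[$key] !== '0/0') {
            list($num, $den) = explode('/', $video[$key]);
            if ((float)$den > 0) { $fps = $num / $den; break; }
        }
    }
    // duration in ms: stream duration first, container duration as fallback
    $dur = isset($video['duration']) ? $video['duration'] : $info['format']['duration'];
    $ms = (float)$dur * 1000;
    // frames: nb_frames when present, otherwise estimate from duration * fps
    $frames = isset($video['nb_frames']) ? (int)$video['nb_frames']
        : (int)round($ms / 1000 * $fps);
    return array('duration_ms' => $ms, 'fps' => $fps, 'frames' => $frames);
}
?>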
In the preview I want to be able to jump correctly to a specific frame (NOT a time). If the above parameters are available I can do that using this command:
ffmpeg -i INPUT -vf 'select=gte(n\,FRAMENUMBER)' -vframes 1 -f image2 OUTPUT
Using the above command and setting the frame number to the last frame always returns a black frame.
If there are 50 frames (for example), is the range 1-50? Frame 50 is black, frame 1 is ok, frame 0 returns an error...
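A note from my side (not in the original post): in ffmpeg's select filter the frame counter n is zero-based, so a 50-frame clip covers n=0 through n=49; asking for n=50 selects nothing, which would explain an empty or black result at the end (the error on frame 0 sounds specific to the setup above). Extracting the true last frame of a 50-frame clip would then look like:
ffmpeg -i INPUT -vf 'select=gte(n\,49)' -vframes 1 -f image2 OUTPUT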
2. WHILE READING THE LOG, HOW DO I SKIP ERRORS AND DETERMINE WHEN THE CONVERSION IS FINISHED?
I'm able to upload one single video at a time (per page), and I can read the current progress from the ffmpeg-generated output log as long as I don't close the page. More control / multiple conversions would be nice.
I'm reading the last line of the log with a custom tail function, but since the log also contains errors I don't always get a clean line with the desired values. To check whether the conversion is complete, I currently test whether the last line CONTAINS the WORD "frame"... How can I find out when the conversion is finished?
Maybe there is a way to delete the log with an ffmpeg command? And to skip/log the errors?
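One deterministic option (my suggestion, not from the original post): instead of scraping the human-readable stderr log, pass ffmpeg the -progress option with a file or URL. It then writes machine-parsable key=value blocks (frame=..., fps=..., out_time_ms=...), and the final block ends with the line progress=end, so the tail only has to look for that marker while stderr stays a separate error log. With hypothetical file names conv.progress and conv.log:
ffmpeg -y -progress conv.progress -i in.m2ts ... out.mp4 2>conv.log
The PHP loop below could then stop (and the JS could close the EventSource) as soon as tailCustom() returns progress=end.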
I'm using server-sent events to read the log...
Here is the PHP code:
<?php
setlocale(LC_CTYPE, "en_US.UTF-8");

function tailCustom($filepath, $lines = 1, $adaptive = true) {
    // Stub in the original post; a minimal stand-in: return the last $lines
    // non-empty line(s) of the file (fine for small, growing logs).
    $data = @file($filepath, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    return $data === false ? "" : implode(PHP_EOL, array_slice($data, -$lines));
}

function send($data) {
    echo "id: ".time().PHP_EOL;
    echo "data: ".$data.PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

while (true) {
    // caution: $_GET['log'] is used unsanitized here; basename() or a whitelist
    // would prevent path traversal
    send(tailCustom($_GET['log'].".log"));
    sleep(1);
}
?>
And here is the SSE JS:
function startSSE(fn) {
    sse = new EventSource("ffmpegProgress.php?log=" + encodeURIComponent(fn));
    sse.addEventListener('message', conversionProgress, false);
}

function conversionProgress(e) {
    if (e.data.substr(0, 6) == 'frame=') {
        inProgress = true;
        var x = e.data.match(/frame=\s*(.*?)\s*fps=\s*(.*?)\s*q=\s*(.*?)\s*size=\s*(.*?)\s*time=\s*(.*?)\s*bitrate=\s*(.*?)\s*$/);
        x.shift();
        x = {frame: x[0] * 1, fps: x[1] * 1, q: x[2], size: x[3], time: x[4], bitrate: x[5]};
        var elapsedTime = ((new Date().getTime()) - startTime);
        var chunksPerTime = timeString2ms(x.time) / elapsedTime;
        var estimatedTotalTime = duration / chunksPerTime;
        var timeLeftInSeconds = Math.abs(elapsedTime - (estimatedTotalTime * 1000));
        var withOneDecimalPlace = Math.round(timeLeftInSeconds * 10) / 10;
        conversion.innerHTML = 'Time Left: ' + ms2TimeString(timeLeftInSeconds).split('.')[0] + '<br />' +
            'Time Left2: ' + (ms2TimeString(((frames - x.frame) / x.fps) * 1000) + (timeString2ms(x.time) / (duration * 1000) * 100 | 0)).split('.')[0] + '<br />' +
            'Estimated Total: ' + ms2TimeString(estimatedTotalTime * 1000).split('.')[0] + '<br />' +
            'Elapsed Time: ' + ms2TimeString(elapsedTime).split('.')[0];
    } else {
        if (inProgress) {
            sse.removeEventListener('message', conversionProgress, false);
            sse.close();
            sse = null;
            conversion.textContent = 'Finished in ' + ms2TimeString((new Date().getTime()) - startTime).split('.')[0];
            //delete log/old file??
            inProgress = false;
        }
    }
}
EDIT
HERE IS A SAMPLE, after detecting the h264 codec in an m2ts file with ac3 audio.
As most devices can already read h264, I just need to convert the audio to aac and copy the same AC3 audio as a second track, then put everything inside an mp4 container, so that I end up with a file compatible with Android/Chrome/iOS and more browsers.
$opt="-map 0:0 -map 0:1 -map 0:1 -c:v copy -c:a:0 libfdk_aac -metadata:s:a:0 language=ita -b:a 128k -ar 48000 -ac 2 -c:a:1 copy -metadata:s:a:1 language=ita -movflags +faststart";
$i="in.m2ts";
$o="out.mp4";
$t="title";
$y="2014";
$progress="nameoftheLOG.log";
$cmd="ffmpeg -y -i ".escapeshellarg($i)." -metadata title=".$t." -metadata date=".$y." ".$opt." ".$o." null >/dev/null 2>".$progress." &";if you have any questions about the code or want to see more code just ask...
-
ADD Image overlay to ffmpeg video stream
1 July 2017, by Chris
I am new to ffmpeg and want to add a HUD to the video stream, so a few questions:
- What file do I need to edit?
- What do I need to do to achieve this?
Thanks in advance. I am also VERY new to all of this, so I will need step-by-step instructions.
I saw other questions saying to add this:
ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
But I don't know where to put it. I entered it in the terminal and got this:
pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
video.mp4: No such file or directory
I don't understand what I'm supposed to do with video.mp4?
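My reading (an assumption, the thread does not state it): in that command video.mp4 is just an example input path, so ffmpeg fails because no such file exists in the current directory. Since the video actually comes from the videoCommandLine built in the script below, the overlay belongs inside that command rather than in a separate terminal run. A sketch using the simpler overlay filter in place of blend, assuming logo.png sits next to the script (the device number, bitrate and SERVER:PORT stand in for the values the script computes):
/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -i logo.png -filter_complex "[0:v][1:v]overlay=10:10" -f mpegts -codec:v mpeg1video -s 640x480 -b:v 450k -bf 0 -muxdelay 0.001 http://SERVER:PORT/hello/640/480/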
HERE IS THE SCRIPT THAT SENDS THE VIDEO.
import subprocess
import shlex
import re
import os
import time
import urllib2
import platform
import json
import sys
import base64
import random
import argparse

parser = argparse.ArgumentParser(description='robot control')
parser.add_argument('camera_id')
parser.add_argument('video_device_number', default=0, type=int)
parser.add_argument('--kbps', default=450, type=int)
parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
parser.add_argument('--rotate180', default=False, type=bool, help='rotate image 180 degrees')
parser.add_argument('--env', default="prod")
args = parser.parse_args()

server = "runmyrobot.com"
#server = "52.52.213.92"

from socketIO_client import SocketIO, LoggingNamespace

# enable raspicam driver in case a raspicam is being used
os.system("sudo modprobe bcm2835-v4l2")

if args.env == "dev":
    print "using dev port 8122"
    port = 8122
elif args.env == "prod":
    print "using prod port 8022"
    port = 8022
else:
    print "invalid environment"
    sys.exit(0)

print "initializing socket io"
print "server:", server
print "port:", port
socketIO = SocketIO(server, port, LoggingNamespace)
print "finished initializing socket io"

#ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/

def onHandleCameraCommand(*args):
    #thread.start_new_thread(handle_command, args)
    print args

socketIO.on('command_to_camera', onHandleCameraCommand)

def onHandleTakeSnapshotCommand(*args):
    print "taking snapshot"
    inputDeviceID = streamProcessDict['device_answer']
    snapShot(platform.system(), inputDeviceID)
    with open("snapshot.jpg", 'rb') as f:
        data = f.read()
    print "emit"
    socketIO.emit('snapshot', {'image': base64.b64encode(data)})

socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)

def randomSleep():
    """A short wait is good for quick recovery, but sometimes a longer delay is needed,
    e.g. when the system still thinks the port is in use and every quick retry keeps it
    looking busy. So this picks a short interval with high likelihood, but will sometimes
    pick a long one."""
    timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
    print "sleeping", timeToWait
    time.sleep(timeToWait)

def getVideoPort():
    url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)
    for retryNumber in range(2000):
        try:
            print "GET", url
            response = urllib2.urlopen(url).read()
            break
        except:
            print "could not open url ", url
            time.sleep(2)
    return json.loads(response)['mpeg_stream_port']

def getAudioPort():
    url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)
    for retryNumber in range(2000):
        try:
            print "GET", url
            response = urllib2.urlopen(url).read()
            break
        except:
            print "could not open url ", url
            time.sleep(2)
    return json.loads(response)['audio_stream_port']

def runFfmpeg(commandLine):
    print commandLine
    ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
    print "command started"
    return ffmpegProcess

def handleDarwin(deviceNumber, videoPort, audioPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print err
    deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': deviceAnswer}

def handleLinux(deviceNumber, videoPort, audioPort):
    print "sleeping to give the camera time to start working"
    randomSleep()
    print "finished sleeping"
    #p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    #out, err = p.communicate()
    #print err
    os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(brightness=args.brightness,
                                                                                                             contrast=args.contrast,
                                                                                                             saturation=args.saturation))
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
    else:
        deviceAnswer = str(deviceNumber)
    #commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
    #commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
    # For new JSMpeg
    #commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
    #commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)
    if args.rotate180:
        rotationOption = "-vf transpose=2,transpose=2"
    else:
        rotationOption = ""
    # video with audio
    videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
    audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
    print videoCommandLine
    print audioCommandLine
    videoProcess = runFfmpeg(videoCommandLine)
    audioProcess = runFfmpeg(audioCommandLine)
    return {'video_process': videoProcess, 'audio_process': audioProcess, 'device_answer': deviceAnswer}

def handleWindows(deviceNumber, videoPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    lines = err.split('\n')
    count = 0
    devices = []
    for line in lines:
        #if "] \"" in line:
        #    print "line:", line
        m = re.search('.*\\"(.*)\\"', line)
        if m != None:
            #print line
            if m.group(1)[0:1] != '@':
                print count, m.group(1)
                devices.append(m.group(1))
                count += 1
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    else:
        deviceAnswer = str(deviceNumber)
    device = devices[int(deviceAnswer)]
    commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': device}

def handleWindowsScreenCapture(deviceNumber, videoPort):
    p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    lines = err.split('\n')
    count = 0
    devices = []
    for line in lines:
        #if "] \"" in line:
        #    print "line:", line
        m = re.search('.*\\"(.*)\\"', line)
        if m != None:
            #print line
            if m.group(1)[0:1] != '@':
                print count, m.group(1)
                devices.append(m.group(1))
                count += 1
    if deviceNumber is None:
        deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
    else:
        deviceAnswer = str(deviceNumber)
    device = devices[int(deviceAnswer)]
    commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)
    print "command line:", commandLine
    process = runFfmpeg(commandLine)
    return {'process': process, 'device_answer': device}

def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):
    try:
        os.remove('snapshot.jpg')
    except:
        print "did not remove file"
    commandLineDict = {
        'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
        'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
        'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}
    print commandLineDict[operatingSystem]
    os.system(commandLineDict[operatingSystem])

def startVideoCapture():
    videoPort = getVideoPort()
    audioPort = getAudioPort()
    print "video port:", videoPort
    print "audio port:", audioPort
    #if len(sys.argv) >= 3:
    #    deviceNumber = sys.argv[2]
    #else:
    #    deviceNumber = None
    deviceNumber = args.video_device_number
    result = None
    if platform.system() == 'Darwin':
        result = handleDarwin(deviceNumber, videoPort, audioPort)
    elif platform.system() == 'Linux':
        result = handleLinux(deviceNumber, videoPort, audioPort)
    elif platform.system() == 'Windows':
        #result = handleWindowsScreenCapture(deviceNumber, videoPort)
        result = handleWindows(deviceNumber, videoPort)
    else:
        print "unknown platform", platform.system()
    return result

def timeInMilliseconds():
    return int(round(time.time() * 1000))

def main():
    print "main"
    streamProcessDict = None
    twitterSnapCount = 0
    while True:
        socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                            'camera_id': cameraIDAnswer})
        if streamProcessDict is not None:
            print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
            streamProcessDict['process'].kill()
        print "starting process just to get device result"  # this should be a separate function so you don't have to do this
        streamProcessDict = startVideoCapture()
        inputDeviceID = streamProcessDict['device_answer']
        print "stopping video capture"
        streamProcessDict['process'].kill()
        #print "sleeping"
        #time.sleep(3)
        #frameCount = int(round(time.time() * 1000))
        videoWithSnapshots = False
        while videoWithSnapshots:
            frameCount = timeInMilliseconds()
            print "taking single frame image"
            snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")
            with open("single_frame_image.jpg", 'rb') as f:
                # every so many frames, post a snapshot to twitter
                #if frameCount % 450 == 0:
                if frameCount % 6000 == 0:
                    data = f.read()
                    print "emit"
                    socketIO.emit('snapshot', {'frame_count': frameCount, 'image': base64.b64encode(data)})
                data = f.read()
                print "emit"
                socketIO.emit('single_frame_image', {'frame_count': frameCount, 'image': base64.b64encode(data)})
                time.sleep(0)
                #frameCount += 1
        if False:
            if platform.system() != 'Windows':
                print "taking snapshot"
                snapShot(platform.system(), inputDeviceID)
                with open("snapshot.jpg", 'rb') as f:
                    data = f.read()
                    print "emit"
                    # skip sending the first image because it's mostly black, maybe completely black
                    #todo: should find out why this black image happens
                    if twitterSnapCount > 0:
                        socketIO.emit('snapshot', {'image': base64.b64encode(data)})
        print "starting video capture"
        streamProcessDict = startVideoCapture()
        # This loop counts out a delay that occurs between twitter snapshots.
        # Every 50 seconds, it kills and restarts ffmpeg.
        # Every 40 seconds, it sends a signal to the server indicating status of processes.
        period = 2*60*60  # period in seconds between snaps
        for count in range(period):
            time.sleep(1)
            if count % 20 == 0:
                socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                    'camera_id': cameraIDAnswer})
            if count % 40 == 30:
                print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
                streamProcessDict['video_process'].kill()
                streamProcessDict['audio_process'].kill()
                time.sleep(1)
            if count % 80 == 75:
                print "send status about this process and its child process ffmpeg"
                ffmpegProcessExists = streamProcessDict['process'].poll() is None
                socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                    'ffmpeg_process_exists': ffmpegProcessExists,
                                                    'camera_id': cameraIDAnswer})
            #if count % 190 == 180:
            #    print "reboot system in case the webcam is not working"
            #    os.system("sudo reboot")
            # if the video stream process dies, restart it
            if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll() is not None:
                # wait before trying to start ffmpeg
                print "ffmpeg process is dead, waiting before trying to restart"
                randomSleep()
                streamProcessDict = startVideoCapture()
        twitterSnapCount += 1

if __name__ == "__main__":
    #if len(sys.argv) > 1:
    #    cameraIDAnswer = sys.argv[1]
    #else:
    #    cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)
    cameraIDAnswer = args.camera_id
    main()

ERROR:
ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
libavutil 55. 63.100 / 55. 63.100
libavcodec 57. 96.101 / 57. 96.101
libavformat 57. 72.101 / 57. 72.101
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 90.100 / 6. 90.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
[mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
http://54.183.232.63:12221: could not find codec parameters
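A hedged aside (my suggestion, not from the thread): when ffmpeg prints "Could not detect TS packet size" and "could not find codec parameters" on a network MPEG-TS input, it often just has not seen enough data while probing. Raising the input options -probesize and -analyzeduration before -i is a common first attempt (no guarantee it fixes this particular stream):
ffmpeg -probesize 10M -analyzeduration 10M -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4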
-
Linphone OS X msx264 encoding at VGA takes 97% CPU, why?
11 September 2013, by Maxim Shoustin
I have a problem and as of today I don't know how to fix it, or even where to start.
I have a Linphone application that uses the msx264 plugin.
I run everything on OS X, and my ffmpeg version is installed from port (I didn't use selfupdate for port):
bash-3.2# port installed ffmpeg-devel
The following ports are currently installed:
ffmpeg-devel @20130205_0+gpl2
ffmpeg-devel @20130328_0 (active)
ffmpeg-devel @20130328_0+gpl2
So I compiled and built msx264 with no errors.
Now I try to send video over SIP at VGA resolution (640x480) and get a huge delay of 8-9 seconds; even the self-view I see with a big delay.
When I configure CIF (352x288), all seems fine.
It's really strange that the self-view camera has a 4-5 second delay.
From the logs taken during the session I found that the msx264 plugin takes 97% CPU.
On a PC (Windows 7) the same code runs fine; even with HD I don't see any problems.
What could the problem be?
warning: Video MSTicker: We are late of 32146 miliseconds.
message: Filter MSRtpRecv is not scheduled; nothing to do.
message: ===========================================================
message: AUDIO SESSION'S RTP STATISTICS
message: -----------------------------------------------------------
message: sent 2344 packets
message: 403168 bytes
message: received 2038 packets
message: 350536 bytes
message: incoming delivered to the app 325080 bytes
message: lost 0 packets
message: received too late 123 packets
message: bad formatted 0 packets
message: discarded (queue overflow) 17 packets
message: ===========================================================
message: ms_filter_unlink: MSAuRead:0x7fb5a34955b0,0-->MSResample:0x7fb5aa917820,0
message: ms_filter_unlink: MSResample:0x7fb5aa917820,0-->MSSpeexEC:0x7fb5a34f6d20,1
message: ms_filter_unlink: MSSpeexEC:0x7fb5a34f6d20,1-->MSVolume:0x7fb5a3493450,0
message: ms_filter_unlink: MSVolume:0x7fb5a3493450,0-->MSTee:0x7fb5a3498e40,0
message: ms_filter_unlink: MSTee:0x7fb5a3498e40,0-->MSUlawEnc:0x7fb5a3499410,0
message: ms_filter_unlink: MSUlawEnc:0x7fb5a3499410,0-->MSRtpSend:0x7fb5aa910ba0,0
message: ms_filter_unlink: MSRtpRecv:0x7fb5a3400170,0-->MSUlawDec:0x7fb5a34933c0,0
message: ms_filter_unlink: MSUlawDec:0x7fb5a34933c0,0-->MSGenericPLC:0x7fb5aa91b040,0
message: ms_filter_unlink: MSGenericPLC:0x7fb5aa91b040,0-->MSDtmfGen:0x7fb5a6585f00,0
message: ms_filter_unlink: MSDtmfGen:0x7fb5a6585f00,0-->MSVolume:0x7fb5aa917790,0
message: ms_filter_unlink: MSVolume:0x7fb5aa917790,0-->MSTee:0x7fb5aa914fc0,0
message: ms_filter_unlink: MSTee:0x7fb5aa914fc0,0-->MSEqualizer:0x7fb5a3498f50,0
message: ms_filter_unlink: MSEqualizer:0x7fb5a3498f50,0-->MSSpeexEC:0x7fb5a34f6d20,0
message: ms_filter_unlink: MSSpeexEC:0x7fb5a34f6d20,0-->MSResample:0x7fb5aa9178b0,0
message: ms_filter_unlink: MSResample:0x7fb5aa9178b0,0-->MSAuWrite:0x7fb5a3499380,0
message: ms_filter_unlink: MSTee:0x7fb5a3498e40,1-->MSAudioMixer:0x7fb5aa914df0,0
message: ms_filter_unlink: MSTee:0x7fb5aa914fc0,1-->MSAudioMixer:0x7fb5aa914df0,1
message: ms_filter_unlink: MSAudioMixer:0x7fb5aa914df0,0-->MSFileRec:0x7fb5aa911020,0
message: Audio MSTicker thread exiting
message: ===========================================================
message: FILTER USAGE STATISTICS
message: Name Count Time/tick (ms) CPU Usage
message: -----------------------------------------------------------
message: MSX264Enc 321 138.147 97.1677
message: MSResample 8076 0.0550274 0.97085
message: MSSpeexEC 4302 0.0873765 0.821276
message: MSH264Dec 291 0.880267 0.561463
message: MSRtpSend 6174 0.012353 0.166623
message: MSRtpRecv 6174 0.0115132 0.155295
message: MSOSXGLDisplay 375 0.0376117 0.0308912
message: MSAudioMixer 4695 0.00249638 0.0256072
message: MSV4m 1480 0.00740446 0.0239537
message: MSUlawEnc 4038 0.0019542 0.0172411
message: MSTee 6540 0.000698976 0.00998688
message: MSAuRead 4695 0.00095017 0.0097466
message: MSUlawDec 1890 0.00205553 0.00849059
message: MSVolume 5928 0.000633159 0.00820007
message: MSFileRec 4695 0.000722743 0.00741371
message: MSDtmfGen 4695 0.0005 0.00512887
message: MSGenericPLC 4695 0.000429514 0.00440585
message: MSAuWrite 4038 0.000364199 0.00321319
message: MSEqualizer 1890 0.000250661 0.00103538
message: MSSizeConv 322 0.00104334 0.000736128
message: MSJpegWriter 290 0.000694158 0.00044124
message: MSPixConv 322 0.000405573 0.000286151
message: MSFilePlayer 0 0 0
message: MSVoidSink 0 0 0
message: ===========================================================
warning: Video MSTicker: We are late of 32256 miliseconds.
message: v4m video device closed.
message: Filter MSRtpRecv is not scheduled; nothing to do.
message: ===========================================================
message: VIDEO SESSION'S RTP STATISTICS
message: -----------------------------------------------------------
message: sent 1311 packets
message: 1517528 bytes
message: received 1783 packets
message: 1049010 bytes
message: incoming delivered to the app 986868 bytes
message: lost 0 packets
message: received too late 0 packets
message: bad formatted 0 packets
message: discarded (queue overflow) 0 packets
message: ===========================================================
In addition, the application shows me the delay status; from the logs:
message:: Dialog [0x7fb5a7634940]: now updated by transaction [0x7fb5aa9685d0].
warning: Video MSTicker: We are late of 20415 miliseconds.
warning: Video MSTicker: We are late of 20564 miliseconds.
message:: A SPS is being sent.
message:: A PPS is being sent.
warning: Video MSTicker: We are late of 20609 miliseconds.
warning: Video MSTicker: We are late of 20636 miliseconds.
warning: Video MSTicker: We are late of 20694 miliseconds.
warning: Video MSTicker: We are late of 20784 miliseconds.
warning: Video MSTicker: We are late of 20894 miliseconds.
warning: Video MSTicker: We are late of 21016 miliseconds.
warning: echo canceller: we are accumulating too much reference signal, need to throw out 1216 samples
message:: audio_stream_iterate(): local statistics available
Local's current jitter buffer size:77.440002 ms
message:: bandwidth usage: audio=[d=80.1,u=80.1] video=[d=305.3,u=441.8] kbit/sec
message:: Thread processing load: audio=2.135499 video=1268.186768
warning: Video MSTicker: We are late of 21134 miliseconds.
warning: Video MSTicker: We are late of 21256 miliseconds.
warning: Video MSTicker: We are late of 21382 miliseconds.
warning: Video MSTicker: We are late of 21506 miliseconds.
warning: Video MSTicker: We are late of 21638 miliseconds.
warning: Video MSTicker: We are late of 21781 miliseconds.
warning: Video MSTicker: We are late of 21921 miliseconds.
message:: bandwidth usage: audio=[d=81.6,u=80.0] video=[d=271.9,u=185.5] kbit/sec
message:: Thread processing load: audio=1.971647 video=1342.125000
warning: Video MSTicker: We are late of 22068 miliseconds.
message:: audio_stream_iterate(): remote statistics available
remote's interarrival jitter=68
remote's lost packets percentage since last report=0.390625
round trip time=0.258850 seconds
warning: Video MSTicker: We are late of 22216 miliseconds.
Please help me find the problem.
Thanks.
This is the msx264 git repository:
git clone git://git.linphone.org/msx264.git