
Other articles (46)
-
Sites built with MediaSPIP
2 May 2011 — This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (3908)
-
How to encode using FFmpeg on Android (using H263)
3 July 2012, by Kenny910 — I am trying to follow the sample encoding code in the ffmpeg documentation. I successfully built an application that encodes and generates an mp4 file, but I face the following problems:
1) I am using H263 for encoding, but I can only set the width and height of the AVCodecContext to 176x144; for other sizes (like 720x480 or 640x480) it fails (see the size-check sketch after this list).
2) I can't play the output mp4 file with the default Android player. Doesn't it support H263 mp4 files? (P.S. I can play it with other players; see the container sketch after the code.)
3) Is there any sample code for re-encoding an existing video into a new one (that is, decoding the video and encoding it back with different quality settings; I would also like to modify the frame content)?
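A note on problem 1: plain H263 (as opposed to H263+) only encodes the five fixed picture sizes defined by the standard, which is most likely why 176x144 (QCIF) works while 640x480 or 720x480 are rejected. Below is a minimal sketch of validating the size up front; the helper function is hypothetical, but the size table comes straight from the H.263 spec. For arbitrary (even) resolutions, switching codec_id to CODEC_ID_MPEG4 is the usual way out.

/* Baseline H263 only supports these picture sizes; anything else
 * makes the encoder refuse to open. (Hypothetical helper, drop it
 * into the same JNI source file as the code below.) */
static const struct { int w, h; } h263_sizes[] = {
    {  128,   96 }, /* SQCIF */
    {  176,  144 }, /* QCIF  */
    {  352,  288 }, /* CIF   */
    {  704,  576 }, /* 4CIF  */
    { 1408, 1152 }  /* 16CIF */
};

static int h263_size_ok(int w, int h)
{
    int k, n = (int)(sizeof(h263_sizes) / sizeof(h263_sizes[0]));
    for (k = 0; k < n; k++)
        if (h263_sizes[k].w == w && h263_sizes[k].h == h)
            return 1;
    return 0; /* e.g. 640x480 and 720x480 land here */
}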
Here is my code, thanks!
JNIEXPORT jint JNICALL Java_com_ffmpeg_encoder_FFEncoder_nativeEncoder(JNIEnv* env, jobject thiz, jstring filename){
    LOGI("nativeEncoder()");

    avcodec_register_all();
    avcodec_init();
    av_register_all();

    AVCodec *codec;
    AVCodecContext *codecCtx;
    int i;
    int out_size;
    int size;
    int x;
    int y;
    int output_buffer_size;
    FILE *file;
    AVFrame *picture;
    uint8_t *output_buffer;
    uint8_t *picture_buffer;

    /* Manual Variables */
    int l;
    int fps = 30;
    int videoLength = 5;

    /* find the H263 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_H263);
    if (!codec) {
        LOGI("avcodec_find_encoder() run fail.");
        return -1;
    }

    codecCtx = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    /* put sample parameters */
    codecCtx->bit_rate = 400000;
    /* resolution must be a multiple of two */
    codecCtx->width = 176;
    codecCtx->height = 144;
    /* frames per second */
    codecCtx->time_base = (AVRational){1, fps};
    codecCtx->pix_fmt = PIX_FMT_YUV420P;
    codecCtx->codec_id = CODEC_ID_H263;
    codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;

    /* open it */
    if (avcodec_open(codecCtx, codec) < 0) {
        LOGI("avcodec_open() run fail.");
        return -1;
    }

    const char* mfileName = (*env)->GetStringUTFChars(env, filename, 0);
    file = fopen(mfileName, "wb");
    if (!file) {
        LOGI("fopen() run fail.");
        (*env)->ReleaseStringUTFChars(env, filename, mfileName);
        return -1;
    }
    (*env)->ReleaseStringUTFChars(env, filename, mfileName);

    /* alloc image and output buffer */
    output_buffer_size = 100000;
    output_buffer = malloc(output_buffer_size);

    size = codecCtx->width * codecCtx->height;
    picture_buffer = malloc((size * 3) / 2); /* size for YUV 420 */

    picture->data[0] = picture_buffer;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = codecCtx->width;
    picture->linesize[1] = codecCtx->width / 2;
    picture->linesize[2] = codecCtx->width / 2;

    for (l = 0; l < videoLength; l++) { //each pass encodes 1 second of video
        for (i = 0; i < fps; i++) {
            //prepare a dummy image YCbCr
            //Y
            for (y = 0; y < codecCtx->height; y++) {
                for (x = 0; x < codecCtx->width; x++) {
                    picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
                }
            }
            //Cb and Cr
            for (y = 0; y < codecCtx->height / 2; y++) {
                for (x = 0; x < codecCtx->width / 2; x++) {
                    picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                    picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
                }
            }
            //encode the image
            out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, picture);
            fwrite(output_buffer, 1, out_size, file);
        }
        //get the delayed frames
        for (; out_size; i++) {
            out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, NULL);
            fwrite(output_buffer, 1, out_size, file);
        }
    }

    //add sequence end code to have a real mpeg file
    output_buffer[0] = 0x00;
    output_buffer[1] = 0x00;
    output_buffer[2] = 0x01;
    output_buffer[3] = 0xb7;
    fwrite(output_buffer, 1, 4, file);
    fclose(file);

    free(picture_buffer);
    free(output_buffer);
    avcodec_close(codecCtx);
    av_free(codecCtx);
    av_free(picture);

    LOGI("finish");
    return 0;
}
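On problem 2: the function above fwrite()s the raw H263 bitstream (plus a manual MPEG sequence end code) into a file that merely ends in .mp4, so there is no real container for Android's stock player to parse; third-party players are simply more forgiving. Below is a rough sketch of muxing the packets into a 3GP container instead, which is where Android expects H263. The calls follow the same old FFmpeg vintage as the code above; exact names vary between releases, and error handling is omitted.

    /* Sketch: wrap the H263 packets in a 3GP container instead of
     * dumping a bare bitstream to a .mp4-named file. */
    AVFormatContext *fmtCtx = avformat_alloc_context();
    fmtCtx->oformat = av_guess_format("3gp", NULL, NULL); /* 3GP muxer */

    AVStream *st = av_new_stream(fmtCtx, 0); /* one video stream */
    st->codec->codec_id   = CODEC_ID_H263;
    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->bit_rate   = 400000;
    st->codec->width      = 176;
    st->codec->height     = 144;
    st->codec->time_base  = (AVRational){1, fps};
    st->codec->pix_fmt    = PIX_FMT_YUV420P;
    avcodec_open(st->codec, codec);

    avio_open(&fmtCtx->pb, mfileName, AVIO_FLAG_WRITE);
    av_write_header(fmtCtx);

    /* for each avcodec_encode_video() call that returns out_size > 0,
     * instead of fwrite(): */
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.stream_index = st->index;
    pkt.data         = output_buffer;
    pkt.size         = out_size;
    if (st->codec->coded_frame && st->codec->coded_frame->key_frame)
        pkt.flags |= AV_PKT_FLAG_KEY;
    av_interleaved_write_frame(fmtCtx, &pkt);

    /* once at the end, instead of the manual sequence end code: */
    av_write_trailer(fmtCtx);
    avio_close(fmtCtx->pb);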
-
How to pipe PPM data into ffmpeg from the Blender frameserver with a while loop in PowerShell
14 October 2016, by Radium — The Blender 2.6 manual features this little sh script for encoding a video from the Blender frameserver via ffmpeg. It works great on Windows with Cygwin, but only without the -hwaccel hardware acceleration flag.

#!/bin/sh
BLENDER=http://localhost:8080
OUTPUT=/tmp/output.ogv

eval `wget ${BLENDER}/info.txt -O - 2>/dev/null |
while read key val ; do
    echo R_$key=$val
done`

i=$R_start
{
    while [ $i -le $R_end ] ; do
        wget ${BLENDER}/images/ppm/$i.ppm -O - 2>/dev/null
        i=$(($i+1))
    done
} | ffmpeg -vcodec ppm -f image2pipe -r $R_rate -i pipe:0 -b 6000k -vcodec libtheora $OUTPUT

wget ${BLENDER}/close.txt -O - 2>/dev/null >/dev/null

I'd like to encode my videos from Blender on Windows with -hwaccel dxva2, which works in PowerShell. I've begun converting the script to PowerShell, but I have run into one last problem: I am having difficulty replicating this part of the script.

i=$R_start
{
    while [ $i -le $R_end ] ; do
        wget ${BLENDER}/images/ppm/$i.ppm -O - 2>/dev/null
        i=$(($i+1))
    done
} | ffmpeg -vcodec ppm -f image2pipe -r $R_rate -i pipe:0 -b 6000k -vcodec libtheora $OUTPUT

Below is my conversion to PowerShell.
echo "gathering data";
$blender = "http://localhost:8080";
$output = "C:\Users\joel\Desktop\output.mp4";
$webobj = wget $blender"/info.txt";
$lines = $webobj.Content -split('[\r\n]') | ? {$_};
$info = @{};
foreach ($line in $lines) {
$lineinfo = $line -split('[\s]') | ? {$_};
$info[$lineinfo[0]] = $lineinfo[1];
}
echo $info;
[int]$end = [convert]::ToInt32($info['end'],10);
[int]$i = [convert]::ToInt32($info['start'],10);
$video="";
( while ($i -le $end) {
$frame = wget $blender"/images/ppm/"$i".ppm" > $null;
echo $frame.Content > $null;
$i++;
} ) | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output;This is the piece I’m having trouble with. I’m not quite sure what the proper syntax is to pipe the data into the
ffmpeg
command in the same way as the bash script above.( while( $i -le $end ) {
$frame = wget $blender"/images/ppm/"$i".ppm" > $null;
echo $frame.Content > $null;
$i++;
} ) | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output;Here is the output :
PS C:\Users\joel\Desktop> .\encode.ps1
gathering data

Name                           Value
----                           -----
rate                           30
height                         720
ratescale                      1
end                            57000
width                          1280
start                          1

while : The term 'while' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path
was included, verify that the path is correct and try again.
At C:\Users\joel\Desktop\encode.ps1:15 char:3
+ ( while ($i -le $end)
+
    + CategoryInfo          : ObjectNotFound: (while:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
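For what it's worth, the error itself occurs because while is a language keyword in PowerShell, not a command, so it cannot begin a pipeline on its own; wrapping the loop in a script block invoked with the call operator & turns it into something that can be piped. Below is a minimal, untested sketch under that assumption. Note also that Windows PowerShell re-encodes pipeline output as text before handing it to a native program such as ffmpeg, so the binary PPM bytes may still arrive corrupted; writing the frames to a temporary file, or keeping the original sh script under Cygwin just for the piping, are common workarounds.

# A script block invoked with '&' is a command, so it can start a pipeline.
& {
    while ($i -le $end) {
        # wget is an alias for Invoke-WebRequest; .Content is a byte array here
        $frame = wget "$blender/images/ppm/$i.ppm";
        $frame.Content;
        $i++;
    }
} | ffmpeg -hwaccel dxva2 -vcodec ppm -f image2pipe -r $info['rate'] -i pipe:0 -b 6000k -vcodec libx264 $output;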