
Advanced search
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (54)
-
Monitoring MediaSPIP farms (and SPIP while we're at it)
31 May 2013, by
When you manage several (or even several dozen) MediaSPIP sites on the same installation, it can be very handy to get certain information at a glance.
This article documents the Munin monitoring scripts developed with the help of Infini.
These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
Description of the scripts
Three Munin scripts have been developed:
1. mediaspip_medias
A script for (...) -
Customize by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources in standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make other changes (...)
On other sites (6518)
-
MP4 Created Using FFmpeg API Can't Be Played in Media Players
11 April 2020, by RandyCroucher
I've been struggling with this issue for days. There are similar issues posted here and around the web, but none of the solutions seem to work for me. They are possibly outdated?



Here is the current iteration of code I'm using to generate the MP4 file.



It generates a simple 2 second .mp4 file that fails to play in any player I've tried. If I run that mp4 file back through the FFmpeg command line, it will generate a perfectly playable movie out of it. So the data is there.



Also, if you modify the output file name in this code from .mp4 to .avi, this code generates a playable avi file too. So whatever it is, it is tied to the H.264 format.



I'm sure I'm missing something simple, but for the life of me, I can't figure out what that is.



Any help would be greatly appreciated!



Here is a link to the VC++ project. MovieMaker.zip



MovieMaker.h



#pragma once

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
}

class FMovieMaker
{
public:
 ~FMovieMaker();

 bool Initialize(const char* FileName, int Width = 1920, int Height = 1080, int FPS = 30, int BitRate = 2000);
 bool RecordFrame(uint8_t* BGRAData);
 bool Finalize();

 bool IsInitialized() const { return bInitialized; }
 int GetWidth() const { return CodecContext ? CodecContext->width : 0; }
 int GetHeight() const { return CodecContext ? CodecContext->height : 0; }

private:
 bool EncodeFrame(bool bFinalize);
 void Log(const char* fmt, ...);

 AVOutputFormat* OutputFormat = nullptr;
 AVFormatContext* FormatContext = nullptr;
 AVCodecContext* CodecContext = nullptr;
 AVFrame* Frame = nullptr;
 SwsContext* ColorConverter = nullptr;
 int64_t RecordedFrames = 0;
 bool bInitialized = false;
};




MovieMaker.cpp



#include "MovieMaker.h"

FMovieMaker::~FMovieMaker()
{
 if (IsInitialized())
 Finalize();
}

bool FMovieMaker::Initialize(const char* FileName, int Width /*= 1920*/, int Height /*= 1080*/, int FPS /*= 30*/, int BitRate /*= 2000*/)
{
 OutputFormat = av_guess_format(nullptr, FileName, nullptr);
 if (!OutputFormat)
 {
 Log("Couldn't guess the output format from the filename: %s", FileName);
 return false;
 }

 AVCodecID CodecID = OutputFormat->video_codec;
 if (CodecID == AV_CODEC_ID_NONE)
 {
 Log("Could not determine a codec to use");
 return false;
 }

 /* allocate the output media context */
 int ErrorCode = avformat_alloc_output_context2(&FormatContext, OutputFormat, nullptr, FileName);
 if (ErrorCode < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Failed to allocate format context: %s", Error);
 return false;
 }
 else if (!FormatContext)
 {
 Log("Failed to get format from filename: %s", FileName);
 return false;
 }

 /* find the video encoder */
 const AVCodec* Codec = avcodec_find_encoder(CodecID);
 if (!Codec)
 {
 Log("Codec '%d' not found", CodecID);
 return false;
 }

 /* create the video stream */
 AVStream* Stream = avformat_new_stream(FormatContext, Codec);
 if (!Stream)
 {
 Log("Failed to allocate stream");
 return false;
 }

 /* create the codec context */
 CodecContext = avcodec_alloc_context3(Codec);
 if (!CodecContext)
 {
 Log("Could not allocate video codec context");
 return false;
 }

 Stream->codecpar->codec_id = OutputFormat->video_codec;
 Stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 Stream->codecpar->width = Width;
 Stream->codecpar->height = Height;
 Stream->codecpar->format = AV_PIX_FMT_YUV420P;
 Stream->codecpar->bit_rate = (int64_t)BitRate * 1000;
 avcodec_parameters_to_context(CodecContext, Stream->codecpar);

 CodecContext->time_base = { 1, FPS };
 CodecContext->max_b_frames = 2;
 CodecContext->gop_size = 12;
 CodecContext->framerate = { FPS, 1 };

 if (Stream->codecpar->codec_id == AV_CODEC_ID_H264)
 av_opt_set(CodecContext, "preset", "medium", 0);
 else if (Stream->codecpar->codec_id == AV_CODEC_ID_H265)
 av_opt_set(CodecContext, "preset", "medium", 0);

 avcodec_parameters_from_context(Stream->codecpar, CodecContext);

 if (FormatContext->oformat->flags & AVFMT_GLOBALHEADER)
 CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if ((ErrorCode = avcodec_open2(CodecContext, Codec, NULL)) < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Failed to open codec: %s", Error);
 return false;
 }

 if (!(OutputFormat->flags & AVFMT_NOFILE))
 {
 if ((ErrorCode = avio_open(&FormatContext->pb, FileName, AVIO_FLAG_WRITE)) < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Failed to open file: %s", Error);
 return false;
 }
 }

 Stream->time_base = CodecContext->time_base;
 if ((ErrorCode = avformat_write_header(FormatContext, NULL)) < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Failed to write header: %s", Error);
 return false;
 }

 CodecContext->time_base = Stream->time_base;

 av_dump_format(FormatContext, 0, FileName, 1);

 // create the frame
 {
 Frame = av_frame_alloc();
 if (!Frame)
 {
 Log("Could not allocate video frame");
 return false;
 }
 Frame->format = CodecContext->pix_fmt;
 Frame->width = CodecContext->width;
 Frame->height = CodecContext->height;

 ErrorCode = av_frame_get_buffer(Frame, 32);
 if (ErrorCode < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Could not allocate the video frame data: %s", Error);
 return false;
 }
 }

 // create a color converter
 {
 ColorConverter = sws_getContext(CodecContext->width, CodecContext->height, AV_PIX_FMT_BGRA,
 CodecContext->width, CodecContext->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
 if (!ColorConverter)
 {
 Log("Could not allocate color converter");
 return false;
 }
 }

 bInitialized = true;
 return true;
}

bool FMovieMaker::RecordFrame(uint8_t* BGRAData)
{
 if (!bInitialized)
 {
 Log("Cannot record frames on an uninitialized Video Recorder");
 return false;
 }

 /*make sure the frame data is writable */
 int ErrorCode = av_frame_make_writable(Frame);
 if (ErrorCode < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Could not make the frame writable: %s", Error);
 return false;
 }

 /* convert the bgra bitmap data into yuv frame data */
 int inLinesize[1] = { 4 * CodecContext->width }; // RGB stride
 sws_scale(ColorConverter, &BGRAData, inLinesize, 0, CodecContext->height, Frame->data, Frame->linesize);

 //Frame->pts = RecordedFrames++;
 Frame->pts = CodecContext->time_base.den / CodecContext->time_base.num * CodecContext->framerate.den / CodecContext->framerate.num * (RecordedFrames++);
 //The following assumes that codecContext->time_base = (AVRational){1, 1};
 //Frame->pts = frameduration * (RecordedFrames++) * Stream->time_base.den / (Stream->time_base.num * fps);
 //Frame->pts += av_rescale_q(1, CodecContext->time_base, Stream->time_base);

 return EncodeFrame(false);
}

bool FMovieMaker::EncodeFrame(bool bFinalize)
{
 /* send the frame to the encoder */
 int ErrorCode = avcodec_send_frame(CodecContext, bFinalize ? nullptr : Frame);
 if (ErrorCode < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Error sending a frame for encoding: %s", Error);
 return false;
 }

 AVPacket Packet;
 av_init_packet(&Packet);
 Packet.data = NULL;
 Packet.size = 0;
 Packet.flags |= AV_PKT_FLAG_KEY;
 Packet.pts = Frame->pts;

 if (avcodec_receive_packet(CodecContext, &Packet) == 0)
 {
 //std::cout << "pkt key: " << (Packet.flags & AV_PKT_FLAG_KEY) << " " << Packet.size << " " << (counter++) << std::endl;
 uint8_t* size = ((uint8_t*)Packet.data);
 //std::cout << "first: " << (int)size[0] << " " << (int)size[1] << " " << (int)size[2] << " " << (int)size[3] << " " << (int)size[4] << " " << (int)size[5] << " " << (int)size[6] << " " << (int)size[7] << std::endl;

 av_interleaved_write_frame(FormatContext, &Packet);
 av_packet_unref(&Packet);
 }

 return true;
}

bool FMovieMaker::Finalize()
{
 if (!bInitialized)
 {
 Log("Cannot finalize uninitialized Video Recorder");
 return false;
 }

 //DELAYED FRAMES
 AVPacket Packet;
 av_init_packet(&Packet);
 Packet.data = NULL;
 Packet.size = 0;

 for (;;)
 {
 avcodec_send_frame(CodecContext, NULL);
 if (avcodec_receive_packet(CodecContext, &Packet) == 0)
 {
 av_interleaved_write_frame(FormatContext, &Packet);
 av_packet_unref(&Packet);
 }
 else
 break;
 }

 av_write_trailer(FormatContext);
 if (!(OutputFormat->flags & AVFMT_NOFILE))
 {
 int ErrorCode = avio_close(FormatContext->pb);
 if (ErrorCode < 0)
 {
 char Error[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(Error, AV_ERROR_MAX_STRING_SIZE, ErrorCode);
 Log("Failed to close file: %s", Error);
 }
 }

 if (Frame)
 {
 av_frame_free(&Frame);
 Frame = nullptr;
 }

 if (CodecContext)
 {
 avcodec_free_context(&CodecContext);
 CodecContext = nullptr;
 }

 if (FormatContext)
 {
 avformat_free_context(FormatContext);
 FormatContext = nullptr;
 }

 if (ColorConverter)
 {
 sws_freeContext(ColorConverter);
 ColorConverter = nullptr;
 }

 bInitialized = false;
 return true;
}

void FMovieMaker::Log(const char* fmt, ...)
{
 va_list args;
 fprintf(stderr, "LOG: ");
 va_start(args, fmt);
 vfprintf(stderr, fmt, args);
 va_end(args);
 fprintf(stderr, "\n");
}




Main.cpp



#include "MovieMaker.h"

uint8_t FtoB(float x)
{
 if (x <= 0.0f)
 return 0;
 if (x >= 1.0f)
 return 255;
 else
 return (uint8_t)(x * 255.0f);
}

void SetPixelColor(float X, float Y, float Width, float Height, float t, uint8_t* BGRA)
{
 t += 12.0f; // more interesting colors at this time

 float P[2] = { 0.1f * X - 25.0f, 0.1f * Y - 25.0f };
 float V = sqrtf(P[0] * P[0] + P[1] * P[1]);
 BGRA[0] = FtoB(sinf(V + t / 0.78f));
 BGRA[1] = FtoB(sinf(V + t / 10.0f));
 BGRA[2] = FtoB(sinf(V + t / 36e2f));
 BGRA[3] = 255;
}

int main()
{
 FMovieMaker MovieMaker;

 const char* FileName = "C:\\ffmpeg\\MyMovieMakerMovie.mp4";
 int Width = 640;
 int Height = 480;
 int FPS = 30;
 int BitRateKBS = 2000;

 if (MovieMaker.Initialize(FileName, Width, Height, FPS, BitRateKBS))
 {
 int Size = Width * 4 * Height;
 uint8_t* BGRAData = new uint8_t[Size];
 memset(BGRAData, 255, Size);

 for (float Frame = 0; Frame < 60; Frame++)
 {
 // fill the image data with something interesting
 for (float Y = 0; Y < Height; Y++)
 {
 for (float X = 0; X < Width; X++)
 {
 SetPixelColor(X, Y, (float)Width, (float)Height, Frame / (float)FPS, &BGRAData[(int)(Y * Width + X) * 4]);
 }
 }

 if (!MovieMaker.RecordFrame(BGRAData))
 break;
 }

 delete[] BGRAData;

 MovieMaker.Finalize();
 }
}




If I have the lines that add the AV_CODEC_FLAG_GLOBAL_HEADER flag as shown above, I get all sorts of issues in the output from ffprobe MyMovieMakerMovie.mp4.


C:\ffmpeg>ffprobe MyMovieMakerMovie.mp4
ffprobe version 4.2.2 Copyright (c) 2007-2019 the FFmpeg developers
 built with gcc 9.2.1 (GCC) 20200122
 configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[h264 @ 000001d44b795b00] non-existing PPS 0 referenced
[h264 @ 000001d44b795b00] decode_slice_header error
[h264 @ 000001d44b795b00] no frame!
...
[h264 @ 000001d44b795b00] non-existing PPS 0 referenced
[h264 @ 000001d44b795b00] decode_slice_header error
[h264 @ 000001d44b795b00] no frame!
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d44b783880] decoding for stream 0 failed
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d44b783880] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 640x480, 20528 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'MyMovieMakerMovie.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:01.97, start: 0.000000, bitrate: 20529 kb/s
 Stream #0:0(und): Video: h264 (avc1 / 0x31637661), none, 640x480, 20528 kb/s, 30.51 fps, 30 tbr, 15360 tbn, 30720 tbc (default)
 Metadata:
 handler_name : VideoHandler




Without adding the AV_CODEC_FLAG_GLOBAL_HEADER flag, I get clean output from ffprobe, but the video still doesn't play. Notice it thinks the frame rate is 30.51 fps; I'm not sure why.


C:\ffmpeg>ffprobe MyMovieMakerMovie.mp4
ffprobe version 4.2.2 Copyright (c) 2007-2019 the FFmpeg developers
 built with gcc 9.2.1 (GCC) 20200122
 configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'MyMovieMakerMovie.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:01.97, start: 0.000000, bitrate: 20530 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480, 20528 kb/s, 30.51 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : VideoHandler



-
ffmpeg: How to speed up (and keep only) a specified portion of an input video
17 April 2020, by shrimpwidget
In my input video there is a 48-second range that I wish to speed up. I wish to save only that sped-up portion to a new video.



Solution:



ffmpeg -y -ss 00:00:03 -t 00:00:48 -i input.mp4 -an -crf 20 -pix_fmt yuv420p -vf "scale=1080:-1, setpts=PTS/10.0" "output.mp4"



-
Raspberry Pi Camera Module - Stream to LAN
20 août 2015, par user3096434have a little problem with the setup of my RasPi camera infrastructure. Basically I have a RPi 2 which shall act as a MontionEye server from now on and 2 Pi B+ with camera modules.
Previously, when I had only one camera in my network, I used the following command to stream the output from RPi B+ camera module to Youtube in full HD. So far, this command works flawless :
raspivid -n -vf -hf -t 0 -w 1920 -h 1080 -fps 30 -b 3750000 -g 50 -o - | ffmpeg -ar 8000 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 64k -g 50 -strict experimental -f flv $RTMP_URL/$STREAM_KEY
Now I have a second RPi with a camera module and figured it might be time for a change towards MotionEye, as I can then view all cameras in my network within the same software. I have MotionEye installed on my RPi 2 and the software is running correctly.
I have a little problem when it comes to accessing the data stream from the RPi B+ camera on my local network.
Basically I cannot figure out how to change the ffmpeg portion of the above command so that it streams the data to localhost (or to the RPi 2 IP where MotionEye runs; which one should I use?) instead of YouTube or any other video host.
I wonder if changing the following part is the correct approach:
Instead of using variables to define the YouTube URL and key
-f flv $RTMP_URL/$STREAM_KEY
And change this to
-f flv 10.1.1.11:8080
Will I then be able to add this RPi B+ video stream to my RPi 2 MotionEye server, using MotionEye's 'add network camera' function?
From my understanding I should be able to enter the following details into the MotionEye 'add network camera' wizard:
Camera type: network camera
RTSP-URL: 10.1.1.11:8080
User: Pi
Pass: [my pwd]
Camera: [my ffmpeg stream shall show here]
Thanks in advance!
Uhm, and then... how do I forward the video stream from a given camera connected to MotionEye? Like from MotionEye to YouTube (or similar), without re-encoding the stream?
The command shown above streams directly to YouTube. But I want the video streamed to the local network/MotionEye server, and from there I can decide which camera's stream, and when, to send to YouTube.
How would a RPi professional realize this ?
The command above explained: it takes full HD video at 30 fps from the Pi camera module and hardware-encodes it on the GPU at 3.75 Mbit/s. Then I stream-copy the video (no re-encoding) and add some audio so that the stream complies with YouTube rules (yes, no live stream without audio). Audio is taken from the virtual SB16 /dev/zero at a low sampling rate, then encoded to 32k AAC and sent to YouTube. Works fine xD.
It's just that with 3 or more of these RPi cams the YouTube stream approach isn't feasible anymore, as my DSL upstream is limited (10 Mbit/s). Thus I need the MotionEye server and some magic, so I can watch e.g. all 3 cameras' video streams, and the MotionEye server can then select and stream-copy the video from the Pi cam I choose and send it to YouTube, as the original command did.
Any help, tips, links to similar projects highly appreciated.
Again, many thanks in advance, and even more thanks just cause you read until here.
—mx