
Advanced search
Media (91)
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
The pirate bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (97)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins Champs Extras 2 and Interface pour champs extras.
-
Changing the publication date
21 June 2013
How do you change the publication date of a media item?
You first need to add a "Date de publication" field to the appropriate form mask:
Administrer > Configuration des masques de formulaires > select "Un média"
In the "Champs à ajouter" section, tick "Date de publication"
Click Enregistrer (Save) at the bottom of the page
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critique of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register to the project users’ mailing (...)
On other sites (4997)
-
Broken output from libavcodec/swscale, depending on resolution
3 June 2014, by dtumaykin
I am writing video conferencing software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode, using a DirectShow graph.
To avoid an unnecessary conversion to RGB and back (see link), I convert the IYUV video to YUY2 with libswscale before passing it to VMR9.
I noticed that at a resolution of 848x480 the output video is broken, so I investigated further and found that for some resolutions the video is always broken. To rule out libswscale, I added support for converting padded IYUV to plain IYUV manually, and that worked at all resolutions.
Still, I wanted to avoid the slow IYUV path, so I implemented support for NV12 (with libswscale) and YV12 (manually, essentially the same as IYUV). After testing on two different computers, I got strange results.
resolution   YUY2     NV12     IYUV   YV12
PC 1 (my laptop)
640x360      ok       broken   ok     broken
848x480      broken   broken   ok     broken
960x540      broken   broken   ok     broken
1024x576     ok       ok       ok     ok
1280x720     ok       ok       ok     broken
1920x1080    ok       broken   ok     broken
PC 2
640x360      ok       ok       ok     ok
848x480      ok       broken   ok     broken
960x540      ok       ok       ok     ok
1024x576     ok       ok       ok     ok
1280x720     ok       broken   ok     ok
1920x1080    ok       ok       ok     ok
To rule out a VMR9 fault, I substituted it with EVR, with the same results.
I know that padding is needed for memory alignment and that the amount of padding depends on the CPU used (libavcodec doc), which may explain the difference between the two computers (the first has an Intel i7-3820QM, the second an Intel Core 2 Quad Q6600). I suspect it has something to do with padding, because the images are corrupted in a particular way: my blue t-shirt ends up in the lower part of the image and my face in the upper one.
Below is the conversion code (a stride-aware plane-copy sketch follows after the questions). The NV12 and YUY2 conversions are performed with libswscale, while IYUV and YV12 are done manually.
int pixels = _outputFrame->width * _outputFrame->height;
if (_outputFormat == "YUY2") {
    int stride = _outputFrame->width * 2;
    sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, &out, &stride);
}
else if (_outputFormat == "NV12") {
    int stride[] = { _outputFrame->width, _outputFrame->width };
    uint8_t * dst[] = { out, out + pixels };
    sws_scale(_convertCtx, _outputFrame->data, _outputFrame->linesize, 0, _outputFrame->height, dst, stride);
}
else if (_outputFormat == "IYUV") { // strip ffmpeg padding
    for (int i = 0; i < _outputFrame->height; i++) // copy Y
        memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
    for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
        memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
    for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
        memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
}
else if (_outputFormat == "YV12") { // like IYUV, but with the U and V planes swapped
    for (int i = 0; i < _outputFrame->height; i++) // copy Y
        memcpy(out + i * _outputFrame->width, _outputFrame->data[0] + i * _outputFrame->linesize[0], _outputFrame->width);
    for (int i = 0; i < _outputFrame->height / 2; i++) // copy V
        memcpy(out + pixels + i * _outputFrame->width / 2, _outputFrame->data[2] + i * _outputFrame->linesize[2], _outputFrame->width / 2);
    for (int i = 0; i < _outputFrame->height / 2; i++) // copy U
        memcpy(out + pixels + pixels / 4 + i * _outputFrame->width / 2, _outputFrame->data[1] + i * _outputFrame->linesize[1], _outputFrame->width / 2);
}

out is the output buffer, _outputFrame is the AVFrame produced by libavcodec, and _convertCtx is initialized as follows:

if (_outputFormat == "YUY2")
    _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
        _width, _height, AV_PIX_FMT_YUYV422, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);
else if (_outputFormat == "NV12")
    _convertCtx = sws_getContext(_width, _height, AV_PIX_FMT_YUV420P,
        _width, _height, AV_PIX_FMT_NV12, SWS_FAST_BILINEAR, nullptr, nullptr, nullptr);

Questions:
- Are the manual conversions correct?
- Are my assumptions correct?
- If the previous two answers are positive, where is the problem? And especially...
- Why does it show up only with some resolutions and not others?
- What additional information can I provide?
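For reference, here is a minimal sketch of the stride-aware plane copy discussed above, written against libavutil's av_image_copy_plane rather than manual memcpy loops. It assumes the same tightly packed I420 destination buffer as the code in the question and even frame dimensions; it illustrates the padding handling and is not a verified fix for the broken resolutions.

// Sketch: copy a padded YUV420P AVFrame into a tightly packed I420 buffer.
// av_image_copy_plane() copies row by row, honouring the source linesize
// (which may be larger than the visible width because of alignment padding).
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>

static void copy_yuv420p_packed(const AVFrame *frame, uint8_t *out)
{
    const int w = frame->width;
    const int h = frame->height;

    // Y plane: destination stride is exactly the width
    av_image_copy_plane(out, w,
                        frame->data[0], frame->linesize[0], w, h);
    // U plane: chroma is subsampled by 2 in both directions
    av_image_copy_plane(out + w * h, w / 2,
                        frame->data[1], frame->linesize[1], w / 2, h / 2);
    // V plane
    av_image_copy_plane(out + w * h + (w / 2) * (h / 2), w / 2,
                        frame->data[2], frame->linesize[2], w / 2, h / 2);
}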
-
Start of video is not labeled as "0" in QuickTime Video from GoPro
26 March 2020, by John Terragnoli
I'm trying to combine four GoPro videos into a single video and then rotate it 90 degrees. However, the time scale shown at the bottom of the videos is wrong: the videos are 17 minutes and 42 seconds long, but the start is labeled 5:15:20:32 and the end 5:33:01:32. It just looks really weird and I'd like to fix it. After I use ffmpeg to rotate and concatenate the videos, the problem persists. Could it be fixed with ExifTool?
ffmpeg -safe 0 -f concat -i list.txt -vcodec copy -acodec copy merged_videos.MP4
ffmpeg -i input.mov -vf "transpose=1" output.mov
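If the running clock comes from the embedded timecode (tmcd) track that QuickTime Player displays, one thing that may be worth trying, sketched here as a guess rather than a verified fix, is to remux while keeping only the video and audio streams and asking the mov/mp4 muxer not to write a timecode track (this assumes the -map stream selection and the muxer's write_tmcd option; merged_no_timecode.MP4 is just a placeholder output name):
ffmpeg -i merged_videos.MP4 -map 0:v:0 -map 0:a:0 -c copy -write_tmcd 0 merged_no_timecode.MP4
Note that selecting only the video and audio streams also drops the GoPro metadata tracks visible in the exiftool output below.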
Here is the exiftool information on one of the videos:
File Name : GOPR3023.MP4
Directory : .
File Size : 3.7 GB
File Modification Date/Time : 2018:04:12 14:56:16-05:00
File Access Date/Time : 2020:03:25 12:17:18-05:00
File Inode Change Date/Time : 2020:03:25 17:57:04-05:00
File Permissions : rwxrwxrwx
File Type : MP4
File Type Extension : mp4
MIME Type : video/mp4
Major Brand : MP4 v1 [ISO 14496-1:ch13]
Minor Version : 2013.10.18
Compatible Brands : mp41
Movie Data Size : 4001979951
Movie Data Offset : 28
Movie Header Version : 0
Create Date : 2018:04:12 14:38:32
Modify Date : 2018:04:12 14:38:32
Time Scale : 60000
Duration : 0:17:42
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 6
Firmware Version : HD5.03.02.51.00
Lens Serial Number : NAH6092300301117
Camera Serial Number Hash : 34676f2cdf49b86a1514817a93377bf7
Track Header Version : 0
Track Create Date : 2018:04:12 14:38:32
Track Modify Date : 2018:04:12 14:38:32
Track ID : 1
Track Duration : 0:17:42
Track Layer : 0
Track Volume : 0.00%
Image Width : 1920
Image Height : 1080
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1920
Source Image Height : 1080
X Resolution : 72
Y Resolution : 72
Compressor Name : GoPro AVC encoder
Bit Depth : 24
Color Representation : nclx 1 1 1
Video Frame Rate : 59.94
Time Code : 3
Balance : 0
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
Audio Sample Rate : 48000
Text Font : Unknown (21)
Text Face : Plain
Text Size : 10
Text Color : 0 0 0
Background Color : 65535 65535 65535
Font Name : Helvetica
Other Format : tmcd
Warning : [minor] The ExtractEmbedded option may find more tags in the movie data
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 2018:04:12 14:38:32
Media Modify Date : 2018:04:12 14:38:32
Media Time Scale : 60000
Media Duration : 0:17:42
Handler Class : Media Handler
Handler Type : NRT Metadata
Handler Description : GoPro SOS
Gen Media Version : 0
Gen Flags : 0 0 0
Gen Graphics Mode : srcCopy
Gen Op Color : 0 0 0
Gen Balance : 0
Meta Format : fdsc
Image Size : 1920x1080
Megapixels : 2.1
Avg Bitrate : 30.1 Mbps
Rotation : 0
Part 2
There is a pretty obvious "stutter" at the 17:42 mark where two of the clips are joined. I've tried using ffmpeg and iMovie, but both give the same result. The GoPro split the event into multiple clips on its own, so it seems odd that any information would be missing. Is there any way to get rid of this stutter? Thanks!
-
ffmpeg trouble with NetMaui and Android [closed]
15 June 2024, by Billy Vanegas
I find myself in the following predicament: working with Visual Studio 2022, ffmpeg performs flawlessly on Windows, but I'm encountering difficulties when attempting to replicate this functionality on Android. Despite exploring NuGet packages like FFMpegCore, which appear to be ffmpeg wrappers but do not ship ffmpeg itself, I'm still struggling to find a clear path forward. I've even tried integrating ffmpeg-kit for Android as per the instructions, only to face repeated failures and a sense of confusion. I must admit my puzzlement: why isn't there a straightforward way to add ffmpeg to a .NET MAUI project that works consistently across both iOS and Android?


I want to convert MP3 files to WAV format using FFmpeg on the Android platform within a .NET MAUI project.


I am using the FFMpegCore library and have downloaded the FFmpeg binaries from the official FFmpeg website. However, I ran into issues when pointing it at the binary folder on the emulator: when I try to do so, I get "permission denied" for the working folder
/data/user/0/com.companyname.projectname/files/ffmpeg
on this part of the code:

await FFMpegArguments
    .FromFileInput(mp3Path)
    .OutputToFile(wavPath, true, options => options
        .WithAudioCodec("pcm_s16le")
        .WithAudioSamplingRate(44100)
        .WithAudioBitrate(320000)
    )
    .ProcessAsynchronously();



This is the AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<manifest>
 <application></application>
 
 
 
 
</manifest>



private async Task ConvertMp3ToWav(string mp3Path, string wavPath)
{
    try
    {
        var directory = Path.GetDirectoryName(wavPath);
        if (!Directory.Exists(directory))
            Directory.CreateDirectory(directory!);

        if (!File.Exists(wavPath))
            Console.WriteLine($"File not found {wavPath}, creating empty file.");
        using var fs = new FileStream(wavPath, FileMode.CreateNew);

        if (!File.Exists(mp3Path))
            Console.WriteLine($"File not found {mp3Path}");

        string? ffmpegBinaryPath = await ExtractFFmpegBinaries(Platform.AppContext);
        FFMpegCore.GlobalFFOptions.Configure(new FFOptions { BinaryFolder = Path.GetDirectoryName(ffmpegBinaryPath!)! });

        await FFMpegArguments
            .FromFileInput(mp3Path)
            .OutputToFile(wavPath, true, options => options
                .WithAudioCodec("pcm_s16le")
                .WithAudioSamplingRate(44100)
                .WithAudioBitrate(320000)
            )
            .ProcessAsynchronously();
    }
    catch (Exception ex)
    {
        Console.WriteLine($"An error occurred during the conversion process: {ex.Message}");
        throw;
    }
}



private async Task<string> ExtractFFmpegBinaries(Context context)
{
    var architectureFolder = "x86"; // "armeabi-v7a";
    var ffmpegBinaryName = "ffmpeg";
    var ffmpegBinaryPath = Path.Combine(context.FilesDir!.AbsolutePath, ffmpegBinaryName);
    var tempFFMpegFileName = Path.Combine(FileSystem.AppDataDirectory, ffmpegBinaryName);

    if (!File.Exists(ffmpegBinaryPath))
    {
        try
        {
            var assetPath = $"Libs/{architectureFolder}/{ffmpegBinaryName}";
            using var assetStream = context.Assets!.Open(assetPath);

            await using var tempFFMpegFile = File.OpenWrite(tempFFMpegFileName);
            await assetStream.CopyToAsync(tempFFMpegFile);

            //new MainActivity().RequestStoragePermission();
            Java.Lang.Runtime.GetRuntime()!.Exec($"chmod 755 {tempFFMpegFileName}");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred while extracting FFmpeg binaries: {ex.Message}");
            throw;
        }
    }
    else
    {
        Console.WriteLine($"FFmpeg binaries already extracted to: {ffmpegBinaryPath}");
    }

    return tempFFMpegFileName!;
}
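As a side note, and not a verified fix: the chmod above is launched via Runtime.Exec and never awaited, so it can race with the write, and on Android 10 (API 29) and newer the platform generally refuses to execute binaries from the app's writable data directory regardless of their mode. Here is a small sketch of a more self-contained way to set the executable bit, reusing the variables from the method above and assuming the standard Java.IO.File binding:

// Sketch only: finish writing the extracted binary, then set the executable bit
// in-process instead of spawning an external chmod. Note that on API 29+ exec()
// from the files directory is usually blocked by the platform anyway.
using (var assetStream = context.Assets!.Open(assetPath))
using (var tempFFMpegFile = File.Create(tempFFMpegFileName))
{
    await assetStream.CopyToAsync(tempFFMpegFile);
    await tempFFMpegFile.FlushAsync(); // make sure every byte is on disk first
}

var binaryFile = new Java.IO.File(tempFFMpegFileName);
if (!binaryFile.SetExecutable(true))
    Console.WriteLine($"Could not mark {tempFFMpegFileName} as executable.");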