
Other articles (18)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5), with MP4 also used for Flash playback.
Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also used for Flash playback.
Where possible, text is analyzed to retrieve the data needed for detection by search engines, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
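As a rough illustration only (these are not MediaSPIP's actual encoding profiles, and the file names are placeholders), conversions to the formats listed above can be done with ffmpeg commands along these lines:

ffmpeg -i source.mov -c:v libx264 -c:a aac web.mp4
ffmpeg -i source.mov -c:v libtheora -c:a libvorbis web.ogv
ffmpeg -i source.mov -c:v libvpx -c:a libvorbis web.webm
ffmpeg -i source.mov -vn -c:a libmp3lame audio.mp3
ffmpeg -i source.mov -vn -c:a libvorbis audio.ogg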
-
Requesting the creation of a channel
12 March 2010. Depending on how the platform is configured, a user may have two different ways of requesting the creation of a channel: the first at the time of registration, the second, after registering, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields which first of all give the administrators information about (...)
-
Installation in farm mode
4 February 2011. Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
To begin with, you must have installed the same files as for the installation (...)
On other sites (4489)
-
MP4 libx264 converted to libx265 results in skipped frames
5 May 2023, by RGC. FFmpeg 5.1.2 full_build on Windows 10, with storage on Windows Server 2016.
We have about 1.5 TB of video data kept as evidence to document compliance with manufacturing standards.
In a PowerShell batch script I move a libx264 MP4 to a USB3 backup location and then convert it back to the server location it came from:
ffmpeg -i backup\inputFile.mp4 -c:v libx265 \server\outputFile.mp4 -n
in order to save much-needed space on our server.
It did create a significantly smaller file, but when reviewing the results the duration matches the input while frames appear to have been dropped.


I am stumped as to which additional parameters to use, as ffmpeg has so many.
Could the fact that the conversion goes over the network to the server have an influence, or the input coming from a USB3 backup drive?


Thanks for your feedback,
RGC


$noOfFiles++ # count progress 
$serverMp4 = "$serverLine" # Server MP4 input address

# Before Conversion grab MP4 from server and move to backup drive
# First grab input file path name and add as destination path, if not exist
$backupMp4 = $serverMp4.Substring(2)
$backupMp4 = "E:\Videos$backupMp4"
$serverLastFolder = (Split-Path $serverLine -Parent)
$serverLastFolder = (Split-Path $serverLastFolder -NoQualifier)
$destinationPath = Join-Path $destinationFolder $serverLastFolder
if (!(Test-Path $destinationPath)) {
    New-Item -ItemType Directory -Path $destinationPath
}

Move-Item -Path $serverMp4 -Destination $destinationPath

$myTime = get-date -format "yyyy-MM-dd HH:mm:ss" # log Move
$msgOut = "$myTime Moved $serverMp4 to E:\Videos\"
Add-Content -Path $log -Value $msgOut # store msg in log file
Write-Host $msgOut

# Convert file back to server
ffmpeg -i $backupMp4 -c:v libx265 $serverMp4 -n

$myTime = get-date -format "yyyy-MM-dd HH:mm:ss"
$msgOut = "$myTime Converted $backupMp4 `n $arrow $serverMp4"
Write-Host $msgOut # Console msg
Add-Content -Path $log -Value $msgOut # store msg in log file

Write-Host "Completed Move of: $noOfFiles `n $sepLine" -ForegroundColor DarkYellow

# Finished Move of MP4 file to Backup Drive and Converted back to original location. 
# Grab next line 



}
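A quick way to confirm whether frames were really dropped (this is not part of the original script; the paths below are placeholders reused from the question) is to compare the decoded frame counts of the source and the re-encoded file with ffprobe:

ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 backup\inputFile.mp4
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 \server\outputFile.mp4

If the counts differ, it may be worth asking ffmpeg to keep the input's frame timing, for example with -fps_mode passthrough (ffmpeg 5.1 and later) or -vsync 0 on older builds; this is a suggestion to test, not a confirmed fix.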


-
ffmpeg - stretched pixel issue
5 June 2023, by Adunato
Context


I'm converting a PNG sequence into a video using FFMPEG. The images are semi-transparent portraits where the background has been removed digitally.


Issue


The edge pixels of the subject are stretched all the way to the frame border, creating a fully opaque video.


Cause Analysis


The process worked fine in the previous workflow using rembg from the command line; however, since I started calling rembg from a Python script with alpha_matting to obtain higher-quality results, the resulting video has this issue.


The issue is present in both webm format (target) and mp4 (used for testing).


Command Used


The command used for WebM is:


ffmpeg -thread_queue_size 64 -framerate 30 -i <png sequence location> -c:v libvpx -b:v 0 -crf 18 -pix_fmt yuva420p -auto-alt-ref 0 -c:a libvorbis <png output>


Troubleshooting Steps Taken


- PNG visual inspection: the PNG images have a fully transparent background as desired.
- PNG alpha measurement: I have created a couple of Python scripts to look at alpha levels in pixels and confirmed that there is no subtle alpha level in the background pixels.
- Exported MP4 with AE: using the native AE renderer, the resulting MP4/H.265 has a black background, so it does not show the stretched-pixel issue.








Image of the Issue




Sample PNG Image from sequence



Code Context


The rembg call via the API with alpha_matting seems to generate premultiplied alpha, which leaves non-black RGB values in pixels whose alpha is 0.


remove(input_data, alpha_matting=True, alpha_matting_foreground_threshold=250,
       alpha_matting_background_threshold=250, alpha_matting_erode_size=12)



A test using a rough RGB reset of 0-alpha pixels confirms that the images are being rendered with their RGB values while the alpha channel is ignored.


def reset_alpha_pixels(img):
    # Process each pixel: zero out RGB where alpha == 0, keep all other pixels unchanged
    data = list(img.getdata())
    new_data = []
    for item in data:
        if item[3] == 0:
            new_data.append((0, 0, 0, 0))
        else:
            new_data.append((item[0], item[1], item[2], item[3]))

    # Update the image data
    img.putdata(new_data)

    return img
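If the stray RGB values under zero-alpha pixels are indeed what the encoder picks up, a similar clean-up can be attempted directly in ffmpeg instead of Python. This is an untested sketch (the input pattern and output name are placeholders) using the geq filter to zero RGB wherever alpha is 0 before converting back to yuva420p:

ffmpeg -framerate 30 -i frame_%04d.png -vf "format=rgba,geq=r='if(alpha(X,Y),r(X,Y),0)':g='if(alpha(X,Y),g(X,Y),0)':b='if(alpha(X,Y),b(X,Y),0)':a='alpha(X,Y)',format=yuva420p" -c:v libvpx -b:v 0 -crf 18 -auto-alt-ref 0 output.webm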



Updates


- Added Python context to make the question more relevant within SO scope.




-
QML multimedia cannot play m3u8 that is generated with ffmpeg
5 June 2023, by LeXela-ED. With the following command, I am trying to stream an IP camera over the web:


ffmpeg -re -i "rtsp://<user>:<password>@<ip>:<port>" -c:v copy -c:a copy -hls_segment_type mpegts -hls_list_size 5 -hls_wrap 5 -hls_time 2 -hls_flags split_by_time -segment_time_delta 1.00 -reset_timestamps 1 -hls_allow_cache 0 -movflags faststart live.m3u8


This command produces live.m3u8 and five .ts files: live0.ts, live1.ts, live2.ts, live3.ts, and live4.ts. The conversion runs smoothly. At some random point, the content of live.m3u8 is as follows:


#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:92
#EXTINF:2.000000,
live2.ts
#EXTINF:2.000000,
live3.ts
#EXTINF:2.035800,
live4.ts
#EXTINF:2.000000,
live0.ts
#EXTINF:1.963278,
live1.ts



However, when I tried to play live.m3u8 with the following QML code, it only played the very first segments of it:


import QtQuick 2.15
import QtQuick.Window 2.15
import QtQuick.Controls 2.15
import QtQuick.Layouts 1.15
import QtMultimedia 5.15

Window
{
    width: 640
    height: 480
    visible: true

    Item
    {
        anchors.fill: parent

        MediaPlayer
        {
            id: mediaplayer
            source: "path-to-live/live.m3u8"
            videoOutput: videoOutput
        }

        VideoOutput
        {
            id: videoOutput
            anchors.fill: parent
        }

        MouseArea
        {
            anchors.fill: parent
            onPressed: mediaplayer.play();
        }
    }
}




The interesting thing is: I manually deleted live.m3u8 and, obviously, ffmpeg generated another one afterwards! Then I clicked on the QML program window and, surprisingly, it played the stream nonstop, as was expected on the first run!


What is the problem here? What am I missing? Should I change the ffmpeg command or do something with my QML code? Any idea or help?


Thank you in advance.
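Not a confirmed fix, but one avenue to test: -hls_wrap is deprecated, and reusing the same five segment file names can trip up players that cache the playlist or the segments. Below is a sketch of an alternative command (placeholders kept from the question) that keeps a five-segment rolling window while letting segment numbers keep increasing, deleting old segments as it goes:

ffmpeg -re -i "rtsp://<user>:<password>@<ip>:<port>" -c:v copy -c:a copy -hls_segment_type mpegts -hls_time 2 -hls_list_size 5 -hls_flags split_by_time+delete_segments -hls_allow_cache 0 live.m3u8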