
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l'œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (49)
-
Emballe Médias: putting documents online, simply
29 October 2010
The emballe médias plugin was developed primarily for the mediaSPIP distribution, but it is also used in other related projects, such as géodiversité.
Required and compatible plugins
To work, this plugin requires other plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (6365)
-
ffmpeg: how to apply animation to multiple images that will be merged into a video template in Android
1 March 2023, by Pavan Ghanate
I am trying to merge a number of images selected from the gallery into a video template, in order to make a video status or short video in an Android app. I am able to merge the selected images into the video using the command below; now I want to add animation.


ArrayList<String> cmd2 = new ArrayList<>();
cmd2.add("-y");
cmd2.add("-i");
if (video_temp_path != null) {
    cmd2.add(video_temp_path);
} else {
    cmd2.add(Environment.getExternalStorageDirectory().getPath()
            + "/Download/happy.mp4");
}

// Add every selected image as an additional input
for (int no = 0; no < paths.length; no++) {
    cmd2.add("-i");
    cmd2.add(paths[no]);
}

// Overlay each image on the template during its own time window
cmd2.add("-filter_complex");
cmd2.add("[0][1]overlay=x=100:y=200:enable='between(t,3,8)'[v1];" +
        "[v1][2]overlay=x=100:y=200:enable='between(t,10,15)'[v2];" +
        "[v2][3]overlay=x=100:y=200:enable='gt(t,17)'[v3]");
cmd2.add("-map");
cmd2.add("[v3]");
cmd2.add("-map");
cmd2.add("0:a");
cmd2.add(Environment.getExternalStorageDirectory().getPath()
        + "/Download/output.mp4");


Now I want to add a fade-in/fade-out animation to the images, so I am using the following command generated by ChatGPT, but it gives me an error.


ArrayList<String> cmd2 = new ArrayList<>();
cmd2.add("-y");
cmd2.add("-i");

if (video_temp_path != null) {
    cmd2.add(video_temp_path);
} else {
    cmd2.add(Environment.getExternalStorageDirectory().getPath() +
            "/Download/happy.mp4");
}

for (int no = 0; no < paths.length; no++) {
    cmd2.add("-loop");
    cmd2.add("1"); // loop the image

    cmd2.add("-t");
    cmd2.add("5"); // duration of the image

    cmd2.add("-i");
    cmd2.add(paths[no]);

    cmd2.add("-filter_complex");
    cmd2.add("[1:v]fade=in:st=0:d=1[tin];" +
            "[1:v]fade=out:st=4:d=1[tout];" +
            "[0:v][tin]overlay=x=100:y=200" +
            "[v1];" +
            "[v1][tout]overlay=x=100:y=200:enable='between(t,10,15)'[v2];" +
            "[v2][2:v]overlay=x=100:y=200:enable='gt(t,17)'[v3]");

    cmd2.add("-map");
    cmd2.add("[v3]");
    cmd2.add("-map");
    cmd2.add("0:a");
}

cmd2.add(Environment.getExternalStorageDirectory().getPath() +
        "/Download/output.mp4");


The error is:


Option map (set input stream mapping) cannot be applied to input url /storage/emulated/0/Pictures/temp/1677570327312.jpg -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.



2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Error parsing options for input file /storage/emulated/0/Pictures/temp/1677570327312.jpg.
2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Error opening input files :
2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Invalid argument
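The error message itself points at the problem: in an ffmpeg command line, everything between one input file and the next -i is parsed as input options, so the -filter_complex and -map options added inside the loop get attached to the following image input, which is exactly what "you are trying to apply an input option to an output file or vice versa" complains about. A way out is to keep only the per-input options (-loop, -t) inside the loop and emit a single -filter_complex, the -map options, and the output path once, after all inputs. Below is a minimal sketch along those lines, assuming the same video_temp_path and paths variables as the question; the timing schedule (one image every 7 seconds) and the alpha fades are illustrative choices, not values from the original command.

// Sketch only: per-input options stay with their -i; output options come last.
ArrayList<String> cmd2 = new ArrayList<>();
cmd2.add("-y");
cmd2.add("-i");
cmd2.add(video_temp_path != null
        ? video_temp_path
        : Environment.getExternalStorageDirectory().getPath() + "/Download/happy.mp4");

for (int no = 0; no < paths.length; no++) {
    cmd2.add("-loop");
    cmd2.add("1");
    cmd2.add("-t");
    cmd2.add("5");
    cmd2.add("-i");
    cmd2.add(paths[no]);
}

// One filter graph for all images: fade each image in and out on its alpha
// plane, shift it to its own time window, then overlay it on the template.
StringBuilder fc = new StringBuilder();
String last = "[0:v]";
for (int no = 0; no < paths.length; no++) {
    int start = 3 + no * 7; // hypothetical schedule: image no appears at 'start' for 5 s
    fc.append(String.format(
            "[%d:v]format=yuva420p," +
            "fade=t=in:st=0:d=1:alpha=1,fade=t=out:st=4:d=1:alpha=1," +
            "setpts=PTS-STARTPTS+%d/TB[im%d];", no + 1, start, no));
    fc.append(String.format(
            "%s[im%d]overlay=x=100:y=200:eof_action=pass:enable='between(t,%d,%d)'[v%d];",
            last, no, start, start + 5, no));
    last = "[v" + no + "]";
}

cmd2.add("-filter_complex");
cmd2.add(fc.substring(0, fc.length() - 1)); // drop the trailing ';'

// Output options and the output file come after every input.
cmd2.add("-map");
cmd2.add(last);
cmd2.add("-map");
cmd2.add("0:a");
cmd2.add(Environment.getExternalStorageDirectory().getPath() + "/Download/output.mp4");

Fading on the alpha plane lets the template show through during the fade instead of a black rectangle, and the setpts shift means each image carries its own time window, with eof_action=pass handing the frame back to the template once the image ends.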


-
FFMPEG MP3 file size much larger than expected on Windows 10
8 April 2018, by The Gora
I've been using FFMPEG on Windows to:
- Convert iTunes M4A files to MP3s (with a bit rate of 128k); and
- Create 30 sec sample MP3s of the above MP3s (same bit rate).
When I run FFMPEG on a Windows 7 64-bit machine, the size of the MP3s (both for 1. and 2.) is in line with the rough calculation of:
(audio length in seconds) × (bit rate)
For example, a 4 minute audio yields an approx. 3.7MB MP3 file; a 30 second sample MP3 is approx. 470KB.
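(Sanity check of that rule of thumb: 240 s × 128 kbit/s ÷ 8 ≈ 3.84 MB for the 4-minute file, and 30 s × 128 kbit/s ÷ 8 = 480 kB for the 30-second sample, in line with the sizes quoted above.)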
However, when I run the same FFMPEG binary (copied from the Windows 7 machine) on a Windows 10 64-bit machine, all of the MP3s (both for 1. and for 2.) are inflated by approx. 5MB. I'm using the same batch file on both machines to execute FFMPEG with the required parameters, so I'm pretty confident the difference is not down to user error.
My questions are:
- Why is there this apparent 5MB overhead on Windows 10? And, more importantly,
- What can I do to remove it?
The large file size is a problem because the sample MP3s are to be put on a website for people to listen to a snippet of a song, and a webpage with multiple audio tags takes a long time to load completely (several minutes).
Here is the version and lib info:
ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
And here are the command lines I'm using:
- ffmpeg -i input.m4a -id3v2_version 3 -b:a 128k output.mp3
- ffmpeg -i input.m4a -ss 30 -t 30 -af "afade=in:st=30:d=5,afade=out:st=55:d=5" -id3v2_version 3 -b:a 128k output.mp3
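One plausible cause, not confirmed by the question, is embedded cover art: iTunes M4A files usually carry the album artwork as an attached picture, and ffmpeg can map that picture into the output MP3 as an ID3 attached picture, adding its full size (easily a few MB of JPEG/PNG) to every file. A quick test is to disable all video/picture processing with -vn:

ffmpeg -i input.m4a -vn -id3v2_version 3 -b:a 128k output.mp3

If the Windows 10 outputs shrink back to the expected size, the artwork was the overhead.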
-
TypeError: expected str, bytes or os.PathLike object, not module when trying to stream OpenCV frames to an RTMP server
30 November 2022, by seriously
I am using OpenCV and the face_recognition API to detect a face using a webcam, then compare it with a previously taken image to check whether the people in both images are the same. The OpenCV and face_recognition part of the code works properly. What I am now trying to achieve is to stream the OpenCV-processed video frames to an RTMP server, so I am trying to use ffmpeg, running the command via subprocess. But when I run the code I get the error
TypeError: expected str, bytes or os.PathLike object, not module
even though I am writing the frames as bytes to stdin, hence p.stdin.write(frame.tobytes()). How can I fix this and properly stream my OpenCV frames to an RTMP server using ffmpeg? Thanks in advance.

Traceback (most recent call last):
 File "C:\Users\blah\blah\test.py", line 52, in <module>
 p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)
 File "C:\Python310\lib\subprocess.py", line 969, in __init__
 self._execute_child(args, executable, preexec_fn, close_fds,
 File "C:\Python310\lib\subprocess.py", line 1378, in _execute_child
 args = list2cmdline(args)
 File "C:\Python310\lib\subprocess.py", line 561, in list2cmdline
 for arg in map(os.fsdecode, seq):
 File "C:\Python310\lib\os.py", line 822, in fsdecode
 filename = fspath(filename) # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not module


import cv2
import numpy as np
import face_recognition
import os
import subprocess
import ffmpeg

path = '../attendance_imgs'
imgs = []
classNames = []
myList = os.listdir(path)

for cls in myList:
    curruntImg = cv2.imread(f'{path}/{cls}')
    imgs.append(curruntImg)
    classNames.append(os.path.splitext(cls)[0])

def findEncodings(imgs):
    encodeList = []
    for img in imgs:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encode = face_recognition.face_encodings(img)[0]
        encodeList.append(encode)
    return encodeList

encodeListKnown = findEncodings(imgs)
print('Encoding Complete')

cap = cv2.VideoCapture(0)

rtmp_url = "rtmp://127.0.0.1:1935/stream/webcam"

fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg
command = [ffmpeg,
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           'rtmp://127.0.0.1:1935/stream/webcam']

p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)

while True:
    ret, frame, success, img = cap.read()
    if not ret:
        print("frame read failed")
        break
    imgSmall = cv2.resize(img, (0, 0), None, 0.25, 0.25)
    imgSmall = cv2.cvtColor(imgSmall, cv2.COLOR_BGR2RGB)

    currentFrameFaces = face_recognition.face_locations(imgSmall)
    currentFrameEncodings = face_recognition.face_encodings(imgSmall, currentFrameFaces)

    for encodeFace, faceLocation in zip(currentFrameEncodings, currentFrameFaces):
        matches = face_recognition.compare_faces(encodeListKnown, encodeFace)
        faceDistance = face_recognition.face_distance(encodeListKnown, encodeFace)
        matchIndex = np.argmin(faceDistance)

        if matches[matchIndex]:
            name = classNames[matchIndex].upper()
            y1, x2, y2, x1 = faceLocation
            y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
            cv2.putText(img, name, (x1 + 6, y2 - 6), cv2.FONT_HERSHEY_DUPLEX, 1, (255, 255, 255), 2)

    # write to pipe
    p.stdin.write(frame.tobytes())
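The traceback gives the cause away: command[0] is ffmpeg, the imported Python module object, and subprocess runs os.fsdecode() over every argument, which only accepts str, bytes, or path-like values. The program name has to be a string (the unused import ffmpeg can then be removed altogether). A minimal sketch of the corrected construction, under the same assumptions as the code above:

# Pass the executable name as a string, not the imported module,
# and reuse the rtmp_url variable defined earlier.
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',            # raw frames arrive on stdin
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmp_url]

p = subprocess.Popen(command, stdin=subprocess.PIPE, shell=False)

Note also that cv2.VideoCapture.read() returns exactly two values, so once the Popen call succeeds the loop header ret, frame, success, img = cap.read() will raise a ValueError; it should read ret, frame = cap.read(), with the face-recognition code operating on frame.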