
Media (91)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
-
Les Misérables
4 June 2012, by
Updated: February 2013
Language: English
Type: Text
-
Do not display certain information: home page
23 November 2011, by
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (epub version)
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (19)
-
Selection of projects using MediaSPIP
2 May 2011, by
The examples below are representative of specific uses of MediaSPIP for particular projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops reception activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...) -
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution   Version name           Version number
Debian         Squeeze                6.x.x
Debian         Wheezy                 7.x.x
Debian         Jessie                 8.x.x
Ubuntu         The Precise Pangolin   12.04 LTS
Ubuntu         The Trusty Tahr        14.04
If you want to help us improve this list, you can give us access to a machine running a distribution that is not mentioned above, or send us the fixes needed to add it (...) -
Selection of projects using MediaSPIP
29 April 2011, by
The examples below are representative of specific uses of MediaSPIP for particular projects.
Do you think you have a "remarkable" site built with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association develops reception activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. In this respect it plays a unique role (...)
On other sites (4366)
-
OpenCV Codec FFMPEG Error: fallback to use tag 0x7634706d/'mp4v'
22 May 2019, by Cohen
I am doing a filter recording and everything seems fine. The code runs, but at the end the video is not saved as MP4. I get this error:
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
I am on a Mac and the code runs correctly, but nothing is saved. I tried to find more details about this error, but wasn't so fortunate. I use Sublime as my editor. The code also runs on Atom, but gives this error:
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file....
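For reference, a minimal sketch of the writer setup I believe the messages point at (an assumption on my part, since the CFEVideoConf helper is not shown here): the first message says the 'XVID' tag is rejected by the mp4 muxer, and the AVAssetWriter "Cannot create file" message suggests the output folder may not exist, so a 'mp4v' fourcc plus creating the directory up front might be what is needed.

import os
import cv2

save_path = 'saved-media/filter.mp4'  # same path as in the full script below
os.makedirs(os.path.dirname(save_path), exist_ok=True)  # 'Cannot create file' often means the folder is missing

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # a tag the mp4 container accepts, unlike 'XVID'
out = cv2.VideoWriter(save_path, fourcc, 24, (width, height))
# out.write(frame) has to be called inside the capture loop for anything to be saved

The full script I am running is below.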
import numpy as np
import cv2
import random
from utils import CFEVideoConf, image_resize
import glob
import math

cap = cv2.VideoCapture(0)

frames_per_seconds = 24
save_path = 'saved-media/filter.mp4'
config = CFEVideoConf(cap, filepath=save_path, res='360p')
out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)


def verify_alpha_channel(frame):
    try:
        frame.shape[3]  # looking for the alpha channel
    except IndexError:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    return frame


def apply_hue_saturation(frame, alpha, beta):
    hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv_image)
    s.fill(199)
    v.fill(255)
    hsv_image = cv2.merge([h, s, v])
    out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
    frame = verify_alpha_channel(frame)
    out = verify_alpha_channel(out)
    cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
    return frame


def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    sepia_bgra = (blue, green, red, 1)
    overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
    cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
    return frame


def apply_sepia(frame, intensity=0.5):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    sepia_bgra = (20, 66, 112, 1)
    overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
    cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
    return frame


def alpha_blend(frame_1, frame_2, mask):
    alpha = mask / 255.0
    blended = cv2.convertScaleAbs(frame_1 * (1 - alpha) + frame_2 * alpha)
    return blended


def apply_circle_focus_blur(frame, intensity=0.2):
    frame = verify_alpha_channel(frame)
    frame_h, frame_w, frame_c = frame.shape
    y = int(frame_h / 2)
    x = int(frame_w / 2)
    mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
    cv2.circle(mask, (x, y), int(y / 2), (255, 255, 255), -1, cv2.LINE_AA)
    mask = cv2.GaussianBlur(mask, (21, 21), 11)
    blured = cv2.GaussianBlur(frame, (21, 21), 11)
    blended = alpha_blend(frame, blured, 255 - mask)
    frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
    return frame


def portrait_mode(frame):
    cv2.imshow('frame', frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
    blured = cv2.GaussianBlur(frame, (21, 21), 11)
    blended = alpha_blend(frame, blured, mask)
    frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
    return frame


def apply_invert(frame):
    return cv2.bitwise_not(frame)


while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    #cv2.imshow('frame', frame)

    hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
    cv2.imshow('hue_sat', hue_sat)

    sepia = apply_sepia(frame.copy(), intensity=.8)
    cv2.imshow('sepia', sepia)

    color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
    cv2.imshow('color_overlay', color_overlay)

    invert = apply_invert(frame.copy())
    cv2.imshow('invert', invert)

    blur_mask = apply_circle_focus_blur(frame.copy())
    cv2.imshow('blur_mask', blur_mask)

    portrait = portrait_mode(frame.copy())
    cv2.imshow('portrait', portrait)

    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
-
With ffmpeg, print onto a clipped video the hh:mm:ss time *from the original* and the hh:mm:ss total duration of the original
25 May 2023, by Kes
I am using Arch Linux, bash and ffmpeg, all up to date and at their latest versions.

I have a video that is 30 seconds long and wish to clip the section from 5 seconds to 10 seconds of the original into a new file.

In the bottom right-hand corner of the clip I wish to show timestamps from the original video, as follows:
- in the 5th second: "00:00:05/ 00:00:30"
- in the 6th second: "00:00:06/ 00:00:30"
- etc.
- in the 10th second: "00:00:10/ 00:00:30"
This is an apparently simple question (?), but the syntax of the command is not at all obvious, and I am hoping an expert can shed some light on it.

All I have so far is the drawtext part, which does not do what I want: it only counts the elapsed time from t=0 of the new clip, whereas I want it to show the timestamp and total duration of the original clip.
The drawtext expression I started with:

"drawtext=text='%{pts\:gmtime\:0\:%M\\\\\:%S}':fontsize=24:fontcolor=black:x=(w-text_w-10):y=(h-text_h-10)"
The full ffmpeg line with drawtext that I have started with:
ffmpeg -ss 00:00:05 -i "$in_file" -filter_complex "drawtext=fontfile=font.ttf:text='sample text':x=10:y=10:fontsize=12:fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5,drawtext=text='%{duration\:hms}':fontsize=12:fontcolor=black:x=(w-text_w-10):y=(h-text_h-10)" -t 5 -c:a copy -c:v libx264 out_file.mp4
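One direction that might work (a sketch, untested; the 5-second offset and the 30-second total are hard-coded from the example above): the pts expansion accepts an offset as its second argument, so the clip can display the original timeline by adding back the -ss value, with the original duration appended as literal text:

ffmpeg -ss 00:00:05 -i "$in_file" -vf "drawtext=fontfile=font.ttf:text='%{pts\:hms\:5} / 00\:00\:30':fontsize=24:fontcolor=white:x=w-text_w-10:y=h-text_h-10" -t 5 -c:a copy -c:v libx264 out_file.mp4

The hms format prints HH:MM:SS.mmm; if the milliseconds are unwanted, the gmtime form from the expression above, with an offset of 5 and a %H:%M:%S format string, should behave the same way, though the escaping of the literal colons may need adjusting.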
-
FFMPEG - why zoompan causes stretching
14 September 2020, by Sarmad S.
I have two images as input, both 1600x1066. I stack them vertically, then draw a box and stack it vertically under both images. Inside the box I write text, and I output a video that is 1080x1920. Everything works well until I use zoompan to zoom in on the images; then I get weird behaviour: all inputs, including the box, stretch (shrink horizontally) and no longer fill the full 1920 height of the video.

The command (with some drawtext filters removed):
-filter_complex 
"color=s=1600x1066:color=blue, drawtext=fontfile=font.otf: text='My Text':fontcolor=white: fontsize=30: x=50: y=50[box]; 
[0]scale=4000x4000,zoompan=z='min(zoom+0.0015,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1600x1066[z0];
[1]scale=4000x4000,zoompan=z='min(zoom+0.0015,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1600x1600[z1];
[z0][z1][box]vstack=inputs=3"



How do I fix this? I want to zoom in without stretching the images.
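One adjustment that might avoid this (a sketch, untested; 6400x4264 is simply 4x the 1600x1066 inputs and is my own choice of upscale): enlarge each input with a size that keeps its aspect ratio, and give both zoompan filters the same 1600x1066 output size as the original frames, so vstack receives strips of matching width and the same total height as before:

-filter_complex 
"color=s=1600x1066:color=blue, drawtext=fontfile=font.otf: text='My Text':fontcolor=white: fontsize=30: x=50: y=50[box]; 
[0]scale=6400x4264,zoompan=z='min(zoom+0.0015,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1600x1066[z0];
[1]scale=6400x4264,zoompan=z='min(zoom+0.0015,1.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=125:s=1600x1066[z1];
[z0][z1][box]vstack=inputs=3"

In the original command, scale=4000x4000 forces both 1600x1066 images to a square and the second zoompan outputs s=1600x1600, so the three strips no longer add up to the height the rest of the pipeline expects; keeping scale and s at multiples of the source aspect ratio is the only thing this sketch changes.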


Before using zoompan, this is how the video looks (I want to keep it this way while zooming in on the images): [screenshot not included]

After using zoompan, this is how the video looks: [screenshot not included]