
Media (91)
-
Les Miserables
9 December 2019, by
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019, by
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014, by
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014, by
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013, by
Updated: October 2013
Language: French
Type: Image
Other articles (50)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (4560)
-
ffmpeg concatenating videos of different fps while keeping the total length not changed
23 November 2017, by A_Matar
I wanna pad an mp4 video stream with another video clip of a static image that I created using:

def generate_white_vid(duration):
    output_filename = os.path.join(p_path, 'white_vid_' + "{0:.2f}".format(duration) + '.mp4')
    ffmpeg_create_vid_from_static_img = 'ffmpeg -loop 1 -i /path/WhiteBackground.jpg -c:v libx264 -t %f -pix_fmt yuv420p -vf scale=1920:1080 %s' % (duration, output_filename)
    p = subprocess.Popen(ffmpeg_create_vid_from_static_img, shell=True)
    p.communicate()
    return output_filename

I use the following to concatenate:
def concat_vids(clip_paths):
    filenames_txt = open('clips_to_join.txt', 'w')
    for clip in clip_paths:
        filenames_txt.write('file \'' + clip + '\'\n')
    filenames_txt.close()
    output_filename = clip_paths[0].split('.', 2)[0]
    output_file_path = os.path.join(root_path, output_filename + '-padded.mp4')
    # join the clips
    ffmpeg_command = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips_to_join.txt", "-codec", "copy", output_file_path]  # output_filename = ch0X-start_time-end_time
    p = subprocess.Popen(ffmpeg_command)
    p.communicate()  # wait till the subprocess finishes. You can send commands to the process as well.
    return output_file_path

When I check the length of the resulting video after concatenation, I find that it is not equal to the sum of the two segments that I concatenated, and sometimes it is even a few seconds shorter!

Here is how I get the video length in seconds:
def ffmpeg_len(vid_path):
    '''
    Returns length in seconds using ffmpeg
    '''
    ffmpeg_get_mediafile_length = ['sh', '-c', 'ffmpeg -i "$1" 2>&1 | grep Duration', '_', vid_path]
    p = subprocess.Popen(ffmpeg_get_mediafile_length, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = p.communicate()
    length_regexp = r'Duration: (\d{2}):(\d{2}):(\d{2})(\.\d+),'
    re_length = re.compile(length_regexp)
    matches = re_length.search(output)
    if matches:
        video_length = int(matches.group(1)) * 3600 + \
                       int(matches.group(2)) * 60 + \
                       int(matches.group(3)) + float(matches.group(4))
        return video_length
    else:
        print("Can't determine video length.")
        print err
        raise SystemExit

My guess is that maybe the concatenation unifies the fps for all the clips to be joined. If this is the case, how can I prevent it from happening? How can I get a video of exactly the desired length? It may be worth mentioning that the padding video is very short (0.42 second), the original video is 210.58 and the resulting video is 210.56! I have verified that ffmpeg does generate the desired padding region and that it has the desired length of 0.42 (I got a 0.43 segment when I forced 30 fps, but that is okay).
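A note that may be relevant to the drift, offered as a hedged sketch rather than a definitive answer: an encoded clip can only contain a whole number of frames, so a requested -t 0.42 gets quantised to a frame count at the clip's frame rate, and a still image looped with -loop 1 defaults to 25 fps unless a rate is forced. The rough arithmetic:

# Rough sketch: a clip's duration is frames / fps, so a requested duration is
# quantised to a whole number of frames. ffmpeg's exact rounding rule may differ
# slightly, and the 25 fps default for a looped still image is an assumption.
def quantised_duration(requested_seconds, fps):
    frames = round(requested_seconds * fps)
    return frames / float(fps)

print(quantised_duration(0.42, 25))  # 0.4 under Python 3 rounding, i.e. slightly short
print(quantised_duration(0.42, 30))  # ~0.4333, close to the ~0.43 s segment seen at 30 fps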
-
I'm trying to hide information in a H264 video. When I stitch the video up, split it into frames again and try to read it, the information is lost
18 May 2024, by Wer Wer
I'm trying to create a video steganography Python script. The algorithm for hiding will be...


- convert any video codec into h264 lossless
- save the audio of the video and split the h264 video into frames
- hide my txt secret into frame0 using the LSB replacement method (a short sketch of the idea follows this list)
- stitch the video back up and put in the audio
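For anyone unfamiliar with the LSB replacement mentioned above, here is a minimal illustrative sketch (not taken from the script below) of embedding one secret bit into one 8-bit colour channel:

# Minimal sketch of LSB replacement on a single 8-bit channel value (illustrative only).
def embed_bit(channel_value, secret_bit):
    # Clear the lowest bit, then write the secret bit into it.
    return (channel_value & 0b11111110) | secret_bit

def extract_bit(channel_value):
    return channel_value & 0b00000001

assert extract_bit(embed_bit(200, 1)) == 1
assert abs(embed_bit(200, 1) - 200) <= 1  # the channel value changes by at most 1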










...and when I want to recover the text, I'll


- save the audio of the video and split the encoded h264 video into frames
- retrieve my hidden text from frame0 and print the text






So, this is what I can do:


- split the video
- hide the text in frame0
- retrieve the text from frame0
- stitch the video










But after stitching the video, when I tried to retrieve the text by splitting that encoded video, it appears that the text has been lost. This is because I got the error


UnicodeEncodeError: 'charmap' codec can't encode character '\x82' in position 21: character maps to <undefined>


I'm not sure if my LSB replacement algorithm was lost, which results in my not being able to retrieve my frame 0 information, or if the H264 conversion command I used converted my video into a lossy H264 version instead of a lossless one (which I don't believe, because I specified -qp 0).
This was the command I used to convert my video


ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4
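One illustrative way to double-check that assumption (a generic ffprobe query, shown here as a sketch rather than part of the script) is to print the profile and pixel format of the output; a lossless libx264 encode is normally reported with the High 4:4:4 Predictive profile:

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,pix_fmt -of default=noprint_wrappers=1 output.mp4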



This is my code


import json
import os
import magic
import ffmpeg
import cv2
import numpy as np

import subprocess

# Path to the file you want to check
here = os.path.dirname(os.path.abspath(__file__))
file_path = os.path.join(here, "output.mp4")
raw_video = cv2.VideoCapture(file_path)
audio_output_path = os.path.join(here, "audio.aac")
final_video_file = os.path.join(here, "output.mp4")

# create a folder to save the frames.
frames_directory = os.path.join(here, "data1")
try:
    if not os.path.exists(frames_directory):
        os.makedirs(frames_directory)
except OSError:
    print("Error: Creating directory of data")

file_path_txt = os.path.join(here, "hiddentext.txt")
# Read the content of the file in binary mode
with open(file_path_txt, "r") as f:
 file_content = f.read()
# txt_binary_representation = "".join(format(byte, "08b") for byte in file_content)
# print(file_content)

"""
use this cmd to convert any video to h264 lossless. original vid in 10 bit depth format
ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -qp 0 output.mp4

use this cmd to convert any video to h264 lossless. original vid in 8 bit depth format
ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -crf 0 output.mp4

i used this command to only get first 12 sec of video because the h264 vid is too large 
ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4

check for multiple values to ensure its h264 lossless:
1. CRF = 0
2. qp = 0
3. High 4:4:4 Predictive
"""


# region --codec checking. ensure video is h264 lossless--
def check_h264_lossless(file_path):
    try:
        # Use ffprobe to get detailed codec information, including tags
        result = subprocess.run(
            [
                "ffprobe",
                "-v",
                "error",
                "-show_entries",
                "stream=codec_name,codec_long_name,profile,level,bit_rate,avg_frame_rate,nb_frames,tags",
                "-of",
                "json",
                file_path,
            ],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        # Parse the ffprobe output (calling check_h264_lossless() here again
        # would recurse forever)
        metadata = json.loads(result.stdout)
        print(json.dumps(metadata, indent=4))

        # Check if the CRF value is available in the tags
        for stream in metadata.get("streams", []):
            if stream.get("codec_name") == "h264":
                tags = stream.get("tags", {})
                crf_value = tags.get("crf")
                encoder = tags.get("encoder")
                print(f"CRF value: {crf_value}")
                print(f"Encoder: {encoder}")
        return metadata
    except Exception as e:
        return f"An error occurred: {e}"


# endregion


# region --splitting video into frames--
def extract_audio(input_video_path, audio_output_path):
    if os.path.exists(audio_output_path):
        print(f"Audio file {audio_output_path} already exists. Skipping extraction.")
        return
    command = [
        "ffmpeg",
        "-i",
        input_video_path,
        "-q:a",
        "0",
        "-map",
        "a",
        audio_output_path,
    ]
    try:
        subprocess.run(command, check=True)
        print(f"Audio successfully extracted to {audio_output_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


def split_into_frames():
    extract_audio(file_path, audio_output_path)
    currentframe = 0
    print("Splitting...")
    while True:
        ret, frame = raw_video.read()
        if ret:
            name = os.path.join(here, "data1", f"frame{currentframe}.png")
            # print("Creating..." + name)
            cv2.imwrite(name, frame)
            currentframe += 1
        else:
            print("Complete")
            break


# endregion


# region --merge all back into h264 lossless--
# output_video_file = "output1111.mp4"


def stitch_frames_to_video(frames_dir, output_video_path, framerate=60):
    command = [
        "ffmpeg",
        "-y",
        "-framerate",
        str(framerate),
        "-i",
        os.path.join(frames_dir, "frame%d.png"),
        "-c:v",
        "libx264",
        "-preset",
        "veryslow",
        "-qp",
        "0",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Video successfully created at {output_video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


def add_audio_to_video(video_path, audio_path, final_output_path):
    command = [
        "ffmpeg",
        "-i",
        video_path,
        "-i",
        audio_path,
        "-c:v",
        "copy",
        "-c:a",
        "aac",
        "-strict",
        "experimental",
        final_output_path,
    ]
    try:
        subprocess.run(command, check=True)
        print(f"Final video with audio created at {final_output_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


# endregion


def to_bin(data):
    if isinstance(data, str):
        return "".join([format(ord(i), "08b") for i in data])
    elif isinstance(data, bytes) or isinstance(data, np.ndarray):
        return [format(i, "08b") for i in data]
    elif isinstance(data, int) or isinstance(data, np.uint8):
        return format(data, "08b")
    else:
        raise TypeError("Type not supported")


def encode(image_name, secret_data):
    image = cv2.imread(image_name)
    n_bytes = image.shape[0] * image.shape[1] * 3 // 8
    print("[*] Maximum bytes to encode:", n_bytes)
    secret_data += "====="
    if len(secret_data) > n_bytes:
        raise ValueError("[!] Insufficient bytes, need bigger image or less data")
    print("[*] Encoding Data")

    data_index = 0
    binary_secret_data = to_bin(secret_data)
    data_len = len(binary_secret_data)
    for row in image:
        for pixel in row:
            r, g, b = to_bin(pixel)
            if data_index < data_len:
                pixel[0] = int(r[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index < data_len:
                pixel[1] = int(g[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index < data_len:
                pixel[2] = int(b[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index >= data_len:
                break
    return image


def decode(image_name):
    print("[+] Decoding")
    image = cv2.imread(image_name)
    binary_data = ""
    for row in image:
        for pixel in row:
            r, g, b = to_bin(pixel)
            binary_data += r[-1]
            binary_data += g[-1]
            binary_data += b[-1]
    all_bytes = [binary_data[i : i + 8] for i in range(0, len(binary_data), 8)]
    decoded_data = ""
    for byte in all_bytes:
        decoded_data += chr(int(byte, 2))
        if decoded_data[-5:] == "=====":
            break
    return decoded_data[:-5]


frame0_path = os.path.join(here, "data1", "frame0.png")
encoded_image_path = os.path.join(here, "data1", "frame0.png")


def encoding_function():
    split_into_frames()

    encoded_image = encode(frame0_path, file_content)
    cv2.imwrite(encoded_image_path, encoded_image)

    stitch_frames_to_video(frames_directory, file_path)
    add_audio_to_video(file_path, audio_output_path, final_video_file)


def decoding_function():
    split_into_frames()
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")


# encoding_function()
decoding_function()




So I tried to put my decoding function into my encoding function like this


def encoding_function():
    split_into_frames()

    encoded_image = encode(frame0_path, file_content)
    cv2.imwrite(encoded_image_path, encoded_image)

    # immediately get frame0 and decode without stitching to check if the data is there
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")

    stitch_frames_to_video(frames_directory, file_path)
    add_audio_to_video(file_path, audio_output_path, final_video_file)




This returns my secret text from frame0, but splitting after stitching does not return my hidden text. The hidden text was lost.


def decoding_function():
    split_into_frames()
    # this function is run after encoding_function(); the secret text is lost,
    # resulting in the "charmap codec can't encode" error
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")



EDIT:
So I ran the encoding function first, copied frame0.png out and placed it somewhere. Then I ran the decoding function, and got another frame0.png.


I ran both frame0.png files through this Python snippet


frame0_data1_path = os.path.join(here, "data1", "frame0.png")
frame0_data2_path = os.path.join(here, "data2", "frame0.png")
frame0_data1 = cv2.imread(frame0_data1_path)
frame0_data2 = cv2.imread(frame0_data2_path)

if frame0_data1 is None:
    print(f"Error: Could not load image from {frame0_data1_path}")
elif frame0_data2 is None:
    print(f"Error: Could not load image from {frame0_data2_path}")
else:
    if np.array_equal(frame0_data1, frame0_data2):
        print("The frames are identical.")
    else:
        print("The frames are different.")



...and apparently both are different. This means my frame0 binary got changed when I stitched it back into the video after encoding. Is there a way to make it not change? Or will h264, or any other video codec, change the frames a little when you stitch them back up?
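A small, hypothetical extension of the comparison above (reusing frame0_data1 and frame0_data2, not part of the original script) can also show how different the two frames are, which may help distinguish a broad, low-amplitude change of the kind a lossy re-encode or colour-space round-trip tends to produce from the isolated one-unit changes that LSB embedding makes:

# Hypothetical follow-up to the equality check above: report the size and
# spread of the per-channel differences between the two frame0 images.
diff = np.abs(frame0_data1.astype(np.int16) - frame0_data2.astype(np.int16))
print("max per-channel difference:", diff.max())
print("mean per-channel difference:", diff.mean())
print("fraction of values changed:", (diff > 0).mean())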


-
Play a video with ffmpeg and SDL2 on a Raspberry Pi 5
18 February 2024, by aforino
I want to create a Python script that decodes an h264 1080p video and outputs it via SDL2 on a Raspberry Pi 5. The Raspberry Pi 5 is able to play an h264 1080p video without problems using VLC; total CPU load with VLC is about 10%. However, decoding with ffmpeg and outputting via SDL2 uses around 70% CPU load. Since I want to be able to switch seamlessly between two output videos, I will need to decode two videos at the same time, so 70% CPU load for one transcoded 1080p video is not acceptable. How can I make the code more efficient, and why is VLC so much more efficient?


This is my current Python script:


import numpy as np
import ffmpeg # ffmpeg-python
import sdl2.ext

in_file = ffmpeg.input('bbb1080_x264.mp4', re=None)

width = 1920
height = 1080

process1 = (
 in_file
 .output('pipe:', format='rawvideo', pix_fmt='bgra')
 .run_async(pipe_stdout=True)
)

sdl2.ext.init()
window = sdl2.ext.Window("Hello World!", size=(width, height))
window.show()
windowsurface = sdl2.SDL_GetWindowSurface(window.window)
windowArray = sdl2.ext.pixels3d(windowsurface.contents)

sdl2.ext.mouse.hide_cursor()

while True:
    in_bytes = process1.stdout.read(width * height * 4)

    if not in_bytes:
        break

    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 4])
        .transpose(1, 0, 2)
    )

    for event in sdl2.ext.get_events():
        if event.type == sdl2.SDL_QUIT:
            exit()

    windowArray[:] = in_frame
    window.refresh()

process1.wait()
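Part of the per-frame cost in the loop above is the numpy reshape/transpose and the full-surface copy into windowArray. One possible variation, sketched here without having been profiled on a Pi 5 (choosing SDL_PIXELFORMAT_ARGB8888 to match 'bgra' bytes on a little-endian machine is an assumption), is to hand the raw frame bytes to an SDL streaming texture and let the renderer do the copy:

import ffmpeg  # ffmpeg-python
import sdl2.ext

width, height = 1920, 1080

process1 = (
    ffmpeg
    .input('bbb1080_x264.mp4', re=None)
    .output('pipe:', format='rawvideo', pix_fmt='bgra')
    .run_async(pipe_stdout=True)
)

sdl2.ext.init()
window = sdl2.ext.Window("Hello World!", size=(width, height))
window.show()

# A streaming texture lets the renderer copy the frame, instead of the
# per-frame numpy transpose plus full-surface copy used above.
renderer = sdl2.SDL_CreateRenderer(window.window, -1, sdl2.SDL_RENDERER_ACCELERATED)
texture = sdl2.SDL_CreateTexture(
    renderer,
    sdl2.SDL_PIXELFORMAT_ARGB8888,   # assumed match for 'bgra' bytes on little-endian
    sdl2.SDL_TEXTUREACCESS_STREAMING,
    width,
    height,
)

while True:
    in_bytes = process1.stdout.read(width * height * 4)
    if not in_bytes:
        break

    sdl2.SDL_UpdateTexture(texture, None, in_bytes, width * 4)  # pitch = bytes per row
    sdl2.SDL_RenderCopy(renderer, texture, None, None)
    sdl2.SDL_RenderPresent(renderer)

    for event in sdl2.ext.get_events():
        if event.type == sdl2.SDL_QUIT:
            exit()

process1.wait()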



Also, it is interesting to note that when I start VLC on the Raspberry Pi 5, this is the output on the terminal:


[00007fff78c1a550] avcodec decoder error: cannot start codec (h264_v4l2m2m)
Fontconfig warning: ignoring UTF-8: not a valid region tag
[00007fff68002d70] gles2 generic error: parent window not available
[00007fff68002d70] xcb generic error: window not available
[00007fff680013f0] mmal_xsplitter vout display: Try drm
[00007fff68002d70] drm_vout generic: <<< OpenDrmVout: Fmt=I420
[00007fff68002d70] drm_vout generic error: Failed to get xlease



It indicates that VLC is not using the h264_v4l2m2m hardware acceleration.