Advanced search

Media (0)

Keyword: - Tags - / clipboard

No media matching your criteria is available on this site.

Other articles (23)

  • Retrieving information from the master site when installing an instance

    26 November 2010

    Purpose
    On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance.
    It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If necessary, contact your MediaSPIP administrator to find out.

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (5290)

  • Better Image Quality with Python OpenCV

    6 April 2023, by bozolino

    I use OpenCV to capture webcam video as frames into a list variable, then write that list to disk both as an image sequence and as a video file. The image quality is fine, but the video is horribly compressed, with a bitrate of less than 4 Mbit/s, sometimes even under 3 Mbit/s.

    I've spent days and nights searching Google for solutions and tried many FourCCs and other wrappers like WriteGear and ffmpeg-python, but to no avail.

    Can anyone please tell me how I

    a) specify the bitrate for my H.264 file, and
    b) choose a lossless compression codec?

    (Python 3.10 on a MacBook Pro M1, macOS 13.0.1)

    # ::::: import libraries and set variables

import cv2, time, os, shutil, datetime as dt
frames = []
frame_dimensions_known = False

# ::::: define file and folder names

output_folder_name = "output"
frame_folder_name = "frames"
video_file_name = "video.mp4"
frame_file_name = "frame_x.jpg"

# ::::: create output folders and paths

path_script = os.path.abspath(os.path.dirname(__file__))
path_output = os.path.join(path_script,output_folder_name)
if not os.path.exists(path_output):
    os.mkdir(path_output)
path_session_folder = os.path.join(
    path_script,
    output_folder_name,
    dt.datetime.now().strftime("%y%m%d_%H%M%S"))
os.mkdir(path_session_folder)
path_video =  os.path.join(path_session_folder,video_file_name)
path_frames = os.path.join(path_session_folder,frame_folder_name)
os.mkdir(path_frames)


# ::::: open webcam stream

vcap = cv2.VideoCapture(0)
start_time = (time.time() * 1000)
if vcap.isOpened() is False:
    print("[Exiting]: Error accessing webcam stream.")
    exit(0)

# ::::: read frames

while True:
    _grabbed, _frame = vcap.read()
    if _grabbed is False:
        print('[Exiting] No more frames to read')
        exit(0)

    # ::::: get frame dimensions on first read
    if not frame_dimensions_known:
        height, width, channels = _frame.shape
        frame_dimensions_known = True

    # ::::: append frame to list
    frames.append(_frame)

    # ::::: process frames
    pass

    # ::::: display frame
    cv2.imshow("cam", _frame)
    if cv2.waitKey(1) == ord("q"):
        break

# ::::: close webcam stream

vcap.release()
cv2.destroyAllWindows()

# ::::: calculate frame rate

stop_time = (time.time() * 1000)
total_seconds = (stop_time - start_time) / 1000
fps = len(frames) / total_seconds

# ::::: open video writer

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
video_writer = cv2.VideoWriter(
    path_video,
    fourcc,
    fps,
    (width, height)
)

# ::::: write frames to disk as files and video

frame_number = 0
for frame in frames:

    # ::::: write video

    video_writer.write(frame)  # Write out frame to video

    # ::::: construct image file path

    frame_number = frame_number + 1
    frame_number_file = str(frame_number).zfill(4)
    frame_name = frame_file_name.replace('x',frame_number_file)
    frame_path = os.path.join(path_frames,frame_name)

    # ::::: write image

    cv2.imwrite(
        frame_path,
        frame,
        [cv2.IMWRITE_JPEG_QUALITY, 100]
    )

# ::::: clean up

video_writer.release()
print(f"Written {frame_number} frames at {fps:.2f} FPS.")

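    The post itself does not include an answer; purely as a hedged sketch (not the original poster's code), one commonly suggested direction is to hand the frames to an external ffmpeg process, which exposes the bitrate and lossless options that cv2.VideoWriter does not (note also that the 'mp4v' FourCC above selects MPEG-4 Part 2, not H.264). The helper name open_ffmpeg_writer below is made up for illustration, and it assumes an ffmpeg binary is available on the PATH:

# Sketch (not from the original post): pipe raw BGR frames to an external ffmpeg
# process to control the bitrate or request (near-)lossless encoding.
import subprocess

def open_ffmpeg_writer(path, width, height, fps, bitrate="8M", lossless=False):
    if lossless:
        # b) -qp 0 is lossless within libx264; the BGR -> YUV conversion is still
        #    not bit-exact, so FFV1 in a .mkv/.avi container is the stricter option
        codec_args = ["-c:v", "libx264", "-qp", "0", "-pix_fmt", "yuv444p"]
    else:
        # a) explicit target bitrate for H.264
        codec_args = ["-c:v", "libx264", "-b:v", bitrate, "-pix_fmt", "yuv420p"]
    cmd = [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",      # OpenCV frames are raw BGR bytes
        "-s", f"{width}x{height}", "-r", str(fps),  # input geometry and frame rate
        "-i", "-",                                  # read the frames from stdin
        *codec_args,
        path,
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

# Hypothetical usage in place of cv2.VideoWriter:
#   writer = open_ffmpeg_writer(path_video, width, height, fps)
#   for frame in frames:
#       writer.stdin.write(frame.tobytes())
#   writer.stdin.close()
#   writer.wait()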

  • LibAV cannot find hardware acceleration but ffmpeg does. Compiled using vcpkg [closed]

    28 November 2024, by CottonBuds

    I'm new to ffmpeg and I've been using libav specifically as a library for C++. I'm having trouble with libav not finding "qsv" as hardware acceleration, even though I compiled ffmpeg with "qsv" enabled.

    ffmpeg.exe works normally and can see "qsv", but my program that uses the libav libraries does not.

    I compiled it on Windows using vcpkg.

    Here are some screenshots. ffmpeg.exe can see it (screenshot), but when I run my code, the output (screenshot) shows that it can't see it.

    Here is my CMakeLists.txt:

    cmake_minimum_required (VERSION 3.12)

project ("ViewTether")
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_TOOLCHAIN_FILE "C:/vcpkg/scripts/buildsystems/vcpkg.cmake")
set(QT_INSTALLATION_PATH "C:/Qt/6.8.0")

set(DEVICE_INSTALLER_64_PATH "${CMAKE_SOURCE_DIR}/thirdparty/usbmmidd_v2/deviceinstaller64.exe ")
add_definitions(-DDEVICE_INSTALLER_64_PATH=\"${DEVICE_INSTALLER_64_PATH}\")

# Include headers so that moc can be generated
# https://stackoverflow.com/questions/52413341/why-is-cmake-not-mocing-my-q-object-headers
file (GLOB_RECURSE SOURCES "./ViewTether/src/*.cpp" "./ViewTether/include/*.h" "./ViewTether/form/*.ui")
include_directories(./ViewTether/include)

set(CMAKE_PREFIX_PATH QT_INSTALLATION_PATH)
find_package(Qt6 REQUIRED COMPONENTS Core Widgets Network)
qt_standard_project_setup() 
  
add_executable(ViewTether ${SOURCES} "main.cpp" )

set(CMAKE_MODULE_PATH "C:/vcpkg/installed/x64-windows/share/ffmpeg" ${CMAKE_MODULE_PATH})
find_package(FFMPEG REQUIRED)
target_include_directories(ViewTether PRIVATE ${FFMPEG_INCLUDE_DIRS})
target_link_directories(ViewTether PRIVATE ${FFMPEG_LIBRARY_DIRS})
target_link_libraries(ViewTether PRIVATE ${FFMPEG_LIBRARIES})
message(${FFMPEG_LIBRARIES})

target_link_libraries(ViewTether PRIVATE Qt6::Core Qt6::Widgets Qt6::Network bcrypt mfplat mfuuid secur32)

    I've been stuck on this for weeks; I can't get hardware acceleration to run with my program.

    EDIT 1

    The message(${FFMPEG_LIBRARIES}) call showed me this:

    optimized  C:/vcpkg/installed/x64-windows/lib/avfilter.lib
    debug      C:/vcpkg/installed/x64-windows/debug/lib/avfilter.lib
    optimized  C:/vcpkg/installed/x64-windows/lib/avformat.lib
    debug      C:/vcpkg/installed/x64-windows/debug/lib/avformat.lib
    optimized  C:/vcpkg/installed/x64-windows/lib/avcodec.lib
    debug      C:/vcpkg/installed/x64-windows/debug/lib/avcodec.lib
    optimized  C:/vcpkg/installed/x64-windows/lib/swscale.lib
    debug      C:/vcpkg/installed/x64-windows/debug/lib/swscale.lib
    optimized  C:/vcpkg/installed/x64-windows/lib/avutil.lib
    debug      C:/vcpkg/installed/x64-windows/debug/lib/avutil.lib

    I don't understand it well, but the vcpkg-cmake-wrapper.cmake says:

    if(ON)
  find_package(PkgConfig )
  pkg_check_modules(libmfx  IMPORTED_TARGET libmfx)
  list(APPEND FFMPEG_LIBRARIES PkgConfig::libmfx)
  if(vcpkg_no_avcodec_target AND TARGET FFmpeg::avcodec)
    target_link_libraries(FFmpeg::avcodec INTERFACE PkgConfig::libmfx)
  endif()
  if(vcpkg_no_avutil_target AND TARGET FFmpeg::avutil)
    target_link_libraries(FFmpeg::avutil INTERFACE PkgConfig::libmfx)
  endif()
endif()

    So I'm assuming that if it had found libmfx, it would show up in my FFMPEG_LIBRARIES.

  • Creating a sequence of images from lyrics to use in ffmpeg

    19 September 2018, by SKS

    I'm trying to make an MP3 + lyrics -> MP4 program in Python.

    I have a lyrics file like this:

    [00:00.60]Revelation, chapter 4
    [00:02.34]After these things I looked,
    [00:04.10]and behold a door was opened in heaven,
    [00:06.41]and the first voice which I heard, as it were,
    [00:08.78]of a trumpet speaking with me, said:
    [00:11.09]Come up hither,
    [00:12.16]and I will shew thee the things which must be done hereafter.
    [00:15.78]And immediately I was in the spirit:
    [00:18.03]and behold there was a throne set in heaven,
    [00:20.72]and upon the throne one sitting.
    [00:22.85]And he that sat,
    [00:23.91]was to the sight like the jasper and the sardine stone;
    [00:26.97]and there was a rainbow round about the throne,
    [00:29.16]in sight like unto an emerald.
    [00:31.35]And round about the throne were four and twenty seats;
    [00:34.85]and upon the seats, four and twenty ancients sitting,
    [00:38.03]clothed in white garments, and on their heads were crowns of gold.
    [00:41.97]And from the throne proceeded lightnings, and voices, and thunders;
    [00:46.03]and there were seven lamps burning before the throne,
    [00:48.60]which are the seven spirits of God.
    [00:51.23]And in the sight of the throne was, as it were,
    [00:53.79]a sea of glass like to crystal;
    [00:56.16]and in the midst of the throne, and round about the throne,
    [00:59.29]were four living creatures, full of eyes before and behind.
    [01:03.79]And the first living creature was like a lion:

    I'm trying to create a sequence of images from the lyrics to use in ffmpeg.

    os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)

    I tried to work out the number of images to make for each line by subtracting the current line's timestamp from the next line's. It works, but produces very inconsistent results.

    import os
    import datetime
    import time
    import math
    from PIL import Image, ImageDraw


    ffmpeg_path = os.getcwd() + "\\ffmpeg\\bin\\ffmpeg.exe"
    images_path = os.getcwd() + "\\test_output\\"
    audio_file = os.getcwd() + "\\audio.mp3"
    lyric_file = os.getcwd() + "\\lyric.lrc"

    video_name = "movie.mp4"


    def save():

       lyric_to_images()
       os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)


    def lyric_to_images():

       file  = open(lyric_file, "r")

       data = file.readlines()

       startOfLyric = True
       lstTimestamp = []

       images_to_make = 0
       from_second = 0.0
       to_second = 0.0

       for line in data:
           vTime = line[1:9] # 00:00.60

           temp = vTime.split(':')

           minute = float(temp[0])
           #a = float(temp[1].split('.'))
           #second = float((minute * 60) + int(a[0]))
           second = (minute * 60) + float(temp[1])

           lstTimestamp.append(second)

       counter = 1

       for i, second in enumerate(lstTimestamp):

           if startOfLyric is True:
               startOfLyric = False
               #first line is always 3 seconds (images to make = 3x2)
               for x in range(1, 7):
                   writeImage(data[i][10:], 'image' + str(counter))
                   counter += 1
           else:
               from_second = lstTimestamp[i-1]
               to_second = second

               difference = to_second - from_second
               images_to_make = int(difference * 2)

               for x in range(1, int(images_to_make+1)):
                   writeImage(data[i-1][10:], 'image'+str(counter))
                   counter += 1

       file.close()

    def writeImage(v_text, filename):

       img = Image.new('RGB', (480, 320), color = (73, 109, 137))

       d = ImageDraw.Draw(img)
       d.text((10,10), v_text, fill=(255,255,0))

       img.save(os.getcwd() + "\\test_output\\" + filename + ".png")


    save()

    Is there any efficient and accurate way to calculate how many images I need to create for each line?

    Note: however many images I create will have to be multiplied by 2, because I'm using -r 2 for FFmpeg (2 FPS).
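
    One way to keep those per-line counts consistent (a sketch, not from the original post): round each timestamp to an absolute frame index first and then take differences, so rounding errors never accumulate across lines. The helper name images_per_line and the 3-second tail assumed for the last line are made up for illustration:

    # Sketch (not from the original post): derive each line's image count from
    # absolute frame indices so that per-line rounding drift cannot accumulate.
    FPS = 2  # must match the -r 2 passed to ffmpeg

    def images_per_line(timestamps, tail_seconds=3.0):
        """timestamps: start time of each lyric line, in seconds."""
        starts = [round(t * FPS) for t in timestamps]                 # absolute frame index per line
        starts.append(round((timestamps[-1] + tail_seconds) * FPS))   # assumed end of the last line
        return [starts[i + 1] - starts[i] for i in range(len(timestamps))]

    # e.g. images_per_line([0.60, 2.34, 4.10]) -> [4, 3, 6] images at 2 FPS

    An alternative that avoids writing duplicate image files altogether is ffmpeg's concat demuxer, which lets each image carry its own duration directive.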