Other articles (54)

  • Add notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, edit and delete notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, so you can build a tree of categories.
    The next time a document is published, the newly created category will be offered (...)

On other sites (6426)

  • Backporting ffmpeg.dll from Electron for Windows XP by disassembling

    28 July 2024, by Oosuke Ren

    I've recently gotten into a really interesting project: a fully functional (and as futuristic as possible) physical retro gaming machine running Windows XP. I found One Core Api, which successfully allows some programs to run that otherwise wouldn't. One of them is Electron (5.0.13). After extensive testing between a VM with the kernel extender and a vanilla XP, I found that the only thing stopping me from succeeding is that it depends on an EXTREMELY specific version/fork of ffmpeg (the Chromium fork of ffmpeg 4.1). Because that fork/version is relatively old, the build tools/links for some of its dependencies no longer exist, so even though I have the fork locally with all the instructions, I can't build it (and even if I could, I would have to patch the Windows Vista+ API functions with a custom stub DLL I have).

    


    AcquireSRWLockExclusive
    InitializeConditionVariable
    SleepConditionVariableSRW
    InitOnceBeginInitialize
    InitOnceComplete
    InitializeSRWLock
    ReleaseSRWLockExclusive
    WakeAllConditionVariable
    WakeConditionVariable

    


    Since I can't custom-build ffmpeg, I have to patch its calls by redirecting them to my custom DLL, which includes these back-ported functions and more.

    


    I tried many different approaches: IDA Pro, Ghidra, objconv; currently I'm closest with "DLL to C".

    


    IDA Pro and Ghidra don't seem to produce assembly that I can reassemble after patching.

    


    objconv produces a really accurate disassembly, but it doesn't come with an assembler, and the produced .asm won't assemble with FASM, MASM or NASM.

    


    As for DLL to C: it created a quite presentable VS project that compiles with only one warning, and the resulting byte size is quite similar. The problem is that the imports end up wrongly directed (toward the wrong functions in my DLL, so the ones actually needed are undefined), and this is too deep for me to tell whether it's a VS version issue, a wrong code implementation, or whether DLL to C simply disassembled the logic incorrectly.

    


    The question is: is my last remaining option to manually hex-edit the Import Address Table and import names so they get redirected? (The problem is that my assembly knowledge isn't great, so I'm not sure that's all I'd have to do, and even if it is, I have a feeling I'd mess up the virtual addresses or something.)

    


  • Turn an FFmpeg command into an FFmpeg wasm exec function

    30 November 2024, by SeriousLee

    I have this gnarly FFmpeg command:

    


    ffmpeg -i music.mp3 -i video.mp4 -i speech.mp3 -filter_complex "[0:a]atrim=0:$(ffprobe -i speech.mp3 -show_entries format=duration -v quiet -of csv=p=0),volume=0.08[trimmed_music]; [2:a]volume=2[speech]; [1:v]loop=-1,trim=duration=$(ffprobe -i speech.mp3 -show_entries format=duration -v quiet -of csv=p=0),setdar=9/16[vout]; [trimmed_music][speech]amix=inputs=2:duration=first[aout]" -map "[vout]" -map "[aout]" -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 192k final_output.mp4


    


    What it does is:

    


      

    • Get the duration of a speech.mp3 file
    • Crop a video.mp4 to portrait dimensions
    • Trim a longer music.mp3 file to the duration of the speech
    • Loop the video and trim the final loop so that the whole thing matches the duration of the speech
    • Adjust the volume of the music and speech
    • Combine them all into a single video with talking (speech) and music


    I can't figure out how to run it using FFmpeg wasm. I realise ffprobe isn't available in the wasm build, so we'll have to find a different way to get the duration of speech.mp3, probably by breaking the command up into two or more exec calls, but I have no idea how to do that, which is why I'm here asking for help.
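
    One possible way around the missing ffprobe (a minimal sketch, untested, assuming the @ffmpeg/ffmpeg 0.12.x API where log lines are delivered through ffmpeg.on("log", ...) and removed with ffmpeg.off): run a first exec against speech.mp3 alone and scrape the "Duration:" line that ffmpeg prints for every input, then feed that number into the filtergraph of a second exec. The helper name getDurationSeconds below is made up for illustration.

      // Hypothetical helper: probe a file's duration by parsing ffmpeg's own log
      // output, since ffprobe is not bundled with @ffmpeg/ffmpeg.
      const getDurationSeconds = async (ffmpeg, fileName) => {
        let duration = null;
        const onLog = ({ message }) => {
          // ffmpeg prints a line like "  Duration: 00:01:23.45, start: 0.000000, ..."
          const m = message.match(/Duration:\s*(\d+):(\d+):(\d+(?:\.\d+)?)/);
          if (m) duration = Number(m[1]) * 3600 + Number(m[2]) * 60 + Number(m[3]);
        };
        ffmpeg.on("log", onLog);
        // Decode to the null muxer just so ffmpeg prints the input banner and exits cleanly.
        await ffmpeg.exec(["-i", fileName, "-f", "null", "-"]);
        ffmpeg.off("log", onLog);
        return duration;
      };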

    


    For reference, here's the function into which I want to insert this exec function, but feel free to change it however needed. And let me know if I need to provide more information.

    


    const processVideo = async (speech, video, music) => {
      const ffmpeg = new FFmpeg();

      // ffmpeg loading code goes here, assume that part works without issue

      await ffmpeg.writeFile("video.mp4", new Uint8Array(video));
      await ffmpeg.writeFile("speech.mp3", new Uint8Array(speech));
      await ffmpeg.writeFile("music.mp3", new Uint8Array(music));

      await ffmpeg.exec([
        // command(s) should go here
      ]);

      const fileData = await ffmpeg.readFile("final_output.mp4");
      const blob = new Blob([fileData.buffer], { type: "video/mp4" });
      const blobUrl = URL.createObjectURL(blob);

      return blobUrl;
    };
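
    A rough sketch of how the exec call inside processVideo could then look (untested; getDurationSeconds is the hypothetical helper sketched above, and the filtergraph is simply the original command with the two ffprobe substitutions replaced by the probed value, each option passed as its own array element since there is no shell doing the quoting):

      // Inside processVideo, after the writeFile calls:
      const dur = await getDurationSeconds(ffmpeg, "speech.mp3");

      const filter =
        `[0:a]atrim=0:${dur},volume=0.08[trimmed_music];` +
        `[2:a]volume=2[speech];` +
        `[1:v]loop=-1,trim=duration=${dur},setdar=9/16[vout];` +
        `[trimmed_music][speech]amix=inputs=2:duration=first[aout]`;

      await ffmpeg.exec([
        "-i", "music.mp3",
        "-i", "video.mp4",
        "-i", "speech.mp3",
        "-filter_complex", filter,
        "-map", "[vout]",
        "-map", "[aout]",
        "-c:v", "libx264",
        "-crf", "23",
        "-preset", "medium",
        "-c:a", "aac",
        "-b:a", "192k",
        "final_output.mp4",
      ]);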


    


  • javacv and moviepy comparison for video generation

    15 September 2024, by Vikram

    I am trying to generate a video from images, where each image has some overlay text and PNG icons. I am using the javacv library for this.
The final output video looks pixelated, and I don't understand why, since I don't have video-processing domain knowledge; I am a beginner at this.
I know that video bitrate and the choice of video encoder are important factors that contribute to video quality, and there are many more factors too.

    


    I am providing two outputs for comparison: one generated using javacv and the other using the moviepy library.

    


    Please watch them in full screen, since the problem I am talking about only shows up in full screen: you will see the pixels dancing in the javacv-generated video, while the Python output looks stable.

    


    https://imgur.com/a/aowNnKg - javacv generated

    


    https://imgur.com/a/eiLXrbk - Moviepy generated

    


    I am using the same encoder in both implementations.

    


    Encoder: libx264 (both)
    Bitrate: 800 kbps (javacv), 500 kbps (moviepy)
    Frame rate: 24 fps (both)
    Output video size: 7 MB (javacv), 5 MB (moviepy)




    


    The output generated by javacv is bigger than the moviepy-generated video.

    


    Here is my Java configuration for FFmpegFrameRecorder:

    


        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(this.outputPath,
                                           this.screensizeX, this.screensizeY);
        if (this.videoCodecName != null && "libx264".equals(this.videoCodecName)) {
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        }
        recorder.setFormat("mp4");
        // Note: the avutil constant is AV_PIX_FMT_YUV420P (planar 4:2:0).
        recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
        recorder.setVideoBitrate(800000);
        recorder.setImageWidth(this.screensizeX);
        recorder.setFrameRate(24);



    


    And here is the Python configuration for writing the video file:

    


    Full_final_clip.write_videofile(
                            f"{video_folder_path}/{FILE_ID}_INTERMEDIATE.mp4",
                            codec="libx264",
                            audio_codec="aac",
                            temp_audiofile=f"{FILE_ID}_INTER_temp-audio.m4a",
                            remove_temp=True,
                            fps=24,
                        )



    


    As you can see, I am not specifying a bitrate in Python, but I checked that the bitrate of the final output is around 500 kbps, which is lower than what I specified in Java, yet the Java-generated video quality seems poor.

    


    I have also tried setting a CRF value, but it does not seem to have any impact when used.

    


    Increasing the bitrate improves quality somewhat, but at the cost of file size, and the generated output still seems pixelated.

    


    Can someone please point out what the issue might be, and how Python generates better-quality video when both libraries use ffmpeg on the backend?

    


    Edit 1: I am also adding the code used to create the zoom animation across consecutive frames, as I read somewhere that this might be the cause of the pixel jitter. Please take a look and let me know if there is any improvement we can make to remove the jittering.

    


    private Mat applyZoomEffect(Mat frame, int currentFrame, long effectFrames, int width, int height, String mode, String position, double speed) {
        long totalFrames = effectFrames;
        double i = currentFrame;
        if ("out".equals(mode)) {
            // For a zoom-out, run the animation backwards.
            i = totalFrames - i;
        }
        // Zoom factor grows linearly from 1.0 up to 1.0 + 0.1 * speed over the effect duration.
        double zoom = 1 + (i * ((0.1 * speed) / totalFrames));

        // Translation that keeps the chosen anchor point fixed while scaling.
        double x = 0, y = 0;
        switch (position.toLowerCase()) {
            case "center":
                // Keep the frame centre in place: shift by half of the extra size added by the zoom.
                x = (width - (width * zoom)) / 2.0;
                y = (height - (height * zoom)) / 2.0;
                break;
        }

        // 2x3 affine matrix [[zoom, 0, x], [0, zoom, y]] in row-major order.
        double[][] rowData = {{zoom, 0, x}, {0, zoom, y}};
        double[] flatData = flattenArray(rowData);

        // Wrap the flattened matrix in a DoublePointer and copy it into a CV_64F Mat.
        DoublePointer doublePointer = new DoublePointer(flatData);
        Mat mat = new Mat(2, 3, org.bytedeco.opencv.global.opencv_core.CV_64F); // CV_64F is for double type
        mat.data().put(doublePointer);

        // Apply the affine transform with Lanczos interpolation.
        Mat transformedFrame = new Mat();
        opencv_imgproc.warpAffine(frame, transformedFrame, mat, new Size(frame.cols(), frame.rows()),
                opencv_imgproc.INTER_LANCZOS4, 0, new Scalar(0, 0, 0, 0));
        return transformedFrame;
    }