Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (66)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP site to find out.

On other sites (8261)

  • How to make video from images using Java + x264; cross-platform solution required

    19 October 2014, by Shashank Tulsyan

    I have made a piece of software which records my entire day into a video.
    Example video: https://www.youtube.com/watch?v=ITZYMMcubdw (note: >16 hrs compressed into 2 mins, so the playback speed is very high and might cause epilepsy :P )

    The approach I use right now is Avisynth + x264 + Java.
    This is very efficient: the video for the entire day is created in 3-4 minutes and reduced to a size of 40-50 MB. This is perfect; the only issue is that this solution is not cross-platform.
    Does anyone have a better idea?

    I tried using Java-based x264 libraries, but:

    1. They are slow as hell.
    2. The output video size is too big.
    3. The video quality is not satisfactory.

    Some websites suggest a command such as:

    x264.exe --crf 18 --fps 24 --input-res 1920x1080 --input-csp rgb -o "T:\crf18.mkv" "T:\___BBB\big_buck_bunny_%05d.png"

    There are two problems with this approach.

    1. As far as I know, x264 does not accept an image sequence as input; ffmpeg does.
    2. The input images are not named in sequence (image01.png, image02.png, and so on); they are named timestamp_as_longinteger.png. So, to let x264 accept these images as input, I have to rename all of them (I make a symbolic link for every image in a new folder; see the sketch after this list). This approach is again unsatisfactory, because I need more flexibility in selecting/deselecting the files that get converted to a video. Right now my approach is a hack.
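
    A minimal sketch of that symlink workaround (the helper name and folder arguments are illustrative, not from the original; it assumes the timestamp filenames are fixed-width, so lexicographic order equals chronological order):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class SequenceLinker {
       /** Links the selected PNGs into dst as image00000.png, image00001.png, ... */
       public static void linkAsSequence(Path src, Path dst) throws IOException {
           Files.createDirectories(dst);
           try (Stream<Path> files = Files.list(src)) {
               List<Path> frames = files
                       .filter(p -> p.getFileName().toString().endsWith(".png"))
                       .sorted() // fixed-width millisecond timestamps sort chronologically
                       .collect(Collectors.toList());
               for (int i = 0; i < frames.size(); i++) {
                   // x264/ffmpeg can then be pointed at dst/image%05d.png
                   Files.createSymbolicLink(
                           dst.resolve(String.format("image%05d.png", i)),
                           frames.get(i).toAbsolutePath());
               }
           }
       }
    }

    On Windows, creating symbolic links typically requires elevated privileges, which is one more reason this renaming step counts as a hack.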

    The best solution is x264, but I am not sure how to send it an image sequence from Java, especially images that are not named sequentially.


    BTW, the purpose of making the video is to go back in time and find out how time was spent/wasted.
    The software is aware of what the user is doing, so I can use it to find out (visually) how a class evolved over time, and how much time I spent on a particular class/package/module/project/customer. The granularity is currently at the class level; I wish to take it to the function level. The software is called jitendriya.

    Here is a sample graph (image not reproduced here).


    Here is one solution:
    How does one encode a series of images into H264 using the x264 C API?

    But this is for C. If I have to do the same in Java, in a cross-platform fashion, I will have to resort to JNA/JNI. JNA might incur a significant performance hit; JNI would be more work.
    FFmpeg also looks like a nice alternative, but I am still not satisfied with any of these solutions after weighing their pros and cons.
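
    For what it's worth, the same pipe idea also works with ffmpeg: launch it reading rawvideo from standard input and push each frame's bytes into the process, so the files never need sequential names. A sketch under assumed parameters (the ffmpeg path, the 24 fps rate, and the CRF value are mine, not from the original):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class FfmpegPipe {
       /** Starts ffmpeg so that raw RGB24 frames can be written to its stdin. */
       public static Process start(String ffmpegPath, int width, int height,
               String outFile) throws IOException {
           List<String> cmd = Arrays.asList(ffmpegPath,
                   "-f", "rawvideo",           // uncompressed frames arrive on stdin
                   "-pix_fmt", "rgb24",
                   "-s", width + "x" + height,
                   "-r", "24",
                   "-i", "-",                  // "-" = read the input from stdin
                   "-c:v", "libx264", "-crf", "18",
                   outFile);
           return new ProcessBuilder(cmd).redirectErrorStream(true).start();
       }
    }

    Each frame is then width*height*3 bytes written to the process's OutputStream, exactly as the screenShot method in the adopted solution below does for x264; closing the stream finalizes the file.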


    The solution I adopted:

    package weeklyvideomaker;

    import java.awt.AWTException;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.Calendar;
    import java.util.LinkedList;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import neembuu.release1.util.StreamGobbler;
    import org.shashaank.activitymonitor.ScreenCaptureHandler;
    import org.shashaank.jitendriya.JitendriyaParams;

    /**
    *
    * @author Shashank
    */
    public class DirectVideoScreenHandler implements ScreenCaptureHandler {
       private final JitendriyaParams  jp;

       private String extension="264";
       private boolean lossless=false;
       private String fps="24/1";

       private Process p = null;
       private Rectangle r1;
       private Robot r;

       private int currentDay;

       private static final String[]weeks={"sun","mon","tue","wed","thu","fri","sat"};

       public DirectVideoScreenHandler(JitendriyaParams jp) {
           this.jp = jp;
       }

       public String getExtension() {
           return extension;
       }

       public void setExtension(String extension) {
           this.extension = extension;
       }

       public boolean isLossless() {
           return lossless;
       }

       public void setLossless(boolean lossless) {
           this.lossless = lossless;
       }

       public String getFps() {
           return fps;
       }

       public void setFps(String fps) {
           this.fps = fps;
       }

       private static int getday(){
           return Calendar.getInstance().get(Calendar.DAY_OF_WEEK) - 1;
       }

       public void make()throws IOException,AWTException{
           currentDay = getday();
           File week = jp.getWeekFolder();

           String destinationFile = week+"\\videos\\"+weeks[currentDay]+"_"+System.currentTimeMillis()+"_direct."+extension;

           r = new Robot();
           r1 = getScreenSize();

           ProcessBuilder pb = makeProcess(destinationFile, 0, r1.width, r1.height);

           p = pb.start();
           StreamGobbler out = new StreamGobbler(p.getInputStream(), "out");
           StreamGobbler err = new StreamGobbler(p.getErrorStream(), "err");
           out.start();err.start();
       }

       private static Rectangle getScreenSize(){
           return new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
       }

       private void screenShot(OutputStream os)throws IOException{
           BufferedImage bi = r.createScreenCapture(r1);
           // The capture is backed by one int per pixel, laid out as 0x00RRGGBB.
           int[]intRawData = ((java.awt.image.DataBufferInt)
                   bi.getRaster().getDataBuffer()).getData();
           // Unpack into the packed RGB24 byte stream x264 reads from stdin
           // (this matches the --input-csp rgb flag passed in makeProcess).
           byte[]rawData = new byte[intRawData.length*3];
           for (int i = 0; i < intRawData.length; i++) {
               int rgb = intRawData[i];
               rawData[ i*3 + 0 ] = (byte) (rgb >> 16); // red
               rawData[ i*3 + 1 ] = (byte) (rgb >> 8);  // green
               rawData[ i*3 + 2 ] = (byte) (rgb);       // blue
           }
           os.write(rawData);
       }

       private ProcessBuilder makeProcess(String destinationFile, int numberOfFrames,
               int width, int height){
           LinkedList<String> commands = new LinkedList<>();
           commands.add("\""+encoderPath()+"\"");
           if(true){
               commands.add("-");
               if(lossless){
                   commands.add("--qp");
                   commands.add("0");
               }
               commands.add("--keyint");
               commands.add("240");
               commands.add("--sar");
               commands.add("1:1");
               commands.add("--output");
               commands.add("\""+destinationFile+"\"");
               if(numberOfFrames>0){
                   commands.add("--frames");
                   commands.add(String.valueOf(numberOfFrames));
               }else{
                   commands.add("--stitchable");
               }
               commands.add("--fps");
               commands.add(fps);
               commands.add("--input-res");
               commands.add(width+"x"+height);
               commands.add("--input-csp");
               commands.add("rgb");//i420
           }
           return new ProcessBuilder(commands);
       }

       private String encoderPath(){
           return jp.getToolsPath()+File.separatorChar+"x264_64.exe";
       }

       @Override public void run() {
           try {
               if(p==null){
                   make();
               }
               if(currentDay!=getday()){// day changed
                   destroy();
                   return;
               }
               if(!r1.equals(getScreenSize())){// screensize changed
                   destroy();
                   return;
               }
               screenShot(p.getOutputStream());
           } catch (Exception ex) {
               Logger.getLogger(DirectVideoScreenHandler.class.getName()).log(Level.SEVERE, null, ex);
           }
       }

       private void destroy()throws Exception{
           p.getOutputStream().flush();
           p.getOutputStream().close();
           p.destroy();
           p = null;
       }

    }

    package weeklyvideomaker;

    import org.shashaank.jitendriya.JitendriyaParams;

    /**
    *
    * @author Shashank
    */
    public class DirectVideoScreenHandlerTest {
       public static void main(String[] args)throws Exception {
           JitendriyaParams  jp = new JitendriyaParams.Builder()
                   .setToolsPath("F:\\GeneralProjects\\JReminder\\development_environment\\tools")
                   .setOsDependentDataFolderPath("J:\\jt_data")
                   .build();
           DirectVideoScreenHandler w = new DirectVideoScreenHandler(jp);
           w.setExtension("264");
           w.setFps("24/1");
           w.setLossless(false);
           w.make();

           for (int i = 0; ; i++) {
               w.run();
               Thread.sleep(1000);
           }
       }
    }
  • Get Proper Progress Updates on Two Long Waited Concurrent Processes in ASP.NET

    17 July 2012, by irfanmcsd

    I implemented background video processing using the .NET FFmpeg wrapper from http://www.mediasoftpro.com, with a progress bar indicator that shows how much of the video has been processed; this information is sent to the web page to update the indicator. It works fine as long as only a single process runs at a time, but with two concurrent processes (say, two video publishing jobs started at once from two different computers), the progress bar suddenly shows mixed progress status.
    Here is my code, where I used static objects to send the information of a single instance to the progress bar.

    static string FileName = "grey_03";
    protected void Page_Load(object sender, EventArgs e)
    {
       if (!Page.IsPostBack)
       {
           if (Request.Params["file"] != null)
           {
               FileName = Request.Params["file"].ToString();
           }
       }
    }
    public static double ProgressValue = 0;
    public static MediaHandler _mhandler = new MediaHandler();

    [WebMethod]
    public static string EncodeVideo()
    {
       // MediaHandler _mhandler = new MediaHandler();
       string RootPath = HttpContext.Current.Server.MapPath(HttpContext.Current.Request.ApplicationPath);
       _mhandler.FFMPEGPath = HttpContext.Current.Server.MapPath("~\\ffmpeg_july_2012\\bin\\ffmpeg.exe");
       _mhandler.InputPath = RootPath + "\\contents\\original";
       _mhandler.OutputPath = RootPath + "\\contents\\mp4";
       _mhandler.BackgroundProcessing = true;
       _mhandler.FileName = "Grey.avi";
       _mhandler.OutputFileName =FileName;
       string presetpath = RootPath + "\\ffmpeg_july_2012\\presets\\libx264-ipod640.ffpreset";
       _mhandler.Parameters = " -b:a 192k -b:v 500k -fpre \"" + presetpath + "\"";
       _mhandler.OutputExtension = ".mp4";
       _mhandler.VCodec = "libx264";
       _mhandler.ACodec = "libvo_aacenc";
       _mhandler.Channel = 2;
       _mhandler.ProcessMedia();
       return _mhandler.vinfo.ErrorCode.ToString();
    }

    [WebMethod]
    public static string GetProgressStatus()
    {
       return Math.Round(_mhandler.vinfo.ProcessingCompleted, 2).ToString();
       // if vinfo.processingcomplete==100, then you can get complete information from vinfo object and store it in database and perform other processing.
    }

    Here are the jQuery functions responsible for updating the progress bar indicator every second:

    $(function () {
            $("#vprocess").on({
                click: function (e) {
                    ProcessEncoding();
                    var IntervalID = setInterval(function () {
                        GetProgressValue(IntervalID);
                    }, 1000);
                    return false;
                }
            }, '#btn_process');

        });
        function GetProgressValue(intervalid) {
            $.ajax({
                type: "POST",
                url: "concurrent_03.aspx/GetProgressStatus",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    // Do something interesting here.
                    $("#pstats").text(msg.d);
                    $("#pbar_int_01").attr(&#39;style&#39;, &#39;width: &#39; + msg.d + &#39;%;&#39;);
                    if (msg.d == "100") {
                        $(&#39;#pbar01&#39;).removeClass("progress-danger");
                        $(&#39;#pbar01&#39;).addClass("progress-success");
                        if (intervalid != 0) {
                            clearInterval(intervalid);
                        }
                        FetchInfo();
                    }
                }
            });
        }

    The problem arises from the static MediaHandler object:

    public static MediaHandler _mhandler = new MediaHandler();

    I need a way to keep the information of the two concurrent processes separate from each other, so that the progress bar is updated with the value that actually belongs to that process.
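
    One common fix is to key each encoding job by an id and keep per-job state in a concurrent map instead of one shared static handler: EncodeVideo would create and return the id, and GetProgressStatus would take it as a parameter. A sketch of that registry idea in Java (the names are illustrative; in the ASP.NET page the equivalent would be a static ConcurrentDictionary holding one MediaHandler per job id):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class ProgressRegistry {
       // One progress slot per encoding job instead of a single shared static value.
       private static final Map<String, Double> JOBS = new ConcurrentHashMap<>();

       /** Called when an encode starts; the returned id is handed to the client. */
       public static String startJob() {
           String id = UUID.randomUUID().toString();
           JOBS.put(id, 0.0);
           return id;
       }

       /** Called by the encoder as it progresses. */
       public static void update(String id, double percent) {
           JOBS.put(id, percent);
       }

       /** Polled by the page; each client asks only about its own job. */
       public static double progress(String id) {
           return JOBS.getOrDefault(id, 0.0);
       }

       public static void finish(String id) {
           JOBS.remove(id);
       }
    }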

  • Cropping Square Video using FFmpeg

    27 June 2014, by zoruc

    Updated

    So I am trying to decode an mp4 file, crop the video into a square, and then re-encode it back out to another mp4 file. This is my current code, but there are a few issues with it.

    One is that the video doesn't keep its rotation after being re-encoded.

    Second, the frames get output as a very fast video file that is not the same length as the original.

    Third, there is no sound.

    Lastly, and most importantly: do I need AVFilter to do the frame cropping, or can it just be done per frame as a resize of the frame and then encoded back out?
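
    As a point of comparison before reaching for the C filter API: a centered square crop can be expressed with the ffmpeg command-line crop filter, which also keeps the audio and the original timing (the square geometry via min(iw,ih) is an assumption about the intended crop):

    ffmpeg -i test.mp4 -vf "crop='min(iw,ih)':'min(iw,ih)'" -c:v libx264 -c:a copy cropped.mp4

    This does not by itself answer the rotation question, since rotation is stored as container/stream metadata and is handled separately from the pixel crop.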

    const char *inputPath = "test.mp4";
    const char *outPath = "cropped.mp4";
    const char *outFileType = "mp4";

    static AVFrame *oframe = NULL;
    static AVFilterGraph *filterGraph = NULL;  
    static AVFilterContext *crop_ctx = NULL;
    static AVFilterContext *buffersink_ctx = NULL;
    static AVFilterContext *buffer_ctx = NULL;

    int err;

    int crop_video(int width, int height) {

    av_register_all();
    avcodec_register_all();
    avfilter_register_all();

    AVFormatContext *inCtx = NULL;

    // open input file
     err = avformat_open_input(&inCtx, inputPath, NULL, NULL);
     if (err < 0) {
       printf("error at open input in\n");
       return err;
    }

    // get input file stream info
    err = avformat_find_stream_info(inCtx, NULL);
     if (err < 0) {
       printf("error at find stream info\n");
       return err;
    }

    // get info about video
    av_dump_format(inCtx, 0, inputPath, 0);

    // find video input stream
    int vs = -1;
    int s;
     for (s = 0; s < inCtx->nb_streams; ++s) {
        if (inCtx->streams[s] && inCtx->streams[s]->codec && inCtx->streams[s]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
           vs = s;
           break;
       }
    }

    // check if video stream is valid
    if (vs == -1) {
       printf("error at open video stream\n");
       return -1;
    }

    // set output format
    AVOutputFormat * outFmt = av_guess_format(outFileType, NULL, NULL);
    if (!outFmt) {
       printf("error at output format\n");
       return -1;
    }

    // get an output context to write to
    AVFormatContext *outCtx = NULL;
     err = avformat_alloc_output_context2(&outCtx, outFmt, NULL, NULL);
     if (err < 0 || !outCtx) {
       printf("error at output context\n");
       return err;
    }

    // input and output stream
    AVStream *outStrm = avformat_new_stream(outCtx, NULL);
    AVStream *inStrm = inCtx->streams[vs];

    // add a new codec for the output stream
    AVCodec *codec = NULL;
    avcodec_get_context_defaults3(outStrm->codec, codec);

    outStrm->codec->thread_count = 1;

    outStrm->codec->coder_type = AVMEDIA_TYPE_VIDEO;

     if(outCtx->oformat->flags & AVFMT_GLOBALHEADER) {
       outStrm->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }

    outStrm->codec->sample_aspect_ratio = outStrm->sample_aspect_ratio = inStrm->sample_aspect_ratio;

     err = avio_open(&outCtx->pb, outPath, AVIO_FLAG_WRITE);
     if (err < 0) {
       printf("error at opening outpath\n");
       return err;
    }

    outStrm->disposition = inStrm->disposition;
    outStrm->codec->bits_per_raw_sample = inStrm->codec->bits_per_raw_sample;
    outStrm->codec->chroma_sample_location = inStrm->codec->chroma_sample_location;
    outStrm->codec->codec_id = inStrm->codec->codec_id;
    outStrm->codec->codec_type = inStrm->codec->codec_type;

    if (!outStrm->codec->codec_tag) {
       if (! outCtx->oformat->codec_tag
           || av_codec_get_id (outCtx->oformat->codec_tag, inStrm->codec->codec_tag) == outStrm->codec->codec_id
         || av_codec_get_tag(outCtx->oformat->codec_tag, inStrm->codec->codec_id) <= 0) {
           outStrm->codec->codec_tag = inStrm->codec->codec_tag;
       }
    }

    outStrm->codec->bit_rate = inStrm->codec->bit_rate;
    outStrm->codec->rc_max_rate = inStrm->codec->rc_max_rate;
    outStrm->codec->rc_buffer_size = inStrm->codec->rc_buffer_size;

    const size_t extra_size_alloc = (inStrm->codec->extradata_size > 0) ?
    (inStrm->codec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE) :
    0;

    if (extra_size_alloc) {
       outStrm->codec->extradata = (uint8_t*)av_mallocz(extra_size_alloc);
       memcpy( outStrm->codec->extradata, inStrm->codec->extradata, inStrm->codec->extradata_size);
    }

    outStrm->codec->extradata_size = inStrm->codec->extradata_size;

    AVRational input_time_base = inStrm->time_base;
    AVRational frameRate = {25, 1};
     if (inStrm->r_frame_rate.num && inStrm->r_frame_rate.den
        && (1.0 * inStrm->r_frame_rate.num / inStrm->r_frame_rate.den < 1000.0)) {
       frameRate.num = inStrm->r_frame_rate.num;
       frameRate.den = inStrm->r_frame_rate.den;
    }

    outStrm->r_frame_rate = frameRate;
    outStrm->codec->time_base = inStrm->codec->time_base;

    outStrm->codec->pix_fmt = inStrm->codec->pix_fmt;
    outStrm->codec->width = width;
    outStrm->codec->height =  height;
    outStrm->codec->has_b_frames =  inStrm->codec->has_b_frames;

    if (!outStrm->codec->sample_aspect_ratio.num) {
       AVRational r0 = {0, 1};
       outStrm->codec->sample_aspect_ratio =
       outStrm->sample_aspect_ratio =
       inStrm->sample_aspect_ratio.num ? inStrm->sample_aspect_ratio :
       inStrm->codec->sample_aspect_ratio.num ?
       inStrm->codec->sample_aspect_ratio : r0;
    }

    avformat_write_header(outCtx, NULL);

    filterGraph = avfilter_graph_alloc();
    if (!filterGraph) {
       printf("could not open filter graph");
       return -1;
    }

    AVFilter *crop = avfilter_get_by_name("crop");
    AVFilter *buffer = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink");

    char args[512];

    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            width, height, inStrm->codec->pix_fmt,
            inStrm->codec->time_base.num, inStrm->codec->time_base.den,
            inStrm->codec->sample_aspect_ratio.num, inStrm->codec->sample_aspect_ratio.den);

     err = avfilter_graph_create_filter(&buffer_ctx, buffer, NULL, args, NULL, filterGraph);
     if (err < 0) {
       printf("error initializing buffer filter\n");
       return err;
    }

     err = avfilter_graph_create_filter(&buffersink_ctx, buffersink, NULL, NULL, NULL, filterGraph);
     if (err < 0) {
       printf("unable to create buffersink filter\n");
       return err;
    }
    snprintf(args, sizeof(args), "%d:%d", width, height);
     err = avfilter_graph_create_filter(&crop_ctx, crop, NULL, args, NULL, filterGraph);
     if (err < 0) {
       printf("error initializing crop filter\n");
       return err;
    }

     err = avfilter_link(buffer_ctx, 0, crop_ctx, 0);
     if (err < 0) {
       printf("error linking filters\n");
       return err;
    }

     err = avfilter_link(crop_ctx, 0, buffersink_ctx, 0);
     if (err < 0) {
       printf("error linking filters\n");
       return err;
    }

     err = avfilter_graph_config(filterGraph, NULL);
     if (err < 0) {
       printf("error configuring the filter graph\n");
       return err;
    }

    printf("filtergraph configured\n");

    for (;;) {

       AVPacket packet = {0};
        av_init_packet(&packet);

        err = AVERROR(EAGAIN);
        while (AVERROR(EAGAIN) == err)
            err = av_read_frame(inCtx, &packet);

        if (err < 0) {
            if (AVERROR_EOF != err && AVERROR(EIO) != err) {
               printf("eof error\n");
               return 1;
           } else {
               break;
           }
       }

       if (packet.stream_index == vs) {

           //
           //            AVPacket pkt_temp_;
            //            memset(&pkt_temp_, 0, sizeof(pkt_temp_));
            //            AVPacket *pkt_temp = &pkt_temp_;
           //
           //            *pkt_temp = packet;
           //
           //            int error, got_frame;
           //            int new_packet = 1;
           //
            //            error = avcodec_decode_video2(inStrm->codec, frame, &got_frame, pkt_temp);
            //            if(error < 0) {
           //                LOGE("error %d", error);
           //            }
           //
           //            // if (error >= 0) {
           //
           //            // push the video data from decoded frame into the filtergraph
            //            int err = av_buffersrc_write_frame(buffer_ctx, frame);
            //            if (err < 0) {
           //                LOGE("error writing frame to buffersrc");
           //                return -1;
           //            }
           //            // pull filtered video from the filtergraph
           //            for (;;) {
           //                int err = av_buffersink_get_frame(buffersink_ctx, oframe);
           //                if (err == AVERROR_EOF || err == AVERROR(EAGAIN))
           //                    break;
            //                if (err < 0) {
           //                    LOGE("error reading buffer from buffersink");
           //                    return -1;
           //                }
           //            }
           //
           //            LOGI("output frame");

            err = av_interleaved_write_frame(outCtx, &packet);
            if (err < 0) {
               printf("error at write frame");
               return -1;
           }

           //}
       }

        av_free_packet(&packet);
    }

    av_write_trailer(outCtx);
     if (!(outCtx->oformat->flags & AVFMT_NOFILE) && outCtx->pb)
       avio_close(outCtx->pb);

    avformat_free_context(outCtx);
     avformat_close_input(&inCtx);

    return 0;

    }