Advanced search

Media (91)

Other articles (48)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011

    By default, when it is initialised, MediaSPIP does not let visitors download files, whether they are originals or the result of their transformation or encoding; it only lets them be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    This is all handled in the template configuration page: go to the channel's administration area and choose, in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

On other sites (8707)

  • How to convert a CCTV footage into time lapse video by cutting parts of the video by a set interval on FFMPEG [duplicate]

    2 December 2020, by mark

    I have a bunch of CCTV footage that I want to look like it was recorded by a time-lapse camera. Each video file is around 3 hours long, capturing scenes in real time (from 1pm-3pm, for example), and in one day I get around 8 files (8 files * 3 hours = 24 hours = 1 day).

    I want to condense those 24 hours of footage into 1 minute, making 1 day = 1 min of video: not just speeding it up, but actually sampling the scenes at a set interval. A typical time-lapse camera takes one photo every 10 minutes and, at the end of the day, stitches the photos into one video. How can I do something like that with FFmpeg?

    I'm using FFmpeg Batch converter and here is my code so far. It just speeds my videos up but does not cut them into intervals:

    -filter:v "setpts=0.25*PTS" -an

    I ended up with this code:

    -vf framestep=25,select='not(mod(n,1000))',setpts=N/FRAME_RATE/TB -an

    The above code turns a 1-hour video into 4 seconds, which is perfect for my needs.
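
    For reference, the one-photo-per-interval behaviour of a time-lapse camera can also be expressed with a single select filter, without the framestep pre-pass. This is only a sketch: the input filename, the assumed 25 fps source rate and the 10-minute interval are placeholders to adapt.

```shell
# At 25 fps, a 10-minute interval is 25 * 600 = 15000 frames.
# Keep one frame out of every 15000, re-time the survivors so they
# play back-to-back, and drop the audio as in the commands above.
ffmpeg -i cctv_footage.mp4 \
       -vf "select='not(mod(n,15000))',setpts=N/FRAME_RATE/TB" \
       -an timelapse.mp4
```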

  • Encoding of video returns 0, and nothing written to the output file

    14 July 2014, by AnilJ

    I have written code to record the webcam feed into a file on disk. I am attempting to do this using the IContainer class rather than the IMediaWriter class. I am pasting a snippet below showing the important sections of the code.

    The problem I am facing is that nothing is written to the file. Some of the observations I have made are as follows:

    1. In the Record() function, the while loop is kicked off, but the mVideoEncoder.encodeVideo(packet, frame, offset) method always returns zero (0). As a result, no packet is ever completed and no data is written to the output file. Can you provide a clue as to what is missing?
    2. I checked that the frame size is 80640, which confirms that the frame has data.
    3. I see that only the header and trailer are written to the file.

    Let me know if you need any other information.

    public class WebcamRecorder {

       private boolean StartVideoEncoder() {

           boolean result = true;

           // Open a container
           mPositionInMicroseconds = 0;
           mOutputContainer = IContainer.make();
           mOutputContainer.open(mOutputFileName, IContainer.Type.WRITE, null);

           // Create the video stream and get its coder
           ICodec videoCodec = ICodec.findEncodingCodec(ICodec.ID.CODEC_ID_H264);
           IStream videoStream = mOutputContainer.addNewStream(videoCodec);
           mVideoEncoder = videoStream.getStreamCoder();

           // Setup the stream coder
           mFrameRate = IRational.make(1, 30);
           mVideoEncoder.setWidth(Constants.RESAMPLE_PICT_WIDTH);
           mVideoEncoder.setHeight(Constants.RESAMPLE_PICT_HEIGHT);
           mVideoEncoder.setFrameRate(mFrameRate);
           mVideoEncoder.setTimeBase(IRational.make(mFrameRate.getDenominator(),
                                     mFrameRate.getNumerator()));
           mVideoEncoder.setBitRate(350000);
           mVideoEncoder.setNumPicturesInGroupOfPictures(30);
           mVideoEncoder.setPixelType(IPixelFormat.Type.YUV420P);
           mVideoEncoder.setFlag(IStreamCoder.Flags.FLAG_QSCALE, true);
           mVideoEncoder.setGlobalQuality(0);

           // Open the encoder
           mVideoEncoder.open(null, null);

           // Write the header
           mOutputContainer.writeHeader();

           return result;
       }

       public void Record() {

           picture = GetNextPicture();
           image = Utils.videoPictureToImage(picture);
           // convert to the right image type
           BufferedImage bgrScreen = ConvertToType(image, BufferedImage.TYPE_3BYTE_BGR);
           IConverter converter = ConverterFactory.createConverter(bgrScreen, mVideoEncoder.getPixelType());
           IVideoPicture frame = converter.toPicture(bgrScreen, mPositionInMicroseconds);
           frame.setQuality(0);

           IPacket packet = IPacket.make();
           int offset = 0;
           while (offset < frame.getSize()) {
               int bytesEncoded = mVideoEncoder.encodeVideo(packet, frame, offset);
               if (bytesEncoded < 0) {
                   throw new RuntimeException("Unable to encode video.");
               }
               offset += bytesEncoded;

               if (packet.isComplete()) {
                   System.out.println("Packet is complete");
                   if (mOutputContainer.writePacket(packet) < 0) {
                       throw new RuntimeException(
                               "Could not write packet to container.");
                   }

                   // Update frame time
                   mPositionInMicroseconds += (mFrameRate.getDouble() * Math.pow(1000, 2));
                   break;
               }
           }
       }

       public void Cleanup() {

           if (mOutputContainer != null) {
               mOutputContainer.writeTrailer();
               mOutputContainer.close();
               // mOutputContainer.flushPackets();
           }

           if (mVideoEncoder != null) {
               mVideoEncoder.close();
           }
       }
    }
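
    Before digging deeper into the Xuggler calls, it can help to rule out the codec itself by encoding a synthetic clip with the plain FFmpeg CLI, using the same 30 fps frame rate and 350k bitrate as the Java code above. This is only a sanity-check sketch; the testsrc input and output filename are illustrative, not part of the original program.

```shell
# Encode 2 seconds of a generated test pattern to H.264 at 30 fps
# and 350 kb/s; if this produces a playable file, the codec and
# container are fine and the problem is in how the code drives them.
ffmpeg -y -f lavfi -i testsrc=duration=2:size=320x240:rate=30 \
       -c:v libx264 -b:v 350k sanity_check.mp4
```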
  • RTP: dropping old packet received too late

    2 July 2020, by Yves

    I'm using JavaCV to process an RTSP video stream. What I've done is grab each frame of the stream and write it to a JPG file. Here is my code:

package test;

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class PravegaCameraConnector
{
    public static void grabberVideoFramer() {
        Frame frame = null;
        int flag = 0;
        int max_value = 9999999;
        FFmpegFrameGrabber fFmpegFrameGrabber = new FFmpegFrameGrabber("rtsp://192.168.1.11:8554/stream");
        fFmpegFrameGrabber.setFrameRate(30);
        try {
            fFmpegFrameGrabber.start();
            BufferedImage bImage = null;
            while (flag < max_value) {
                String fileName = "/home/rtsp/tmp/imgs/img_" + String.valueOf(flag) + ".jpg";
                File outPut = new File(fileName);
                frame = fFmpegFrameGrabber.grabImage();
                if (frame != null) {
                    ImageIO.write(FrameToBufferedImage(frame), "jpg", outPut);
                }
                flag++;
                if (flag == max_value) {
                    flag = 0;
                }
            }
            fFmpegFrameGrabber.stop();
        } catch (IOException E) {
            // nothing to do
        }
    }

    public static BufferedImage FrameToBufferedImage(Frame frame) {
        Java2DFrameConverter converter = new Java2DFrameConverter();
        BufferedImage bufferedImage = converter.getBufferedImage(frame);
        return bufferedImage;
    }

    public static void main(String[] args) {
        grabberVideoFramer();
    }
}

    It seems to work: many JPG files are generated.

    However, I get many warnings and errors while running this code:

    [h264 @ 0x7fe1b0491780] error while decoding MB 26 37, bytestream -35
[h264 @ 0x7fe1b0491780] Cannot use next picture in error concealment
[h264 @ 0x7fe1b0491780] concealing 3743 DC, 3743 AC, 3743 MV errors in P frame
[rtsp @ 0x7fe1b0447480] max delay reached. need to consume packet
[rtsp @ 0x7fe1b0447480] RTP: missed 17 packets
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[h264 @ 0x7fe1b0ff6400] error while decoding MB 54 32, bytestream -15
[h264 @ 0x7fe1b0ff6400] concealing 4315 DC, 4315 AC, 4315 MV errors in B frame
[rtsp @ 0x7fe1b0447480] max delay reached. need to consume packet
[rtsp @ 0x7fe1b0447480] RTP: missed 1 packets
[rtsp @ 0x7fe1b0447480] RTP: dropping old packet received too late
[h264 @ 0x7fe1b0530c40] error while decoding MB 72 42, bytestream -6

    Why did I get these?

    Is it because the RTSP source sends data faster than my code can process it?

    Do I need to make two threads: one to receive data from the RTSP source and the other to write the data to JPG files?
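
    As for the warnings: "RTP: missed ... packets" and "RTP: dropping old packet received too late" usually point at RTSP carried over UDP, where packets are lost or reordered whenever the receiver falls behind; they are not necessarily caused by the JPG writing itself. One way to test this hypothesis outside Java is to grab the same stream with the FFmpeg CLI while forcing TCP interleaving (the URL and output path below are the ones from the question):

```shell
# Pull the same RTSP stream with RTP interleaved over TCP, so packets
# cannot be dropped or reordered by the network, and dump each frame
# as a JPG; if the warnings disappear here, setting the same
# rtsp_transport option on the Java grabber is the next thing to try.
ffmpeg -rtsp_transport tcp -i rtsp://192.168.1.11:8554/stream \
       -q:v 2 /home/rtsp/tmp/imgs/img_%07d.jpg
```

    If TCP transport alone is not enough, decoupling grabbing from JPG writing with a producer/consumer queue, as suggested above, is a reasonable next step, since ImageIO.write can easily be slower than the stream's frame rate.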