
Other articles (77)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects hosted on it, using the news section.
    In MédiaSPIP’s default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: in the case of a news-type document, the fields offered by default are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (5849)

  • How to obtain time markers for video splitting using python/OpenCV

    10 November 2018, by Bleddyn Raw-Rees

    I’m working on my MSc project which is researching automated deletion of low value content in digital file stores. I’m specifically looking at the sort of long shots that often occur in natural history filming whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60s of useful content with perhaps several hours of worthless content either side.

    As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.

    What I would like help with is how to get in and out points based on the first and last points that motion is detected in the video.

    Here is the code, should you wish to see it; a short sketch of the timestamp-to-ffmpeg step follows after it.

    # import the necessary packages
    import argparse
    import datetime
    import imutils
    import time
    import cv2

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
    args = vars(ap.parse_args())

    # if the video argument is None, then we are reading from webcam
    if args.get("video", None) is None:
       camera = cv2.VideoCapture(0)
       time.sleep(0.25)

    # otherwise, we are reading from a video file
    else:
       camera = cv2.VideoCapture(args["video"])

    # initialize the first frame in the video stream
    firstFrame = None

    # loop over the frames of the video
    while True:
       # grab the current frame and initialize the occupied/unoccupied
       # text
       (grabbed, frame) = camera.read()
       text = "Unoccupied"

       # if the frame could not be grabbed, then we have reached the end
       # of the video
       if not grabbed:
           break

       # resize the frame, convert it to grayscale, and blur it
       frame = imutils.resize(frame, width=500)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       gray = cv2.GaussianBlur(gray, (21, 21), 0)

       # if the first frame is None, initialize it
       if firstFrame is None:
           firstFrame = gray
           continue

       # compute the absolute difference between the current frame and
       # first frame
       frameDelta = cv2.absdiff(firstFrame, gray)
       thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

       # dilate the thresholded image to fill in holes, then find contours
       # on thresholded image
       thresh = cv2.dilate(thresh, None, iterations=2)
       (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

       # loop over the contours
       for c in cnts:
           # if the contour is too small, ignore it
           if cv2.contourArea(c) < args["min_area"]:
               continue

           # compute the bounding box for the contour, draw it on the frame,
           # and update the text
           (x, y, w, h) = cv2.boundingRect(c)
           cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
           text = "Occupied"

       # draw the text and timestamp on the frame
       cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
           cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
       cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
           (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

       # show the frame and record if the user presses a key
       cv2.imshow("Security Feed", frame)
       cv2.imshow("Thresh", thresh)
       cv2.imshow("Frame Delta", frameDelta)
       key = cv2.waitKey(1) & 0xFF

       # if the `q` key is pressed, break from the loop
       if key == ord("q"):
           break

    # cleanup the camera and close any open windows
    camera.release()
    cv2.destroyAllWindows()
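
    For what it is worth, below is a minimal sketch of the step being asked about, assuming the loop above is extended to count frames and to record the first and last frame index at which a large-enough contour is found. The helper name, the padding value and the stream-copy cut are illustrative assumptions, not something from the original post:

    # Hypothetical helper: turn first/last motion frame indices into an ffmpeg cut.
    import subprocess

    import cv2

    def split_on_motion(video_path, first_motion_frame, last_motion_frame,
                        out_path, pad=1.0):
        # Ask OpenCV for the frame rate so frame indices become seconds.
        # (fps may come back 0 for some containers; fall back to a known rate if so)
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        cap.release()

        # In and out points in seconds, padded a little on each side.
        start = max(first_motion_frame / fps - pad, 0.0)
        duration = (last_motion_frame / fps + pad) - start

        # -ss before -i seeks the input, -t limits the output duration;
        # with "-c copy" the cut snaps to the nearest keyframes.
        subprocess.run([
            "ffmpeg", "-y",
            "-ss", "%.3f" % start,
            "-i", video_path,
            "-t", "%.3f" % duration,
            "-c", "copy",
            out_path,
        ], check=True)

    In the detection loop itself, one would increment a frame counter after every successful camera.read() and, whenever a contour passes the min-area test, record that counter as the last motion frame (and as the first one if it has not been set yet).
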
  • Convert ffmpeg frame into array of YUV pixels in C

    9 June 2016, by loneraver

    I’m using the ffmpeg C libraries and trying to convert an AVFrame into a 2D array of pixels with YUV* components for analysis. I figured out how to convert the Y component for each pixel:

    uint8_t y_val = pFrame->data[0][pFrame->linesize[0] * y + x];

    Since every frame has a Y component, this part is easy. However, most digital video does not use 4:4:4 chroma subsampling, so getting the U and V components is stumping me (a sketch of the usual chroma indexing follows the Valgrind output below).

    I’m using straight C for this project. No C++. Any ideas?

    *Note: Yes, I know it’s technically YCbCr and not YUV.

    Edit:

    I’m rather new to C, so it might not be the prettiest code out there.

    When I try:

    VisYUVFrame *VisCreateYUVFrame(const AVFrame *pFrame){
       VisYUVFrame *tmp = (VisYUVFrame*)malloc(sizeof(VisYUVFrame));
       if(tmp == NULL){ return NULL;}
       tmp->height = pFrame->height;
       tmp->width = pFrame->width;

       tmp->data = (PixelYUV***)malloc(sizeof(PixelYUV**) * pFrame->height);
       if(tmp->data == NULL) { return NULL;};

       for(int y = 0; y < pFrame->height; y++){
           tmp->data[y] = (PixelYUV**)malloc(sizeof(PixelYUV*) * pFrame->width);
           if(tmp->data[y] == NULL) { return NULL;}

           for(int x = 0; x < pFrame->width; x++){
               tmp->data[y][x] = (PixelYUV*)malloc(sizeof(PixelYUV*));
               if(tmp->data[y][x] == NULL){ return NULL;};
               tmp->data[y][x]->Y = pFrame->data[0][pFrame->linesize[0] * y + x];
               tmp->data[y][x]->U = pFrame->data[1][pFrame->linesize[1] * y + x];
               tmp->data[y][x]->V = pFrame->data[2][pFrame->linesize[2] * y + x];

           }
       }

       return tmp;
    }

    Luma works, but when I run Valgrind I get:

    Invalid read of size 1
       at 0x100003699: VisCreateYUVFrame (/Users/hborcher/ClionProjects/borcherscope/lib/visualization.c:145)
       by 0x100006B5B: render (/Users/hborcher/ClionProjects/borcherscope/lib/decoder/simpleDecoder2.c:253)
       by 0x100002D24: main (/Users/hborcher/ClionProjects/borcherscope/src/createvisual2.c:93)
     Address 0x10e9f91ef is 0 bytes after a block of size 92,207 alloc'd
       at 0x100013EEA: malloc_zone_memalign (in /usr/local/Cellar/valgrind/3.11.0/lib/valgrind/vgpreload_memcheck-amd64-darwin.so)
       by 0x1084B5416: posix_memalign (in /usr/lib/system/libsystem_malloc.dylib)
       by 0x10135D317: av_malloc (in /usr/local/Cellar/ffmpeg/3.0.2/lib/libavutil.55.17.103.dylib)

    Invalid read of size 1
       at 0x1000036BA: VisCreateYUVFrame (/Users/hborcher/ClionProjects/borcherscope/lib/visualization.c:147)
       by 0x100006B5B: render (/Users/hborcher/ClionProjects/borcherscope/lib/decoder/simpleDecoder2.c:253)
       by 0x100002D24: main (/Users/hborcher/ClionProjects/borcherscope/src/createvisual2.c:93)
     Address 0x10e9f91ef is 0 bytes after a block of size 92,207 alloc'd
       at 0x100013EEA: malloc_zone_memalign (in /usr/local/Cellar/valgrind/3.11.0/lib/valgrind/vgpreload_memcheck-amd64-darwin.so)
       by 0x1084B5416: posix_memalign (in /usr/lib/system/libsystem_malloc.dylib)
       by 0x10135D317: av_malloc (in /usr/local/Cellar/ffmpeg/3.0.2/lib/libavutil.55.17.103.dylib)
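
    For reference, here is a minimal sketch of how the chroma planes are usually indexed for an 8-bit planar, subsampled format such as AV_PIX_FMT_YUV420P, where each U/V sample covers a 2x2 block of luma samples. The helper name is made up, and it assumes the frame really is planar 8-bit YUV; the shift amounts come from the pixel-format descriptor rather than being hard-coded:

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixdesc.h>

    /* Hypothetical helper: fetch the Y, U and V samples covering pixel (x, y).
     * The chroma planes are smaller than the luma plane, so their indices are
     * shifted down by the format's log2 chroma factors (1 and 1 for 4:2:0). */
    static void read_yuv(const AVFrame *frame, int x, int y,
                         uint8_t *Y, uint8_t *U, uint8_t *V)
    {
        const AVPixFmtDescriptor *desc =
            av_pix_fmt_desc_get((enum AVPixelFormat)frame->format);
        int cx = x >> desc->log2_chroma_w;   /* e.g. x / 2 for 4:2:0 */
        int cy = y >> desc->log2_chroma_h;   /* e.g. y / 2 for 4:2:0 */

        *Y = frame->data[0][frame->linesize[0] * y  + x];
        *U = frame->data[1][frame->linesize[1] * cy + cx];
        *V = frame->data[2][frame->linesize[2] * cy + cx];
    }

    Indexing the U and V planes with the full-resolution y and x, as in the code above, walks past the end of the smaller chroma buffers for subsampled formats, which is consistent with the invalid reads Valgrind reports at the U and V lines.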

  • Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content

    29 May 2015, by user3741412

    I have a situation where I have a video capture of HD content via HDMI, with audio from a sound board that goes through an impedance drop into a microphone input of a camcorder. That same signal is split at line level to a ’line in’ jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the soundboard, which is probably the best plan, but it carries with it the same issue.

    The point is that the line-in or USB capture will be much higher quality than the one on HDMI, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (close proximity) can cause noise on the recording.

    So I can do this today:

    • Take the good sound and the camera-captured sound, load each into Audacity, and fairly quickly use the time-shift tool to fit the good audio exactly against the questionable audio from the HDMI capture, then cut the good audio to the exact length of the video. Then I can use ffmpeg or other video-editing software to replace the questionable audio with the better audio.

    But while somewhat quick and easy, it always carries with it a bit of human error and time. I’d like to automate this if possible as this process is repeated at least weekly throughout the year.

    Does anyone have a view on whether any of these ideas have merit, or could anyone suggest another approach?

    1. I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity (or with the USB capture tool from the sound board) and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn’t see anything in the results about such a time code, though perhaps other audio formats or other probing tools include this info. Can anyone advise whether this is common with any particular capture tools or file formats?

      • If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync the two sources reliably in a single ffmpeg call. This is all theoretical for me right now; I’ve never tried either of these filters, and I’m just trying to avoid blind alleys by asking for advice up front.
    2. If such timestamps are not embedded, possibly I can use the file-system timestamps for the same idea expressed in 1a, but I suspect the two capture tools may have different inherent delays when opening their files. Possibly these delays will turn out to be nearly constant, so the approach could work with a built-in constant anticipation delay, but that sounds messy and less reliable than idea 1. Still, I’d take it if it turns out to be reasonably reliable.

    3. Are there any ffmpeg or general digital-audio experts out there who know of filters that can be applied to the actual data: for example, normalizing the peak amplitudes (or normalizing both streams to some RMS value), then stepping through a short 10-second snippet of audio, shifting one stream 0.01 s against the other repeatedly, subtracting the two, and looking for a minimum? It sounds like it could take a while, but if it could do this in less than a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, and perhaps what I suggest is simply not plausible, but since both streams start from the same source I think there should be a chance. I am just way out of my depth on how to go down this road, so if someone knows such magic, or can throw me some names of filters and example calls, I can explore whether I can make it work (a rough sketch of this idea follows the list).

    4. Any hardware-level suggestions for taking a line-level output down to a mic-level input without the problems I am seeing with a simple in-line impedance-drop module, so that I can simply rely on the audio from the HDMI?
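
    As a rough sketch of idea 3 (not a tested recipe), assuming both captures have been exported as mono WAV files at the same sample rate, the offset can be estimated by cross-correlating a short snippet of the two tracks; the file names, the 10-second window and the normalisation are illustrative:

    # Hypothetical offset estimator: cross-correlate a snippet of the two
    # captures and report the lag, in seconds, that best lines them up.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    rate_a, a = wavfile.read("hdmi_capture.wav")      # questionable audio
    rate_b, b = wavfile.read("line_in_capture.wav")   # good audio
    assert rate_a == rate_b, "resample first if the rates differ"

    # Work on a ~10 s snippet and normalise it so the different gain of the
    # two capture chains does not dominate the correlation.
    win = 10 * rate_a
    a = a[:win].astype(np.float64)
    b = b[:win].astype(np.float64)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)

    # FFT-based correlation keeps this fast even at 48 kHz.
    corr = correlate(a, b, mode="full", method="fft")
    lag = corr.argmax() - (len(b) - 1)   # lag in samples; verify the sign on a known clip
    print("estimated offset: %.3f s" % (lag / rate_a))

    The offset found this way could then be applied in a single ffmpeg call, for example by delaying the good track with something like adelay=<ms>|<ms>, or trimming it with atrim=start=<s>, depending on the sign, before mapping it over the HDMI video in place of the camera audio.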

    Thanks in advance for any pointers or suggestions!