
Media (1)

Keyword: - Tags -/Rennes

Other articles (31)

  • Initialization of MediaSPIP (preconfiguration)

    20 February 2010

    When MediaSPIP is installed, it comes preconfigured for the most common uses.
    This preconfiguration is performed by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin properly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or farm, so that it is installed by default, before the site can be used.
    As a first step, it enables or disables SPIP options that do not (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualization instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (6184)

  • FPS goes down while performing object detection using TensorFlow on multiple threads

    14 May 2020, by Apoorv Mishra

    I am trying to run object detection on multiple cameras, using an SSD MobileNet v2 frozen graph with TensorFlow and OpenCV. I have implemented threading to spawn a separate thread for each camera, but with multiple video streams I am getting low FPS.

    Note: The model works fine with a single stream. Also, when the number of detected objects in the frames is low, I get decent FPS.

    My threading logic is working fine, so I suspect the issue is with how I am using the graph and session. Please let me know what I am doing wrong.

    with tf.device('/GPU:0'):
        with detection_graph.as_default():
            with tf.Session(config=config, graph=detection_graph) as sess:
                while True:
                    # Read one raw frame from the camera pipe
                    raw_image = pipe.stdout.read(IMG_H*IMG_W*3)
                    # np.fromstring is deprecated; np.frombuffer(...).copy() is
                    # the modern equivalent (a writable copy is needed below)
                    image = np.fromstring(raw_image, dtype='uint8')
                    image_np = image.reshape((IMG_H, IMG_W, 3))
                    img_copy = image_np[170:480, 100:480]
                    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                    image_np_expanded = np.expand_dims(img_copy, axis=0)
                    # Extract image tensor
                    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                    # Extract detection boxes
                    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
                    # Extract detection scores
                    scores = detection_graph.get_tensor_by_name('detection_scores:0')
                    # Extract detection classes
                    classes = detection_graph.get_tensor_by_name('detection_classes:0')
                    # Extract number of detections
                    num_detections = detection_graph.get_tensor_by_name(
                            'num_detections:0')
                    # Actual detection.
                    (boxes, scores, classes, num_detections) = sess.run(
                            [boxes, scores, classes, num_detections],
                            feed_dict={image_tensor: image_np_expanded})
                    # Visualization of the results of a detection.
                    boxes = np.squeeze(boxes)
                    scores = np.squeeze(scores)
                    classes = np.squeeze(classes).astype(np.int32)

                    box_to_display_str_map = collections.defaultdict(list)
                    box_to_color_map = collections.defaultdict(str)

                    for i in range(min(max_boxes_to_draw, boxes.shape[0])):
                        if scores is None or scores[i] > threshold:
                            box = tuple(boxes[i].tolist())
                            if classes[i] in six.viewkeys(category_index):
                                class_name = category_index[classes[i]]['name']
                            else:
                                class_name = 'N/A'  # avoid an unbound class_name
                            display_str = '{}: {}%'.format(class_name, int(100 * scores[i]))
                            box_to_display_str_map[box].append(display_str)
                            box_to_color_map[box] = STANDARD_COLORS[
                                    classes[i] % len(STANDARD_COLORS)]
                    for box, color in box_to_color_map.items():
                        ymin, xmin, ymax, xmax = box
                        flag = jam_check(xmin, ymin, xmax, ymax, frame_counter)
                        draw_bounding_box_on_image_array(
                                img_copy,
                                ymin,
                                xmin,
                                ymax,
                                xmax,
                                color=color,
                                thickness=line_thickness,
                                display_str_list=box_to_display_str_map[box],
                                use_normalized_coordinates=use_normalized_coordinates)

                    # Write the annotated crop back, then convert BGR -> RGB
                    image_np[170:480, 100:480] = img_copy
                    image_np = image_np[..., ::-1]

                    pipe.stdout.flush()

                    yield cv2.imencode('.jpg', image_np, [int(cv2.IMWRITE_JPEG_QUALITY), 50])[1].tobytes()
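
    One likely contributor, offered as a sketch rather than a confirmed diagnosis: the five get_tensor_by_name lookups above run on every frame, and giving each camera thread its own Session multiplies GPU memory pressure. tf.Session.run is thread-safe, so the tensor handles can be resolved once and a single session shared by all camera threads. The sketch below reuses the names from the snippet above (detection_graph, config, pipe, IMG_H, IMG_W); the restructuring itself is an assumption, not the poster's code:

    import numpy as np
    import tensorflow as tf

    # Sketch: resolve tensor handles once, outside the per-frame loop, so each
    # iteration does only inference work. detection_graph, config, pipe, IMG_H
    # and IMG_W are assumed to exist as in the snippet above.
    with detection_graph.as_default():
        with tf.Session(config=config, graph=detection_graph) as sess:
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            outputs = [detection_graph.get_tensor_by_name(name) for name in
                       ('detection_boxes:0', 'detection_scores:0',
                        'detection_classes:0', 'num_detections:0')]
            while True:
                raw_image = pipe.stdout.read(IMG_H * IMG_W * 3)
                frame = np.frombuffer(raw_image, dtype='uint8').reshape((IMG_H, IMG_W, 3))
                batch = np.expand_dims(frame[170:480, 100:480], axis=0)
                boxes, scores, classes, num = sess.run(
                        outputs, feed_dict={image_tensor: batch})
                # ... drawing, write-back and JPEG encoding as in the original loop ...

    With this shape, each camera thread only calls sess.run on the shared session instead of redoing graph lookups on every frame.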

    I've set the config as:

    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
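
    For reference, here is a variant using other standard tf.ConfigProto fields; these are ordinary TF 1.x options added here for illustration, not settings from the original post:

    import tensorflow as tf

    # Alternative session-configuration sketch using standard TF 1.x fields.
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    config.gpu_options.allow_growth = True      # allocate GPU memory on demand
    config.intra_op_parallelism_threads = 2     # CPU threads within a single op
    config.inter_op_parallelism_threads = 2     # CPU threads across independent ops

    With a fixed memory fraction per process and several camera threads, GPU memory can run out quickly; allow_growth lets TensorFlow claim memory only as needed.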

  • vf_dnn_processing.c: add dnn backend openvino

    25 May 2020, by Guo, Yejun
    vf_dnn_processing.c: add dnn backend openvino
    

    We can try it with the srcnn model from the sr filter.
    1) get the srcnn.pb model file, see the sr filter
    2) convert srcnn.pb into an openvino model with the command:
    python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops

    See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
    srcnn.xml and srcnn.bin will appear in the current path; copy them to the
    directory where ffmpeg is.

    I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models

    3) run with the openvino backend:
    ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
    (The input.jpg resolution is 720*480)

    Also, just for your information, here are the logs from my Skylake machine
    (4 CPUs) with the openvino backend and the tensorflow backend.

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4

    frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517637%
    [aac @ 0x2f5db80] Qavg: 454.353
    real 2m46.781s
    user 9m48.590s
    sys 0m55.290s

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4

    frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517640%
    [aac @ 0x31a9040] Qavg: 454.353
    real 1m25.882s
    user 5m27.004s
    sys 0m0.640s
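
    (That is roughly a 1.9x wall-clock speedup for the openvino backend on this clip: 166.8 s vs. 85.9 s.)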

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c
  • How to get a list of video devices with a particular VendorID and/or ProductID using FFmpeg (on Mac OS)

    26 May 2020, by mcgregor94086

    The master task:


    I am writing software for an array of multiple USB cameras. The array may be connected to a CPU that has other USB cameras built in or attached. I need to capture one 1920x1080 JPG image from EACH of MY array's cameras into a directory of images, but I need to EXCLUDE any OTHER cameras when taking pictures.


    Right now I am implementing this for macOS, but in the future I will be implementing it for Windows, Linux, iOS and Android.


    What is known:


    I know the VendorID, ProductID and UniqueID (reported by the macOS System Information USB hardware report) for each of the cameras in the camera array.


    UNIQUESKY_CAR_CAMERA #5:

      Model ID:  UVC Camera VendorID_7119 ProductID_2825
      Unique ID: 0x143400001bcf0b09

    UNIQUESKY_CAR_CAMERA #6:

      Model ID:  UVC Camera VendorID_7119 ProductID_2825
      Unique ID: 0x143300001bcf0b09


    The problem code:


    I made the following call to FFmpeg


    ffmpeg -nostdin -hide_banner -an -sn -dn -f avfoundation -list_devices true -i 1


    I received the following result on stderr:


    [AVFoundation indev @ 0x7fc870e21800] AVFoundation video devices:
    [AVFoundation indev @ 0x7fc870e21800] [0] FaceTime HD Camera (Built-in)
    [AVFoundation indev @ 0x7fc870e21800] [1] UNIQUESKY_CAR_CAMERA #5
    [AVFoundation indev @ 0x7fc870e21800] [2] UNIQUESKY_CAR_CAMERA #6
    [AVFoundation indev @ 0x7fc870e21800] [3] Capture screen 0
    [AVFoundation indev @ 0x7fc870e21800] [4] Capture screen 1
    [AVFoundation indev @ 0x7fc870e21800] AVFoundation audio devices:
    [AVFoundation indev @ 0x7fc870e21800] [0] MacBook Pro Microphone
    1: Input/output error


    Of these devices listed in the output, I happen to know (in this particular TEST CASE) that the cameras in my test array are only these two:


    [1] UNIQUESKY_CAR_CAMERA #5
    [2] UNIQUESKY_CAR_CAMERA #6


    With this knowledge, I can select the indices (1 2) that I need, and then use the following code to capture an image from each:


    for i in 1 2;
    do
        ffmpeg -y -hide_banner -f avfoundation -r 15 -video_size 1920x1080 -i $i -f image2 -qscale:v 1 -qmin 1 -qmax 1 -frames:v 1 img$i.jpg;
    done


    Unfortunately, in production use (i.e. outside of this test case), I CANNOT rely on the cameras all identifying themselves with the same TEXT "name" prefix (e.g. "UNIQUESKY_CAR_CAMERA"), so I can't just use grep to select the ones I want.


    But I CAN be sure that I will know their VendorID, ProductID and UniqueIDs.


    So, IF I could get


    ffmpeg -nostdin -hide_banner -an -sn -dn -f avfoundation -list_devices true -i 1


    to also list the VendorID and ProductID, then I could grep for those values.


    Is this solution possible? I can't find syntax in the documentation or examples for how to set such limits.


    ALTERNATIVELY: if I could tell FFmpeg's -list_devices to limit the output to just those devices with my specified VendorID and ProductIDs, then I could easily get the list of device indices I need for the subsequent image extraction.


    Is this solution possible? I can't find syntax in the documentation or examples for how to set such limits.
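
    Since the avfoundation listing above shows only names and indices, one possible workaround (an assumption built on macOS tooling, not an FFmpeg feature) is to resolve the device names from the USB tree first and then grep the -list_devices output for them. The hypothetical helper below shells out to system_profiler; the JSON key names ("vendor_id", "product_id", "_name", "_items") are assumptions to verify against its actual output on your machine:

    import json
    import subprocess

    # Hypothetical sketch: collect the names of USB devices whose vendor and
    # product IDs we know, using macOS's system_profiler. Those names can then
    # be matched against ffmpeg's -list_devices output to recover the indices.
    WANTED = {('0x1bcf', '0x0b09')}   # VendorID_7119 / ProductID_2825 in hex

    def usb_camera_names():
        report = subprocess.run(
            ['system_profiler', '-json', 'SPUSBDataType'],
            capture_output=True, text=True, check=True).stdout
        names = []

        def walk(items):
            for item in items:
                vid = item.get('vendor_id', '').split(' ')[0]
                pid = item.get('product_id', '')
                if (vid, pid) in WANTED:
                    names.append(item['_name'])
                walk(item.get('_items', []))   # the USB topology is nested

        walk(json.loads(report).get('SPUSBDataType', []))
        return names

    The returned names (for example "UNIQUESKY_CAR_CAMERA #5") can then be grepped out of the avfoundation device list to obtain the indices needed for the capture loop above.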


    Special Note:
    The cameras I am using are all 1920x1080, and that resolution is necessary for my application. As many Mac users have noted on the web, other image-capture programs (such as imagesnap) sometimes capture images at a lower resolution (1024x720) than the cameras are capable of, and will not capture images at higher resolutions such as 1920x1080. For this reason, and because it is available on ALL the OS platforms of interest to us, I chose to use FFmpeg.
