Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on the site.

Other articles (34)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension to add one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin. Since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

  • Other interesting software

    12 April 2011

    We don’t claim to be the only ones doing what we do... and we certainly don’t claim to be the best at it either... We simply try to do it well, and to keep doing it better...
    The following list covers software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to match; the distinction hardly matters...
    We don’t know them and we haven’t tried them, but you may want to take a look at them.
    Videopress
    Website: (...)

On other sites (4935)

  • Updating isoparser-1.0-RC-37 from isoparser-1.0-RC-15 issues in getDecodingTimeEntries()

    5 January 2017, by Rakki s

    I found sample code to cut a video based on duration from Link,
    but I cannot find the method getDecodingTimeEntries() in isoparser-1.0-RC-37. It is available in the older version of isoparser (1.0-RC-15).

    So my questions are:

    • Why was that method removed in the updated jar?

    • Is there an alternate method available?

    • Does anyone have example source to trim (cut by duration) using the
      isoparser-1.0-RC-37 jar? (See the sketch below for an ffmpeg-based alternative.)
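
    For comparison, trimming by duration can also be done without isoparser, using ffmpeg stream copy. The following is only a minimal sketch, assuming an ffmpeg binary is on PATH and using hypothetical file names:

# Sketch only: trim a clip by duration with ffmpeg stream copy instead of isoparser.
# Assumes ffmpeg is on PATH; 'input.mp4' and 'clip.mp4' are hypothetical names.
import subprocess

def trim(src, dst, start_sec, duration_sec):
    """Copy duration_sec seconds of src starting at start_sec into dst
    without re-encoding (cuts snap to the nearest keyframes)."""
    subprocess.run(
        ['ffmpeg', '-ss', str(start_sec), '-i', src,
         '-t', str(duration_sec), '-c', 'copy', dst],
        check=True,
    )

trim('input.mp4', 'clip.mp4', 10.0, 30.0)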

  • Streaming issues with HLS setup using Nginx and FFmpeg, and TS video files

    12 September 2024, by Jacob Anderson

    I've been working on setting up an HLS stream on my Raspberry Pi to broadcast video from a security camera that's physically connected to my Raspberry Pi through my web server, making it accessible via my website. The .ts video files and the .m3u8 playlist are correctly being served from /var/www/html/hls. However, when I attempt to load the stream on Safari (as well as other browsers), the video continuously appears to be loading without ever displaying any content.

    


    Here are some details about my setup:

    • Camera: I am using an Arducam 1080p Day & Night Vision USB Camera, which is available on /dev/video0.

    • Server Configuration: I haven’t noticed any errors in the Safari console or in the server logs. When I access the .ts files directly from the browser, they only show a black screen, but they do play.

    Given the situation, I suspect there might be an issue with my FFmpeg command or possibly with my Nginx configuration.

    Here is what I have:

    ffmpeg stream service:
/etc/systemd/system/ffmpeg-stream.service

    [Unit]
Description=FFmpeg RTMP Stream
After=network.target

[Service]
ExecStart=/usr/local/bin/start_ffmpeg.sh
Restart=always
User=jacobanderson
Group=jacobanderson
StandardError=syslog
SyslogIdentifier=ffmpeg-stream
Environment=FFMPEG_LOGLEVEL=error

[Install]
WantedBy=multi-user.target

    ffmpeg command:
/usr/local/bin/start_ffmpeg.sh

    #!/bin/bash

/usr/bin/ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec libx264 -preset veryfast -acodec aac -strict -2 -f flv rtmp://localhost/live/
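# Possible tweak (untested): for HLS the keyframe interval should line up with
# hls_fragment (3 s at 30 fps -> 90 frames), e.g. add -g 90 -keyint_min 90 -sc_threshold 0.
# Also, no audio input is given above, so -an (or a real audio source) would be more
# accurate than -acodec aac.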

    nginx.conf:
/etc/nginx/nginx.conf

    user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        #allow publish 127.0.0.1;
        #deny publish all;

        application live {
            #allow 192.168.0.100;
            live on;
            hls on;
            hls_path /var/www/html/hls;
            hls_fragment 3;
            hls_nested on;
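            # Note (as I understand nginx-rtmp's hls_nested): each stream gets its own
            # subdirectory under hls_path, so the playlist ends up at
            # /var/www/html/hls/<stream_name>/index.m3u8, where <stream_name> is whatever
            # follows /live/ in the RTMP publish URL.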
            #hls_fragment_naming stream;
            hls_playlist_length 120;
            hls_cleanup on;
            hls_continuous on;
            #deny play all;
        }
    }
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    #sendfile off;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;

    # Additional for video
    directio 512;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    #ssl_protocols TLSv1.2 TLSv1.3; # Use only secure protocols
    ssl_prefer_server_ciphers on;
    #ssl_ciphers "HIGH:!aNULL:!MD5";

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    #gzip on;
    gzip off;  # Ensure gzip is off for HLS

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

    sites-available:
/etc/nginx/sites-available/myStream.mysite.com

    server {
    listen 443 ssl;
    server_name myStream.mysite.com;

    ssl_certificate /etc/letsencrypt/live/myStream.mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myStream.mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        root /var/www/html/hls;
        index index.html;
    }

    location /hls {
        # Password protection
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Disable cache
        add_header Cache-Control no-cache;

        # CORS setup
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length';

        # Allow CORS preflight requests
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }

        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
            text/html html;
            text/css css;
        }

        root /var/www/html;
    }
}

server {
    listen 80;
    server_name myStream.mysite.com;

    if ($host = myStream.mysite.com) {
        return 301 https://$host$request_uri;
    }

    return 404; # managed by Certbot
}

    index.html:
/var/www/html/hls/index.html

    <!-- ... <video id="my-video_html5_api"> element ... -->
    <script src='http://stackoverflow.com/feeds/tag/js/hls.min.js'></script>
    <script src="https://vjs.zencdn.net/7.10.2/video.js"></script>
    <script>
        if (Hls.isSupported()) {
            var video = document.getElementById('my-video_html5_api'); // Updated ID to target the correct video element
            var hls = new Hls();
            hls.loadSource('https://myStream.mysite.com/hls/index.m3u8');
            hls.attachMedia(video);
            hls.on(Hls.Events.MANIFEST_PARSED, function() {
                video.play();
            });
        } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
            video.src = 'https://myStream.mysite.com/hls/index.m3u8';
            video.addEventListener('loadedmetadata', function() {
                video.play();
            });
        }
    </script>

    Has anyone experienced similar issues, or can anyone spot an error in my configuration? Any help would be greatly appreciated, as I have already invested over 30 hours trying to resolve this.
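
    For reference, a minimal sketch of the direct-request check mentioned above (Python with the requests package; USER/PASS stand in for the basic-auth credentials, and the playlist URL is the one used in index.html):

# Sketch only: confirm the playlist and the first segment it lists are reachable.
# Assumes `pip install requests`; USER/PASS are placeholders for the .htpasswd credentials.
import requests

BASE = 'https://myStream.mysite.com/hls'
AUTH = ('USER', 'PASS')  # hypothetical placeholders

playlist = requests.get(BASE + '/index.m3u8', auth=AUTH)
print(playlist.status_code, playlist.headers.get('Content-Type'))

# The first non-comment line of an HLS media playlist is a segment URI.
segment = next(line for line in playlist.text.splitlines()
               if line and not line.startswith('#'))
seg = requests.get(BASE + '/' + segment, auth=AUTH)
print(segment, seg.status_code, len(seg.content), 'bytes')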


  • Issues with Publishing and Subscribing Rates for H.264 Video Streaming over RabbitMQ

    7 October 2024, by Luis

    I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.


    However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ: I cannot seem to achieve more than 10 messages per second, which is not sufficient for smooth video streaming. I need help diagnosing and resolving these performance bottlenecks.


    Here is my code:


    • Video Capture and Publishing Script:

import logging
import os
import struct
import subprocess
import time
from contextlib import contextmanager
from datetime import datetime

import cv2
import numpy as np
import pika

# RabbitMQ setup
RABBITMQ_HOST = 'localhost'
EXCHANGE = 'DRONE'
CAM_LOCATION = 'Out_Front'
KEY = f'DRONE_{CAM_LOCATION}'
QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'

# Path to the H.264 video file
VIDEO_FILE_PATH = 'videos/FPV.h264'

# Configure logging
logging.basicConfig(level=logging.INFO)

@contextmanager
def rabbitmq_channel(host):
    """Context manager to handle RabbitMQ channel setup and teardown."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    try:
        yield channel
    finally:
        connection.close()

def initialize_rabbitmq(channel):
    """Initialize RabbitMQ exchange and queue, and bind them together."""
    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
    channel.queue_declare(queue=QUEUE_NAME)
    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=KEY)

def send_frame(channel, frame):
    """Encode the video frame using FFmpeg and send it to RabbitMQ."""
    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
    cmd = [
        ffmpeg_path,
        '-f', 'rawvideo',
        '-pix_fmt', 'rgb24',
        '-s', '{}x{}'.format(frame.shape[1], frame.shape[0]),
        '-i', 'pipe:0',
        '-f', 'h264',
        '-vcodec', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-preset', 'ultrafast',
        'pipe:1'
    ]

    start_time = time.time()
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = process.communicate(input=frame.tobytes())
    encoding_time = time.time() - start_time

    if process.returncode != 0:
        logging.error("ffmpeg error: %s", err.decode())
        raise RuntimeError("ffmpeg error")

    frame_size = len(out)
    logging.info("Sending frame with shape: %s, size: %d bytes", frame.shape, frame_size)
    timestamp = time.time()
    formatted_timestamp = datetime.fromtimestamp(timestamp).strftime('%H:%M:%S.%f')
    logging.info(f"Timestamp: {timestamp}")
    logging.info(f"Formatted Timestamp: {formatted_timestamp[:-3]}")
    timestamp_bytes = struct.pack('d', timestamp)
    message_body = timestamp_bytes + out
    channel.basic_publish(exchange=EXCHANGE, routing_key=KEY, body=message_body)
    logging.info(f"Encoding time: {encoding_time:.4f} seconds")

def capture_video(channel):
    """Read video from the file, encode frames, and send them to RabbitMQ."""
    if not os.path.exists(VIDEO_FILE_PATH):
        logging.error("Error: Video file does not exist.")
        return
    cap = cv2.VideoCapture(VIDEO_FILE_PATH)
    if not cap.isOpened():
        logging.error("Error: Could not open video file.")
        return
    try:
        while True:
            start_time = time.time()
            ret, frame = cap.read()
            read_time = time.time() - start_time
            if not ret:
                break
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame_rgb = np.ascontiguousarray(frame_rgb)  # Ensure the frame is contiguous
            send_frame(channel, frame_rgb)
            cv2.imshow('Video', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
            logging.info(f"Read time: {read_time:.4f} seconds")
    finally:
        cap.release()
        cv2.destroyAllWindows()
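
    A rough sketch of one alternative (not what the script above does): since videos/FPV.h264 is already H.264, the encoded packets can be published as-is with PyAV (assuming `pip install av`), instead of decoding with OpenCV and re-encoding each frame in a fresh ffmpeg subprocess; pacing to the source frame rate is omitted here:

# Sketch only (assumes `pip install av`): demux the already-encoded H.264 packets
# and publish them directly, instead of decode (OpenCV) + re-encode (one ffmpeg
# subprocess per frame). Pacing to the source frame rate is left out.
import struct
import time

import av
import pika

RABBITMQ_HOST = 'localhost'
EXCHANGE = 'DRONE'
CAM_LOCATION = 'Out_Front'
KEY = f'DRONE_{CAM_LOCATION}'
VIDEO_FILE_PATH = 'videos/FPV.h264'

connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_HOST))
channel = connection.channel()
channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')

with av.open(VIDEO_FILE_PATH) as container:
    for packet in container.demux(container.streams.video[0]):
        if packet.size == 0:  # skip the empty flush packet at end of stream
            continue
        body = struct.pack('d', time.time()) + bytes(packet)
        channel.basic_publish(exchange=EXCHANGE, routing_key=KEY, body=body)

connection.close()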


    • The backend (Flask):

import struct
import subprocess
import threading
import time
from datetime import datetime

import cv2
import numpy as np
import pika
from flask import Flask, render_template
from flask_cors import CORS
from flask_socketio import SocketIO

app = Flask(__name__)
CORS(app)
socketio = SocketIO(app, cors_allowed_origins="*")

RABBITMQ_HOST = 'localhost'
EXCHANGE = 'DRONE'
CAM_LOCATION = 'Out_Front'
QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'

def initialize_rabbitmq():
    connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_HOST))
    channel = connection.channel()
    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
    channel.queue_declare(queue=QUEUE_NAME)
    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=f'DRONE_{CAM_LOCATION}')
    return connection, channel

def decode_frame(frame_data):
    # FFmpeg command to decode H.264 frame data
    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
    cmd = [
        ffmpeg_path,
        '-f', 'h264',
        '-i', 'pipe:0',
        '-pix_fmt', 'bgr24',
        '-vcodec', 'rawvideo',
        '-an', '-sn',
        '-f', 'rawvideo',
        'pipe:1'
    ]
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    start_time = time.time()  # Start timing the decoding process
    out, err = process.communicate(input=frame_data)
    decoding_time = time.time() - start_time  # Calculate decoding time

    if process.returncode != 0:
        print("ffmpeg error: ", err.decode())
        return None
    frame_size = (960, 1280, 3)  # frame dimensions expected by the frontend
    frame = np.frombuffer(out, np.uint8).reshape(frame_size)
    print(f"Decoding time: {decoding_time:.4f} seconds")
    return frame

def format_timestamp(ts):
    dt = datetime.fromtimestamp(ts)
    return dt.strftime('%H:%M:%S.%f')[:-3]

def rabbitmq_consumer():
    connection, channel = initialize_rabbitmq()
    for method_frame, properties, body in channel.consume(QUEUE_NAME):
        message_receive_time = time.time()  # Time when the message is received

        # Extract the timestamp from the message body
        timestamp_bytes = body[:8]
        frame_data = body[8:]
        publish_timestamp = struct.unpack('d', timestamp_bytes)[0]

        print(f"Message Receive Time: {message_receive_time:.4f} ({format_timestamp(message_receive_time)})")
        print(f"Publish Time: {publish_timestamp:.4f} ({format_timestamp(publish_timestamp)})")

        frame = decode_frame(frame_data)
        decode_time = time.time() - message_receive_time  # Calculate decode time

        if frame is not None:
            _, buffer = cv2.imencode('.jpg', frame)
            frame_data = buffer.tobytes()
            socketio.emit('video_frame', {'frame': frame_data, 'timestamp': publish_timestamp}, namespace='/')
            emit_time = time.time()  # Time after emitting the frame

            # Log the time taken to emit the frame and its size
            rtt = emit_time - publish_timestamp  # Calculate RTT from publish to emit
            print(f"Current Time: {emit_time:.4f} ({format_timestamp(emit_time)})")
            print(f"RTT: {rtt:.4f} seconds")
            print(f"Emit time: {emit_time - message_receive_time:.4f} seconds, Frame size: {len(frame_data)} bytes")
        channel.basic_ack(method_frame.delivery_tag)

@app.route('/')
def index():
    return render_template('index.html')

@socketio.on('connect')
def handle_connect():
    print('Client connected')

@socketio.on('disconnect')
def handle_disconnect():
    print('Client disconnected')

if __name__ == '__main__':
    consumer_thread = threading.Thread(target=rabbitmq_consumer)
    consumer_thread.daemon = True
    consumer_thread.start()
    socketio.run(app, host='0.0.0.0', port=5000)
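
    On the consuming side, a matching rough sketch (again assuming PyAV is installed) that keeps a single decoder alive instead of launching ffmpeg.exe for every message:

# Sketch only (assumes `pip install av`): reuse one persistent H.264 decoder rather
# than spawning an ffmpeg.exe subprocess per message; an idea for replacing decode_frame().
import av

codec = av.CodecContext.create('h264', 'r')  # created once, reused for every message

def decode_frame_av(frame_data):
    """Feed one encoded chunk to the persistent decoder; return a BGR ndarray or None."""
    frames = []
    for packet in codec.parse(frame_data):
        frames.extend(codec.decode(packet))
    if not frames:
        return None  # decoder may still be waiting for more data (e.g. a keyframe)
    return frames[-1].to_ndarray(format='bgr24')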


    How can I optimize the publishing and subscribing rates to handle a higher number of messages per second?


    Any help or suggestions would be greatly appreciated!


    I attempted to use threading and multiprocessing to handle multiple frames concurrently, and I tried to optimize the frame decoding function to make it faster, but with no success.
