
Media (91)

- Chuck D with Fine Arts Militia - No Meaning No
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio

- Paul Westerberg - Looking Up in Heaven
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio

- Le Tigre - Fake French
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio

- Thievery Corporation - DC 3000
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio

- Dan the Automator - Relaxation Spa Treatment
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio

- Gilberto Gil - Oslodum
  15 September 2011
  Updated: September 2011
  Language: English
  Type: Audio
Other articles (68)

- MediaSPIP v0.2
  21 June 2013
  MediaSPIP 0.2 is the first stable version of MediaSPIP.
  Its official release date is 21 June 2013, and it is announced here.
  The zip file provided here contains only the MediaSPIP sources in standalone form.
  As with the previous version, all of the software dependencies must be installed manually on the server.
  If you want to use this archive for a farm-mode installation, further modifications are also required (...)

- Making the files available
  14 April 2011
  By default, on initialisation, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
  However, it is possible and easy to let visitors access these documents in various forms.
  All of this happens on the template configuration page. Go to the channel's administration area and choose in the navigation (...)

- MediaSPIP version 0.1 Beta
  16 April 2011
  MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
  The zip file provided here contains only the MediaSPIP sources in standalone form.
  For a working installation, all of the software dependencies must be installed manually on the server.
  If you want to use this archive for a farm-mode installation, further modifications are also required (...)

On other sites (5160)
-
Unable to retrieve video stream from RTSP URL inside Docker container
6 February, by birdalugur
I have a FastAPI application running inside a Docker container that is trying to stream video from an RTSP camera URL using OpenCV. The setup works fine locally, but when running inside Docker, the /video endpoint does not return a stream and times out. Below are the details of the issue.

Docker Setup:

Dockerfile:


FROM python:3.10.12

RUN apt-get update && apt-get install -y \
 libgl1-mesa-glx \
 libglib2.0-0

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]
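
Since RTSP capture through cv2.VideoCapture depends on OpenCV's FFmpeg backend, one quick sanity check from inside this image is to print the FFMPEG lines of the OpenCV build information. This is a minimal sketch and assumes opencv-python is what requirements.txt installs:

import cv2

# Print the FFMPEG-related lines of the OpenCV build information.
# RTSP capture through cv2.VideoCapture needs the FFmpeg backend to be enabled.
info = cv2.getBuildInformation()
ffmpeg_lines = [line.strip() for line in info.splitlines() if "FFMPEG" in line.upper()]
print(cv2.__version__)
print("\n".join(ffmpeg_lines) if ffmpeg_lines else "FFMPEG not mentioned in build info")
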




Docker Compose:




services:
  api:
    build: ./api
    ports:
      - "8000:8000"
    depends_on:
      - redis
      - mongo
    networks:
      - app_network
    volumes:
      - ./api:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - MONGO_URI=mongodb://mongo:27017/app_db

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
    networks:
      - app_network
    volumes:
      - ./frontend:/app
      - /app/node_modules

  redis:
    image: "redis:alpine"
    restart: always
    networks:
      - app_network
    volumes:
      - redis_data:/data

  mongo:
    image: "mongo:latest"
    restart: always
    networks:
      - app_network
    volumes:
      - mongo_data:/data/db

networks:
  app_network:
    driver: bridge

volumes:
  redis_data:
  mongo_data:




Issue:


When I try to access the /video endpoint, the following warnings appear:

[ WARN:0@46.518] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30037.268665 ms



However, locally, the RTSP stream works fine using OpenCV with the same code.


Additional Information:


- Network: The Docker container can successfully ping the camera IP (10.100.10.94); a TCP-level check of the RTSP port is sketched below.
- Local Video: I can read frames from a local video file without issues.
- RTSP Stream: I am able to access the RTSP stream directly using OpenCV locally, but not inside the Docker container.
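
Since a successful ICMP ping does not guarantee that the RTSP port itself is reachable, here is a minimal TCP-level check to run from inside the api container (standard library only; the IP and port come from the RTSP URL used in app.py):

import socket

CAMERA_HOST = "10.100.10.94"   # camera IP from the RTSP URL
RTSP_PORT = 554                # RTSP port, also used in the URL

try:
    # Attempt a plain TCP connection; this only proves the port is reachable,
    # not that the RTSP stream itself can be negotiated.
    with socket.create_connection((CAMERA_HOST, RTSP_PORT), timeout=5):
        print(f"TCP connection to {CAMERA_HOST}:{RTSP_PORT} succeeded")
except OSError as exc:
    print(f"TCP connection to {CAMERA_HOST}:{RTSP_PORT} failed: {exc}")
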








Code:

Here's the relevant part of the code in my api/app.py:

import cv2
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

# The FastAPI instance is not shown in the original excerpt but is needed for @app.get below.
app = FastAPI()

RTSP_URL = "rtsp://deneme:155115@10.100.10.94:554/axis-media/media.amp?adjustablelivestream=1&fps=10"


def generate_frames():
    cap = cv2.VideoCapture(RTSP_URL)
    if not cap.isOpened():
        print("Failed to connect to RTSP stream.")
        return

    while True:
        success, frame = cap.read()
        if not success:
            print("Failed to capture frame.")
            break

        # Encode each frame as JPEG and emit it as one part of the multipart stream.
        _, buffer = cv2.imencode(".jpg", frame)
        frame_bytes = buffer.tobytes()

        yield (
            b"--frame\r\n" b"Content-Type: image/jpeg\r\n\r\n" + frame_bytes + b"\r\n"
        )

    cap.release()


@app.get("/video")
async def video_feed():
    """Return MJPEG stream to the browser."""
    return StreamingResponse(
        generate_frames(), media_type="multipart/x-mixed-replace; boundary=frame"
    )
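
For reference, OpenCV's FFmpeg backend can also be handed RTSP demuxer options through the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable ("key;value" pairs joined with "|"). Below is a minimal sketch that forces RTSP over TCP before opening the capture, a common workaround when UDP RTP packets do not make it through Docker's bridge network; the option choice here is illustrative, not taken from the original post:

import os

import cv2

# Must be set before the VideoCapture is created; interpreted by OpenCV's FFmpeg backend.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

RTSP_URL = "rtsp://deneme:155115@10.100.10.94:554/axis-media/media.amp?adjustablelivestream=1&fps=10"

cap = cv2.VideoCapture(RTSP_URL, cv2.CAP_FFMPEG)  # explicitly select the FFmpeg backend
print("RTSP stream opened:", cap.isOpened())
cap.release()
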



Has anyone faced similar issues or have suggestions on how to resolve this?



 -
Failed to use h264_v4l2m2m codec in ffmpeg to decode video
6 January, by wangt13
I am working on an embedded Linux system (kernel-5.10.24) and I want to use the ffmpeg libraries (ffmpeg-4.4.4) to do video decoding.

The C code is as follows; it uses the h264_v4l2m2m decoder to decode the video:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
#include <stdio.h>   /* the last two #include lines lost their header names in the post; */
#include <stdint.h>  /* stdio.h and stdint.h are what this code actually needs */

int main(int argc, char *argv[]) {
    if (argc < 3) {
        printf("Usage: %s <input_file> <output_file>\n", argv[0]);
        return -1;
    }

    const char *input_file = argv[1];
    const char *output_file = argv[2];

    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *codec_ctx = NULL;
    AVCodec *codec = NULL;
    AVPacket pkt;
    AVFrame *frame = NULL;
    AVFrame *rgb_frame = NULL;
    struct SwsContext *sws_ctx = NULL;

    FILE *output = NULL;
    int video_stream_index = -1;

    avformat_network_init();

    if (avformat_open_input(&fmt_ctx, input_file, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open input file %s\n", input_file);
        return -1;
    }

    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        return -1;
    }

    for (int i = 0; i < fmt_ctx->nb_streams; i++) {
        if (fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_stream_index = i;
            break;
        }
    }

    if (video_stream_index == -1) {
        fprintf(stderr, "Could not find video stream\n");
        return -1;
    }

    //// codec = avcodec_find_decoder(fmt_ctx->streams[video_stream_index]->codecpar->codec_id);
    codec = avcodec_find_decoder_by_name("h264_v4l2m2m");
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        return -1;
    }

    codec_ctx = avcodec_alloc_context3(codec);
    if (!codec_ctx) {
        fprintf(stderr, "Could not allocate codec context\n");
        return -1;
    }

    if (avcodec_parameters_to_context(codec_ctx, fmt_ctx->streams[video_stream_index]->codecpar) < 0) {
        fprintf(stderr, "Failed to copy codec parameters to decoder context\n");
        return -1;
    }

    if (avcodec_open2(codec_ctx, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        return -1;
    }

    output = fopen(output_file, "wb");
    if (!output) {
        fprintf(stderr, "Could not open output file %s\n", output_file);
        return -1;
    }

    frame = av_frame_alloc();
    rgb_frame = av_frame_alloc();
    if (!frame || !rgb_frame) {
        fprintf(stderr, "Could not allocate frames\n");
        return -1;
    }

    int width = codec_ctx->width;
    int height = codec_ctx->height;
    int num_bytes = av_image_get_buffer_size(AV_PIX_FMT_RGB24, width, height, 1);
    uint8_t *buffer = (uint8_t *)av_malloc(num_bytes * sizeof(uint8_t));
    av_image_fill_arrays(rgb_frame->data, rgb_frame->linesize, buffer, AV_PIX_FMT_RGB24, width, height, 1);

    printf("XXXXXXXXXXXX width: %d, height: %d, fmt: %d\n", width, height, codec_ctx->pix_fmt);
    /* The scaler is set up once, from the pixel format reported before decoding starts. */
    sws_ctx = sws_getContext(width, height, codec_ctx->pix_fmt,
                             width, height, AV_PIX_FMT_RGB24,
                             SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws_ctx) {
        fprintf(stderr, "Could not initialize the conversion context\n");
        return -1;
    }

    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        if (pkt.stream_index == video_stream_index) {
            int ret = avcodec_send_packet(codec_ctx, &pkt);
            if (ret < 0) {
                fprintf(stderr, "Error sending packet for decoding\n");
                return -1;
            }

            while (ret >= 0) {
                ret = avcodec_receive_frame(codec_ctx, frame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    break;
                } else if (ret < 0) {
                    fprintf(stderr, "Error during decoding\n");
                    return -1;
                }

                sws_scale(sws_ctx, (const uint8_t *const *)frame->data, frame->linesize,
                          0, height, rgb_frame->data, rgb_frame->linesize);

                /* Every decoded frame is appended to the same output file as a PPM image. */
                fprintf(output, "P6\n%d %d\n255\n", width, height);
                fwrite(rgb_frame->data[0], 1, num_bytes, output);
            }
        }
        av_packet_unref(&pkt);
    }

    fclose(output);
    av_frame_free(&frame);
    av_frame_free(&rgb_frame);
    avcodec_free_context(&codec_ctx);
    avformat_close_input(&fmt_ctx);
    sws_freeContext(sws_ctx);

    return 0;
}



It ran with some error logs from swscale as follows:

# ./test_ffmpeg ./test.mp4 /tmp/output
[h264_v4l2m2m @ 0x1d76320] Using device /dev/video0
[h264_v4l2m2m @ 0x1d76320] driver 'mysoc-vdec' on card 'msoc-vdec' in mplane mode
[h264_v4l2m2m @ 0x1d76320] requesting formats: output=H264 capture=NV12
[h264_v4l2m2m @ 0x1d76320] the v4l2 driver does not support end of stream VIDIOC_SUBSCRIBE_EVENT
XXXXXXXXXXXX width: 1280, height: 720, fmt: 0
[swscaler @ 0x1dadaa0] No accelerated colorspace conversion found from yuv420p to rgb24.
[h264_v4l2m2m @ 0x1d76320] VIDIOC_G_SELECTION ioctl
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
[swscaler @ 0x1dadaa0] bad src image pointers
......



And it ran for about 4 seconds, while test.mp4 is about 13 seconds long.
If I do NOT specify h264_v4l2m2m as the decoder, there are no "bad src image pointers" errors and the run time matches the length of the mp4 file.

What is wrong with the above code using h264_v4l2m2m, and how can I fix it?

-
Decoding with QSV on ffmpeg [closed]
21 January, by Grant Upson
I'm having trouble trying to get decoding working using QSV with ffmpeg on Manjaro.


After installing the dependencies I've run a few commands to verify it's installed correctly.


The driver in use seems to be correct.


echo $LIBVA_DRIVER_NAME

iHD




The decoders are available.


ffmpeg -decoders | grep qsv

 V....D av1_qsv AV1 video (Intel Quick Sync Video acceleration) (codec av1)
 V....D h264_qsv H264 video (Intel Quick Sync Video acceleration) (codec h264)
 V....D hevc_qsv HEVC video (Intel Quick Sync Video acceleration) (codec hevc)
 V....D mjpeg_qsv MJPEG video (Intel Quick Sync Video acceleration) (codec mjpeg)
 V....D mpeg2_qsv MPEG2VIDEO video (Intel Quick Sync Video acceleration) (codec mpeg2video)
 V....D vc1_qsv VC1 video (Intel Quick Sync Video acceleration) (codec vc1)
 V....D vp8_qsv VP8 video (Intel Quick Sync Video acceleration) (codec vp8)
 V....D vp9_qsv VP9 video (Intel Quick Sync Video acceleration) (codec vp9)
 V....D vvc_qsv VVC video (Intel Quick Sync Video acceleration) (codec vvc)



And QSV seems to be available to ffmpeg on this device:


ffmpeg -hwaccels

Hardware acceleration methods:
vdpau
cuda
vaapi
qsv
drm
opencl
vulkan



Yet when I try to decode a video, I get spammed with this error:


ffmpeg -hwaccel_output_format qsv -c:v h264_qsv -i fixed_output.mkv -f null -

[vist#0:0/h264 @ 0x5b5e52273bc0] [dec:h264_qsv @ 0x5b5e521dea80] Error submitting packet to decoder: Unknown error occurred
[h264_qsv @ 0x5b5e521df640] Error creating a MFX session: -9.
[h264_qsv @ 0x5b5e521df640] Error initializing an MFX session
[h264_qsv @ 0x5b5e521df640] Error decoding header



This led me to believe it may have been a device error, so I tried to also initialise the device when decoding, like so:


ffmpeg -init_hw_device qsv=qsv,child_device_type=dxva2 -hwaccel_output_format qsv -c:v h264_qsv -i fixed_output.mkv -f mp4 outputfile.mp4 -loglevel debug > qsv_log.txt 2>&1

Applying option init_hw_device (initialise hardware device) with argument qsv=qsv,child_device_type=dxva2.
[AVHWDeviceContext @ 0x60b977fcc940] No supported child device type is enabled
Device creation failed: -38.
Failed to set value 'qsv=qsv,child_device_type=dxva2' for option 'init_hw_device': Function not implemented
Error parsing global options: Function not implemented



This seems to say there is no supported child device, but based on my understanding there should be, or the first few commands verifying it was linked and available wouldn't have worked? So I'm quite confused as to what a solution would be. Any advice?