
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (76)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Organising by category
17 May 2013
In MediaSPIP, a section ("rubrique") has two names: category and section.
The various documents stored in MediaSPIP can be filed into different categories. A category can be created by clicking on "publier une catégorie" (publish a category) in the publish menu at the top right (after logging in). A category can also be filed inside another category, which makes it possible to build a tree of categories.
The next time a document is published, the newly created category will be offered (...)
On other sites (2940)
-
MPlayer not playing HTTP video stream for a specific type of content from the same source
2 August 2017, by Joel
Implementation overview
Before I dive into the question, I need to establish some context.
I am currently implementing a cloud gaming solution utilising the following:
- Nvidia Capture SDK
- Nvidia Video Codec SDK
- FFmpeg
- MPlayer
The Nvidia Capture SDK is used to produce a shim layer (via DXGI.dll) that intercepts and captures DirectX frames so they can be passed to the Nvidia Video Codec SDK and encoded as an H.264 video stream. All of this happens inside DXGI.dll.
I then pass the encoded video to FFmpeg. FFmpeg acts as an HTTP server that broadcasts the video stream for MPlayer to play.
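
The question does not show how the encoded bitstream is handed to FFmpeg's stdin. Purely as an illustration of that hand-off, here is a minimal hypothetical sketch (the function names open_ffmpeg_pipe/send_encoded_packet and the use of _popen are my assumptions, not part of the original setup) in which each encoded H.264 packet from NVENC is written raw to a pipe feeding the FFmpeg command shown under "Implementation Details" below.

/* Hypothetical sketch only: spawn FFmpeg via _popen (Windows) and feed it
   the raw Annex-B H.264 packets produced by the NVENC session. */
#include <stdio.h>

static FILE *ff = NULL;

int open_ffmpeg_pipe(const char *url)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd,
             "ffmpeg -i - -listen 1 -threads 1 -vcodec copy -preset ultrafast "
             "-an -tune zerolatency -f h264 %s", url);
    ff = _popen(cmd, "wb");            /* binary write pipe to ffmpeg's stdin */
    return ff ? 0 : -1;
}

void send_encoded_packet(const void *data, size_t size)
{
    if (!ff)
        return;
    fwrite(data, 1, size, ff);         /* one encoded H.264 packet */
    fflush(ff);                        /* flush immediately to keep latency low */
}
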
Problem
I am running an Unreal Engine 4 game called "Epic Survival Game Series". The Nvidia Capture SDK’s shim layer kicks off when the game starts, and FFmpeg launches the HTTP server to start streaming. However, when I start MPlayer to receive the stream, MPlayer stops at the following message, and nothing happens after that.
libavformat version 57.72.101 (internal)
Stream not seekable!
H264-ES file format detected
The thing is, when I play the same video using ffplay, it works without any issue. This is not the only quirk. When I launch a different Unreal Engine 4 game called "First Person Shooter Template", MPlayer can play that video as well. Also, if I modify the Survival Game to load directly into the game level, skipping the menu, MPlayer is also able to play the video.
Using FFmpeg to write the video to a file instead of streaming it also works, no matter which game I run or whether I load into the menu or the game level.
This is very strange, and I have no idea why this is the case. Any ideas?
Edit: One strange quirk I forgot to mention is that MPlayer does manage to play the video on very rare occasions, maybe once every 10-20 tries or so.
Implementation Details
Additional details of how certain parts are implemented.
(1) For the Nvidia Capture SDK, I use the DXIFRShim example provided in the SDK.
(2) For the Nvidia Video Codec SDK, I use the NvEncoder example provided in the SDK.
(3) The FFmpeg command I use is this:
ffmpeg -i - -listen 1 -threads 1 -vcodec copy -preset ultrafast -an -tune zerolatency -f h264 http://address:port
The encoded frames from the Nvidia Video Codec SDK are piped to FFmpeg.
(4) The MPlayer command I use is this:
mplayer -quiet -vo gl -nosound -benchmark http://address:port
Things I’ve tried
I suspect MPlayer is the cause, so I have only played around with MPlayer parameters:
mplayer http://address:port
mplayer -fps 30 -vo gl -nosound -benchmark http://address:port
mplayer -fps 30 -screenw 720 -screenh 1280 -vo gl -nosound -benchmark http://address:port
mplayer -fps 30 -vo directx -nosound -benchmark http://address:port
mplayer -fps 30 -vo null -nosound -benchmark http://address:port
None of these worked.
-
Audio PTS is not equal but play time is equal
20 July 2023, by KaGa Wu
I have a video with a dubbed Portuguese audio stream. The audio stream does not cover the head of the video (the part where the company logos/BGM play), so the audio starts after a delay: its first data packet has a PTS of 9 s. When I transcode the audio stream to AAC, the first packet's PTS becomes 2 s. My expectation was that, since the first audio packet is now placed earlier, the audio would also start playing earlier, but the playback delay is still correct!


Here is the first packet's PTS for the original and the transcoded streams.


Original


https://files.videohelp.com/u/302682/025_sisu_hevc_1920x800_30s_yuv420p_aac_copy.ts


0x00458B74 Transport Packet { PID = 0x101, Payload = Yes (182), Counter = 15, Start indicator }
Adaptation Field ():
adaptation_field_length = 1
discontinuity_indicator = 0
random_access_indicator = 1
elementary_stream_priority_indicator = 0
PCR_flag = 0
OPCR_flag = 0
splicing_point_flag = 0
transport_private_data_flag = 0
adaptation_field_extension_flag = 0

0x00458B7A PES Packet { stream_id = 0xC0 (audio stream)}
packet_length = 2898
PES_scrambling_control = 0
PES_priority = 0
data_alignment_indicator = 0
copyright = 0
original_or_copy = 0
PTS_DTS_flags = 2
ESCR_flag = 0
ES_rate_flag = 0
DSM_trick_mode_flag = 0
additional_copy_info_flag = 0
PES_CRC_flag = 0
PES_extension_flag = 0
PES_header_data_length = 5
 PTS = 0: 0: 9: 570 (861 300)

0x00458B88 AAC Frame
id = 0
layer = 0
protection_absent = 1
profile = 1
sf_index = 3
private_bit = 0
channel_configuration = 6
original = 0
home = 0
copyright_identification_bit = 0
copyright_identification_start = 0
aac_frame_length = 285
adts_buffer_fullness = 2047
no_raw_data_blocks_in_frame = 0
SamplingRate = 48000
Channels = 6
Duration = 0.021333



Transcoded


https://files.videohelp.com/u/302682/025_susi_hevc_1920x800_30s_yuv420p_aac_trans.ts


0x000010E4 Transport Packet { PID = 0x101, Payload = Yes (182), Counter = 0, Start indicator }
Adaptation Field ():
adaptation_field_length = 1
discontinuity_indicator = 0
random_access_indicator = 1
elementary_stream_priority_indicator = 0
PCR_flag = 0
OPCR_flag = 0
splicing_point_flag = 0
transport_private_data_flag = 0
adaptation_field_extension_flag = 0

0x000010EA PES Packet { stream_id = 0xC0 (audio stream)}
packet_length = 467
PES_scrambling_control = 0
PES_priority = 0
data_alignment_indicator = 0
copyright = 0
original_or_copy = 0
PTS_DTS_flags = 2
ESCR_flag = 0
ES_rate_flag = 0
DSM_trick_mode_flag = 0
additional_copy_info_flag = 0
PES_CRC_flag = 0
PES_extension_flag = 0
PES_header_data_length = 5
 PTS = 0: 0: 1: 400 (126 000)

0x000010F8 AAC Frame
id = 0
layer = 0
protection_absent = 1
profile = 1
sf_index = 3
private_bit = 0
channel_configuration = 6
original = 0
home = 0
copyright_identification_bit = 0
copyright_identification_start = 0
aac_frame_length = 43
adts_buffer_fullness = 2047
no_raw_data_blocks_in_frame = 0
SamplingRate = 48000
Channels = 6
Duration = 0.021333
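
For reference, the tick counts shown in parentheses appear to be on the standard 90 kHz PES clock (the values match the hh:mm:ss:ms fields), so the two timestamps convert to seconds as:

861 300 / 90 000 = 9.57 s (original, 0:0:9:570)
126 000 / 90 000 = 1.40 s (transcoded, 0:0:1:400)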



I decoded the original video and the transcoded video and took screenshots of them.


-
ffmpeg C API (libswscale): scale a frame into an output frame with a given width/height, preserving the aspect ratio and filling out the rest with transparency
12 June 2015, by nik4emniy
Note: I need to use ffmpeg's C API in my project.
Input: a video (let's say, its first frame) or an image, plus output_width and output_height.
Output: a PNG image with output_width/output_height.
What I've done so far:
a) decoded a frame from the input into frame1
b) used sws_scale with the needed context (output_width, output_height) to scale frame1 into frame2
c) initialized an AVCodec with CODEC_ID_PNG
d) encoded frame2 into an initialized AVPacket
e) wrote the AVPacket's data into a file
So I have a working cycle that produces a PNG image, but it does not preserve the aspect ratio (obviously).
What I want to achieve is to do the same thing while preserving the aspect ratio (which would change frame2's width or height), then "centring" that image and filling the "empty" parts with a transparent layer in the final PNG. Does anyone have an idea how this can be achieved?
Again, I need to use the C API, not the command line.
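
Not an authoritative answer, but a minimal sketch of the approach described above, assuming libswscale plus AVFrame buffers and using RGBA as a PNG-friendly output format; the function name scale_with_transparent_padding is illustrative. It computes the largest size that fits output_width x output_height while preserving the aspect ratio, fills the whole destination frame with transparent pixels, and lets sws_scale write the picture into a centred sub-rectangle by offsetting the destination data pointer.

#include <stdint.h>
#include <string.h>
#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* Hypothetical helper: scale `src` into a wout x hout RGBA frame, preserving
   the aspect ratio, centring the picture and leaving the padding fully
   transparent (alpha = 0). Returns a newly allocated frame or NULL. */
static AVFrame *scale_with_transparent_padding(const AVFrame *src, int wout, int hout)
{
    /* Largest size that fits the output box while keeping the aspect ratio. */
    int w = wout, h = src->height * wout / src->width;
    if (h > hout) {
        h = hout;
        w = src->width * hout / src->height;
    }
    int x_off = (wout - w) / 2, y_off = (hout - h) / 2;

    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;
    dst->format = AV_PIX_FMT_RGBA;          /* PNG supports an alpha channel */
    dst->width  = wout;
    dst->height = hout;
    if (av_frame_get_buffer(dst, 0) < 0) {
        av_frame_free(&dst);
        return NULL;
    }

    /* Fill the whole frame with zero bytes: RGBA(0,0,0,0) is fully transparent. */
    for (int y = 0; y < hout; y++)
        memset(dst->data[0] + y * dst->linesize[0], 0, (size_t)wout * 4);

    struct SwsContext *sws = sws_getContext(src->width, src->height,
                                            (enum AVPixelFormat)src->format,
                                            w, h, AV_PIX_FMT_RGBA,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws) {
        av_frame_free(&dst);
        return NULL;
    }

    /* Point the destination at the centred sub-rectangle; keeping the full
       frame's linesize makes each scaled row land in the right place. */
    uint8_t *dst_data[4] = { dst->data[0] + y_off * dst->linesize[0] + x_off * 4,
                             NULL, NULL, NULL };
    int dst_linesize[4]  = { dst->linesize[0], 0, 0, 0 };
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst_data, dst_linesize);
    sws_freeContext(sws);
    return dst;
}

The resulting RGBA frame could then go through the existing PNG encoding steps c) to e) unchanged.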