
Advanced search
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (68)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...) -
XMP PHP
13 May 2011
Quoting Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being XML-based, it manages a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
On other sites (5767)
-
ffmpeg how to ignore initial empty audio frames when decoding to loop a sound
1 December 2020, by cs guy
I am trying to loop an Ogg sound file. The goal is to build a loopable audio interface for my mobile app.


I decode the given Ogg file into a buffer, and that buffer is sent to the audio card for playback. All is well until the audio finishes (end of file). When it finishes, I use
av_seek_frame(avFormatContext, streamInfoIndex, 0, AVSEEK_FLAG_FRAME);
to loop back to the beginning and continue decoding into the same buffer. At first sight I thought this would give me perfect loops. One problem I had was that the decoder gives me extra empty frames at the end, so I ignore them by keeping track of how many samples have been decoded:

// avFormatContext->duration is in AV_TIME_BASE units (microseconds),
// so divide by 1000 to get milliseconds rather than multiplying.
durationInMillis = avFormatContext->duration / 1000;
numOfTotalSamples =
    (uint64_t) avFormatContext->duration *
    (uint64_t) pLocalCodecParameters->sample_rate *
    (uint64_t) pLocalCodecParameters->channels /
    (uint64_t) AV_TIME_BASE;
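
A possibly more exact alternative, sketched on the assumption that streamInfoIndex is the audio stream and that the demuxer fills in the stream duration, is to rescale the stream's duration directly into the sample-rate time base:

// st->duration is expressed in the stream's own time_base; rescaling it to
// a 1/sample_rate base yields the duration in samples per channel.
AVStream *st = avFormatContext->streams[streamInfoIndex];
AVRational sampleBase = {1, pLocalCodecParameters->sample_rate};
numOfTotalSamples = (uint64_t) av_rescale_q(st->duration, st->time_base, sampleBase) *
                    (uint64_t) pLocalCodecParameters->channels;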



When the threshold is reached, I ignore the frames sent by the codec. I thought this was it and ran some tests. I recorded 5 minutes of my app, and afterwards I compared the result in FL Studio by manually laying the same sound clip end to end to match the length of my recording:


Here it is after 5 minutes:




In the first few loops the difference is very small. I thought it was working, and I used this for several days, until I tested it on a 5 minute recording: as the looping approached the 5 minute mark, the difference became huge. My code is not looping the audio correctly. I suspect that the codec adds 1 or 2 empty frames at the very beginning of each loop, caused by
av_seek_frame
, and since a frame can contain up to several audio samples, these probably accumulate and cause the mismatch.
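
One way to test this suspicion, purely a sketch that assumes the decoder exposes the priming through negative frame timestamps after avcodec_flush_buffers (if the frames carry no pts, it won't help), is to rescale each frame's pts into samples and trim whatever lands before zero:

// After avcodec_receive_frame() succeeds:
AVStream *st = avFormatContext->streams[streamInfoIndex];
if (avDecoderFrame->pts != AV_NOPTS_VALUE) {
    AVRational sampleBase = {1, avCodecContext->sample_rate};
    int64_t firstSample = av_rescale_q(avDecoderFrame->pts, st->time_base, sampleBase);
    if (firstSample < 0) {
        // The first -firstSample samples are priming; advance the read
        // pointer by skip * channels floats before pushing to the buffer.
        int64_t skip = FFMIN(-firstSample, (int64_t) avDecoderFrame->nb_samples);
    }
}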

My question is: how can I drop the empty frames sent by the codec while decoding, so that I can create a perfect loop of the audio?


My code is below. Please be aware that I deleted lots of if checks that were intended for safety, to make the code more readable; the removed checks are always false, so they don't matter for the reader.


helper.cpp


int32_t
outputAudioFrame(AVCodecContext *avCodecContext, AVFrame *avResampledDecFrame, int32_t &ret,
                 LockFreeQueue<float> *&buffer, int8_t *&mediaLoadPointer,
                 AVFrame *avDecoderFrame, SwrContext *swrContext,
                 std::atomic_bool *&signalExitFuture,
                 uint64_t &currentNumSamples, uint64_t &numOfTotalSamples) {
    // Resampling is done here, but it is boilerplate, so I removed it.
    auto *floatArrPtr = (float *) (avResampledDecFrame->data[0]);

    int32_t numOfSamples = avResampledDecFrame->nb_samples * avResampledDecFrame->channels;

    for (int32_t i = 0; i < numOfSamples; i++) {
        if (currentNumSamples == numOfTotalSamples) {
            break;
        }

        buffer->push(*floatArrPtr);
        currentNumSamples++;
        floatArrPtr++;
    }

    return 0;
}



int32_t decode(int32_t &ret, AVCodecContext *avCodecContext, AVPacket *avPacket,
               LockFreeQueue<float> *&buffer,
               AVFrame *avDecoderFrame,
               AVFrame *avResampledDecFrame,
               std::atomic_bool *&signalExitFuture,
               int8_t *&mediaLoadPointer, SwrContext *swrContext,
               uint64_t &currentNumSamples, uint64_t &numOfTotalSamples) {

    ret = avcodec_send_packet(avCodecContext, avPacket);
    if (ret < 0) {
        LOGE("decode: Error submitting a packet for decoding %s", av_err2str(ret));
        return ret;
    }

    // get all the available frames from the decoder
    while (ret >= 0) {
        ret = avcodec_receive_frame(avCodecContext, avDecoderFrame);
        if (ret < 0) {
            // these two return values are special and mean there is no output
            // frame available, but there were no errors during decoding
            if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN)) {
                //LOGD("avcodec_receive_frame returned special %s", av_err2str(ret));
                return 0;
            }

            LOGE("avcodec_receive_frame Error during decoding %s", av_err2str(ret));
            return ret;
        }

        ret = outputAudioFrame(avCodecContext, avResampledDecFrame, ret, buffer,
                               mediaLoadPointer, avDecoderFrame, swrContext, signalExitFuture,
                               currentNumSamples, numOfTotalSamples);

        av_frame_unref(avDecoderFrame);
        av_frame_unref(avResampledDecFrame);

        if (ret < 0)
            return ret;
    }

    return 0;
}


Main.cpp


while (!*signalExitFuture) {
    while ((ret = av_read_frame(avFormatContext, avPacket)) >= 0) {

        ret = decode(ret, avCodecContext, avPacket, buffer, avDecoderFrame,
                     avResampledDecFrame, signalExitFuture,
                     mediaLoadPointer, swrContext,
                     currentNumSamples, numOfTotalSamples);

        // The packet must be freed with av_packet_unref() when it is no longer needed.
        av_packet_unref(avPacket);

        if (ret < 0) {
            LOGE("Error! %s", av_err2str(ret));
            goto cleanup;
        }
    }

    if (ret == AVERROR_EOF) {
        ret = av_seek_frame(avFormatContext, streamInfoIndex, 0, AVSEEK_FLAG_FRAME);

        currentNumSamples = 0;
        avcodec_flush_buffers(avCodecContext);
    }
}
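
A side note on the seek call itself: AVSEEK_FLAG_FRAME asks the demuxer to interpret the timestamp as a frame index, which many demuxers do not support. A common way to rewind to the start, sketched here without any claim that it changes the behaviour for this particular Ogg input, is to seek by timestamp instead:

// Seek to timestamp 0, snapping backwards to the nearest seek point.
ret = av_seek_frame(avFormatContext, streamInfoIndex, 0, AVSEEK_FLAG_BACKWARD);
// Or, with explicit bounds:
// ret = avformat_seek_file(avFormatContext, streamInfoIndex, INT64_MIN, 0, 0, 0);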



-
Choppy Audio while playing Video from StreamingAssets in Unity's VideoPlayer
8 November 2017, by Saad Anees
I have been trying to load video that I recorded with AVPro Movie Capture (Free Version). The video file was gigabytes in size, so I re-encoded it with the ffmpeg command
-y -i RawVideo.avi -qscale 7 FinalVideo.avi
and saved it to StreamingAssets. I got the desired result. Now I want to play that converted video file in the video player for preview. But the problem is that when the video is played from a URL, the audio is very choppy. I played it in the Windows player and in VLC and it was fine. The problem occurs only in Unity's VideoPlayer.
PreviewVideo class:
public class PreviewVideo : MonoBehaviour
{
    public GameObject VideoSelection;
    public GameObject RecordingCanvas;
    public GameObject FacebookCanvas;
    public GameObject Home;
    public Sprite pauseImage;
    public Sprite playImage;
    public VideoPlayer videoPlayer;
    public GameObject EmailCanvas;

    // Use this for initialization
    void Start ()
    {
    }

    public void Referesh ()
    {
        videoPlayer.gameObject.GetComponent<SpriteRenderer> ().sprite = Resources.Load<Sprite> ("Thumbnails/" + StaticVariables.VideoToPlay);
        videoPlayer.url = Application.streamingAssetsPath + "/FinalVideo.avi";
    }

    public void PlayVideo ()
    {
        if (!videoPlayer.isPlaying) {
            videoPlayer.Play ();
        }
    }

    public void Back ()
    {
        this.gameObject.SetActive (false);
        VideoSelection.SetActive (true);
    }

    public void HomeBtn ()
    {
        SceneManager.LoadScene (0);
    }

    public void SendEmailDialogue ()
    {
        EmailCanvas.SetActive (true);
        this.gameObject.SetActive (false);
    }

    public void FacebookShare ()
    {
        FacebookCanvas.SetActive (true);
    }
}
Referesh() is called from the RecordingCanvas class:

public class RecordingCanvas : MonoBehaviour {
    public GameObject VideoSelection;
    public GameObject PreviewVideo;
    public GameObject Home;
    public GameObject canvas;
    public RawImage rawImage;
    public GameObject videoThumbnail;
    float _seconds;
    bool canStart = false;
    public SpriteRenderer NumSprite;
    public VideoPlayer videoPlayer;
    WebCamTexture webcamTexture;
    Process process;

    void Start ()
    {
        Refresh ();
    }

    public void Refresh ()
    {
        _seconds = 0;
        NumSprite.gameObject.SetActive (true);
        webcamTexture = new WebCamTexture (1280, 720);
        webcamTexture.Stop ();
        rawImage.texture = webcamTexture;
        rawImage.material.mainTexture = webcamTexture;
        webcamTexture.Play ();
        videoPlayer.loopPointReached += VideoEndReached;
        videoPlayer.gameObject.GetComponent<SpriteRenderer> ().sprite = Resources.Load<Sprite> ("Thumbnails/" + StaticVariables.VideoToPlay);
        videoThumbnail.GetComponent<SpriteRenderer> ().sprite = Resources.Load<Sprite> ("Thumbnails/" + StaticVariables.VideoToPlay);
        videoPlayer.clip = Resources.Load<VideoClip> ("Videos/" + StaticVariables.VideoToPlay);
    }

    void Update ()
    {
        _seconds += Time.deltaTime;
        print ((int)_seconds);
        if (_seconds < 1) {
            NumSprite.sprite = Resources.Load<Sprite> ("Numbers/3");
        } else if (_seconds < 2)
            NumSprite.sprite = Resources.Load<Sprite> ("Numbers/2");
        else if (_seconds < 3)
            NumSprite.sprite = Resources.Load<Sprite> ("Numbers/1");
        if (_seconds >= 3 && _seconds <= 4) {
            canStart = true;
        }
        if (canStart) {
            NumSprite.gameObject.SetActive (false);
            canStart = false;
            FindObjectOfType<CaptureGUI> ().StartCapture ();
            videoPlayer.Play ();
            videoThumbnail.SetActive (false);
        }
    }

    IEnumerator StartConversion ()
    {
        yield return new WaitForSeconds (1.5f);
        process = new Process ();
        if (File.Exists (Application.streamingAssetsPath + "/FinalVideo.avi"))
            File.Delete (Application.streamingAssetsPath + "/FinalVideo.avi");
        process.StartInfo.WorkingDirectory = Application.streamingAssetsPath;
        process.StartInfo.FileName = Application.streamingAssetsPath + "/ffmpeg.exe";
        process.StartInfo.Arguments = " -y -i " + StaticVariables.RawVideo + ".avi " + "-qscale 7 " + StaticVariables.FinalVideo + ".avi";
        process.StartInfo.CreateNoWindow = false;
        process.EnableRaisingEvents = true;
        process.Exited += new EventHandler (Process_Exited);
        process.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
        process.Start ();
        process.WaitForExit ();
        canvas.SetActive (false);
        PreviewVideo.SetActive (true);
        FindObjectOfType<PreviewVideo> ().Referesh ();
        File.Copy (Application.streamingAssetsPath + "/FinalVideo.avi", @"C:\xampp\htdocs\facebook\images\FinalVideo.avi", true);
        this.gameObject.SetActive (false);
    }

    void Process_Exited (object sender, EventArgs e)
    {
        process.Dispose ();
    }

    void VideoEndReached (UnityEngine.Video.VideoPlayer vp)
    {
        videoPlayer.Stop ();
        FindObjectOfType<CaptureGUI> ().StopCapture ();
        webcamTexture.Stop ();
        canvas.SetActive (true);
        StartCoroutine (StartConversion ());
    }
}
I am using Unity version 2017.1.1p4 Personal Edition, on Windows 10 with a high-end PC. I am making this app for standalone PC.
I am stuck here and can't proceed further. Please help me with this issue.
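
A side note on the conversion command above: -qscale sets only the video quality, and the audio stream is re-encoded with ffmpeg's default audio codec for AVI. If the choppiness turned out to be audio-codec related, one hedged variant to try (the PCM choice is an assumption, not something from the original post) would be:

-y -i RawVideo.avi -qscale 7 -c:a pcm_s16le FinalVideo.avi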
-
How to get video pixel location from screen pixel location?
22 February 2024, by AmLearning
Wall of text, so I tried breaking it up into sections to make it better. Sorry in advance.


The problem


I have some video files that I am reading with ffmpeg to get the colors at specific pixels, and all seemed well, but I just ran into a problem finding the right pixel to input. I realized (or mistakenly believe) that a pixel location (x, y) on the screen will differ from the video's own pixel location, so to speak (i.e. if I want pixel (50, 0) of the video, that is different from my screen's pixel (50, 0), because the resolutions don't match). I was trying to think of a way to convert my screen's pixel location into the "local" pixel location, and I have two ideas, but I am not sure either of them is any good. Note that I am currently using cmd+shift+4 on macOS to get the screen coordinates, and the video is playing fullscreen, like in the screenshot below.


Ideas


-
If I manually measure and account for this vertical offset, would it effectively convert the screen coordinate into the "local" one?
-
If I instead adjust my
SwsContext
to use my screen's height and width as the destination size, will it effectively remove the need to convert screen coordinates to video coordinates?







Problems with the Ideas


The problems I see with the first solution are that I am assuming there is no hidden horizontal offset (or, conversely, that the full width of the video is actually rendered on screen). Additionally, this solution would only give an approximate result, since I would need to manually measure the offsets, screen width, and screen height using the same method I currently use to get the screen coordinates.
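
For what it's worth, if the only unknowns really are a uniform scale plus centered black bars, the mapping could be computed instead of measured. A sketch, assuming the player scales the video to fit the screen while preserving aspect ratio and centers it (the function and names here are illustrative, not from any library):

#include <algorithm>

struct VideoPoint { double x, y; };

// Map a fullscreen screen coordinate to a video-frame coordinate, assuming
// aspect-preserving scale-to-fit with centered letterbox/pillarbox bars.
VideoPoint screenToVideo(double sx, double sy,
                         double screenW, double screenH,
                         double videoW, double videoH) {
    double scale = std::min(screenW / videoW, screenH / videoH);
    double offX = (screenW - videoW * scale) / 2.0; // pillarbox bars
    double offY = (screenH - videoH * scale) / 2.0; // letterbox bars
    return { (sx - offX) / scale, (sy - offY) / scale };
}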


With the second solution, aside from the question of whether it will even work, the problem is that I can no longer measure which screen coordinates I want, because I can't seem to get rid of those black bars in VLC.


Some Testing I did


Given that my entire problem would be fixed (maybe?) if the black bars were part of the video itself, I tried checking whether they are, and when I looked at the first pixel of the frame data, it was black. The problem then is: if the black bars are entirely part of the video, why are the colors I get for some pixels slightly off (I am checking with ColorSync Utility)? These colors aren't just slightly wrong; it seems more as though they belong to a slightly offset region of the video.


However, this may be somewhat explained if ffmpeg reads right to left. When I fed the top-left corner of the video into the program and looked again at the pixel data in the frame for that location (again calculated by assuming the video location is the same as the screen location), instead of white I got a bluish color, much like the glove in the top-right corner.


The Watered Down Code


struct SwsContext *rescaler = NULL;
rescaler = sws_getContext(codec_context->width, codec_context->height, codec_context->pix_fmt,
                          codec_context->width, codec_context->height, AV_PIX_FMT_RGB0,
                          SWS_FAST_BILINEAR, NULL, NULL, 0);

// Get packets (containers for frames, but not guaranteed to hold a full frame) and frames
while (av_read_frame(avformatcontext, packet) >= 0)
{
    // determine if packet is a video packet
    if (packet->stream_index != video_index)
    {
        continue;
    }

    // send packet to decoder
    if (avcodec_send_packet(codec_context, packet) < 0)
    {
        perror("Failed to decode packet");
    }

    // get frame from decoder
    int response = avcodec_receive_frame(codec_context, frame);
    if (response == AVERROR(EAGAIN))
    {
        continue;
    }
    else if (response < 0)
    {
        perror("Failed to get frame");
    }

    // convert frame to RGB0 colorspace, 4 bytes per pixel, 1 per channel
    response = sws_scale_frame(rescaler, scaled_frame, frame);
    if (response < 0)
    {
        perror("Failed to change colorspace");
    }

    // get data and write it
    int pixel_number = y * (scaled_frame->linesize[0] / 4) + x; // dividing linesize by 4 gives the row width in pixels (4 bytes per pixel)
    int byte_number = 4 * (pixel_number - 1); // position of pixel in array
    // start of debugging things
    int temp = scaled_frame->data[0][byte_number];          // R
    int one_after = scaled_frame->data[0][byte_number + 1]; // G
    int two_after = scaled_frame->data[0][byte_number + 2]; // B
    int als; // where I put the breakpoint
    // end of debugging things
}



In Summary


I have no idea what is happening.


I take the data for a pixel and compare it to what ColorSync Utility says should be there, but it is always slightly off, as though the pixel I was actually reading were offset from the one I thought I was reading. Therefore, I want to find a way to get the pixel location in a video given a screen coordinate while the video is fullscreen, but I have no idea how to do it (aside from a few ideas that are probably bad at best).


Also, does FFmpeg store the frame data right to left?
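
For reference, my understanding (an assumption worth checking, not a quote from FFmpeg's documentation) is that frames are stored top-to-bottom and left-to-right, with each row padded out to linesize bytes, so the conventional addressing for packed RGB0 data would look like:

// Address pixel (x, y) in a packed 4-bytes-per-pixel frame such as AV_PIX_FMT_RGB0.
// Rows may be padded, so step by linesize[0] rather than width * 4.
uint8_t *px = scaled_frame->data[0] + y * scaled_frame->linesize[0] + 4 * x;
int r = px[0], g = px[1], b = px[2]; // px[3] is the unused filler byte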


A Video Better Showing My Problem


https://www.youtube.com/watch?v=NSEErs2lC3A

