
On other sites (4528)
-
FFMPEG - Non-monotonous DTS in output stream 0:0
6 November 2015, by Tayax

I'm trying to save an online input stream, from an m3u8 playlist, to an MP4 file.

Here is my command:

/usr/bin/ffmpeg -y -i '.$stream.' -t 20 \
 -vcodec copy -s 640x480 \
 -acodec copy -bsf:a aac_adtstoasc \
 -f mp4 '.$filename.'.mp4 > block.txt 2>&1 &

And here is my output:
ffmpeg version 2.6.3 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.8.3 (GCC) 20140911 (Red Hat 4.8.3-9)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, hls,applehttp, from 'playlist.m3u8':
Duration: N/A, start: 2375.973000, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 613420
Stream #0:0: Data: timed_id3 (ID3 / 0x20334449)
Metadata:
variant_bitrate : 613420
Stream #0:1: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv), 640x480 [SAR 1:1 DAR 4:3], 30 fps, 30 tbr, 90k tbn, 60 tbc
Metadata:
variant_bitrate : 613420
Stream #0:2: Audio: aac (LC) ([15][0][0][0] / 0x000F), 16000 Hz, mono, fltp, 53 kb/s
Metadata:
variant_bitrate : 613420
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf56.25.101
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 30 fps, 30 tbr, 90k tbn, 90k tbc
Metadata:
variant_bitrate : 613420
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 16000 Hz, mono, 53 kb/s
Metadata:
variant_bitrate : 613420
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:2 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 168300, current: 168300; changing to 168301. This may result in incorrect timestamps in the output file.
frame= 53 fps=0.0 q=-1.0 size= 163kB time=00:00:02.75 bitrate= 485.9kbits/s
frame= 79 fps= 74 q=-1.0 size= 294kB time=00:00:04.28 bitrate= 560.7kbits/s
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 400320, current: 400320; changing to 400321. This may result in incorrect timestamps in the output file.
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 416970, current: 416970; changing to 416971. This may result in incorrect timestamps in the output file.
frame= 131 fps= 81 q=-1.0 size= 466kB time=00:00:07.10 bitrate= 537.4kbits/s
frame= 150 fps= 67 q=-1.0 size= 506kB time=00:00:08.12 bitrate= 510.2kbits/s
frame= 163 fps= 59 q=-1.0 size= 549kB time=00:00:08.89 bitrate= 505.4kbits/s
frame= 177 fps= 54 q=-1.0 size= 621kB time=00:00:09.79 bitrate= 519.4kbits/s
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 902160, current: 902160; changing to 902161. This may result in incorrect timestamps in the output file.
frame= 206 fps= 54 q=-1.0 size= 732kB time=00:00:11.52 bitrate= 520.8kbits/s
frame= 236 fps= 55 q=-1.0 size= 869kB time=00:00:13.44 bitrate= 529.7kbits/s
frame= 276 fps= 57 q=-1.0 size= 1036kB time=00:00:15.74 bitrate= 539.0kbits/s
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 1433880, current: 1433880; changing to 1433881. This may result in incorrect timestamps in the output file.
frame= 284 fps= 36 q=-1.0 size= 1079kB time=00:00:16.25 bitrate= 543.7kbits/s
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 1454490, current: 1454490; changing to 1454491. This may result in incorrect timestamps in the output file.
frame= 295 fps= 35 q=-1.0 size= 1129kB time=00:00:16.70 bitrate= 553.9kbits/s
frame= 317 fps= 35 q=-1.0 size= 1222kB time=00:00:18.04 bitrate= 554.8kbits/s
[mp4 @ 0x18e7700] Non-monotonous DTS in output stream 0:0; previous: 1696050, current: 1696050; changing to 1696051. This may result in incorrect timestamps in the output file.
frame= 347 fps= 36 q=-1.0 size= 1342kB time=00:00:19.39 bitrate= 566.9kbits/s
frame= 361 fps= 37 q=-1.0 Lsize= 1400kB time=00:00:20.03 bitrate= 572.5kbits/s
video:1274kB audio:115kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.784608%

As you can see, it seems that some frames are dropped during the recording, and I have no idea how to correct this in order to get a clean file at the end.
I believe it's a settings problem, maybe? But I can't really pinpoint what's wrong.
I tried playing with the buffer and the framerate, but nothing changed.

Any help would be appreciated.
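For context on what the warning means: the mp4 muxer requires strictly increasing DTS per stream, and when two packets arrive with the same DTS (common at HLS segment boundaries with stream copy) it nudges the second one forward by one tick, exactly as the log shows ("previous: 168300, current: 168300; changing to 168301"). A minimal Python sketch of that correction logic (an illustration, not ffmpeg's actual code):

```python
def fix_monotonic_dts(dts_values):
    """Nudge any DTS that is <= its predecessor, as mp4 muxers do on stream copy."""
    fixed = []
    prev = None
    for dts in dts_values:
        if prev is not None and dts <= prev:
            dts = prev + 1  # mirrors the log: "previous: 168300 ... changing to 168301"
        fixed.append(dts)
        prev = dts
    return fixed

# Duplicate timestamps at a segment boundary get shifted by one tick.
print(fix_monotonic_dts([168000, 168300, 168300, 171303]))
# → [168000, 168300, 168301, 171303]
```

The nudge keeps the file muxable but the one-tick offset is why playback can show tiny jumps at segment joins.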
-
Read multiple frames in demuxer
22 December 2024, by Aycon

I use FFmpeg.AutoGen, a C# wrapper for the ffmpeg (7.0) C++ library, to read media files and generate a stream of frames for another application.
I want to get n frames and hold the pointers in memory indefinitely.
However, I am completely confused trying to figure out which API is deprecated and how I can tell ffmpeg to hold the pointers in memory until I tell it to.

I don't want to copy the frame after receiving it if I can avoid it.

I tried many ways.

My last attempt was to receive the frames using the ffmpeg.avcodec_send_packet(), ffmpeg.av_read_frame() and ffmpeg.avcodec_receive_frame() functions, as specified in the current manual.

My code fragment for reading frames:

using Core.Backends.FFmpeg.Helpers;
using Core.Backends.FFmpeg.UnsafeWrappers;
using Core.Enums;
using Core.Interfaces;
using FFmpeg.AutoGen.Abstractions;
using System.Diagnostics;
using System.Drawing;

namespace Core.Backends.FFmpeg.Internal;

internal class Video : IVideo
{
 private readonly AVFormatHandler p_format;
 private readonly AVCodecHandler p_codec;
 private readonly AVPacketWrapper p_packet;
 private readonly FramesPool p_framesPool;
 private readonly FramesPool p_framesBufferPool;
 private bool p_disposedValue;

 public Video(VideoMetadata _videoMetadata, AVFormatHandler _format, AVCodecHandler _codec, int _bufferizedFramesCount = 1)
 {
 Duration = _videoMetadata.Duration;
 FrameRate = _videoMetadata.FrameRate;
 FrameSize = _videoMetadata.FrameSize;
 PixelFormat = _videoMetadata.PixelFormat;
 SelectedStreamID = _videoMetadata.SelectedStreamID;
 p_format = _format;
 p_codec = _codec;
 p_packet = new AVPacketWrapper();
 var frame = new AVFrameWrapper(p_format, p_packet);
 p_framesPool = new(frame, _bufferizedFramesCount);
 p_framesBufferPool = new(frame, _bufferizedFramesCount);
 }

 /// <inheritdoc></inheritdoc>
 public long Duration { get; init; }

 /// <inheritdoc></inheritdoc>
 public (int num, int den) FrameRate { get; init; }

 /// <inheritdoc></inheritdoc>
 public Size FrameSize { get; init; }

 /// <inheritdoc></inheritdoc>
 public PixelFormat PixelFormat { get; init; }

 private int SelectedStreamID { get; init; }

 private unsafe int SendPacket(AVPacketWrapper? _packet)
 {
 if (_packet == null)
 return ffmpeg.avcodec_send_packet(p_codec.AVCodecContextPointer, null);

 return ffmpeg.avcodec_send_packet(p_codec.AVCodecContextPointer, _packet.AVPacketPointer);
 }

 private unsafe bool IsSelectedStream(AVPacketWrapper _packet)
 {
 return _packet.AVPacketPointer->stream_index == SelectedStreamID;
 }

 private unsafe int ReadFrame(AVPacketWrapper _packet)
 {
 return ffmpeg.av_read_frame(p_format.AVFormatPointer, _packet.AVPacketPointer);
 }

 private static unsafe void UnrefPacket(AVPacketWrapper _packet) => ffmpeg.av_packet_unref(_packet.AVPacketPointer);

 private IEnumerable<int> ReadToSelectedStream(AVPacketWrapper _packet)
 {
 do
 {
 UnrefPacket(_packet);
 yield return ReadFrame(_packet);
 } while (!IsSelectedStream(_packet));
 }

 private unsafe void FlushBuffers() => ffmpeg.avcodec_flush_buffers(p_codec.AVCodecContextPointer);

 private IEnumerable<AVPacketWrapper> GetNextPacketPrivate()
 {
 try
 {
 while (true)
 {
 foreach (int errorCodeRead in ReadToSelectedStream(p_packet))
 {
 if (errorCodeRead == ffmpeg.AVERROR_EOF)
 break;

 errorCodeRead.ThrowInvalidOperationExceptionIfError();
 }

 int errorCodeSend = SendPacket(p_packet);

 if (errorCodeSend == ffmpeg.AVERROR(ffmpeg.EAGAIN))
 {
 yield return p_packet;
 continue;
 }

 if (errorCodeSend == ffmpeg.AVERROR_EOF)
 {
 yield return p_packet;
 break;
 }

 errorCodeSend.ThrowInvalidOperationExceptionIfError();

 yield return p_packet;
 }

 // Last iteration special case handling
 int errorCodeSendLast = SendPacket(null);

 if (errorCodeSendLast != ffmpeg.AVERROR_EOF)
 errorCodeSendLast.ThrowInvalidOperationExceptionIfError();

 yield return p_packet;
 }
 finally
 {
 UnrefPacket(p_packet);
 FlushBuffers();
 }
 }

 private unsafe int ReceiveFrame(AVFrameWrapper _frame)
 {
 return ffmpeg.avcodec_receive_frame(p_codec.AVCodecContextPointer, _frame.AVFramePointer);
 }

 private unsafe AVFrameWrapper HWFrameCopyIfRequired(AVCodecHandler _codec, AVFrameWrapper _frame, AVFrameWrapper _buffer)
 {
 if (_codec.AVCodecContextPointer->hw_device_ctx != null)
 {
 int errorCode = ffmpeg.av_hwframe_transfer_data(_buffer.AVFramePointer, _frame.AVFramePointer, flags: 0);
 errorCode.ThrowInvalidOperationExceptionIfError();
 return _buffer;
 }

 return _frame;
 }

 private IEnumerable<AVFrameWrapper> GetNextFramePrivate(AVFrameWrapper _fresh_frame, AVFrameWrapper _fresh_frameBuffer)
 {
 int readCode;

 while (true)
 {
 readCode = ReceiveFrame(_fresh_frame);

 if (readCode == ffmpeg.AVERROR(ffmpeg.EAGAIN) || readCode == ffmpeg.AVERROR_EOF)
 yield break;

 readCode.ThrowInvalidOperationExceptionIfError();
 
 yield return HWFrameCopyIfRequired(p_codec, _fresh_frame, _fresh_frameBuffer);
 }
 }

 private static void RefreshFrames
 (
 IEnumerator<AVFrameWrapper> _framesEnumerator,
 IEnumerator<AVFrameWrapper> _framesBufferEnumerator,
 out AVFrameWrapper _frame,
 out AVFrameWrapper _frameBuffer
 )
 {
 // Catch a fresh frame from the pool. MoveNext() must not be called inside
 // Debug.Assert: the whole call would be stripped from Release builds.
 bool hasFrame = _framesEnumerator.MoveNext();
 Debug.Assert(hasFrame, "The frames pool must never stop yielding frames.");
 _frame = _framesEnumerator.Current;

 // Catch a fresh frame buffer from the pool.
 bool hasBuffer = _framesBufferEnumerator.MoveNext();
 Debug.Assert(hasBuffer, "The frames pool must never stop yielding frames.");
 _frameBuffer = _framesBufferEnumerator.Current;
 }

 /// <inheritdoc></inheritdoc>
 public IEnumerable<AVFrameWrapper> GetNextFrame()
 {
 IEnumerator<AVFrameWrapper> framesEnumerator = p_framesPool.GetNextFrame().GetEnumerator();
 IEnumerator<AVFrameWrapper> framesBufferEnumerator = p_framesBufferPool.GetNextFrame().GetEnumerator();
 RefreshFrames(framesEnumerator, framesBufferEnumerator, out AVFrameWrapper fresh_frame, out AVFrameWrapper fresh_frameBuffer);
 foreach (var packet in GetNextPacketPrivate())
 foreach (var frame in GetNextFramePrivate(fresh_frame, fresh_frameBuffer))
 {
 yield return frame;
 RefreshFrames(framesEnumerator, framesBufferEnumerator, out fresh_frame, out fresh_frameBuffer);
 }
 }

 protected virtual void Dispose(bool disposing)
 {
 if (!p_disposedValue)
 {
 if (disposing)
 {
 }

 p_packet.Dispose();
 p_framesPool.Flush();
 p_framesBufferPool.Flush();

 p_disposedValue = true;
 }
 }

 ~Video()
 {
 Dispose(disposing: false);
 }

 public void Dispose()
 {
 Dispose(disposing: true);
 GC.SuppressFinalize(this);
 }
}


My FramesPool class:

using Core.Backends.FFmpeg.UnsafeWrappers;
using FFmpeg.AutoGen.Abstractions;

namespace Core.Backends.FFmpeg.Internal;

internal class FramesPool
{
 private readonly AVFrameWrapper p_frameWrapper;
 private readonly Queue<AVFrameWrapper> p_frames;
 private readonly int p_count;

 public FramesPool(AVFrameWrapper _initframeWrapper, int _count = 1)
 {
 p_frameWrapper = _initframeWrapper;
 p_frames = new(_count);
 p_frames.Enqueue(p_frameWrapper);
 p_count = _count;
 }

 private static unsafe void UnrefFrame(AVFrameWrapper _frame) => ffmpeg.av_frame_unref(_frame.AVFramePointer);

 public IEnumerable<AVFrameWrapper> GetNextFrame()
 {
 // First frame case
 UnrefFrame(p_frameWrapper);
 yield return p_frameWrapper;

 while (true)
 {
 if (p_frames.Count < p_count)
 {
 var new_frame = p_frameWrapper.Clone();
 p_frames.Enqueue(new_frame);
 yield return new_frame;
 }
 else
 {
 var frame = p_frames.Dequeue();
 UnrefFrame(frame);
 yield return frame;
 p_frames.Enqueue(frame);
 }
 }
 }

 public void Flush()
 {
 foreach(var frame in p_frames)
 {
 UnrefFrame(frame);
 frame.Dispose();
 }

 p_frames.Clear();
 }
}
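Holding n decoded frames alive while the decoder keeps running is essentially a ring buffer of reference-counted slots: a slot may only be recycled (unreferenced and reused) once it has cycled past the last consumer, which is what the Queue in FramesPool implements. A stdlib-only Python sketch of that reuse policy (the dict slots are stand-ins; real code would wrap av_frame_alloc/av_frame_unref):

```python
from collections import deque

class FramePool:
    """Hands out up to `capacity` distinct slots before recycling the oldest one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = deque()
        self.allocated = 0

    def next_slot(self):
        if len(self.slots) < self.capacity:
            # Pool not full yet: allocate a new slot (stand-in for av_frame_alloc()).
            self.allocated += 1
            slot = {"id": self.allocated, "data": None}
        else:
            # Pool full: recycle the oldest slot (stand-in for av_frame_unref()).
            slot = self.slots.popleft()
            slot["data"] = None
        self.slots.append(slot)
        return slot

pool = FramePool(capacity=2)
ids = [pool.next_slot()["id"] for _ in range(5)]
print(ids)  # → [1, 2, 1, 2, 1]: the two slots are recycled in order
```

The consequence for the question above: with a pool of size n, the n most recent frames stay valid, and the caller must be done with a frame before it comes around again.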


Additional calls, among others:


ffmpeg.avformat_alloc_context();
ffmpeg.avformat_open_input(pptr, p_filePath, null, null);
ffmpeg.av_hwdevice_ctx_create(&p_avCodecHandler!.AVCodecContextPointer->hw_device_ctx, strongDevice, null, null, 0);
ffmpeg.av_find_best_stream(/*args*/);
ffmpeg.avcodec_alloc_context3(p_avCodec);
ffmpeg.avcodec_parameters_to_context(/*args*/);
ffmpeg.avcodec_open2(/*args*/);



The function ffmpeg.av_hwframe_transfer_data(_buffer.AVFramePointer, _frame.AVFramePointer, flags: 0) returns -22, i.e. AVERROR(EINVAL) ("Invalid argument").

Please help me. :)
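One source of confusion in the code above is the send/receive contract itself: avcodec_send_packet() and avcodec_receive_frame() form a pump where either side may report EAGAIN, a decoder can buffer several packets before emitting anything, and at end of stream a null packet switches it into draining mode. A stub-based Python sketch of that protocol (StubDecoder is a stand-in for an AVCodecContext, not FFmpeg.AutoGen):

```python
EAGAIN, EOF = "EAGAIN", "EOF"

class StubDecoder:
    """Stand-in for an AVCodecContext: buffers packets, emits one frame per packet."""
    def __init__(self, delay=2):
        self.queue, self.delay, self.flushing = [], delay, False

    def send_packet(self, packet):
        if packet is None:          # null packet == enter draining mode
            self.flushing = True
            return 0
        self.queue.append(packet)
        return 0

    def receive_frame(self):
        # Until `delay` packets are buffered, the decoder has nothing ready.
        if not self.flushing and len(self.queue) <= self.delay:
            return EAGAIN, None
        if not self.queue:
            return EOF, None
        return 0, f"frame({self.queue.pop(0)})"

def decode_all(decoder, packets):
    frames = []
    for pkt in packets + [None]:    # trailing None == flush packet
        decoder.send_packet(pkt)
        while True:                 # drain everything the decoder has ready
            ret, frame = decoder.receive_frame()
            if ret in (EAGAIN, EOF):
                break
            frames.append(frame)
    return frames

print(decode_all(StubDecoder(), [1, 2, 3]))
# → ['frame(1)', 'frame(2)', 'frame(3)']
```

Note how the first frames only appear after several packets have been sent; a loop that assumes one frame per packet, or that forgets the final null-packet drain, silently loses frames.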


-
How to stream synchronized video and audio in real-time from an Android smartphone using HLS while preserving orientation metadata ?
6 March, by Jérôme LAROSE

Hello,
I am working on an Android application where I need to stream video
from one or two cameras on my smartphone, along with audio from the
microphone, in real-time via a link or web page accessible to users.
The stream should be live, allow rewinding (DVR functionality), and be
recorded simultaneously. A latency of 1 to 2 minutes is acceptable,
and the streaming is one-way. 

I have chosen HLS (HTTP Live Streaming) for its browser compatibility
and DVR support. However, I am encountering issues with audio-video
synchronization, managing camera orientation metadata, and format
conversions.



Here are my attempts:

- MP4 segmentation with MediaRecorder
 - I used MediaRecorder with setNextOutputFile to generate short MP4 segments, then ffmpeg-kit to convert them to fMP4 for HLS.
 - Expected: well-aligned segments for smooth HLS playback.
 - Result: timestamp issues causing jumps or interruptions in playback.

- MPEG2-TS via local socket
 - I configured MediaRecorder to produce an MPEG2-TS stream sent via a local socket to ffmpeg-kit.
 - Expected: stable streaming with preserved metadata.
 - Result: streaming works, but orientation metadata is lost, leading to incorrectly oriented video (e.g., rotated 90°).

- Orientation correction with ffmpeg
 - I tested -vf transpose=1 in ffmpeg to correct the orientation.
 - Expected: correctly oriented video without excessive latency.
 - Result: re-encoding takes too long for real-time streaming, causing unacceptable latency.

- MPEG2-TS to fMP4 conversion
 - I converted the MPEG2-TS stream to fMP4 with ffmpeg to preserve orientation.
 - Expected: perfect audio-video synchronization.
 - Result: slight desynchronization between audio and video, affecting the user experience.

I am looking for a solution to:

- Stream an HLS feed from Android with correctly timestamped segments.
- Preserve orientation metadata without heavy re-encoding.
- Ensure perfect audio-video synchronization.

UPDATE


package com.example.angegardien

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.graphics.SurfaceTexture
import android.hardware.camera2.*
import android.media.*
import android.os.*
import android.util.Log
import android.view.Surface
import android.view.TextureView
import android.view.WindowManager
import androidx.activity.ComponentActivity
import androidx.core.app.ActivityCompat
import com.arthenica.ffmpegkit.FFmpegKit
import fi.iki.elonen.NanoHTTPD
import kotlinx.coroutines.*
import java.io.File
import java.io.IOException
import java.net.ServerSocket
import android.view.OrientationEventListener

/**
 * MainActivity class:
 * - Manages camera operations using the Camera2 API.
 * - Records video using MediaRecorder.
 * - Pipes data to FFmpeg to generate HLS segments.
 * - Hosts a local HLS server using NanoHTTPD to serve the generated HLS content.
 */
class MainActivity : ComponentActivity() {

 // TextureView used for displaying the camera preview.
 private lateinit var textureView: TextureView
 // Camera device instance.
 private lateinit var cameraDevice: CameraDevice
 // Camera capture session for managing capture requests.
 private lateinit var cameraCaptureSession: CameraCaptureSession
 // CameraManager to access camera devices.
 private lateinit var cameraManager: CameraManager
 // Directory where HLS output files will be stored.
 private lateinit var hlsDir: File
 // Instance of the HLS server.
 private lateinit var hlsServer: HlsServer

 // Camera id ("1" corresponds to the rear camera).
 private val cameraId = "1"
 // Flag indicating whether recording is currently active.
 private var isRecording = false

 // MediaRecorder used for capturing audio and video.
 private lateinit var activeRecorder: MediaRecorder
 // Surface for the camera preview.
 private lateinit var previewSurface: Surface
 // Surface provided by MediaRecorder for recording.
 private lateinit var recorderSurface: Surface

 // Port for the FFmpeg local socket connection.
 private val ffmpegPort = 8080

 // Coroutine scope to manage asynchronous tasks.
 private val scope = CoroutineScope(Dispatchers.IO + SupervisorJob())

 // Variables to track current device rotation and listen for orientation changes.
 private var currentRotation = 0
 private lateinit var orientationListener: OrientationEventListener

 override fun onCreate(savedInstanceState: Bundle?) {
 super.onCreate(savedInstanceState)

 // Initialize the TextureView and set it as the content view.
 textureView = TextureView(this)
 setContentView(textureView)

 // Get the CameraManager system service.
 cameraManager = getSystemService(CAMERA_SERVICE) as CameraManager
 // Setup the directory for HLS output.
 setupHLSDirectory()

 // Start the local HLS server on port 8081.
 hlsServer = HlsServer(8081, hlsDir, this)
 try {
 hlsServer.start()
 Log.d("HLS_SERVER", "HLS Server started on port 8081")
 } catch (e: IOException) {
 Log.e("HLS_SERVER", "Error starting HLS Server", e)
 }

 // Initialize the current rotation.
 currentRotation = getDeviceRotation()

 // Add a listener to detect orientation changes.
 orientationListener = object : OrientationEventListener(this) {
 override fun onOrientationChanged(orientation: Int) {
 if (orientation == ORIENTATION_UNKNOWN) return // Skip unknown orientations.
 // Determine the new rotation angle.
 val newRotation = when {
 orientation >= 315 || orientation < 45 -> 0
 orientation >= 45 && orientation < 135 -> 90
 orientation >= 135 && orientation < 225 -> 180
 orientation >= 225 && orientation < 315 -> 270
 else -> 0
 }
 // If the rotation has changed and recording is active, update the rotation.
 if (newRotation != currentRotation && isRecording) {
 Log.d("ROTATION", "Orientation change detected: $newRotation")
 currentRotation = newRotation
 }
 }
 }
 orientationListener.enable()

 // Set up the TextureView listener to know when the surface is available.
 textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {
 override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
 // Open the camera when the texture becomes available.
 openCamera()
 }
 override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) {}
 override fun onSurfaceTextureDestroyed(surface: SurfaceTexture) = false
 override fun onSurfaceTextureUpdated(surface: SurfaceTexture) {}
 }
 }

 /**
 * Sets up the HLS directory in the public Downloads folder.
 * If the directory exists, it deletes it recursively and creates a new one.
 */
 private fun setupHLSDirectory() {
 val downloadsDir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
 hlsDir = File(downloadsDir, "HLS_Output")

 if (hlsDir.exists()) {
 hlsDir.deleteRecursively()
 }
 hlsDir.mkdirs()

 Log.d("HLS", "📂 HLS folder created: ${hlsDir.absolutePath}")
 }

 /**
 * Opens the camera after checking for necessary permissions.
 */
 private fun openCamera() {
 if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED ||
 ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
 // Request permissions if they are not already granted.
 ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO), 101)
 return
 }

 try {
 // Open the specified camera using its cameraId.
 cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
 override fun onOpened(camera: CameraDevice) {
 cameraDevice = camera
 // Start the recording session once the camera is opened.
 startNextRecording()
 }
 override fun onDisconnected(camera: CameraDevice) { camera.close() }
 override fun onError(camera: CameraDevice, error: Int) { camera.close() }
 }, null)
 } catch (e: CameraAccessException) {
 e.printStackTrace()
 }
 }

 /**
 * Starts a new recording session:
 * - Sets up the preview and recorder surfaces.
 * - Creates a pipe for MediaRecorder output.
 * - Creates a capture session for simultaneous preview and recording.
 */
 private fun startNextRecording() {
 // Get the SurfaceTexture from the TextureView and set its default buffer size.
 val texture = textureView.surfaceTexture!!
 texture.setDefaultBufferSize(1920, 1080)
 // Create the preview surface.
 previewSurface = Surface(texture)

 // Create and configure the MediaRecorder.
 activeRecorder = createMediaRecorder()

 // Create a pipe to route MediaRecorder data.
 val pipe = ParcelFileDescriptor.createPipe()
 val pfdWrite = pipe[1] // Write end used by MediaRecorder.
 val pfdRead = pipe[0] // Read end used by the local socket server.

 // Set MediaRecorder output to the file descriptor of the write end.
 activeRecorder.setOutputFile(pfdWrite.fileDescriptor)
 setupMediaRecorder(activeRecorder)
 // Obtain the recorder surface from MediaRecorder.
 recorderSurface = activeRecorder.surface

 // Create a capture request using the RECORD template.
 val captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD)
 captureRequestBuilder.addTarget(previewSurface)
 captureRequestBuilder.addTarget(recorderSurface)

 // Create a capture session including both preview and recorder surfaces.
 cameraDevice.createCaptureSession(
 listOf(previewSurface, recorderSurface),
 object : CameraCaptureSession.StateCallback() {
 override fun onConfigured(session: CameraCaptureSession) {
 cameraCaptureSession = session
 captureRequestBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
 // Start a continuous capture request.
 cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null)

 // Launch a coroutine to start FFmpeg and MediaRecorder with synchronization.
 scope.launch {
 startFFmpeg()
 delay(500) // Wait for FFmpeg to be ready.
 activeRecorder.start()
 isRecording = true
 Log.d("HLS", "🎥 Recording started...")
 }

 // Launch a coroutine to run the local socket server to forward data.
 scope.launch {
 startLocalSocketServer(pfdRead)
 }
 }
 override fun onConfigureFailed(session: CameraCaptureSession) {
 Log.e("Camera2", "❌ Configuration failed")
 }
 },
 null
 )
 }

 /**
 * Coroutine to start a local socket server.
 * It reads from the MediaRecorder pipe and sends the data to FFmpeg.
 */
 private suspend fun startLocalSocketServer(pfdRead: ParcelFileDescriptor) {
 withContext(Dispatchers.IO) {
 val serverSocket = ServerSocket(ffmpegPort)
 Log.d("HLS", "Local socket server started on port $ffmpegPort")

 // Accept connection from FFmpeg.
 val socket = serverSocket.accept()
 Log.d("HLS", "Connection accepted from FFmpeg")

 // Read data from the pipe and forward it through the socket.
 val inputStream = ParcelFileDescriptor.AutoCloseInputStream(pfdRead)
 val outputStream = socket.getOutputStream()
 val buffer = ByteArray(8192)
 var bytesRead: Int
 while (inputStream.read(buffer).also { bytesRead = it } != -1) {
 outputStream.write(buffer, 0, bytesRead)
 }
 outputStream.close()
 inputStream.close()
 socket.close()
 serverSocket.close()
 }
 }

 /**
 * Coroutine to start FFmpeg using a local TCP input.
 * Applies a video rotation filter based on device orientation and generates HLS segments.
 */
 private suspend fun startFFmpeg() {
 withContext(Dispatchers.IO) {
 // Retrieve the appropriate transpose filter based on current rotation.
 val transposeFilter = getTransposeFilter(currentRotation)

 // FFmpeg command to read from the TCP socket and generate an HLS stream.
 // Two alternative commands are commented below.
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f dash -seg_duration 10 -hls_playlist 1 ${hlsDir.absolutePath}/manifest.mpd"
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f hls -hls_time 5 -hls_segment_type fmp4 -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_fmp4_init_filename init.mp4 -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.m4s ${hlsDir.absolutePath}/playlist.m3u8"
 val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -vf $transposeFilter -c:v libx264 -preset ultrafast -crf 23 -c:a copy -movflags +faststart -f hls -hls_time 0.1 -hls_segment_type mpegts -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.ts ${hlsDir.absolutePath}/playlist.m3u8"

 FFmpegKit.executeAsync(ffmpegCommand) { session ->
 if (session.returnCode.isValueSuccess) {
 Log.d("HLS", "✅ HLS generated successfully")
 } else {
 Log.e("FFmpeg", "❌ Error generating HLS: ${session.allLogsAsString}")
 }
 }
 }
 }

 /**
 * Gets the current device rotation using the WindowManager.
 */
 private fun getDeviceRotation(): Int {
 val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager
 return when (windowManager.defaultDisplay.rotation) {
 Surface.ROTATION_0 -> 0
 Surface.ROTATION_90 -> 90
 Surface.ROTATION_180 -> 180
 Surface.ROTATION_270 -> 270
 else -> 0
 }
 }

 /**
 * Returns the FFmpeg transpose filter based on the rotation angle.
 * Used to rotate the video stream accordingly.
 */
 private fun getTransposeFilter(rotation: Int): String {
 return when (rotation) {
 90 -> "transpose=1" // 90° clockwise
 180 -> "transpose=2,transpose=2" // 180° rotation
 270 -> "transpose=2" // 90° counter-clockwise
 else -> "transpose=0" // No rotation
 }
 }

 /**
 * Creates and configures a MediaRecorder instance.
 * Sets up audio and video sources, formats, encoders, and bitrates.
 */
 private fun createMediaRecorder(): MediaRecorder {
 return MediaRecorder().apply {
 setAudioSource(MediaRecorder.AudioSource.MIC)
 setVideoSource(MediaRecorder.VideoSource.SURFACE)
 setOutputFormat(MediaRecorder.OutputFormat.MPEG_2_TS)
 setVideoEncodingBitRate(5000000)
 setVideoFrameRate(24)
 setVideoSize(1080, 720)
 setVideoEncoder(MediaRecorder.VideoEncoder.H264)
 setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
 setAudioSamplingRate(16000)
 setAudioEncodingBitRate(96000) // 96 kbps
 }
 }

 /**
 * Prepares the MediaRecorder and logs the outcome.
 */
 private fun setupMediaRecorder(recorder: MediaRecorder) {
 try {
 recorder.prepare()
 Log.d("HLS", "✅ MediaRecorder prepared")
 } catch (e: IOException) {
 Log.e("HLS", "❌ Error preparing MediaRecorder", e)
 }
 }

 /**
 * Custom HLS server class extending NanoHTTPD.
 * Serves HLS segments and playlists from the designated HLS directory.
 */
 private inner class HlsServer(port: Int, private val hlsDir: File, private val context: Context) : NanoHTTPD(port) {
 override fun serve(session: IHTTPSession): Response {
 val uri = session.uri.trimStart('/')

 // Intercept the request for `init.mp4` and serve it from assets.
 /*
 if (uri == "init.mp4") {
 Log.d("HLS Server", "📡 Intercepting init.mp4, sending file from assets...")
 return try {
 val assetManager = context.assets
 val inputStream = assetManager.open("init.mp4")
 newFixedLengthResponse(Response.Status.OK, "video/mp4", inputStream, inputStream.available().toLong())
 } catch (e: Exception) {
 Log.e("HLS Server", "❌ Error reading init.mp4 from assets: ${e.message}")
 newFixedLengthResponse(Response.Status.INTERNAL_ERROR, MIME_PLAINTEXT, "Server error")
 }
 }
 */

 // Serve all other HLS files normally from the hlsDir.
 val file = File(hlsDir, uri)
 return if (file.exists()) {
 newFixedLengthResponse(Response.Status.OK, getMimeTypeForFile(uri), file.inputStream(), file.length())
 } else {
 newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, "File not found")
 }
 }
 }

 /**
 * Clean up resources when the activity is destroyed.
 * Stops recording, releases the camera, cancels coroutines, and stops the HLS server.
 */
 override fun onDestroy() {
 super.onDestroy()
 if (isRecording) {
 activeRecorder.stop()
 activeRecorder.release()
 }
 cameraDevice.close()
 scope.cancel()
 hlsServer.stop()
 orientationListener.disable()
 Log.d("HLS", "🛑 Activity destroyed")
 }
}
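The orientation handling in the activity above has two separable pieces: snapping the raw sensor angle to one of four rotations, and mapping a rotation to an ffmpeg transpose chain. A small Python sketch of both mappings, mirroring onOrientationChanged and getTransposeFilter from the Kotlin code:

```python
def bucket_orientation(angle):
    """Snap a raw OrientationEventListener angle (0-359) to 0/90/180/270."""
    if angle >= 315 or angle < 45:
        return 0
    if angle < 135:
        return 90
    if angle < 225:
        return 180
    return 270

def transpose_filter(rotation):
    """Map a device rotation to the ffmpeg -vf transpose expression."""
    return {
        90: "transpose=1",               # 90° clockwise
        180: "transpose=2,transpose=2",  # two 90° CCW turns = 180°
        270: "transpose=2",              # 90° counter-clockwise
    }.get(rotation, "transpose=0")

print(bucket_orientation(50), transpose_filter(bucket_orientation(50)))
# → 90 transpose=1
```

One caveat worth noting: the filter string is baked into the FFmpeg command at startup, so a rotation that changes mid-recording updates currentRotation but not the already-running filter graph.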



I have three examples of ffmpeg commands:

- One command segments into DASH, but the camera does not have the correct rotation.
- One command segments into HLS without re-encoding, with 5-second segments; it's fast but does not have the correct rotation.
- One command segments into HLS with re-encoding, which applies a rotation. It's too slow for 5-second segments, so a 1-second segment was chosen.

Note:

- In the second command (HLS without re-encoding, 5-second segments), the output is fMP4. To achieve the correct rotation, I serve a preconfigured init.mp4 file when it is requested over HTTP (see the commented-out block in HlsServer).
- In the third command (HLS with re-encoding, which applies the rotation), the output is TS.
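A possible way out of the rotation-vs-latency dilemma, untested here and offered only as a sketch: keep `-c copy` and declare the rotation as container metadata instead of rotating pixels. MPEG-TS has no standard rotation field, which would explain why the TS pipeline loses orientation, but MP4/fMP4 carries a display matrix that players apply at display time. Flag support varies by ffmpeg build (`-display_rotation` is an input option added in ffmpeg 6.0; older builds used the legacy `rotate` tag), and the sign convention is easy to get backwards, so verify on a real device:

```shell
# Rotate via container metadata, not pixels: stream copy stays fast.
# -display_rotation takes degrees (counter-clockwise positive per the ffmpeg
# docs); check the sign against your device's actual orientation.
ffmpeg -display_rotation -90 -i tcp://localhost:8080 \
  -c copy -f hls -hls_time 5 -hls_segment_type fmp4 \
  -hls_fmp4_init_filename init.mp4 playlist.m3u8

# Older ffmpeg builds: write the legacy rotate tag on the video stream instead.
# ffmpeg -i input.ts -c copy -metadata:s:v:0 rotate=90 output.mp4
```

The trade-off is that metadata-based rotation relies on the player honoring the display matrix, which most browsers and mobile players do for fMP4 HLS, but older or embedded players may ignore it.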