Advanced search

Media (91)

Other articles (70)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others...), audio (MP3, Ogg, Wav and others...), video (Avi, MP4, Ogv, mpg, mov, wmv and others...), or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further modifications will also be required (...)

  • The farm's recurring Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Coupled with a system Cron on the farm's central site, this is enough to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
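    SPIP traditionally piggybacks its background tasks on page visits, so a plain HTTP request is enough to trigger them. As an illustration only (the URL is hypothetical, not MediaSPIP's actual endpoint), the system Cron entry mentioned above could be as simple as:

    # hypothetical crontab line on the central server: one visit per minute
    # so that gestion_mutu_super_cron gets a chance to run
    * * * * * wget -q -O /dev/null "https://farm.example.org/"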

On other sites (4717)

  • How to Record Video of a Dynamic Div Containing Multiple Media Elements in React Konva?

    14 September 2024, by Humayoun Saeed

    I'm working on a React application where I need to record a video of a specific div with the class name "layout". This div contains multiple media elements (such as images and videos) that are rendered dynamically inside its divisions. I've tried several approaches, including the MediaRecorder API, canvas-based recording with html2canvas, RecordRTC, and even FFmpeg, but none of them captures the entire div together with its dynamic content.

    


    What would be the best approach to achieve this? How can I record a video of this dynamically rendered div, including all its media elements, while ensuring a smooth capture of the transitions?

    


    What I've tried (a sketch of one possible canvas-based direction follows this list):
    MediaRecorder API: didn't capture the entire div and its elements effectively.
    html2canvas: captures snapshots, but struggles with smooth transitions between media elements.
    RecordRTC HTML element recording: attempts to capture the canvas, but the output video is 0 bytes.
    CanvasRecorder, FFmpeg, and various other libraries also didn't provide the desired result.
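    Since react-konva ultimately draws into a <canvas>, one direction that is known to work in browsers is to composite every frame into a single canvas and record that canvas with canvas.captureStream() plus MediaRecorder. The sketch below is only a minimal illustration of that technique, under the assumption that such a composited canvas exists; the frame rate, MIME type, and the recordCanvas/paintVideo names are placeholders, not part of the component below:

    // Record an already-composited canvas with captureStream + MediaRecorder.
    function recordCanvas(canvas, durationMs) {
      const stream = canvas.captureStream(30); // 30 fps video track from the canvas
      const chunks = [];
      const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
      recorder.ondataavailable = (e) => e.data.size > 0 && chunks.push(e.data);
      const done = new Promise((resolve) => {
        recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
      });
      recorder.start();
      setTimeout(() => recorder.stop(), durationMs);
      return done; // resolves with a Blob; use URL.createObjectURL(blob) to download
    }

    // HTML <video> elements are NOT captured automatically: each frame has to be
    // painted into the canvas, e.g. with a requestAnimationFrame loop.
    function paintVideo(ctx, videoEl, x, y, w, h) {
      const draw = () => {
        ctx.drawImage(videoEl, x, y, w, h);
        requestAnimationFrame(draw);
      };
      requestAnimationFrame(draw);
    }

    With something like that in place, handleDownload could await recordCanvas(...) for totalDuration.current milliseconds and feed the resulting Blob to a download link.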

    


    import React, { useEffect, useState, useRef } from "react";

    const Preview = ({ layout, onClose }) => {
      const [currentContent, setCurrentContent] = useState([]);
      const totalDuration = useRef(0);
      const videoRefs = useRef([]); // Store refs to each video element
      const [totalTime, setTotalTime] = useState(0); // Total time in seconds
      const [elapsedTime, setElapsedTime] = useState(0); // Track elapsed time in seconds

      // video recording variable and state declaration
      // video recorder end
      // for video record useEffect
      // Function to capture the renderDivision content

      const handleDownload = async () => {
        console.log("video download function in developing mode.");
      };

      // end video record useEffect

      // to apply motion and switch in media of division: start
      useEffect(() => {
        if (layout && layout.divisions) {
          const content = layout.divisions.map((division) => {
            let divisionDuration = 0;

            division.imageSrcs.forEach((src, index) => {
              const mediaDuration = division.durations[index]
                ? division.durations[index] * 1000 // Convert to milliseconds
                : 5000; // Fallback to 5 seconds if duration is missing
              divisionDuration += mediaDuration;
            });

            return {
              division,
              contentIndex: 0,
              divisionDuration,
            };
          });

          // Find the maximum duration
          const maxDuration = Math.max(...content.map((c) => c.divisionDuration));

          // Filter divisions that have the max duration
          const maxDurationDivisions = content.filter(
            (c) => c.divisionDuration === maxDuration
          );

          // Select the first one if there are multiple with the same max duration
          const selectedMaxDurationDivision = maxDurationDivisions[0];

          totalDuration.current = selectedMaxDurationDivision.divisionDuration; // Total duration in milliseconds

          setTotalTime(Math.floor(totalDuration.current / 1000)); // Convert milliseconds to seconds

          setCurrentContent(content);
        }
      }, [layout]);

      useEffect(() => {
        if (currentContent.length > 0) {
          const timers = currentContent.map(({ division, contentIndex }, i) => {
            const duration = division.durations[contentIndex]
              ? division.durations[contentIndex] // Duration is already in ms
              : 5000; // Default to 5000ms if no duration is defined

            const mediaElement = videoRefs.current[i];
            if (mediaElement && mediaElement.pause) {
              mediaElement.pause();
            }

            // Set up a timeout for each division to move to the next media after duration
            const timeoutId = setTimeout(() => {
              // Update content for each division independently
              updateContent(i, division, contentIndex, duration);

              // Ensure proper cleanup
              if (contentIndex + 1 >= division.imageSrcs.length) {
                clearTimeout(timeoutId); // Clear timeout to stop looping
              }
            }, duration);

            return timeoutId;
          });

          // Return cleanup function to clear all timeouts on unmount
          return () => timers.forEach((timer) => clearTimeout(timer));
        }
      }, [currentContent]);
      // to apply motion and switch in media of division: end

      // Handle video updates when the duration is changed or a new video starts
      const updateContent = (i, division, contentIndex, duration) => {
        const newContent = [...currentContent];

        // Check if we are on the last media item
        if (contentIndex + 1 < division.imageSrcs.length) {
          // Move to next media if not the last one
          newContent[i].contentIndex = contentIndex + 1;
        } else {
          // If this is the last media item, pause here
          newContent[i].contentIndex = contentIndex; // Keep it at the last item
          setCurrentContent(newContent);

          // Handle video pause if the last media is a video
          const mediaElement = videoRefs.current[i];
          if (mediaElement && mediaElement.tagName === "VIDEO") {
            mediaElement.pause();
            mediaElement.currentTime = mediaElement.duration; // Pause at the end of the video
          }
          return; // Exit the function as we don't want to loop anymore
        }

        // Update state to trigger rendering of the next media
        setCurrentContent(newContent);

        // Handle video playback for the next media item
        const mediaElement = videoRefs.current[i];
        if (mediaElement) {
          mediaElement.pause();
          mediaElement.currentTime = 0;
          mediaElement
            .play()
            .catch((error) => console.error("Error playing video:", error));
        }
      };

      // NOTE: several wrapper tags in this component were stripped by the site's
      // HTML filter; the <div>/<video>/<img> markup below is reconstructed from
      // the surviving fragments.
      const renderDivision = (division, contentIndex, index) => {
        if (!division || !division.imageSrcs || division.imageSrcs.length === 0) {
          return (
            <div>
              <p>No media available</p>
            </div>
          );
        }

        const mediaSrc = division.imageSrcs[contentIndex];
        if (!mediaSrc) {
          return (
            <div>
              <p>No media available</p>
            </div>
          );
        }

        if (mediaSrc.endsWith(".mp4")) {
          return (
            <video
              ref={(el) => (videoRefs.current[index] = el)}
              src={mediaSrc}
              autoPlay
              controls={false}
              style={{
                width: "100%",
                height: "100%",
                objectFit: "cover",
                pointerEvents: "none",
              }}
              onLoadedData={() => {
                // Ensure video is properly loaded
                const mediaElement = videoRefs.current[index];
                if (mediaElement && mediaElement.readyState >= 3) {
                  mediaElement.play().catch((error) => {
                    console.error("Error attempting to play the video:", error);
                  });
                }
              }}
            />
          );
        } else {
          // The original <img> attributes were lost in the scrape
          return <img src={mediaSrc} alt="" />;
        }
      };

      // progress bar code: start
      useEffect(() => {
        if (totalDuration.current > 0) {
          // Reset elapsed time at the start
          setElapsedTime(0);

          const interval = setInterval(() => {
            setElapsedTime((prevTime) => {
              // Increment the elapsed time by 1 second until totalTime is reached
              if (prevTime < totalTime) {
                return prevTime + 1;
              } else {
                clearInterval(interval); // Clear the interval when totalTime is reached
                return prevTime;
              }
            });
          }, 1000); // Update every second

          // Clean up the interval on component unmount
          return () => clearInterval(interval);
        }
      }, [totalTime]);
      // progress bar code: end

      return (
        // The styled wrapper <div>s and the progress bar markup were stripped by
        // the site's HTML filter; only fragments of their inline styles survived.
        <div>
          <button onClick={onClose}>Close</button>
          <h2>Preview Layout: {layout.name}</h2>
          <div className="layout">
            {currentContent.map(({ division, contentIndex }, i) => (
              <div key={i}>{renderDivision(division, contentIndex, i)}</div>
            ))}
            {/* canvas code for video start */}
            {/* canvas code for video end */}
            {/* Progress Bar and Time: surviving style fragments include
                width: `${(elapsedTime / totalTime) * 100}%`,
                backgroundColor: "#28a745", // Green color for progress bar
                transition: "width 0.5s linear" // Smooth transition */}
            {/* Time display (commented out in the original):
                {elapsedTime} / {totalTime}s */}
          </div>

          {/* Download button */}
          <button
            onClick={handleDownload}
            onMouseOver={(e) => (e.target.style.backgroundColor = "#218838")}
            onMouseOut={(e) => (e.target.style.backgroundColor = "#28a745")}
          >
            Download Video
          </button>
          {/* {recording && <p>Recording in progress...</p>} */}
        </div>
      );
    };

    export default Preview;


    I tried several methods to record the content of the div with the class "layout", which contains dynamic media elements such as images and videos. The approaches I attempted include:


    MediaRecorder API: I expected this API to capture the entire div and its contents, but it didn't handle the rendering of all dynamic media elements properly.


    html2canvas: I used this to capture the layout as a canvas and then attempted to convert it into a video stream. However, it could not capture smooth transitions between media elements, leading to choppy or incomplete video output.


    RecordRTC: I integrated RecordRTC to capture the canvas stream of the div. Despite setting up the recorder, the resulting video file either was 0 bytes or captured only parts of the content inconsistently.


    FFmpeg and other libraries: I explored these tools hoping they would provide a seamless capture of the dynamic content, but they also failed to capture the full set of media elements, including videos playing within the layout.


    In all cases, I expected to get a complete video recording of the div, including all media transitions, but the results were incomplete or not functional.


    Now, I’m seeking an approach or best practice to record the entire div with its dynamic content and media playback.
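    If compositing everything into one canvas proves impractical, the other standard suggestion is screen/tab capture via navigator.mediaDevices.getDisplayMedia, which records exactly what the browser renders (playing videos included), at the cost of a user permission prompt and of capturing the whole tab rather than just the "layout" div. A minimal sketch; the recordTab name and fixed-duration handling are illustrative only:

    // Record the current tab for a fixed duration and return a WebM Blob.
    async function recordTab(durationMs) {
      const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
      const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
      const chunks = [];
      recorder.ondataavailable = (e) => chunks.push(e.data);
      const stopped = new Promise((resolve) => (recorder.onstop = resolve));
      recorder.start();
      await new Promise((r) => setTimeout(r, durationMs));
      recorder.stop();
      await stopped;
      stream.getTracks().forEach((t) => t.stop()); // release the screen capture
      return new Blob(chunks, { type: "video/webm" });
    }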


  • Merged Video Contains Inverted Clips After First Video Ends

    3 February, by Nikunj Agrawal

    I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appear inverted or rotated in the final output.


    Issue Details:


    1. The first video appears normal.
    2. The videos can be recorded using both front and back cameras.
    3. The second (and later) videos are flipped or rotated upside down.
    4. This happens after merging using ffmpeg_kit_flutter.

    Question:
    How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging using ffmpeg_kit_flutter?
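
    For context, the concat demuxer with -c copy only works when all inputs share identical codec parameters, and it does not reconcile the rotation metadata that front and back cameras typically write differently. The usual workaround is to re-encode each clip first (re-encoding bakes the rotation into the pixels, since ffmpeg applies rotation metadata when decoding) and only then concatenate. A sketch with placeholder resolution and frame-rate values, not verified against this app:

    # 1) normalize each clip to the same size, SAR, frame rate, and codecs
    ffmpeg -i video1.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" -r 30 -c:v libx264 -c:a aac -ar 44100 norm1.mp4

    # 2) the existing concat list (pointing at the normalized files) can then be stream-copied
    ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4

    In the Dart code below, that amounts to running one extra FFmpegKit.execute() per recorded clip before building list.txt.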


    Any help would be appreciated! 🚀


    Code:


    import 'dart:io';
    import 'dart:math';

    import 'package:camera/camera.dart';
    import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
    import 'package:ffmpeg_kit_flutter/return_code.dart';
    import 'package:flutter/material.dart';
    import 'package:path_provider/path_provider.dart';
    import 'package:permission_handler/permission_handler.dart';
    import 'package:record/record.dart';
    import 'package:videotest/video_player.dart';

    class MergeVideoRecording extends StatefulWidget {
      const MergeVideoRecording({super.key});

      @override
      State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
    }

    class _MergeVideoRecordingState extends State<MergeVideoRecording> {
      CameraController? _cameraController;
      final AudioRecorder _audioRecorder = AudioRecorder();

      bool _isRecording = false;
      String? _videoPath;
      String? _audioPath;
      List<CameraDescription> _cameras = [];
      int _currentCameraIndex = 0;
      final List<String> _recordedVideos = [];

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          body: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              _cameraController != null && _cameraController!.value.isInitialized
                  ? SizedBox(
                      width: MediaQuery.of(context).size.width * 0.4,
                      height: MediaQuery.of(context).size.height * 0.3,
                      child: Stack(
                        children: [
                          ClipRRect(
                            borderRadius: BorderRadius.circular(16),
                            child: SizedBox(
                              width: MediaQuery.of(context).size.width * 0.4,
                              height: MediaQuery.of(context).size.height * 0.3,
                              child: Transform(
                                alignment: Alignment.center,
                                transform:
                                    _cameras[_currentCameraIndex].lensDirection ==
                                            CameraLensDirection.front
                                        ? Matrix4.rotationY(pi)
                                        : Matrix4.identity(),
                                child: CameraPreview(_cameraController!),
                              ),
                            ),
                          ),
                          Align(
                            alignment: Alignment.topRight,
                            child: InkWell(
                              onTap: _switchCamera,
                              child: const Padding(
                                padding: EdgeInsets.all(8.0),
                                child: CircleAvatar(
                                  radius: 18,
                                  backgroundColor: Colors.white,
                                  child: Icon(
                                    Icons.flip_camera_android,
                                    color: Colors.black,
                                  ),
                                ),
                              ),
                            ),
                          ),
                        ],
                      ),
                    )
                  : const CircularProgressIndicator(),
              const SizedBox(height: 16),
              Row(
                mainAxisAlignment: MainAxisAlignment.center,
                children: [
                  FloatingActionButton(
                    heroTag: 'record_button',
                    onPressed: _toggleRecording,
                    child: Icon(
                      _isRecording ? Icons.stop : Icons.video_camera_back,
                    ),
                  ),
                  const SizedBox(
                    width: 50,
                  ),
                  FloatingActionButton(
                    heroTag: 'merge_button',
                    onPressed: _mergeVideos,
                    child: const Icon(
                      Icons.merge,
                    ),
                  ),
                ],
              ),
              if (!_isRecording)
                ListView.builder(
                  shrinkWrap: true,
                  itemCount: _recordedVideos.length,
                  itemBuilder: (context, index) => InkWell(
                    onTap: () {
                      Navigator.push(
                        context,
                        MaterialPageRoute(
                          builder: (context) => VideoPlayerScreen(
                            videoPath: _recordedVideos[index],
                          ),
                        ),
                      );
                    },
                    child: ListTile(
                      title: Text('Video ${index + 1}'),
                      subtitle: Text('Path ${_recordedVideos[index]}'),
                      trailing: const Icon(Icons.play_arrow),
                    ),
                  ),
                ),
            ],
          ),
        );
      }

      @override
      void dispose() {
        _cameraController?.dispose();
        _audioRecorder.dispose();
        super.dispose();
      }

      @override
      void initState() {
        super.initState();
        _initializeDevices();
      }

      Future<void> _initializeCameraController(CameraDescription camera) async {
        _cameraController = CameraController(
          camera,
          ResolutionPreset.high,
          enableAudio: true,
          imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
        );

        await _cameraController!.initialize();
        await _cameraController!.setExposureMode(ExposureMode.auto);
        await _cameraController!.setFocusMode(FocusMode.auto);
        setState(() {});
      }

      Future<void> _initializeDevices() async {
        final cameraStatus = await Permission.camera.request();
        final micStatus = await Permission.microphone.request();

        if (!cameraStatus.isGranted || !micStatus.isGranted) {
          _showError('Camera and microphone permissions required');
          return;
        }

        _cameras = await availableCameras();
        if (_cameras.isNotEmpty) {
          final frontCameraIndex = _cameras.indexWhere(
              (camera) => camera.lensDirection == CameraLensDirection.front);
          _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
          await _initializeCameraController(_cameras[_currentCameraIndex]);
        }
      }

      // Merge video
      Future<void> _mergeVideos() async {
        if (_recordedVideos.isEmpty) {
          _showError('No videos to merge');
          return;
        }

        try {
          // Debug logging
          print('Starting merge process');
          print('Number of videos to merge: ${_recordedVideos.length}');
          for (var i = 0; i < _recordedVideos.length; i++) {
            final file = File(_recordedVideos[i]);
            final exists = await file.exists();
            final size = exists ? await file.length() : 0;
            print('Video $i: ${_recordedVideos[i]}');
            print('Exists: $exists, Size: $size bytes');
          }

          final Directory appDir = await getApplicationDocumentsDirectory();
          final String outputPath =
              '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
          final String listFilePath = '${appDir.path}/list.txt';

          print('Output path: $outputPath');
          print('List file path: $listFilePath');

          // Create and verify list file
          final listFile = File(listFilePath);
          final fileContent = _recordedVideos
              .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
              .join('\n');
          await listFile.writeAsString(fileContent);

          print('List file content:');
          print(await listFile.readAsString());

          // Simpler FFmpeg command for testing
          final command = '''
          -f concat
          -safe 0
          -i "$listFilePath"
          -c copy
          -y
          "$outputPath"
        '''
              .trim()
              .replaceAll('\n', ' ');

          print('Executing FFmpeg command: $command');

          final session = await FFmpegKit.execute(command);
          final returnCode = await session.getReturnCode();
          final logs = await session.getAllLogsAsString();
          final failStackTrace = await session.getFailStackTrace();

          print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
          print('FFmpeg logs: $logs');
          if (failStackTrace != null) {
            print('FFmpeg fail stack trace: $failStackTrace');
          }

          if (ReturnCode.isSuccess(returnCode)) {
            final outputFile = File(outputPath);
            final outputExists = await outputFile.exists();
            final outputSize = outputExists ? await outputFile.length() : 0;

            print('Output file exists: $outputExists');
            print('Output file size: $outputSize bytes');

            if (outputExists && outputSize > 0) {
              setState(() => _recordedVideos.add(outputPath));
              _showSuccess('Videos merged successfully');
            } else {
              _showError('Merged file is empty or not created');
            }
          } else {
            _showError('Failed to merge videos. Check logs for details.');
          }

          // Clean up
          try {
            await listFile.delete();
            print('List file cleaned up successfully');
          } catch (e) {
            print('Failed to delete list file: $e');
          }
        } catch (e, s) {
          print('Error during merge: $e');
          print('Stack trace: $s');
          _showError('Error merging videos: ${e.toString()}');
        }
      }

      void _showError(String message) {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(content: Text(message), backgroundColor: Colors.red),
        );
      }

      void _showSuccess(String message) {
        ScaffoldMessenger.of(context).showSnackBar(
          SnackBar(content: Text(message), backgroundColor: Colors.green),
        );
      }

      Future<void> _startAudioRecording() async {
        try {
          final Directory tempDir = await getTemporaryDirectory();
          final audioPath = '${tempDir.path}/recording.wav';
          await _audioRecorder.start(const RecordConfig(), path: audioPath);
          setState(() => _isRecording = true);
        } catch (e) {
          _showError('Recording start error: $e');
        }
      }

      Future<void> _startVideoRecording() async {
        try {
          await _cameraController!.startVideoRecording();
          setState(() => _isRecording = true);
        } catch (e) {
          _showError('Recording start error: $e');
        }
      }

      Future<void> _stopAndSaveAudioRecording() async {
        _audioPath = await _audioRecorder.stop();
        if (_audioPath != null) {
          final Directory appDir = await getApplicationDocumentsDirectory();
          final timestamp = DateTime.now().millisecondsSinceEpoch;
          final String audioFileName = 'audio_$timestamp.wav';
          await File(_audioPath!).copy('${appDir.path}/$audioFileName');
          _showSuccess('Saved: $audioFileName');
        }
      }

      Future<void> _stopAndSaveVideoRecording() async {
        try {
          final video = await _cameraController!.stopVideoRecording();
          _videoPath = video.path;

          if (_videoPath != null) {
            final Directory appDir = await getApplicationDocumentsDirectory();
            final timestamp = DateTime.now().millisecondsSinceEpoch;
            final String videoFileName = 'video_$timestamp.mp4';
            final savedVideoPath = '${appDir.path}/$videoFileName';
            await File(_videoPath!).copy(savedVideoPath);

            setState(() {
              _recordedVideos.add(savedVideoPath);
              _isRecording = false;
            });

            _showSuccess('Saved: $videoFileName');
          }
        } catch (e) {
          _showError('Recording stop error: $e');
        }
      }

      Future<void> _switchCamera() async {
        if (_cameras.length <= 1) return;

        if (_isRecording) {
          await _stopAndSaveVideoRecording();
          _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
          await _initializeCameraController(_cameras[_currentCameraIndex]);
          await _startVideoRecording();
        } else {
          _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
          await _initializeCameraController(_cameras[_currentCameraIndex]);
        }
      }

      Future<void> _toggleRecording() async {
        if (_cameraController == null) return;

        if (_isRecording) {
          await _stopAndSaveVideoRecording();
          await _stopAndSaveAudioRecording();
        } else {
          _startVideoRecording();
          _startAudioRecording();
          setState(() => _recordedVideos.clear());
        }
      }
    }


  • Squeezed image when images are larger than 1024x768 in FFmpeg with javacv

    14 April 2016, by Saty

    I am using the code below to stream RTMP to my Adobe FMS server, and it works. However, the image appears squeezed whenever the camera resolution is above 1024x768.
    The issue showed up when we tested on a tablet whose camera resolution is 1200x800: the recorder automatically falls back to 1024x768, which squeezes both the preview and the recorded video. One more thing: the recording format does not support MP4.

    Can anyone explain why it does not work in this case, and whether that format can be used?
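
    Two things are usually worth checking in this situation. First, the legacy android.hardware.Camera API only delivers preview sizes returned by getSupportedPreviewSizes(); an unsupported request silently falls back, so the preview size should be negotiated and the recorder's frame size kept in sync with it. Second, RTMP needs a streamable container, which is why setFormat("flv") works for the live URL while MP4 does not (MP4 is only an option when writing to a local file). A hedged, untested sketch of the size negotiation, using the same legacy API as the code below:

    // Pick the largest supported preview size that fits the encoder target,
    // then keep imageWidth/imageHeight (used by FFmpegFrameRecorder) in sync.
    Camera.Parameters params = camera.getParameters();
    Camera.Size best = null;
    for (Camera.Size s : params.getSupportedPreviewSizes()) {
        boolean fits = s.width <= 1024 && s.height <= 768; // encoder target
        if (fits && (best == null || s.width * s.height > best.width * best.height)) {
            best = s;
        }
    }
    if (best != null) {
        params.setPreviewSize(best.width, best.height);
        camera.setParameters(params);
        imageWidth = best.width;   // recorder frame size must match the preview,
        imageHeight = best.height; // or the encoded video will look squeezed
    }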

    public class MainActivity extends Activity implements OnClickListener {

    private final static String LOG_TAG = "MainActivity";

    private PowerManager.WakeLock mWakeLock;

    private String ffmpeg_link = "rtmp://username:password@xxx.xxx.xxx.xxx:1935/live/test.flv";

    //private String ffmpeg_link = "/mnt/sdcard/new_stream.flv" ;

    private volatile FFmpegFrameRecorder recorder;
    boolean recording = false;
    long startTime = 0;

    private int sampleAudioRateInHz = 44100;
    private int imageWidth = 320;
    private int imageHeight = 240;
    private int frameRate = 30;

    private Thread audioThread;
    volatile boolean runAudioThread = true;
    private AudioRecord audioRecord;
    private AudioRecordRunnable audioRecordRunnable;

    private CameraView cameraView;
    private IplImage yuvIplimage = null;

    private Button recordButton;
    private LinearLayout mainLayout;

    @Override
    public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
    setContentView(R.layout.activity_main);

    initLayout();
    initRecorder();
    }

    @Override
    protected void onResume() {
     super.onResume();

    if (mWakeLock == null) {
       PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
       mWakeLock = pm.newWakeLock(PowerManager.SCREEN_BRIGHT_WAKE_LOCK, LOG_TAG);
       mWakeLock.acquire();
    }
    }

    @Override
    protected void onPause() {
    super.onPause();

    if (mWakeLock != null) {
       mWakeLock.release();
       mWakeLock = null;
    }
    }

    @Override
    protected void onDestroy() {
    super.onDestroy();

    recording = false;
    }


    private void initLayout() {

    mainLayout = (LinearLayout) this.findViewById(R.id.record_layout);

    recordButton = (Button) findViewById(R.id.recorder_control);
    recordButton.setText("Start");
    recordButton.setOnClickListener(this);

    cameraView = new CameraView(this);

    LinearLayout.LayoutParams layoutParam = new LinearLayout.LayoutParams(imageWidth, imageHeight);        
    mainLayout.addView(cameraView, layoutParam);
    Log.v(LOG_TAG, "added cameraView to mainLayout");
    }

    private void initRecorder() {
    Log.w(LOG_TAG,"initRecorder");

    if (yuvIplimage == null) {
       // Recreated after frame size is set in surface change method
       yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
       //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);

       Log.v(LOG_TAG, "IplImage.create");
     }

    recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
    Log.v(LOG_TAG, "FFmpegFrameRecorder: " + ffmpeg_link + " imageWidth: " + imageWidth + " imageHeight " + imageHeight);

    recorder.setFormat("flv");
    Log.v(LOG_TAG, "recorder.setFormat(\"flv\")");

    recorder.setSampleRate(sampleAudioRateInHz);
    Log.v(LOG_TAG, "recorder.setSampleRate(sampleAudioRateInHz)");

    // re-set in the surface changed method as well
    recorder.setFrameRate(frameRate);
    Log.v(LOG_TAG, "recorder.setFrameRate(frameRate)");

    // Create audio recording thread
    audioRecordRunnable = new AudioRecordRunnable();
    audioThread = new Thread(audioRecordRunnable);

    }

    // Start the capture
    public void startRecording() {
        try {
            recorder.start();
            startTime = System.currentTimeMillis();
            recording = true;
            audioThread.start();
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }

    public void stopRecording() {
        // This should stop the audio thread from running
        runAudioThread = false;

        if (recorder != null && recording) {
            recording = false;
            Log.v(LOG_TAG, "Finishing recording, calling stop and release on recorder");
            try {
                recorder.stop();
                recorder.release();
            } catch (FFmpegFrameRecorder.Exception e) {
                e.printStackTrace();
            }
            recorder = null;
        }
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        // Quit when back button is pushed
        if (keyCode == KeyEvent.KEYCODE_BACK) {
            if (recording) {
                stopRecording();
            }
            finish();
            return true;
        }
        return super.onKeyDown(keyCode, event);
    }

    @Override
    public void onClick(View v) {
        if (!recording) {
            startRecording();
            Log.w(LOG_TAG, "Start Button Pushed");
            recordButton.setText("Stop");
        } else {
            stopRecording();
            Log.w(LOG_TAG, "Stop Button Pushed");
            recordButton.setText("Start");
        }
    }

       //---------------------------------------------
       // audio thread, gets and encodes audio data
       //---------------------------------------------
       class AudioRecordRunnable implements Runnable {

       @Override
        public void run() {
       // Set the thread priority
         android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

       // Audio
       int bufferSize;
       short[] audioData;
       int bufferReadResult;

       bufferSize = AudioRecord.getMinBufferSize(sampleAudioRateInHz,
               AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
       audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleAudioRateInHz,
               AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

       audioData = new short[bufferSize];

       Log.d(LOG_TAG, "audioRecord.startRecording()");
       audioRecord.startRecording();

       // Audio Capture/Encoding Loop
       while (runAudioThread) {
           // Read from audioRecord
           bufferReadResult = audioRecord.read(audioData, 0, audioData.length);
           if (bufferReadResult > 0) {
               //Log.v(LOG_TAG,"audioRecord bufferReadResult: " + bufferReadResult);

               // Changes in this variable may not be picked up despite it being "volatile"
               if (recording) {
                   try {
                       // Write to FFmpegFrameRecorder
                       Buffer[] buffer = {ShortBuffer.wrap(audioData, 0, bufferReadResult)};                        
                       recorder.record(buffer);
                   } catch (FFmpegFrameRecorder.Exception e) {
                       Log.v(LOG_TAG,e.getMessage());
                       e.printStackTrace();
                   }
               }
           }
       }
       Log.v(LOG_TAG,"AudioThread Finished");

       /* Capture/Encoding finished, release recorder */
       if (audioRecord != null) {
           audioRecord.stop();
           audioRecord.release();
           audioRecord = null;
           Log.v(LOG_TAG,"audioRecord released");
       }
    }

    }

     class CameraView extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {

    private boolean previewRunning = false;

    private SurfaceHolder holder;
    private Camera camera;

    private byte[] previewBuffer;

    long videoTimestamp = 0;

    Bitmap bitmap;
    Canvas canvas;

    public CameraView(Context _context) {
       super(_context);

       holder = this.getHolder();
       holder.addCallback(this);
       holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
       camera = Camera.open();

       try {
           camera.setPreviewDisplay(holder);
           camera.setPreviewCallback(this);

           Camera.Parameters currentParams = camera.getParameters();
           Log.v(LOG_TAG,"Preview Framerate: " + currentParams.getPreviewFrameRate());
           Log.v(LOG_TAG,"Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

           // Use these values
           imageWidth = currentParams.getPreviewSize().width;
           imageHeight = currentParams.getPreviewSize().height;
           frameRate = currentParams.getPreviewFrameRate();                

           bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ALPHA_8);


           /*
           Log.v(LOG_TAG,"Creating previewBuffer size: " + imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat())/8);
           previewBuffer = new byte[imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat())/8];
           camera.addCallbackBuffer(previewBuffer);
           camera.setPreviewCallbackWithBuffer(this);
           */              

           camera.startPreview();
           previewRunning = true;
       }
       catch (IOException e) {
           Log.v(LOG_TAG,e.getMessage());
           e.printStackTrace();
       }  
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
       Log.v(LOG_TAG,"Surface Changed: width " + width + " height: " + height);

       // We would do this if we want to reset the camera parameters
       /*
       if (!recording) {
           if (previewRunning){
               camera.stopPreview();
           }
           try {
               //Camera.Parameters cameraParameters = camera.getParameters();
               //p.setPreviewSize(imageWidth, imageHeight);
               //p.setPreviewFrameRate(frameRate);
               //camera.setParameters(cameraParameters);

               camera.setPreviewDisplay(holder);
               camera.startPreview();
               previewRunning = true;
           }
           catch (IOException e) {
               Log.e(LOG_TAG,e.getMessage());
               e.printStackTrace();
           }  
       }            
       */

       // Get the current parameters
       Camera.Parameters currentParams = camera.getParameters();
       Log.v(LOG_TAG,"Preview Framerate: " + currentParams.getPreviewFrameRate());
       Log.v(LOG_TAG,"Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

       // Use these values
       imageWidth = currentParams.getPreviewSize().width;
       imageHeight = currentParams.getPreviewSize().height;
       frameRate = currentParams.getPreviewFrameRate();

       // Create the yuvIplimage if needed
       yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
       //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
       try {
           camera.setPreviewCallback(null);

           previewRunning = false;
           camera.release();

       } catch (RuntimeException e) {
           Log.v(LOG_TAG,e.getMessage());
           e.printStackTrace();
       }
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {

        if (yuvIplimage != null && recording) {
           videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);

           // Put the camera preview frame right into the yuvIplimage object
           yuvIplimage.getByteBuffer().put(data);

           // FAQ about IplImage:
           // - For custom raw processing of data, getByteBuffer() returns an NIO direct
           //   buffer wrapped around the memory pointed by imageData, and under Android we can
           //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
           // - To get a BufferedImage from an IplImage, we may call getBufferedImage().
           // - The createFrom() factory method can construct an IplImage from a BufferedImage.
            // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.

           // Let's try it..
           // This works but only on transparency
           // Need to find the right Bitmap and IplImage matching types

           /*
           bitmap.copyPixelsFromBuffer(yuvIplimage.getByteBuffer());
           //bitmap.setPixel(10,10,Color.MAGENTA);

           canvas = new Canvas(bitmap);
           Paint paint = new Paint();
           paint.setColor(Color.GREEN);
           float leftx = 20;
           float topy = 20;
           float rightx = 50;
           float bottomy = 100;
           RectF rectangle = new RectF(leftx,topy,rightx,bottomy);
           canvas.drawRect(rectangle, paint);

           bitmap.copyPixelsToBuffer(yuvIplimage.getByteBuffer());
           */
           //Log.v(LOG_TAG,"Writing Frame");

           try {

               // Get the correct time
               recorder.setTimestamp(videoTimestamp);

               // Record the image into FFmpegFrameRecorder
               recorder.record(yuvIplimage);

           } catch (FFmpegFrameRecorder.Exception e) {
               Log.v(LOG_TAG,e.getMessage());
               e.printStackTrace();
           }
       }
    }

    }
    }