Advanced search

Media (1)

Keyword: - Tags -/Rennes

Other articles (107)

  • User profiles

    12 April 2011, by

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a navigation link labelled "Modifier votre profil" (edit your profile) is (...)

  • Depositing media and themes via FTP

    31 May 2013, by

    MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the outset you will find the following folders in your FTP space: config/ : the site's configuration folder; IMG/ : media already processed and online on the site; local/ : the site's cache directory; themes/ : custom themes or stylesheets; tmp/ : working folder (...)
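
    As a minimal illustration of the FTP deposit described above (not from the original article), here is a Python sketch using ftplib. The host, credentials and file name are placeholders, and the excerpt does not say which folder new media should be placed in.

from ftplib import FTP

# Illustration only: host, credentials and file name are placeholders.
with FTP("ftp.example.org") as ftp:
    ftp.login("mediaspip-user", "password")
    print(ftp.nlst())   # expect config/, IMG/, local/, themes/, tmp/
    with open("clip.webm", "rb") as media:
        ftp.storbinary("STOR clip.webm", media)   # deposit the media file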

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case it becomes greyed out in the configuration and (...)

On other sites (6372)

  • I want to output HLS files using ffmpeg in AWS Lambda (Python)

    14 April 2021, by 최우선

    I implemented this by following the link (https://aws.amazon.com/ko/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/), and it works well.

import os
import shlex
import subprocess

import boto3

# SIGNED_URL_TIMEOUT and S3_DESTINATION_BUCKET are module-level settings,
# as in the AWS blog post this handler is based on.

# Inside the Lambda handler: locate the uploaded source object.
s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
s3_source_key = event['Records'][0]['s3']['object']['key']

s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
s3_destination_filename = s3_source_basename + ".m3u8"

# Presigned URL so ffmpeg can read the source object directly from S3.
s3_client = boto3.client('s3')
s3_source_signed_url = s3_client.generate_presigned_url('get_object',
    Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
    ExpiresIn=SIGNED_URL_TIMEOUT)

# Remux to HLS and send the muxer output to stdout ("-" as the output target).
ffmpeg_cmd = "/opt/bin/ffmpeg -i \"" + s3_source_signed_url + "\" -codec copy -start_number 0 -hls_time 10 -hls_list_size 0 -f hls -"
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Upload whatever arrived on stdout as a single S3 object.
resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    However, the actual output from ffmpeg is multiple files, for example test.m3u8, test0.ts, test1.ts, and so on.

    But when I print p1.stdout, it looks like the multiple files (test.m3u8, test0.ts, ...) have been merged into one file.

    Is there a way to get the actual multiple output files (test.m3u8, test0.ts, ...) from p1.stdout? Please help.
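
    One possible approach, sketched here and not taken from the original post: have ffmpeg write the playlist and segments into a temporary directory under /tmp, then upload each resulting file to S3 individually. The variables s3_source_signed_url, s3_source_basename and S3_DESTINATION_BUCKET are reused from the snippet above; everything else is an assumption.

import os
import subprocess
import tempfile

import boto3

s3_client = boto3.client('s3')

with tempfile.TemporaryDirectory(dir='/tmp') as workdir:
    playlist_path = os.path.join(workdir, s3_source_basename + '.m3u8')

    # ffmpeg writes the playlist and its .ts segments side by side in workdir.
    subprocess.run([
        '/opt/bin/ffmpeg', '-i', s3_source_signed_url,
        '-codec', 'copy', '-start_number', '0',
        '-hls_time', '10', '-hls_list_size', '0',
        '-f', 'hls', playlist_path,
    ], check=True)

    # Upload every file ffmpeg produced (test.m3u8, test0.ts, test1.ts, ...).
    for name in os.listdir(workdir):
        s3_client.upload_file(os.path.join(workdir, name),
                              S3_DESTINATION_BUCKET, name)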

    


  • Live streaming of processed frames to AWS

    22 April 2021, by MinasCham

    I'm working on a project where I need to capture a live video feed from an RTSP camera source, process the video frame by frame, and stream the result to an AWS service.

    So far, my solution:

    • Captures frames from the RTSP camera source using OpenCV and performs some processing.
    • Feeds the processed frames to an ffmpeg pipe that packages the content for online streaming (HTTP Live Streaming, HLS) and saves it locally (see the sketch after this list).
    • Transfers the media content to an Amazon Kinesis Video Stream using a GStreamer pipeline with kvssink as the sink element.
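
    A minimal sketch of the frame-feeding step referenced above, assuming raw BGR frames are piped from OpenCV into an ffmpeg subprocess over stdin and packaged as HLS. The RTSP URL, frame size, frame rate and output path are placeholders, not values from the original post.

import subprocess

import cv2  # OpenCV

# Placeholders, not values from the original post.
RTSP_URL = "rtsp://camera.example/stream"
WIDTH, HEIGHT, FPS = 1280, 720, 25

cap = cv2.VideoCapture(RTSP_URL)

# ffmpeg reads raw BGR frames from stdin and packages them as an HLS playlist.
ffmpeg_proc = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
    "-i", "-",                                  # raw frames arrive on stdin
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-g", str(FPS * 2),
    "-f", "hls", "-hls_time", "4", "-hls_list_size", "5",
    "/tmp/stream.m3u8",
], stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... per-frame processing with OpenCV goes here ...
    # The frame must match WIDTH x HEIGHT for the rawvideo input above.
    ffmpeg_proc.stdin.write(frame.tobytes())

ffmpeg_proc.stdin.close()
ffmpeg_proc.wait()
cap.release()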


    


    My questions are:

    • Currently I'm saving the content both locally and on an Amazon Kinesis Video Stream. Is this efficient?
    • Is it possible to stream the frames directly to the Amazon Kinesis Video Stream (perhaps by connecting the ffmpeg output to the GStreamer pipeline element)?
    • Is the file format suitable for this implementation, or would it be better to encode the media differently?


  • AWS Lambda and Fluent FFMPEG error "Cannot read property 'isStream' of undefined"

    29 May 2021, by Travis Lee

    So here's the goal: convert a .webm file hosted in an S3 bucket into a GIF and upload that to a new bucket. This all works fine when run locally, but when I try to translate it into a Lambda, fluent-ffmpeg throws errors when it runs the command.

    Here's the code snippet:

const ffmpeg = require('fluent-ffmpeg');  // import not shown in the original snippet

ffmpeg(new URL(vid))                                            // presigned S3 GET URL of the source .webm
  .outputOptions("-vf", "scale=320:-1:flags=lanczos,fps=14")    // scale and set the frame rate for the GIF
  .on('progress', () => {
      console.log('progress');
  })
  .on('end', () => {
     // Do stuff with the result when it is done
  })
  .output(newKey)                                               // temporary local target for the .gif
  .run();                                                       // run() takes no arguments

    In this snippet, "vid" is a presigned GET URL for an S3 bucket containing the .webm video file, and "newKey" is the name of the new bucket (and of a temporary writeStream/file created in the Lambda to store the new .gif file until we upload it to S3; not super relevant to this issue).

    What should happen (and does locally) is that a new output is created containing the converted .gif file.

    What happens when it is deployed in a Lambda is that it reaches the .outputOptions call and throws a TypeError saying that it cannot read property isStream of undefined.

    At first glance, this seems like I simply don't have FFMPEG installed in the Lambda, but I do. I have tried the prebuilt NodeJS 10 layer found here: https://serverlessrepo.aws.amazon.com/applications/us-east-1/145266761615/ffmpeg-lambda-layer, a NodeJS 12 layer that was built by some engineers here previously, and a NodeJS 14 FFMPEG layer I built myself. For all three I tried using no configuration and letting it call the ffmpeg on PATH, setting the FFMPEG_PATH and FFPROBE_PATH environment variables to what was specified in the previous layers (or to what I made them in the newly built one), and even manually setting the path to the executables using the setFfmpegPath and setFfprobePath functions found on the fluent-ffmpeg object.

    Lastly, I even tried bundling the executables in with the actual Lambda code itself and uploading it through an S3 bucket, once again trying all three of the above methods of pointing it to the correct paths, to no avail.

    I'm seriously in need of help if anyone else has encountered something similar or just might know what is going on. I'm at wit's end here trying to figure this out.