
Other articles (77)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, through the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If in doubt, contact your MédiaSpip administrator to find out.

On other sites (5849)

  • Help us Reset The Net today on June 5th

    5 June 2014, by Piwik Core Team — Community, Meta

    This blog post explains why the Piwik project is joining the ResetTheNet online protest and how you can help make a difference against mass surveillance. It also includes an infographic and links to useful resources that may be of interest to you.

    Snowden revelations, a year ago today

    On June 5, 2013 the Guardian newspaper published the first of Edward Snowden’s astounding revelations. It was the first of a continuous stream of stories pointing out what we’ve suspected for a long time: that the world’s digital communications are being continuously spied upon by nation states with precious little oversight.

    Unfortunately, mass surveillance is heavily affecting the internet. The Internet is a powerful force that can promote democracy, innovation, and creativity, but it’s being subverted into a tool for government spying. That is why Piwik has decided to join Reset The Net.

    June 5, 2014 marks a new year: a year that will not just be about listening to the inside story of mass surveillance, but about fighting back!

    How do I protect myself and others?

    Reset the Net is asking everyone to help by installing free software tools that are designed to protect your privacy on a computer or a mobile device.

    Reset the Net is also calling on websites and developers to add surveillance resistant features such as HTTPS and forward secrecy.

    Participate in ResetTheNet online protest

    Have you got your own website, blog or Tumblr? Maybe you can show the Internet Defense League’s “Cat Signal!” on your website. Get the code now to run the Reset the Net splash screen or banner to help make privacy viral on June 5th.

    Message from Edward Snowden

    Evan from FFTF sent us this message from Edward Snowden and we thought we would share it with you:

    One year ago, we learned that the internet is under surveillance, and our activities are being monitored to create permanent records of our private lives — no matter how innocent or ordinary those lives might be.

    Today, we can begin the work of effectively shutting down the collection of our online communications, even if the US Congress fails to do the same. That’s why I’m asking you to join me on June 5th for Reset the Net, when people and companies all over the world will come together to implement the technological solutions that can put an end to the mass surveillance programs of any government. This is the beginning of a moment where we the people begin to protect our universal human rights with the laws of nature rather than the laws of nations.

    We have the technology, and adopting encryption is the first effective step that everyone can take to end mass surveillance. That’s why I am excited for Reset the Net — it will mark the moment when we turn political expression into practical action, and protect ourselves on a large scale.

    Join us on June 5th, and don’t ask for your privacy. Take it back.

    – Message by Edward Snowden

    ResetTheNet privacy pack infographic

    Additional Resources

    Configure Piwik for Security and Privacy

    More info

  • Remuxing mp4 on the fly with FFmpeg API

    25 October 2015, by Vadym

    My goal is to stream out mpegts. As input, I take an mp4 file stream; that is, the video producer writes an mp4 file into a stream that I try to work with. The video may be anywhere from one minute to ten minutes long. Because the producer writes bytes into the stream, the originally written mp4 header is not complete (the first 32 bytes prior to ftyp are 0x00 because it does not yet know the various offsets, which are written post-recording, I think):

    This is what the header of a typical mp4 looks like:

    00 00 00 18   66 74 79 70   69 73 6f 6d   00 00 00 00   ....ftypisom....
    69 73 6f 6d   33 67 70 34   00 01 bb 8c   6d 64 61 74   isom3gp4..»Œmdat

    This is what the header of an "in progress" mp4 looks like:

    00 00 00 00   66 74 79 70   69 73 6f 6d   00 00 00 00   ....ftypisom....
    69 73 6f 6d   33 67 70 34   00 00 00 18   3f 3f 3f 3f   isom3gp4....????
    6d 64 61 74                                             mdat

    It is just my guess, but I assume that once the producer completes the recording, it updates the header by writing in all the necessary offsets.
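
    For reference, this is how I read those dumps: every ISO-BMFF box starts with a 4-byte big-endian size followed by a 4-byte type, and it is that leading size field which stays zeroed until the recording is finalised. A tiny sketch, names mine and purely illustrative:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: the 8-byte box header layout shown in the hex dumps
     * above (4-byte big-endian size, then a 4-char type such as "ftyp"/"mdat"). */
    typedef struct {
        uint32_t size;    /* e.g. 0x00000018 == 24 bytes for the ftyp box above */
        char     type[5]; /* NUL-terminated copy of the 4-char box type */
    } mp4_box_header;

    static mp4_box_header parse_box_header(const uint8_t *p)
    {
        mp4_box_header h;
        h.size = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                 ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
        memcpy(h.type, p + 4, 4);
        h.type[4] = '\0';
        return h;
    }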

    I have run into two issues while trying to make this work:

    1. I created a custom AVIO context with a read function that does not support seeking. In my driver program, I decided to stream in a properly formatted mp4 file. I am able to detect its input format. When I try to open it, I see that my custom read function gets executed within avformat_open_input until the entire file is read in.

    My code sample:

    #include <libavformat/avformat.h>

    av_register_all();

    // The AVIO buffer must come from av_malloc() so that libavformat can
    // grow or free it internally.
    unsigned char* pBuffer = (unsigned char*)av_malloc( iBufSize );

    AVFormatContext* pCtx = avformat_alloc_context();
    pCtx->pb = avio_alloc_context(
       pBuffer,         // internal buffer
       iBufSize,        // internal buffer size
       0,               // bWriteable (1=true, 0=false)
       stream,          // user data; will be passed to our callback functions
       read_stream,     // read callback function
       NULL,            // write callback function (not used in this example)
       NULL             // seek callback function
    );
    pCtx->pb->seekable = 0;
    pCtx->pb->write_flag = 0;
    pCtx->iformat = av_find_input_format( "mp4" );
    pCtx->flags |= AVFMT_FLAG_CUSTOM_IO;

    avformat_open_input( &pCtx, "", pCtx->iformat, NULL );

    Obviously, this does not work as I need (my expectations were wrong). Once I substitute the file of finite size with a stream of varying length, I cannot have avformat_open_input wait around for the stream to finish before attempting to do further processing.

    As such, I need to find a way to open the input without attempting to read it all up front, and only read when I execute av_read_frame. Is this at all possible to do using a custom AVIO? That is: prepare/open input -> read initial input data into the input buffer -> read frame/packet from the input buffer -> write packet to output -> repeat reading input data until the end of the stream.
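
    To make the idea concrete, here is a rough sketch of the read callback I have in mind; my_stream_t and stream_read_some() are hypothetical stand-ins for however the producer's bytes actually arrive:

    #include <stdint.h>
    #include <libavformat/avformat.h>

    typedef struct my_stream my_stream_t;                         // hypothetical stream handle
    int stream_read_some(my_stream_t *s, uint8_t *buf, int max);  // hypothetical: blocks for >=1 byte, 0 on EOF

    static int read_stream(void *opaque, uint8_t *buf, int buf_size)
    {
        my_stream_t *s = (my_stream_t *)opaque;   // the "stream" pointer passed to avio_alloc_context()
        int n = stream_read_some(s, buf, buf_size);
        if (n <= 0)
            return AVERROR_EOF;                   // tell libavformat the producer has finished
        return n;                                 // short reads are fine; more will be requested later
    }

    My understanding is that, with such a callback, avformat_open_input only pulls as much data as the demuxer needs to probe the stream, and each later av_read_frame call triggers further reads on demand.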

    I did scavenge Google and only saw two alternatives: providing a custom URLProtocol and using AVFMT_NOFILE.

    Custom URLProtocol
    This sounds like a somewhat backwards way to accomplish what I’m trying to do. I understand that it is best used when there is a file source available, whereas I am trying to read from a byte stream. Also, another reason I think it doesn’t fit my needs is that a custom URLProtocol needs to be compiled into the ffmpeg lib, correct? Or is there a way to register it manually at runtime?

    AVFMT_NOFILE
    This seems like something that should actually work best for me. The flag itself says that there is no underlying source file and assumes that I will handle all the reading and provisioning of input data. The trouble is that I haven’t seen any online code snippets for it so far, but my assumption is as follows:

    I am really hoping to get some suggestions or food for thought from anyone, because I am a newbie to ffmpeg and digital media, and my second issue assumes that I can stream output while ingesting input.

    2. As I mentioned above, I have a handle on the mp4 file bytestream as it would be written to the hard disk. The format is mp4 (h.264 and aac). I need to remux it to mpegts prior to streaming it out. This shouldn’t be difficult because mp4 and mpegts are simply containers. From what I have learned so far, an mp4 file looks like the following:

      [header info containing format versions]
      mdat
      [stream data, in my case h.264 and aac streams]
      [some trailer separator]
      [trailer data]

    If that is correct, I should be able to get a handle on the interleaved h.264 and aac data by simply starting to read the stream after the "mdat" identifier, correct?

    If that is true and I decide to go with the AVFMT_NOFILE approach to managing input data, I can just ingest stream data (into the AVFormatContext buffer) -> av_read_frame -> process it -> populate the AVFormatContext with more data -> av_read_frame -> and so on until the end of the stream.
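
    For what it's worth, this is the rough shape of the remux loop I am picturing, building on the pCtx from the code sample above (out_url is just a placeholder for wherever the mpegts goes; error handling omitted):

    // Sketch only: copy packets from the mp4 input into an mpegts output,
    // no re-encoding.
    AVFormatContext *octx = NULL;
    avformat_alloc_output_context2( &octx, NULL, "mpegts", NULL );

    // Mirror every input stream on the output side and copy codec settings.
    for ( unsigned i = 0; i < pCtx->nb_streams; i++ ) {
        AVStream *in  = pCtx->streams[i];
        AVStream *out = avformat_new_stream( octx, NULL );
        avcodec_copy_context( out->codec, in->codec );  // stream copy
        out->codec->codec_tag = 0;
    }

    avio_open( &octx->pb, out_url, AVIO_FLAG_WRITE );   // out_url: placeholder sink
    avformat_write_header( octx, NULL );

    AVPacket pkt;
    while ( av_read_frame( pCtx, &pkt ) >= 0 ) {
        AVStream *in  = pCtx->streams[pkt.stream_index];
        AVStream *out = octx->streams[pkt.stream_index];
        av_packet_rescale_ts( &pkt, in->time_base, out->time_base );
        av_interleaved_write_frame( octx, &pkt );
        av_packet_unref( &pkt );
    }

    av_write_trailer( octx );

    That at least mirrors the remuxing example shipped with FFmpeg (doc/examples/remuxing.c), so the open question is really only the input side.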

    I know this is a mouthful and a dump of my thoughts, but I would appreciate any discussion, pointers, or thoughts!

  • Revisiting Nosefart and Discovering GME

    30 May 2011, by Multimedia Mike — Game Hacking

    I found the following screenshot buried deep in an old directory structure of mine:



    I tried to recall how this screenshot came to exist. Had I actually created a functional KDE frontend to Nosefart yet neglected to release it? I think it’s more likely that I used some designer tool (possibly KDevelop) to prototype a frontend. This would have been sometime in 2000.

    However, this screenshot prompted me to revisit Nosefart.

    Nosefart Background
    Nosefart is a program that can play Nintendo Sound Format (NSF) files. NSF files contain components that were surgically separated from Nintendo Entertainment System (NES) ROM dumps; these components are the music playback engines for various games. An NSF player is a stripped-down emulation system that simulates the NES’s 6502 CPU along with the console’s custom audio hardware (2 square waves, 1 triangle wave, 1 noise generator, and 1 limited digital channel).

    Nosefart was written by Matt Conte and eventually imported into a Sourceforge project, though it has not seen any development since then. The distribution contains standalone command line players for Linux and DOS, a GTK frontend for the Linux command line version, and plugins for Winamp, XMMS, and CL-Amp.

    The Sourceforge project page notes that Nosefart is also part of XBMC. Let the record show that Nosefart is also incorporated into xine (I did that in 2002, I think).

    Upgrading the API
    When I tried running the command line version of Nosefart under Linux, I hit hard against the legacy audio API: OSS. Remember that?

    In fairly short order, I was able to upgrade the CL program to use PulseAudio. The program is not especially sophisticated. It’s a single-threaded affair which checks for a keypress, processes an audio frame, and sends the frame out to the OSS file interface. All that was needed was to rewrite open_hardware() and close_hardware() for PA and then replace the write statement in play(). The only quirk that stood out is that including <pulse/pulseaudio.h> is insufficient for programming PA’s simple API. <pulse/simple.h> must be included separately.
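
    For illustration, the swap boils down to something like the following sketch of PulseAudio’s simple API (not the actual Nosefart patch; the sample spec here is assumed):

    #include <stddef.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    static pa_simple *pa_handle;

    /* stands in for Nosefart's open_hardware() */
    static int open_hardware(void)
    {
        pa_sample_spec ss = {
            .format   = PA_SAMPLE_S16LE,   /* assumed 16-bit signed output */
            .rate     = 44100,
            .channels = 1,
        };
        int error;
        pa_handle = pa_simple_new(NULL, "nosefart", PA_STREAM_PLAYBACK,
                                  NULL, "NSF playback", &ss, NULL, NULL, &error);
        return pa_handle ? 0 : -1;
    }

    /* replaces the write() to the OSS device inside play() */
    static int write_audio(const void *buf, size_t bytes)
    {
        int error;
        return pa_simple_write(pa_handle, buf, bytes, &error);
    }

    /* stands in for Nosefart's close_hardware() */
    static void close_hardware(void)
    {
        if (pa_handle) {
            int error;
            pa_simple_drain(pa_handle, &error);
            pa_simple_free(pa_handle);
        }
    }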

    For extra credit, I adapted the program to ALSA. The program uses the most simplistic audio output API possible — just keep filling a buffer and sending it out to the DAC.
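
    The ALSA variant is the same idea; a rough sketch using the high-level snd_pcm_set_params() helper (again, the format parameters are assumptions, not the program’s actual settings):

    #include <alsa/asoundlib.h>

    static snd_pcm_t *pcm;

    static int open_hardware_alsa(void)
    {
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return -1;
        /* assumed: signed 16-bit, mono, 44.1 kHz, ~500 ms of buffering */
        return snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  1, 44100, 1, 500000);
    }

    static void write_audio_alsa(const short *buf, snd_pcm_uframes_t frames)
    {
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, frames);
        if (n < 0)
            snd_pcm_recover(pcm, n, 0);   /* e.g. ride out an underrun */
    }

    static void close_hardware_alsa(void)
    {
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
    }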

    Discovering GME
    I’m not sure what to do with the program now since, during my research into bringing Nosefart up to date, I became aware of a software library named Game Music Emu, or GME. It’s a pure C++ library that can essentially play any classic video game music format you can possibly name. Wow. A lot can happen in 10 years when you’re not paying attention.

    It’s such a well-written library that I didn’t need any tutorial or documentation to come up to speed. Just a quick read of the main gme.h header enabled me, in short order, to whip up a quick C program that could play NSF and SPC files. Path of least resistance: have the client program ask the library to open a hardcoded file, synthesize 10 seconds of audio, and dump it into a file; ask the FLAC command line program to transcode the raw data to a .flac file; use ffplay to verify the results.
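
    The whole thing fits in a screenful; roughly this, give or take (file names and buffer sizes are mine, not the original program’s):

    #include <stdio.h>
    #include <gme/gme.h>

    int main(void)
    {
        const int sample_rate = 44100;
        Music_Emu *emu;
        short buf[4096];

        /* gme_open_file() picks the right emulator for NSF, SPC, etc. */
        gme_err_t err = gme_open_file("music.nsf", &emu, sample_rate);
        if (err) { fprintf(stderr, "%s\n", err); return 1; }
        gme_start_track(emu, 0);

        FILE *out = fopen("out.raw", "wb");     /* raw signed 16-bit stereo samples */
        long remaining = 10L * sample_rate * 2; /* 10 seconds, 2 channels */
        while (remaining > 0) {
            long count = remaining < 4096 ? remaining : 4096;
            gme_play(emu, count, buf);          /* synthesize `count` samples */
            fwrite(buf, sizeof(short), count, out);
            remaining -= count;
        }

        fclose(out);
        gme_delete(emu);
        return 0;
    }

    flac then only needs the raw-format options (sample rate, bit depth, channel count) to turn the raw output into a .flac file that ffplay will happily verify.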

    I might develop some other uses for this library.