Advanced search

Media (91)

Other articles (32)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (4529)

  • exec.Command not formatting input correctly?

    20 May 2016, by nadermx

    I'm having a bizarre issue. I have an ffmpeg command that I run as a subprocess:

       ffmpeg = exec.Command("nice", "-n", "10",
               "ffmpeg",
               "-http_proxy", RandomProxyAddress(),
               "-i", fmt.Sprintf(`%s`, vi.GetUrl()),
               "-acodec", "libmp3lame",
               "-metadata", fmt.Sprintf(`title=%s`, vi.GetTitle()),
               "-b:a", fmt.Sprintf("%s", vi.GetAudioQuality()),
               "-threads", "0",
               "-f", "mp3",
               "-")

    But when I run this command I get:

    Server returned 403 Forbidden (access denied)

    But when I run the command outside of Go, it functions correctly:

    ffmpeg -http_proxy http://user:user@123.123.123.123:29842 -i 'https://r5---sn-uqx2-aphl.googlevideo.com/videoplayback?fexp=9405981%2C9414672%2C9416126%2C9416891%2C9422342%2C9422596%2C9423965%2C9425077%2C9428398%2C9431012%2C9432564%2C9433096%2C9433946%2C9434085%2C9435697%2C9435736%2C9435876%2C9437285%2C9437344&clen=3716608&itag=140&ipbits=0&upn=QpGXy0IIxmU&key=yt6&expire=1463728239&id=o-AIlDlUBTmu9UXu0yMp77VXR502YekQi98e6JpvbSzjo0&ms=au&gcr=pe&mv=m&source=youtube&mm=31&mn=sn-uqx2-aphl&pl=24&requiressl=yes&ip=190.234.105.134&mime=audio%2Fmp4&mt=1463706418&sparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&lmt=1458211566542818&gir=yes&keepalive=yes&sver=3&dur=233.964&initcwndbps=563750&signature=53A8732F8841FC6AC6D6737B35B4EF6FC000F2F0.59E11FB2D3DF96F4C00CE8D84C28D3A546E04F78&ratebypass=yes' -acodec 'libmp3lame' -metadata 'title=This awesome song' -b:a '128k' -threads '0' -f 'mp3' test.mp3

    Does it have something to do with my string formatting or quoting in the exec.Command call?

    Edit:

    fmt.Print output:

    nice-n10ffmpeg-http_proxyhttp://user:user@1.1.1.1:29842-ihttps://r17---sn-q4f7snes.googlevideo.com/videoplayback?pl=21&source=youtube&sparams=clen%2Cdur%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&upn=mwBuBq7HijI&keepalive=yes&nh=IgpwcjAzLmRmdzA2KgkxMjcuMC4wLjE&expire=1462445495&id=o-ADXLtGoWBQDEKqNcD9aFNT4IFc8EZ8gu_TwZfCshkmjs&lmt=1453349484731934&ip=142.91.200.197&sver=3&dur=264.382&mv=u&mt=1462423294&ms=au&mn=sn-q4f7snes&signature=6DD5ED6A89A2FB8A30DD7EBAA87333EDDFF1D832.B5D0C233EA672891006D2993A745E7224F29BF02&mm=31&itag=140&gir=yes&clen=4199716&mime=audio%2Fmp4&ipbits=0&requiressl=yes&key=yt6&fexp=9416126%2C9416891%2C9422596%2C9428398%2C9431012%2C9433096%2C9433946&ratebypass=yes-acodeclibmp3lame-ss0-metadatatitle=Khong Duong no sugar Music Video-b:a128k-threads0-fmp3-&{/usr/bin/nice [nice -n 10 ffmpeg -http_proxy http://user:user@1.1.1.1:29842 -i https://r17---sn-q4f7snes.googlevideo.com/videoplayback?pl=21&source=youtube&sparams=clen%2Cdur%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&upn=mwBuBq7HijI&keepalive=yes&nh=IgpwcjAzLmRmdzA2KgkxMjcuMC4wLjE&expire=1462445495&id=o-ADXLtGoWBQDEKqNcD9aFNT4IFc8EZ8gu_TwZfCshkmjs&lmt=1453349484731934&ip=142.91.200.197&sver=3&dur=264.382&mv=u&mt=1462423294&ms=au&mn=sn-q4f7snes&signature=6DD5ED6A89A2FB8A30DD7EBAA87333EDDFF1D832.B5D0C233EA672891006D2993A745E7224F29BF02&mm=31&itag=140&gir=yes&clen=4199716&mime=audio%2Fmp4&ipbits=0&requiressl=yes&key=yt6&fexp=9416126%2C9416891%2C9422596%2C9428398%2C9431012%2C9433096%2C9433946&ratebypass=yes -acodec libmp3lame -metadata title=Khong Duong no sugar Music Video -b:a 128k -threads 0 -f mp3 -] []  <nil> <nil> <nil> [] <nil> <nil> <nil> <nil> false [] [] [] [] <nil>}
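
    For reference, exec.Command hands each argument to the child process verbatim; there is no shell in between, so the single quotes from the working command line are neither needed nor interpreted, and fmt.Sprintf(`%s`, s) is simply s. Below is a minimal, self-contained sketch that prints the exact argv the child receives; the url, title, and quality values are hypothetical stand-ins for vi.GetUrl(), vi.GetTitle(), and vi.GetAudioQuality().

       // Minimal sketch: exec.Command builds an argv slice that is passed to the
       // child process as-is, with no shell involved. The url, title, and quality
       // values are hypothetical stand-ins for vi.GetUrl(), vi.GetTitle(), and
       // vi.GetAudioQuality() from the question.
       package main

       import (
           "fmt"
           "os/exec"
       )

       func main() {
           url := "https://example.com/stream" // stand-in for vi.GetUrl()
           title := "Some Title"               // stand-in for vi.GetTitle()
           quality := "128k"                   // stand-in for vi.GetAudioQuality()

           ffmpeg := exec.Command("nice", "-n", "10",
               "ffmpeg",
               "-i", url, // fmt.Sprintf(`%s`, url) would be identical to url
               "-acodec", "libmp3lame",
               "-metadata", "title="+title,
               "-b:a", quality,
               "-threads", "0",
               "-f", "mp3",
               "-")

           // Args holds the exact argv the child process will receive,
           // one element per argument, starting with the command name.
           for i, arg := range ffmpeg.Args {
               fmt.Printf("argv[%d] = %q\n", i, arg)
           }
       }

    Comparing that argv element by element against the shell command that works is usually the quickest way to spot a formatting difference; wiring ffmpeg.Stderr to os.Stderr also surfaces the full ffmpeg error output.
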
  • AVCONV / FFMPEG hardware acceleration for video conversion

    21 March 2016, by Alex Bern

    I have an Ubuntu PC with no video card.

    I use avconv for video conversion:

    avconv -i video.wmv -c:v libx264 -c:a libmp3lame -b:v 1800K video.mp4

    My CPU (Intel Core i7-4770K) processes a 1.5-2 GB video in around 7-10 minutes.

    In the avconv GitHub sources I saw these options:

    Hardware accelerators:
     --enable-d3d11va         enable D3D11VA code
     --enable-dxva2           enable DXVA2 code
     --enable-vaapi           enable VAAPI code
     --enable-vda             enable VDA code
     --enable-vdpau           enable VDPAU code

    I am thinking of compiling avconv with --enable-vdpau and putting a video card into the PC.

    Does this allow avconv to use the video card for video conversion?

    How much can this increase the speed of video conversion (I mean, for my command)?

    Can you help me run this test, if you have avconv on a PC with a video card?

    Here is an example of WMV.
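
    As a point of reference, those --enable-* switches mainly cover hardware-assisted decoding; libx264 is a software encoder, so the encode side of the command above would still run on the CPU. Assuming a build with --enable-vdpau, hardware decoding is typically requested per input with an -hwaccel flag, roughly like the sketch below (the flag name follows ffmpeg's syntax; avconv's exact option spelling may differ):

    avconv -hwaccel vdpau -i video.wmv -c:v libx264 -c:a libmp3lame -b:v 1800K video.mp4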

  • How To Write An Oscilloscope

    29 April 2012, by Multimedia Mike — General, gme, oscilloscope, visualization

    I’m trying to figure out how to write a software oscilloscope audio visualization. It’s made more frustrating by the knowledge that I am certain that I have accomplished this task before.

    In this context, the oscilloscope is used to draw the time-domain samples of an audio wave form. I have written such a plugin as part of the xine project. However, for that project, I didn’t have to write the full playback pipeline— my plugin was just handed some PCM data and drew some graphical data in response. Now I’m trying to write the entire engine in a standalone program and I’m wondering how to get it just right.



    This is an SDL-based oscilloscope visualizer and audio player for the Game Music Emu library. My approach is to have an audio buffer that holds one second of audio (44100 stereo 16-bit samples). The player updates the visualization at 30 frames per second. The o-scope is 512 pixels wide. So, at every 1/30th-second interval, the player dips into the audio buffer at position ((frame_number % 30) * 44100 / 30) and takes the first 512 stereo frames for plotting on the graph.
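
    Restated as a minimal sketch (in Go purely to make the arithmetic concrete; the actual player is SDL-based), each update advances 44100/30 = 1470 frames into the buffer, and only the first 512 frames of that slice are drawn:

       // A minimal sketch of the indexing described above: a one-second buffer of
       // 44100 stereo frames is refreshed 30 times per second, and the first 512
       // frames of each 1/30-second slice are plotted across the 512-pixel scope.
       package main

       import "fmt"

       const (
           sampleRate = 44100            // stereo frames in the one-second buffer
           fps        = 30               // visualization updates per second
           scopeWidth = 512              // pixels across the oscilloscope
           sliceLen   = sampleRate / fps // 1470 frames advanced per update
       )

       // scopeWindow returns the stereo frames to plot for a visualization frame.
       func scopeWindow(buffer [][2]int16, frameNumber int) [][2]int16 {
           offset := (frameNumber % fps) * sliceLen // ((frame_number % 30) * 44100 / 30)
           return buffer[offset : offset+scopeWidth]
       }

       func main() {
           buffer := make([][2]int16, sampleRate) // one second of (silent) audio
           for f := 0; f < 3; f++ {
               window := scopeWindow(buffer, f)
               fmt.Printf("frame %d: plotting frames %d..%d of the buffer\n",
                   f, (f%fps)*sliceLen, (f%fps)*sliceLen+len(window)-1)
           }
       }

    Under this scheme, 958 of every 1470 frames in a slice never reach the screen.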

    It seems to be working okay, I guess. The only problem is that the A/V sync seems to be slightly misaligned. I am just wondering if this is the correct approach. Perhaps the player should be performing some slightly more complicated calculation over those (44100/30) audio frames during each update in order to obtain a more accurate graph? I described my process to an electrical engineer friend of mine and he insisted that I needed to apply something called hysteresis to the output or I would never get accurate A/V sync in this scenario.

    Further, I know that some schools of thought on these matters require that the dots in those graphs be connected, that the scattered points simply won’t do. I guess it’s a stylistic choice.

    Still, I think I have a reasonable, workable approach here. I might just be starting the visualization 1/30th of a second too late.