Other articles (111)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (11573)

  • Cloaked Archive Wiki

    16 May 2011, by Multimedia Mike (General)

    Google’s Chrome browser has made me phenomenally lazy. I don’t even attempt to type proper, complete URLs into the address bar anymore. I just type something vaguely related to the address and let the search engine take over. I saw something weird when I used this method to visit Archive Team’s site:



    There’s greater detail when you elect to view more results from the site:



    As the administrator of a MediaWiki installation like the one that archiveteam.org runs on, I was a little worried that they might have a spam problem. However, clicking through to any of those out-of-place pages does not indicate anything related to pharmaceuticals. Viewing source also reveals nothing amiss.

    I quickly deduced that this is a textbook example of website cloaking. This is when a website reports different content to a search engine than it reports to normal web browsers (humans, presumably). General pseudocode:

    if (web_request.user_agent_string == CRAWLER_USER_AGENT)
        return cloaked_data;
    else
        return real_data;

    You can verify this for yourself using the wget command line utility:

    $ wget --quiet --user-agent="Mozilla/5.0" \
        http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>GeoCities - Archiveteam</title>

    $ wget --quiet --user-agent="Googlebot/2.1" \
        http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>Cheap xanax | Online Drug Store, Big Discounts</title>
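
    The same check is easy to wrap in a small script if you want to try it on other pages. Here is a rough sketch (the URL and user-agent strings are simply the ones from the commands above; adjust as needed):

    #!/bin/sh
    # Fetch the same page as a regular browser and as Googlebot,
    # then compare the <title> lines to spot obvious cloaking.
    URL="http://www.archiveteam.org/index.php?title=Geocities"

    browser_title=$(wget --quiet --user-agent="Mozilla/5.0" "$URL" -O - | grep "<title>")
    crawler_title=$(wget --quiet --user-agent="Googlebot/2.1" "$URL" -O - | grep "<title>")

    echo "Browser sees: $browser_title"
    echo "Crawler sees: $crawler_title"

    if [ "$browser_title" != "$crawler_title" ]; then
        echo "Titles differ -- this page is probably cloaked."
    fi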

    I guess the little web prank worked because the phaux-pharma stuff got indexed. It makes me wonder if there’s a MediaWiki plugin that does this automatically.

    For extra fun, here’s a site called the CloakingDetector which purports to be able to detect whether a page employs cloaking. This is just one humble observer’s opinion, but I don’t think the site works too well:



  • build: Group general components separate from de/encoders in arch Makefiles

    20 December 2013, by Diego Biurrun
    build: Group general components separate from de/encoders in arch Makefiles
    

    This is in line with how the top-level libavcodec Makefile is structured.

    • libavcodec/aarch64/Makefile
    • libavcodec/arm/Makefile
    • libavcodec/ppc/Makefile
    • libavcodec/x86/Makefile
  • How to fix ffmpeg-php Version: not loaded or not installed

    4 February 2014, by user3149210

    I installed this:

    ffmpeg-php version => 0.6.0-svn

    But it still shows the error "ffmpeg-php Version: not loaded or not installed".
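
    A common first diagnostic (a sketch, assuming a typical ffmpeg-php setup where the extension file is named ffmpeg.so) is to confirm that PHP is actually loading the extension:

    # Does PHP list the module at all?
    php -m | grep -i ffmpeg

    # Does PHP consider the extension loaded?
    php -r 'var_dump(extension_loaded("ffmpeg"));'

    # Check which php.ini is being read; make sure it contains a line like
    #   extension=ffmpeg.so
    # (the path or extension name may differ on your system), then restart the web server.
    php --ini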